CO2 residence time discussion thread

by Judith Curry

There was some discussion of this topic in the context of Murry Salby’s talk, but it has been suggested that the topic deserves its own thread.


This post is motivated by the following email from Hal Doiron:

Hello Dr. Curry,
.
In my review of climate change literature related to atmospheric CO2 sources and sinks, I have run into a wide range of opinions and peer-reviewed research conclusions regarding the following specific question that I think is central to the CAGW debate.
.
How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes?
.
Sources of confusion in answering this question:
.
1.  In response to one of my comments at Climate, Etc., Fred Moolten has claimed the answer to this question is about 100 years.  http://judithcurry.com/2011/08/18/should-we-assess-climate-model-predictions-in-light-of-severe-tests/#comment-101642     “To focus on the most relevant element in this situation, CO2, the salient feature is the exceedingly long lifetime of any atmospheric excess we generate from anthropogenic emissions – there is no single decay curve, but the trajectory of decline toward equilibrium concentrations can be expressed as a rough average in the range of about 100 years, with a long tail lasting hundreds of millennia. In other words, the CO2 we emit tomorrow, or refrain from emitting, is not something we can take back if we later decide we shouldn’t have put it up there. It will warm us for centuries.”
.
2.   From an ESRL NOAA website http://www.esrl.noaa.gov/gmd/education/faq_cat-1.html#17 , I found:
·   What will happen to Earth’s climate if emissions of these greenhouse gases continue to rise?

Because human emissions of CO2 and other greenhouse gases continue to climb, and because they remain in the atmosphere for decades to centuries (depending on the gas), we’re committing ourselves to a warmer climate in the future. The IPCC projects an average global temperature increase of 2-6°F by 2100, and greater warming thereafter. Temperatures in some parts of the globe (e.g., the polar regions) are expected to rise even faster. Even the low end of the IPCC’s projected range represents a rate of climate change unprecedented in the past 10,000 years.
.
3.   From listening to Dr. Murry Salby’s audio lecture at Climate, Etc., I believe his research led him to the conclusion that the atmospheric residence time of CO2 from fossil fuel burning emissions was only a few years.  This was based on his investigation of trends in the ratio of Carbon 12 to Carbon 13 isotopes in the atmosphere.
.
4.   There is previous published literature, also based on the ratio of Carbon 12 and Carbon 13 isotopes that Dr. Salby discussed, that concludes the atmospheric residence time of CO2 from fossil fuel burning is about 5 years.  This literature is reviewed and cited by my former NASA colleague, Apollo 17 astronaut, and former US Senator, Dr. Harrison “Jack” Schmitt, in his essay on CO2 at: http://americasuncommonsense.com/blog/category/science-engineering/climate-change/4-carbon-dioxide/#r4_14

.

As I monitor the debate on CAGW, it seems to me that if this particular recommended thread topic could be settled with high confidence, then much of the CAGW alarm could be moderated and refocused on a broader range of climate change issues.  I suggest it should also be a key research topic for further investigation in an attempt to answer the posed question with high confidence.

Sincerely,
Hal Doiron
.
JC comment:  I don’t have a good answer to the question Hal raises.  Below are some online references that I’ve spotted, from across the spectrum.
And finally an exchange between Freeman Dyson and Robert May in the NY Review of Books:
.
I don’t have time to dig into this issue right now, so I’m throwing the topic open for discussion, hoping for some enlightenment (or at least confusion) from the Denizens.

1,192 responses to “CO2 residence time discussion thread”

  1. John Carpenter

    Judith,

    The Freeman Dyson/Robert May links are not working.

  2. Only a few years. For the most part it’s even shorter – most of it is removed locally and immediately. Atmospheric CO2 is driven/determined by global climatic factors, whatever that means (SST, sea ice extent…). Think H2O.

    • Edim – I don’t see the relevance of that article to CO2 residence time. I read the full article, not just the abstract, and found it interesting in its analysis of ion-mediated nucleation rates involving sulfuric acid. It did not address the question of how many nuclei could be induced to grow to a size of climate significance for low cloud formation, although earlier data have shown this probably to be relatively small and the results reported here are consistent with that possibility.

  3. Time to read up on cosmic rays and climate – see the latest edition of Nature. The game is up.

  4. Hal Doiron has written:

    “As I monitor the debate on CAGW, it seems to me that if this particular recommended thread topic could be settled with high confidence, then much of the CAGW alarm could be moderated and refocused on a broader range of climate change issues.”

    I find the abbreviation CAGW to be somewhat objectionable in its misrepresentation of mainstream views, but that is a minor quibble and peripheral to the main topic here. Rather, I would like to ask Hal Doiron a question, because I’m not sure how to interpret his statement.

    Hal – At what duration for the residence time of excess CO2 would you perceive that interval to be long enough to be of concern? How many years specifically for an interval representing an average residence time for the excess? For clarity, I’m referring to the time necessary for an excess over an equilibrium concentration to return to baseline, where this can be approximated as a “half-life” although that would not be accurate in the formal sense because there is no single exponential decay curve.

    If it takes X years for the excess concentration to decline halfway, what value of X would be worrisome for you?

    • Harold H Doiron

      Fred,

      I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet. I don’t know what a harmful level of CO2 in the atmosphere would be. I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about health related effects (more than 20 times current levels). I don’t know what the optimum level of CO2 in the atmosphere would be, all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.

      If the human-activity-related CO2 atmospheric residence time is closer to 5 years, as some scientists claim, and not the 100 or so years that you and NOAA apparently believe, then the growth rate of CO2 in the atmosphere from human-related causes isn’t so high (compared to natural CO2 sources and sinks that I don’t know how to control) that I should need to take immediate, potentially harmful action with unknown and unintended consequences to restrict human-related CO2 emissions (your medical triage example suggested in a previous thread on decision making with limited data), as many climate scientists have called for. That is, if we can answer the question of the current thread closer to the 5-year residence time mark, then we can all agree we are not in a triage situation regarding control of CO2 emissions and that we have more time to work the problem of climate change, with perhaps different decisions and action plans. After suggesting some “first steps” to take in response to your call for defining such “first steps” in a previous thread http://judithcurry.com/2011/08/18/should-we-assess-climate-model-predictions-in-light-of-severe-tests/#comment-101642 , it occurred to me that if we could answer the question of this present thread, that would change the crisis atmosphere that many climate scientists believe they are working in, and that makes them worry so much about inaction in the face of such dire, but uncertain, predictions of their unvalidated models.

      • “If the human activity related CO2 atmospheric residence time is closer to 5 years as some scientists claim, and not the 100 or so years that you and NOAA apparently believe”
        Again this is the elementary misunderstanding that Judith seems to make no effort to dispel. Both figures are correct. Individual molecules are exchanged on a timescale of five years or so. And the increase in total CO2 takes a century or more to go away.

        It’s a bit like worrying about the Government printing money (OK, in pre-electronic days). In fact a huge number of notes are printed and destroyed each year. Any excess printed is small compared to circulation.

        But circulation, like exchange of molecules, is just that. What counts is change in the aggregate.
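        The distinction between molecule exchange and decay of the aggregate excess can be made concrete with a toy calculation (my own sketch, not from the comment; both timescales are assumed round numbers – roughly 5 years for molecule turnover and a single effective 100-year adjustment time, although the real decline is multi-exponential):

```python
import math

# Toy illustration of the two different "residence times" discussed above.
# Assumed numbers (illustrative only): individual CO2 molecules are swapped
# with the ocean/biosphere on a ~5-year timescale, while the *total* excess
# relaxes on a ~100-year timescale.
TAU_EXCHANGE = 5.0     # yr, molecule turnover (residence without replacement)
TAU_ADJUST   = 100.0   # yr, decay of the total excess (adjustment time)

def after(years):
    """Fractions remaining after `years`: (tagged molecules, total excess)."""
    tagged = math.exp(-years / TAU_EXCHANGE)   # original molecules still airborne
    excess = math.exp(-years / TAU_ADJUST)     # excess concentration still airborne
    return tagged, excess

tagged, excess = after(20.0)
print(f"after 20 yr: {tagged:.1%} of the tagged molecules remain, "
      f"but {excess:.0%} of the excess remains")
```

        After 20 years almost all of the original fossil-fuel molecules have been swapped out, yet most of the excess is still there – which is how a ~5-year figure and a ~100-year figure can both be correct at once.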

      • Marlowe Johnson

        I share your surprise Nick as this is a fairly common and relatively easy-to-dispel source of confusion. Aren’t you a climate scientist Judith? If you can’t take the time to answer simple questions like this, then what is the point of your blog?

      • Like I said in my main post, I suspect that this whole issue is much more complex than you are making it out to be. Your question, while simple, does not have a simple answer, IMO.

      • Marlowe Johnson

        While the particulars of estimating equilibrium response times are governed by multiple processes (as noted by Fred and Chris Colose downthread), it’s nevertheless trivial to point out that the atmospheric lifetime of an individual molecule isn’t the relevant issue…something which you bizarrely failed to do. Is your goal with this blog to sow confusion or dispel it?

      • The goal of the blog is to discuss scientifically relevant issues, which we are doing.

      • There’s again a mixture of simple things and less simple things.

        That we have two different residence times is one of the simple things. Estimates good enough to conclude that the increase in atmospheric CO2 over the last 50 years (and also over the last 100 years) is predominantly of human origin are also rather simple.

        More precise estimates of the uptake of carbon from the atmosphere into the other reservoirs are no longer simple, because none of the subprocesses is accurately known. The balance between the surface ocean and the atmosphere is the best understood of all, but even that is influenced strongly by the details of the buffering of pH in the oceans.

        One question on which I have not found data at the level I have been looking for is the role of the deep oceans as a reservoir. The total amount of carbon in the deep oceans is typically given as of the order of 40,000 GtC, which is 50 times the amount in the atmosphere. Its role in uptake over long periods has not been discussed in the papers that I have found. Archer skips the discussion in his papers, stating only the overall conclusion that 20-35% of CO2 remains until removed by sedimentation and weathering, and presents the 1990 model analysis of Maier-Reimer as the reference, which is not convincing. A Revelle factor of 10 would lead to a value of 17% once balance with the ocean has been reached, but is the Revelle factor for the deep ocean 10? The value depends on the present pH and on the nature of the buffering. Reaching full balance with the deep oceans also takes a lot of time, but Archer seems to include that in the faster processes.

        On the other hand, I don’t see the importance of the level of the very long tail either. I would rather think that what happens after the excess CO2 concentration has dropped to one half of its peak value is not likely to be a problem for the further future of the Earth and the people of those periods. They may very well think that the decreasing trend is then a new problem, and the lesser problem the lower it is.
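        The 17% figure mentioned above follows from a one-line balance; a sketch, assuming (as in the comment) a 50:1 ocean-to-atmosphere carbon reservoir ratio and a Revelle factor of 10:

```python
# Back-of-envelope check of the 17% figure: with ~50 times more carbon in the
# (deep) ocean than in the atmosphere, and a Revelle factor of 10 (the ocean's
# DIC rises fractionally only 1/10 as fast as atmospheric CO2), the effective
# ocean capacity for *excess* CO2 is 50/10 = 5 "atmospheres", so at equilibrium
# 1/(1 + 5) of an emitted pulse stays airborne.
OCEAN_TO_ATM = 50.0   # carbon reservoir ratio, ocean : atmosphere (assumed)
REVELLE = 10.0        # buffer factor (assumed, as in the comment)

airborne = 1.0 / (1.0 + OCEAN_TO_ATM / REVELLE)
print(f"{airborne:.0%}")  # ~17%
```

        The same arithmetic shows why the answer is sensitive to the Revelle factor chosen for the deep ocean: a larger buffer factor leaves a larger airborne fraction.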

      • Search under the term “biological pump” and “carbon cycle”

        It is really interesting stuff.

      • It is indeed. The partitions between the pumps (solubility and biological) are asymmetrically distributed, around 1/3 for the former and 2/3 for the latter.

        The effects in perturbation experiments are interesting: if we shut the biological pump off completely, we observe an increase from the preindustrial value of around 280 ppm to 450 ppm over similar timescales, e.g. Sarmiento et al. 2011.

      • A study of the decay of bomb-created radioisotopes
        http://nzic.org.nz/CiNZ/articles/Currie_70_1.pdf
        suggests a primary decay half-life of 13 years.
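        For reference, the quoted half-life can be converted to the e-folding time usually used in residence-time discussions (trivial arithmetic, sketched below; note that the bomb-14C curve tracks the one-way removal of labeled molecules, i.e. exchange, not the decay of a total-CO2 excess):

```python
import math

# Convert the 13-year "primary decay half-life" quoted above into an
# e-folding (1/e) time: tau = t_half / ln 2.
half_life = 13.0                      # yr, from the cited study
e_fold = half_life / math.log(2.0)    # yr
print(round(e_fold, 1))               # ~18.8 yr
```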

      • A. E. Ames, It was only about four months before the background radiation levels moved to a new, sustained level (see: gamma energy range 10)

        http://epa.gov/radnet/radnet-data/radnet-billings-bg.html

        & does ‘who’s or what’s’ make any difference when considering ‘half-life’?

      • Tom, I don’t know for sure, but I expect that the “non replacement removal time” for all CO2 isotopes should be the same within the accuracy of any experiment to measure it. (Molecular weight can matter in packing effects and substrate interactions.) Thanks for the interesting radnet site.

      • Interesting answer as Marlowe’s questions were

        1. Aren’t you a climate scientist Judith?

        Obviously a more complex question than Marlowe thought and

        2. If you can’t take the time to answer simple questions like this, then what is the point of your blog?

        We HAVE been wondering about that.

      • Eli, you and Marlowe seem to be confusing a climate scientist with someone who parrots the IPCC consensus.

      • Eli was just pointing out that you are avoiding the questions. This appears to be a sensitive point.

      • I get 500 comments here per day. I try to do a post per day. Not to mention my two day jobs. I only answer comments/questions that I can respond to within 60 seconds, and I have to be selective in which ones I answer. If someone raises something really interesting, I might do an entire post on it. People that are trying to play “gotcha” with me and want to tell me how I should be doing my job (here or more generally) typically get ignored by me.

      • Judith,
        The point at issue has nothing to do with any IPCC consensus. It is a matter of elementary physical chemistry. Freeman Dyson, hardly an IPCC parroter, set it out simply and explicitly in your second link:
        “He says that the residence time of a molecule of carbon dioxide in the atmosphere is about a century, and I say it is about twelve years.

        This discrepancy is easy to resolve. We are talking about different meanings of residence time. I am talking about residence without replacement. My residence time is the time that an average carbon dioxide molecule stays in the atmosphere before being absorbed by a plant. He is talking about residence with replacement. His residence time is the average time that a carbon dioxide molecule and its replacements stay in the atmosphere when, as usually happens, a molecule that is absorbed is replaced by another molecule emitted from another plant. “

        If FD finds it easy to resolve (and it is) why is it so hard here?

      • Nick, atmospheric residence time is NOT a simple issue of physical chemistry! I didn’t see any physical chemistry at all in your argument.

      • Nick, if you agree with Freeman Dyson’s skepticism and it’s already resolved by him, then why do you keep asking?

      • Judith,
        The elementary physical chemistry concept involved is dynamic equilibrium: the net result of a forward and a back reaction. The time period derived from isotopes is the one-way process of CO2 molecules being absorbed by plants or whatever. As Dyson points out, there is a back reaction – that CO2 gets back into the atmosphere. If you want to know how total CO2 will diminish, you have to consider the result of both forward and back. That, as Dyson spells out, is the simple basis for the two different figures.

      • Nick, you are forgetting about the dynamics of the system, that is where the complexity lies

      • couldn’t the same be said about the greenhouse effect re complexity of the system

      • Nick, you are thinking of it as a black box system. Some want to consider what is happening inside it, some may know it as a clear box. Why is this a hard concept for you?

      • Kermit,
        I am not talking about any sorts of boxes. The issue is clear. Two different figures have been quoted for CO2 residence time. Hal and others ask whether the IPCC is wrong. The answer, as Dyson explained, is simple. It has nothing to do with dynamics, boxes or anything else. They are talking about different definitions. If someone wants to make an issue of it, they need to explain how the figures are comparable.

      • Which “dynamics” would those be, exactly?

        Nick, you are forgetting about the dynamics of the system, that is where the complexity lies

        If you’re talking about the carbon cycle, those would be dynamics about which, in your own words, you do not “have any expertise!”
        Judith Curry: “The Earth’s carbon cycle is not a topic on which I have any expertise.”
        Or has Murry taught you all about the carbon cycle, within the month of August?

      • Yes Nick, we understand there are 2 definitions being used for residence time. One is large and one is small; both are valid. Which one sounds better to the IPCC? The larger, scarier one or the smaller one? This is exactly why you are going on and on about it. It’s just another shell game “trick” environmentalists use.

      • 1. You expect people to answer rhetorical questions?

        2. If this blog has no point, then why are there so many commenters?

      • Not that your opinions about the carbon cycle count for anything.

        Judith Curry: “Your question, while simple, does not have a simple answer, IMO.”

        With all due respect, you have already begged ignorance on this question and should “remain silent and thought a fool” rather than assert things you just don’t know, “and remove all doubt.” Really. Because by your own admission, you’re not qualified to make that judgment, as your opinion is not expert.

        Judith Curry: “The Earth’s carbon cycle is not a topic on which I have any expertise.”
        (emphasis added due to your repeated refusal, despite numerous requests, to offer any scientific justification of your assertion that “it is sufficiently important that we should start talking about these issues.“)

        All that you can truthfully say now about the carbon cycle is that you do not understand it. Or, you can revise your previous statement that you do not “have any expertise.” That of course is up to you. But what you cannot do, at least not consistently (in case you care about that), is claim 100% ignorance while you’re promoting Salby, but then within the time of a month (hardly enough time to have become expert if you weren’t already!), declare with any authority that anybody else’s analysis of same is wrong.

        You didn’t have the expertise to articulate any reason then that Salby’s analysis is right or even likely right, therefore you also don’t have the expertise now to say whether anybody else’s is wrong.

        Not having “any expertise” on this question, it’s just the analysis you can provide right here, right now, versus theirs. To prove Eli, Marlowe and Nick wrong, you must show that at least one term they neglect, or the sum of terms they neglect in their analysis, have magnitude equal to (at a minimum) or greater than the terms that they include in their analysis. This is not a matter of opinion. As a scientist (former?), you know that “IMO” doesn’t cut it. Your opinion counts for nothing. Either you can show that other terms invalidate their analysis, or you cannot and in that case you have no basis to fault their analysis.

      • All humanity is divided into two classes; those who don’t understand the carbon cycle and those who don’t understand that they don’t understand the carbon cycle.
        =============

      • No. It isn’t complex. Let me list a quick series of proofs:

        1) The ice core record clearly shows that the recent increase in CO2 concentrations is, to use an overused word, unprecedented in the Holocene period, and, indeed, in the last 800,000 years, and non-ice core approaches show that the current CO2 may exceed levels seen for the last 20 million years (Tripati, Science, 2009).

        But, maybe you choose to throw out paleoclimate records because… well, because they don’t fit your preconceived notions that human emissions are too small to be meaningful. So…

        2) The Revelle factor: straightforward bicarbonate buffer chemistry known since 1957-1958 (Revelle & Suess 1957, Bolin and Eriksson 1958) shows that “a 10% increase in the CO2-content of the atmosphere need merely be balanced by an increase of about 1% of the total CO2 content in sea water” (Sabine et al., Science, 2004 estimate that the uncertainty of this factor ranges from 8 to 16). This is the key element that means that the ocean CANNOT absorb all the CO2 proportionally the way that Henry’s law might suggest, and that therefore a decent percent of any CO2 emission will stay in the atmosphere for thousands of years until sedimentation processes have sufficient time to react (See Archer et al., Annu. Rev. Earth Planet. Sci. 2009, for a review of millenial processes). This is a key concept that many short residence timers have never figured out – they just look at the total gigatons of carbon in the ocean and figure that it can easily soak up any increase in the atmosphere, but they’re wrong.

        3) Carbon cycle models have done a pretty good job of explaining the difference between “residence time” and “adjustment time”, dating back at least as far as Rodhe and Björkström in 1979 (turns out that this decoupling is a result of this bicarbonate buffering). This (among other things) is what trips up people like Essenhigh and Segalstad.

        4) Carbon cycle models also do a decent job of explaining trends in isotope levels in the atmosphere. Eg, Stuiver et al. Earth and Planetary Science Letters 1981, or Stuiver et al. GRL, 1998. See also, “Suess effect”. People like Salby and Spencer do cute little regressions, but they don’t have real physical models that actually show how and why concentrations and isotope levels have changed the way they have: nor do they test their regressions against existing carbon cycle models – if they did, they’d see that the existing models produce the right signatures. Essenhigh has a model, but first he handwaves the increase in concentrations based on a simplistic understanding of the CO2 solubility-temperature relationship (Revelle and Suess realized that the magnitude of this relationship was insufficient to explain observed CO2 changes back in 1957), and Essenhigh derives lifetimes for 12C and 14C that are different by a factor of more than THREE!!! I still can’t believe that any competent reviewer with a basic knowledge of chemistry could have let that pass – isotope separation is HARD (see Uranium separation): kinetic isotope separation for seawater is on the order of a couple tenths of a percent, and for plant matter is maybe a couple percent at best, so where does a factor of 300% come from?! (also, the carbon cycle field realized that the ocean wasn’t perfectly stirred at least 35 years ago: see Oeschger et al, Tellus, 1975 for an early example of switching to a more realistic diffusive model).

        5) These things have been debunked before. I recommend O’Neill B, Measuring Time in the Greenhouse, Climatic Change, 1997.

        Does this mean that we understand the carbon cycle, or that our models are perfect? Not hardly. I recommend Doman AJ; van der Werf GR; Ganssen G; Erisman, JW; Strengers B, A Carbon Cycle Science Update Since IPCC AR-4, A Journal of the Human Environment 39(5-6):402-412, 2010 as a review of what the actual interesting questions in carbon cycle science these days are.

        So, to review: your suspicions are totally off-base. If this were a biology blog, this kind of post would be the equivalent of wondering whether the “missing link” disproves evolution. If it were an astronomy blog, it would be the equivalent of giving press-space to the guys who claim that the double shadow from a flag proves that the Moon landing was faked. By sticking to the position that this is a scientifically relevant topic of discussion, your status as a “meta-expert” is cast in doubt, since you are apparently not “able to distinguish a genuine expert from a pretender or a charlatan.” I grant you that you at least have figured out the Sky Dragons are charlatans – but, to go back to my astronomy blog analogy, that’s like figuring out that the guys who claim the Moon is made of green cheese are charlatans. Given your impressive publication record, you should be able to do a lot better than this. And maybe this should make you wonder whether you are a little too hasty to give credence to the “not-IPCC” crowd. And yes, the IPCC is certainly not perfect, but that doesn’t mean that if the IPCC says a clear mid-day sky looks blue, you should go around believing people who claim it is actually more like purple.

        -M

        (and yes, I do get frustrated at having to debunk stupid myths over and over and over again. There are plenty of real, interesting uncertainties regarding an issue as complex as climate change, and especially with regards to appropriate mitigation or adaptation measures, so it is really frustrating that the debate keeps getting bogged down in questions that were solved 50 years ago)

      • M,

        The Appeal to Evolution is always a curiosity, but it’s irrelevant to questions of climate, even the 1,000,000th time it’s tried.

        Andrew

      • It may be irrelevant to questions of climate, but it is very relevant to questions of self-delusion. I note that you did not address a single one of my technical arguments. I could also keep going:

        6) The northern-hemisphere/southern-hemisphere gradient, and the relationship of that gradient to increasing CO2 emissions (see graph in IPCC AR4 Chapter 7).

        7) The mass argument that many here have used before.

        8) Surface ocean pH is decreasing faster than pH at depth (an indication of diffusion, and of which direction the CO2 is going).

        I’d also point out Engelbeen’s webpage.

        Anyway, have fun with your delusions,

        -M

      • M,

        would you like to point out the empirical data that were used to show the Revelle Factor??

        OK, how about the experiments that were done??

        Well, what have you got other than assertions??

        Please copy from the papers copiously as I am very ignorant.

        (and yes, I do get tired of having to debunk the same old tired Junk Science over and over again every time someone doesn’t actually read and COMPREHEND the papers they link.)

      • Good job, kuhnkat – recognizing your own ignorance is the first step to wisdom!

        If you were to read Sabine et al., you’d find out that they used data from the World Ocean Circulation Experiment (WOCE) and the Joint Global Ocean Flux Study (JGOFS) to measure inorganic carbon. The Revelle factor can be calculated as proportional to the ratio between DIC and alkalinity. Therefore, Sabine et al. were able to produce a global map of the Revelle factor, ranging from 16 in cold Antarctic waters to 8 in warm tropical basins.

      • M,

        you just made the mistake of using bald assertions again. I told you, copy a lot. I do not expect people to remember stuff I wouldn’t if I were on the other side. I DO expect them to actually support their assertions with more than more arm waving. You simply state they say you can do it.

        SHOW ME!!! Show me why THEIR assertions are meaningful!!

      • Sigh. I’m not a carbon-cycle expert myself, but I have read enough of the literature to be able to give you somewhat of a tutorial.

        First: in pure H2O, CO2 is not very soluble, but it would follow Henry’s law. But seawater isn’t pure, there’s a lot of buffer in there, and that buffer allows seawater to dissolve a _lot_ more carbon than pure H2O would be able to.

        Second: The total carbon in seawater is equal to the carbon in the dissolved CO2/H2CO3 plus the HCO3- plus the CO3=.

        Third: Adding CO2 to the solution increases the acidity, driving the equilibrium towards CO2/H2CO3.

        Fourth: Therefore, the ratio of CO2/H2CO3 to (HCO3- plus CO3=) increases with added CO2.

        Fifth: The CO2 in the atmosphere is in Henry’s law equilibrium only with the CO2/H2CO3 in the solution. So, if we were to add a strong acid to the solution, increasing the ratio in part 4, that would decrease the total carbon in the ocean. Of course, when adding CO2 we’re adding both an acid and CO2 at the same time, and so the increased acidity merely means that the increase in total oceanic carbon is _less_ than the 1:1 you’d expect by pure Henry’s law rather than leading to a net loss of carbon.

        Okay: so that’s the theory. We can measure DIC (that’s Dissolved Inorganic Carbon, see part 2) experimentally. Bolin and Eriksson found that [CO2] = 0.0133 mmol, [HCO3-] = 1.9 mmol, and [CO3=] = 0.235 mmol. We can also measure alkalinity experimentally, which is important because alkalinity = [HCO3-] + 2[CO3=], and Bolin and Eriksson found alkalinity equal to 2.37 mval. You can combine that with the dissociation constants of H2CO3 and the solubility of calcium carbonate – k1/[H+] = 143 and k1k2/[H+]^2 = 18, the calcium concentration of seawater [Ca++]=10 mmol – and with Henry’s law you can solve for the Revelle factor and get about 12.5.

        And now I’m tired of typing stuff in, but I’ve found a non-paywalled reference for you: http://ocean.mit.edu/~mick/Papers/Omta-Goodwin-Follows-GBC-2010.pdf. I will note one caveat to my above data, which is that Egleston (2010), referenced in Omta (2010), shows that inclusion of borate in the alkalinity equation changes the answer somewhat, especially as the pH of the solution approaches 7.5. If you want more experimental data, Egleston uses alkalinity and DIC from the GLODAP project and temperature and salinity from the World Ocean Atlas (because the dissociation constants are functions of temperature and salinity).
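        As a check, the numbers quoted above can be pushed through a short script to recover the Revelle factor of about 12.5 (my own sketch, not from the comment: alkalinity is held fixed, CaCO3 solubility and borate are neglected, and the two speciation ratios are those given for the reference pH):

```python
import math

# Sketch: recomputing the Revelle factor from the quoted carbonate-system
# numbers (Bolin & Eriksson values; alkalinity fixed; CaCO3/borate neglected).
B_OVER_S0 = 143.0   # [HCO3-]/[CO2] at the reference pH  (k1/[H+])
C_OVER_S0 = 18.0    # [CO3--]/[CO2] at the reference pH  (k1k2/[H+]^2)
ALK = 2.37          # alkalinity (mval): [HCO3-] + 2[CO3--]

def speciation(h):
    """Carbonate speciation at relative [H+] = h (h = 1 is the reference),
    with alkalinity held constant."""
    b_over_s = B_OVER_S0 / h        # bicarbonate-to-CO2 ratio scales as 1/[H+]
    c_over_s = C_OVER_S0 / h**2     # carbonate-to-CO2 ratio scales as 1/[H+]^2
    s = ALK / (b_over_s + 2.0 * c_over_s)    # dissolved CO2 from fixed alkalinity
    dic = s * (1.0 + b_over_s + c_over_s)    # total dissolved inorganic carbon
    return s, dic

# Revelle factor R = (d ln pCO2)/(d ln DIC); by Henry's law pCO2 is
# proportional to dissolved CO2, so perturb [H+] and take the log-ratio.
eps = 0.01
s_lo, dic_lo = speciation(1.0 - eps)
s_hi, dic_hi = speciation(1.0 + eps)
R = math.log(s_hi / s_lo) / math.log(dic_hi / dic_lo)
print(round(R, 1))  # ~12.5, the value quoted above
```

        The script makes the mechanism visible: adding CO2 barely moves DIC while moving dissolved CO2 (and hence pCO2) roughly 12 times faster, which is exactly why the ocean cannot take up the excess in Henry’s-law proportion.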

        Sadly, I suspect that you are a troll, and that my work here is therefore meaningless, but perhaps others can learn something…

      • Thank you, M!
        It’s good stuff, what we are calling multi-physics these days.

      • “The quantity O has turned out to be particularly useful for a number of reasons. First of all, it is much more constant than the Revelle buffer factor that has been extensively applied in theories of the ocean‐atmosphere carbon partitioning.”

        So, this paper that finds the Revelle buffer factor to be less useful you think I should accept as proving the Revelle buffer factor??

      • Ah, yes, the subtle* difference between “this approach is better” and “the old approach is not useful”.

        Omta et al: “Unfortunately, R is not constant: it varies between approximately 8 and 15 at the ocean surface [Watson and Liss, 1998]. Furthermore, the globally averaged Revelle buffer factor depends strongly on the total amount of carbon in the ocean‐atmosphere system [e.g., Goodwin et al., 2007]. We now derive an alternative index that is more constant than R.”

        Translation for the purposes of my original argument: The Revelle factor is large (in this case, greater than 8). That’s all that’s needed for the use of the Revelle factor in support of the fact that CO2 has a long residence time in the atmosphere. You’ll note that in my original post I cited Sabine et al who also discussed this range of 8 to 16. Omta et al. also point out that the Revelle factor will increase with increasing emissions, up to a factor of 19 at 1000 ppm CO2, which just increases the proportion of CO2 which stays in the atmosphere. Nowhere does Omta say that the Revelle factor is wrong, merely that they prefer their new “O” factor because its behavior is less sensitive to changes in emissions and other factors (and that behavior is monotonic, even past 1000 ppm).

        So yes, this paper supports my point quite well. It also demonstrates the difference between real science (“let’s figure out how the Revelle factor changes over space and different scenarios, and whether there might be other factors that could be useful descriptions of the system”) and junk science (“the Revelle factor doesn’t exist, and the historic CO2 concentration increase might be due to magic fairies rather than human CO2 emissions, despite the fact that several completely independent methods all demonstrate otherwise”).

        -M

        *This is meant to be sarcastic, by the way. I realize you might not do well with subtle.

      • M,

        sorry, your sarcasm doesn’t work very well.

        Let’s start with the basics. Where is the definition of the Revelle Factor? That is, what are the components, the constant(s) if any, the equations, relationships?

        Next, what are the observations or experiments that these are based on?

        Sadly, you are typical of the knowledgeable types who deal with Climate Science regularly and never question the basics. They do not exist in your explanations and do not exist in that paper. Whether they actually exist in the papers referred to by you and the paper you linked I do not know.

        Here is a discussion about the same issue by Jeff Glassman and Pekka Pirilla:
        http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99490

        http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99650

        Again, there are no basics establishing that the Revelle Factor was EVER fundamentally established as a part of science. Pekka, like you, points to many issues that very well could have been a part of a Revelle Factor IF IT HAD EVER BEEN ESTABLISHED!!!

        It wasn’t. It is one of several myths of Climate Science that modern scientists are working around and filling in. Notice that Pekka states that the Revelle Factor could actually range to 1. So we have a mythological factor that can range from 1 to over 16 that is a kind of buffer effect for CO2 and water. Gee whiz. Color me impressed by all the hard science going on!! Any idea what the temperature curves are like? How about what actually causes the buffering and its curve or linear relationship? Yup, I am just overwhelmed by all the data you have inundated me with about the Revelle Factor.

        This is sarcasm in case you didn’t notice.

      • Let’s try a little reading comprehension here, Kuhnkat:

        You say: “Notice that Pekka states that the Revelle Factor could actually range to 1.”

        Pekka states: “Thus Henry’s law remains valid for fixed pH. The Revell factor is 1.0 in that case.”

        Um. Note the conditional in that sentence (you do understand conditionals, right? I don’t want to overtax your small brain, as you have previously admitted that you are “very ignorant”). If we keep pH fixed, then the Revelle factor is 1.0. But, in the real world, pH is not constant, and adding CO2 will make a solution more acidic. If you want to test that, extract some red cabbage juice (which makes a good pH indicator), and you can show that adding dry ice (or even just waving a juice-soaked towel in the air to absorb CO2) will increase the acidity. If you read my post starting at “First:” you’d see that the chemical theory is clear. Yes, the Revelle factor depends on the buffering, dissolved carbon, and temperature of the solution. That doesn’t make it “mythological”, it just makes it dependent on conditions. Is gravity mythological because it is 9.8 m/s^2 here, but 0 m/s^2 in interstellar space far from any massive bodies? I think not. I actually gave you every single constant and equation you needed (assuming you know what a dissociation constant is, but then, I’m not here to teach you high school chemistry, though you could clearly use a refresher). So the Revelle factor can be experimentally measured, and it has been, with the answer being “between 8 and 16 depending on where in the ocean you look”.

        Heck, you could dissolve some sodium bicarb in solution and measure Revelle factors yourself.

        You, my dear troll, are “very ignorant”. And this is why Curry’s attachment to her “e-salon” is of little value. It is saturated with people like you. Which is why the best blogs use moderation to keep the signal to noise ratio at some reasonable level.

      • You, my dear troll, are “very ignorant”. And this is why Curry’s attachment to her “e-salon” is of little value. It is saturated with people like you. Which is why the best blogs use moderation to keep the signal to noise ratio at some reasonable level.

        Curry’s E-salon is of high value because people here are very willing to learn from experts who drop by and impart knowledge, and occasionally the experts acknowledge something they pick up here which they were not previously aware of.

        Of course, the ‘experts’ running blogs which censor anything which threatens their apparent intellectual superiority won’t gain in this way, because experts who behave like arrogant pricks are generally unlikely to create an ambience in which they can teach well, or indeed learn.

      • M,

        how many other laws do climate scientists deal with that say they are only applicable at a fixed temp or at equilibrium – say Stefan-Boltzmann – yet they are used anyway KNOWING that the temp and pressure and flux are continuously changing?? Sorry, don’t want to tax your brain too much. You obviously have much more important things to do than try and educate this ignorant person.

        Now, do you think that every point in the ocean is changing quickly enough 24/7 that there is NEVER a time that the pH actually stays constant for a measurable length of time? Even if it doesn’t, wouldn’t the 1 be a LIMIT of the range??

        It was a nice try though, distracting from the issue that there is no support for an actual Revelle Buffer Factor presented here or in the papers linked, other than many scientists using or referring to it without definition or derivation. I will accept this as your acquiescence that YOU and Pekka do not have this to hand. Maybe you can research it and provide the information?

      • M

        I am humbled.

        Thank you.

      • M wrote “and yes, I do get frustrated at having to debunk stupid myths over and over and over again.”

        I have submitted a comment to Energy and Fuels explaining the error in Prof. Essenhigh’s paper; I did so in the hope of limiting the spread of this particular error, which does neither side of the debate any good. Prof. Essenhigh is right that residence time is about five years, but this is entirely uncontroversial; the IPCC put the figure at about four years. However, the rise and fall of atmospheric CO2 is not governed by the residence time, but by the adjustment time, and hence the conclusion is incorrect. My paper uses a one-box model, essentially identical to that used by Essenhigh, to explain the difference between residence time and adjustment time (amongst other things) and demonstrates that the observations are completely consistent with the generally accepted anthropogenic origin, but not with a natural origin. The paper has been conditionally accepted, I am just working on the corrections at the moment.

        This particular aspect of the carbon cycle is not straightforward and it is perhaps not surprising that this confusion of residence time and adjustment time should occur. I wouldn’t say this was a stupid myth; the solution wasn’t immediately obvious to me before I looked into it. Part of the reason why it has persisted is perhaps that it has been deemed too basic to have been discussed in detail in the peer-reviewed literature (until now).
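The residence-time/adjustment-time distinction drawn above can be made concrete with a toy model. This is not the one-box model from the paper being described, just an editor's three-box sketch with round-number parameters assumed for illustration (atmosphere 750 GtC, ocean mixed layer 1000 GtC, deep ocean 37,000 GtC, gross air-sea exchange 150 GtC/yr, Revelle factor 10, mixed-to-deep exchange of roughly 1/30 per year).

```python
# Toy three-box model: atmosphere <-> ocean mixed layer <-> deep ocean.
# Round-number parameters (assumed for illustration, not fitted):
A0, M0, D0 = 750.0, 1000.0, 37000.0   # reservoir sizes, GtC
gross = 150.0                          # gross air-sea exchange, GtC/yr
R = 10.0                               # Revelle buffer factor
g = gross / A0                         # exchange rate, 0.2 per year

# Residence time: atmospheric stock / gross outflux -- about 5 years.
residence_time = A0 / gross

def airborne(t_end, dt=0.1, nu=1.0 / 30.0):
    """Airborne fraction of a 1 GtC pulse after t_end years.

    Linearized dynamics: the Revelle factor makes the ocean's back-flux
    R times more sensitive to added mixed-layer carbon, and nu sets the
    slow mixed-layer <-> deep-ocean exchange.
    """
    a, m, d = 1.0, 0.0, 0.0
    for _ in range(round(t_end / dt)):
        down = g * a                      # extra flux into the ocean
        up = g * A0 * R * m / M0          # buffered back-flux to the air
        deep = nu * (m - (M0 / D0) * d)   # slow drain to the deep ocean
        a += dt * (up - down)
        m += dt * (down - up - deep)
        d += dt * deep
    return a

print(residence_time)   # 5.0 -- the "5 year" residence time
print(airborne(5))      # ~0.86: most of the pulse still airborne at 5 yr
print(airborne(1000))   # ~0.17: the adjustment has a tail of centuries
```

Same model, two very different numbers: the 5-year figure measures how fast individual molecules are swapped with the ocean, while the perturbation itself declines over centuries toward a nonzero buffered equilibrium.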

      • The paper has been conditionally accepted, I am just working on the corrections at the moment.

        Congratulations on your paper!

        I am glad that you and M have used “adjustment time” as the proper name to give to the impulse response settling time. As a part-time researcher, I haven’t run across this before, and didn’t realize that Rodhe defined this in 1979.

        Do you give a single value to the adjustment time or do you give it a range of values?

        This particular aspect of the carbon cycle is not straightforward and it is perhaps not surprising that this confusion of residence time and adjustment time should occur. I wouldn’t say this was a stupid myth; the solution wasn’t immediately obvious to me before I looked into it. Part of the reason why it has persisted is perhaps that it has been deemed too basic to have been discussed in detail in the peer-reviewed literature (until now).

        This is exactly what I continuously discover. Casual readers see the results being presented and they assume that some unverified software program is spewing nonsense because they don’t get the fundamental explanation. Once you start applying a compartment model (or box model, as you refer to it) to the problem space, the answer becomes obvious.

        Take a look at the comment in this thread I made a couple of days ago concerning my own attempt at a box model for sequestration:
        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-106310
        I bet that it supports your findings, and I did the analysis because I was having a hard time coming up with a fundamental understanding of the fat tail response curve. My only disagreement is that I do think it is straightforward, because it is the same thing I would do to model diffusion of dopants and other low concentration particles in a semiconductor material. Electrical engineers and material scientists consider that a straightforward problem, and this understanding is what enables all computers that we are using today (as we proceed to type away).
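The diffusion analogy in the comment above can be sketched numerically: a pulse absorbed by a semi-infinite diffusive medium leaves the surface reservoir with a t^(-1/2) power-law "fat tail" rather than an exponential decay. Units, grid, and the diffusion constant below are arbitrary; this is an illustration of the mechanism, not a calibrated carbon (or dopant) model.

```python
def surface_mass(t_end, n=400, dt=0.2):
    """Mass left in cell 0 after explicit 1-D diffusion (D = 1, dx = 1)
    of a unit pulse placed at the reflecting end of a long chain."""
    u = [0.0] * n
    u[0] = 1.0
    for _ in range(round(t_end / dt)):
        nxt = u[:]
        nxt[0] += dt * (u[1] - u[0])              # reflecting boundary
        for i in range(1, n - 1):
            nxt[i] += dt * (u[i + 1] - 2 * u[i] + u[i - 1])
        nxt[-1] += dt * (u[-2] - u[-1])
        u = nxt
    return u[0]

# For a t**-0.5 tail, quadrupling the elapsed time should roughly halve
# the mass still in the surface cell (an exponential would collapse it).
print(surface_mass(100) / surface_mass(400))  # close to 2
```

An exponential decay with any fixed time constant would make that ratio either near 1 (time constant much longer than 400) or astronomically large (much shorter); the factor-of-2 scaling is the signature of diffusive uptake.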

      • Mr. Marlowe Johnson, solve this problem, with the IPCC product? Where: (GI+GO)=0

      • That’s easily solved with a simple change of variables:
        Tom = G[arbage]

      • settledscience, We all know that AGW science has shown absolutely no problem changing the variable, to suit themselves. Once again, your crowd has no intention to address the question I put forward: How may we ‘solve this problem, with the IPCC “product”:) which is= 0.’

      • Stupid analogy. It’s not just the difference that is important. Knowing where the money is going and what it is doing provides insight.

        Kermit says, “Knowing where the money is going and what it is doing provides insight.” Sad to know that there is nothing left & that’s not right. What difference does it make now, that’s the question, we don’t know? What a problem a day makes…

        http://www.kitco.com/ind/willie/aug252011.html

        Hope, this will help.

      • I’ll waste a little bit of time on some basic chemistry here. Hopefully nobody else raised these points.

        The “residence time” is a pretty nebulous concept and doesn’t have much to do with the chemistry of the CO2 in and on the earth. The climate concept is that humans have injected(are injecting) a significantly large amount of CO2 into the atmosphere over a relatively short period of time. This excess CO2 does react and does not all stay in the atmosphere. So the concept of residence time would be “how long will it take for all the human-induced CO2 to be removed from the atmosphere and the level go back to the 280 ppm or so equilibrium”.

        The first question is “has CO2 EVER been in equilibrium in the atmosphere?” The paleo records all indicate that it has not been. The level has varied widely in different eras, generally lagging behind changes in temperature. Equilibrium chemistry is a very chancy thing. The best way to determine equilibrium is by determining the free energies of the reactions involved and the amounts of reactants, and calculating what the equilibrium values are in a phase diagram. This is totally impossible to do because we do not know the reactions involved or their rates – simple solution/dissolution mass flow in the oceans, uptake of CO2 as carbonate into plants and animals, rate of release from decay, rate of carbon accumulation in sediments, rate at which sediments are being subducted at the plate boundaries, etc.

        The second question is “who cares?” The amount of CO2 in the atmosphere fluctuates quite a bit on many time scales. Any notion of a residence time for CO2 in the atmosphere will depend almost entirely on the assumptions used to calculate it. Given the small numbers involved (CO2 in the atmosphere << CO2 in the oceans << CO2 and C in mantle and surface rock) it is only a small blip in the noise. Part of the “who cares” is that the climate doesn’t distinguish between molecules or atoms, only the concentrations involved, and to a tiny extent the isotopes involved. Once it is there, it goes where it goes and does what it does.

      • George M

        Wouldn’t your 280 ppm equilibrium be more properly 230+/-50 ppm steady state, seasonally and geographically normalized, if we’re talking paleo (to six sigma, and then allowing for uncertainty)?

        Though I doubt this much alters your argument.. whatever it is.

      • Hal – Since it will become clear to you from the various responses below that the actual decline of excess CO2 toward baseline occurs over centuries, rather than 5 years, with a long tail requiring many thousands of years, I would be interested in your response to the original question – how much should that concern us?

      • can you please not sidetrack the discussion before it starts

      • Stephen – I don’t think it is a sidetrack to refer back to Hal’s email that served as the reason for this entire post. The final paragraph of his email captured the essence of his point – if the residence time is very short (5 years), there is little reason to be concerned. He never answered whether there was a reason for concern if the residence time is much longer, which it is. That is a legitimate question to ask in my view.

      • Correct. Your comment is clearly on topic, not a sidetrack.

      • By “long tail” you are referring to a probability distribution, aren’t you?

        Some subsequent comments make reference to that phrase without seeming to understand it. Or maybe you’re using it differently than I expect. Please clarify.

      • That is nothing but unscientific hand-wavy claptrap.

        I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet.

        That does not address the perfectly clear, direct question you were asked, nor in your rambling, off-topic comment, did you ever directly, responsively answer that perfectly clear, direct question you were asked.

        Fred Moolten asked you:

        At what duration for the residence time of excess CO2 would you perceive that interval to be long enough to be of concern? How many years specifically for an interval representing an average residence time for the excess?

        “I don’t know” would have been a perfectly respectable, honest answer, Harold.

        Faking it is not respectable.

        In fact, you even stated categorically that you don’t know.

        I don’t know what the optimum level of CO2 in the atmosphere would be…

        But you should have been honest and just stopped there instead of trying to pretend you know something relevant to the subject, by babbling on about meaningless factoids.

        I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about “health related effects.”

        You mean “poisoning.” The level of CO2 that’s considered poisonous is 100% irrelevant to informed discussion of the CO2 greenhouse effect. It’s just a means of faking knowledge, which you don’t have.

        I don’t know what the optimum level of CO2 in the atmosphere would be…

        No, you don’t, nor do you know anything relevant to making informed estimates of the probabilities of longer or shorter residence times.

        … all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.

        Oh, is it really “somewhat likely” Harold? Based on what scientific expertise can you make that assertion? None. Can you quantify “somewhat likely?” No, you cannot. You don’t even know what quantities to compute, so you really have nothing of value to contribute — except for name-dropping, which obviously serves Curry’s interest in increasing her notoriety.

        This literature is reviewed and cited by my former NASA colleague, Apollo 17 astronaut, and former US Senator, Dr. Harrison “Jack” Schmitt …

        Funny that “scientist” is nowhere on his résumé! Less funny is that neither you nor Curry care that he has no scientific credentials whatsoever.

        That is, if we can answer the question of the current thread closer to the 5 year residence time mark, then we can all agree we are not in a triage situation regarding control of CO2 emissions and that we have more time to work the problem of climate change with perhaps different decisions and action plans.

        You just advocated fudging the science (“closer to the 5 year residence time mark”) in favor of a particular policy outcome (“different decisions and action plans”), the exact thing that you climate science deniers are always falsely accusing all the legitimate scientists of doing. I’m very sure that from his political days, your old pal “Jack” can explain to you what “unintentional truth-telling” means. :-)

        Amateur.

      • Harold H Doiron

        I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet. I don’t know what a harmful level of CO2 in the atmosphere would be. I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about health related effects (more than 20 times current levels). I don’t know what the optimum level of CO2 in the atmosphere would be, all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.

        That’s quite the credo. Been worshipping at the temple of Idsos, have we?

        I’ve considered your opinion for a week, as I wanted to give it careful thought.

        I see Chief Hydrologist has punctured and lampooned your faith, and he needs no help from me.

        However.

        For on the scale (we have good cause to believe by the paleo record, extrapolations, and SWAG) of ten million years, the CO2 level of the atmosphere has been ergodically steady at 230 ppm +/- 50 ppm. We’re over 44% above that mean now, which appears to be unprecedented on the span of a geological epoch.

        Certainly the ice core record of the past 800,000 years indicates this range to a high degree of certainty, as confirmed by the stomata count of plant fossils and myriad other evidences.

        Where we substitute our own judgement of ‘best’ for what has been the dominant mode of a principal component of our complex, dynamical, spatiotemporal world-spanning climate for a span of time an order of magnitude longer than the existence of our species, we exhibit what can only be called arrogance.

        Where we do it based on spinmeistering and public relations, we display profound folly.

        CO2 at levels above 200 ppm up to about 2500 ppm functions as an analog to plant hormones, not as a ‘nutrient’ or fertilizer.

        It’s not so very different to plants from steroids in professional athletes. It alters primary and secondary sexual characteristics, modifies structures, and results in additional mass in some parts of the plants.

        Over seasons and generations, plants adapt to higher CO2 levels in some ways, gradually tapering off in the ‘benefits’ realized, but retaining negative effects longer than they keep the benefits.

        That’s why plants as a group flourished quite as well at 180 ppm as at 280 ppm and at 380 ppm.

        The purported benefits of CO2 elevation have only one medium term experiment in the field that I know of, and although it demonstrates selective benefits in field conditions in terms of favoring some species over others, it doesn’t match the levels seen in hothouse conditions with unlimited nutrient and ideal growing conditions.

        In short, there is nothing clear, proven, or — if well-known — especially correct in mad schemes to profit from higher CO2.

        It won’t end world hunger, as Lord Lawson suggested in his book based on nothing more than speculation and wishful thinking.

        It’s a silly opinion. You’re welcome to hold it, but please don’t claim it’s clear or proven.

        Please factor the Uncertainty of your beliefs into your thinking.

        Because to my thinking, a Perturbation on unprecedented scales in a Chaotic system will tend to disturb ergodicity in unpredictable ways, which increases the cost of climate-related Risks to me.

        Those costs are real, and translate into money taken from me.

        And I don’t recall consenting to your CO2-worship picking my pocket.

  5. And let us not overlook the role of freshwater bodies as carbon sinks.
    This is apparently much larger than previously recognized by the climate science community.
    I would suggest that before we start diverting the topic, however nicely, into predictions about “X”, we should define the behavior of CO2 in the atmosphere more clearly.

  6. Tim Ball: “Pre-industrial levels were 50 ppm higher than those used in the IPCC computer models. Models also incorrectly assume uniform atmospheric distribution and virtually no variability from year to year. Beck found, “Since 1812, the CO2 concentration in northern hemispheric air has fluctuated exhibiting three high level maxima around 1825, 1857 and 1942 the latter showing more than 400 ppm.” Here is a plot from Beck comparing 19th century readings with ice core and Mauna Loa data…

    “Elimination of data occurs with the Mauna Loa readings, which can vary up to 600 ppm in the course of a day. Beck explains how Charles Keeling established the Mauna Loa readings by using the lowest readings of the afternoon. He ignored natural sources, a practice that continues. Beck presumes Keeling decided to avoid these low level natural sources by establishing the station at 4000 meters up the volcano. As Beck notes “Mauna Loa does not represent the typical atmospheric CO2 on different global locations but is typical only for this volcano at a maritime location in about 4000 m altitude at that latitude.” (Beck, 2008, “50 Years of Continuous Measurement of CO2 on Mauna Loa” Energy and Environment, Vol 19, No.7.) Keeling’s son continues to operate the Mauna Loa facility and as Beck notes, “owns the global monopoly of calibration of all CO2 measurements.” Since Keeling is a co-author of the IPCC reports they accept Mauna Loa without question.”

    (Time to Revisit Falsified Science of CO2, December 28, 2009)

    • Wagathon, the Beck paper is a joke. What they measure at Mauna Loa is a consistent record, same place year on year, clear of urban influences. Beck’s results are from multiple records, often close to industrial CO2 sources, using multiple analytical methods. Notice how the variability suddenly falls when more precise methods are adopted.

      • Consistent?

        From May 15th to 21st CO2 went up 1.84ppm

        From July 17th to 23rd CO2 went down 1.34ppm

        ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_weekly_mlo.txt

        There was even a 1.93ppm jump in 7 days recently.

        No consistency there.

      • wow, that’s an entirely new class of retarded argument

      • Your lack of curiosity never surprises me.

        If CO2 goes up 1ppm in a year it is a sign of man-made catastrophic climate change.

        If it goes up almost 2 ppm in 7 days it is a sign of the natural in and out breathing of the earth …

        “The sawtooth pattern represents the natural carbon cycle. Every summer in the northern hemisphere, grass grows, leaves sprout, and plants flower. These natural processes draw CO2 out of the air. During the northern winters, plants wither and rot, releasing their CO2 back into the air. This sawtooth pattern shows the planet breathing.”

        http://www.terrapass.com/blog/posts/science-corner

        The above explanation is a joke when you look at the weekly data.

      • Bruce,

        You miss the point entirely. While there may be short term fluctuations of the same order as the yearly increase, those fluctuations do not scale over time. So while you may indeed get an average reading on one day which is equal to the lowest reading from a year or so before, you are not going to get one which is equal to the lowest reading from a decade earlier, still less from several decades earlier, and it is this which reveals the trend, clearly and unmistakeably.
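The point that short-term fluctuations do not scale over time is easy to demonstrate with synthetic data. The numbers below (2 ppm/yr trend, 3 ppm seasonal cycle, half-ppm weekly noise, 340 ppm baseline) are assumptions chosen for illustration, not values from the actual Mauna Loa record.

```python
# Toy weekly CO2-like series: big weekly wiggles coexist with an
# unmistakable decadal trend. Entirely synthetic, illustrative numbers.
import math, random

random.seed(0)
weeks = 52 * 30  # 30 years of weekly values
co2 = [340 + 2.0 * (w / 52) + 3.0 * math.sin(2 * math.pi * w / 52)
       + random.gauss(0, 0.5) for w in range(weeks)]

# Weekly changes can be comparable to a whole year's trend...
biggest_weekly_jump = max(abs(co2[w] - co2[w - 1]) for w in range(1, weeks))

# ...but the yearly minima never revisit values from a decade earlier.
yearly_min = [min(co2[y * 52:(y + 1) * 52]) for y in range(30)]
decade_overlap = any(yearly_min[y + 10] <= yearly_min[y] for y in range(20))

print(biggest_weekly_jump > 1.5)   # True: large short-term swings exist
print(decade_overlap)              # False: the trend dominates at decade scale
```

With a 2 ppm/yr trend, a decade separates the yearly minima by about 20 ppm, far outside what the weekly noise can bridge, which is exactly the "fluctuations do not scale" argument in numbers.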

      • Your point is to ignore the explanation for the sawtooth component of the Mauna Loa graph because it is a joke.

        Why are there short-term fluctuations? Science would explain them. Propaganda explains them away with joke “causes”.

        If the “natural carbon cycle” can be 2ppm over 7 days, why not 2ppm over 365 days?

        CO2 follows temperature historically.

      • Chief Hydrologist

        wow, that’s not entirely a new class of numbnut argument

      • Bruce,

        Take those variations, and divide them by the average atmospheric concentration over the relevant period, and tell me what sort of variation you get in percentage terms. It is going to be well under 1% in all cases.

        This would seem more than sufficiently consistent for the purposes it is being used for.

      • OK, here is the graphical representation of that data.
        http://img197.imageshack.us/img197/9555/co2.gif

        Read again what jimbo said above: “… tell me what sort of variation you get in percentage terms ..”

      • “tell me what sort of variation you get in percentage terms.”

        Ok.

        AGW = .5% change in ppm per 365 days

        Jubany = .5% change in CO2 per 1 day

        But more importantly, this “sawtooth” pattern explanation seems awfully bogus since daily changes can be 2ppm.

        ““The sawtooth pattern represents the natural carbon cycle. Every summer in the northern hemisphere, grass grows, leaves sprout, and plants flower. These natural processes draw CO2 out of the air. During the northern winters, plants wither and rot, releasing their CO2 back into the air. This sawtooth pattern shows the planet breathing.”

        I mean … really! If you only looked at the yearly graph you might be naive to believe such an explanation.

      • Bruce, Why can you not just look at the data impartially? If that curve charted my pay rate in $/hour based on working commissions, I would not be complaining about the fact that there was a ripple and some noise in the overall upward slope.

        You may be having a problem with understanding signal processing mathematics, and in particular its digital counterpart. What oscillations you see, and their amplitude, are really a matter of the impulse response caused by a forcing function. Depending on the frequency response, daily fluctuations can be filtered out, yearly fluctuations filtered out less, and the longest periods filtered out the least. The fact that samples are taken at the same time each day points out the stark reality of the Nyquist criterion.

        The Nyquist criterion states that digital sampling at the same rate as a naturally occurring frequency will fold that value over to look like a constant value. You actually have to sample at twice the natural frequency, i.e. at the Nyquist rate, or measure the numbers twice per day, to see the daily fluctuations. We electrical engineers had this burned into our skulls during school, so we really see nothing weird about the data. I am not sure what your background is.
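The aliasing effect described above is easy to demonstrate: a pure one-cycle-per-day oscillation, sampled once per day at the same hour, collapses to a constant, while sampling twice per day (at this sampling phase) reveals the full swing. The amplitude, mean, and function name below are made-up illustrative numbers, not actual CO2 measurements.

```python
import math

def daily_cycle(t_days, amplitude=5.0, mean=390.0):
    """Hypothetical CO2-like signal with a one-cycle-per-day oscillation."""
    return mean + amplitude * math.sin(2 * math.pi * t_days)

once_a_day = [daily_cycle(d + 0.25) for d in range(30)]       # same hour daily
twice_a_day = [daily_cycle(d / 2 + 0.25) for d in range(60)]  # Nyquist rate

print(max(once_a_day) - min(once_a_day))    # ~0: cycle aliased to a constant
print(max(twice_a_day) - min(twice_a_day))  # ~10: full oscillation visible
```

Sampling at the signal's own frequency folds the daily cycle down to DC, which is why a once-a-day measurement protocol shows none of the diurnal variation.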

      • If your pay changed daily, and Human Resources explained it was because of yearly variations in the business cycle you would be very, very suspicious.

      • If your pay changed daily, and Human Resources explained it was because of yearly variations in the business cycle you would be very, very suspicious.

        Bruce, Do you have a problem with reading comprehension too? Note that I crafted my analogy to state that I worked based on commissions. Do I have to explain to you that commissions are susceptible to random fluctuations?

      • Paul B – perhaps the Beck paper is a joke – I haven’t read it. But Wagathon has touched on a real difficulty with the Mauna Loa data. The site is not ideal for assessing baseline CO2 in the atmosphere as it lies on the flank of a volcano which itself emits large and uncontrolled amounts of CO2. The only way I can think of to allow for such sporadic local contamination is to use the minimum values recorded within each time period, presumably daily, and dump the rest. Perhaps the resulting data are a reliable and meaningful guide to global CO2 levels. Indeed I’m fairly happy to accept that they are, although I would prefer to see a demonstration of this.

      • This oft-repeated canard about Mauna Loa being an unsuitable site for CO2 measurements is dealt with here:
        http://www.skepticalscience.com/mauna-loa-volcanoco2-measurements.htm

      • Could you check your link FiveString. I could not get it to work. Thanks.

      • The link is missing a dash between volcano and co2. Perhaps WordPress software removed it. It should be this

      • Wish this site had an ‘edit’ function… Thanks for the correction Pekka.

      • “It is true that volcanoes blow out CO2 from time to time and that this can interfere with the readings. Most of the time, though, the prevailing winds blow the volcanic gasses away from the observatory. But when the winds do sometimes blow from active vents towards the observatory, the influence from the volcano is obvious on the normally consistent records and any dubious readings can be easily spotted and edited out”

        They edit the data based on supposition? Not good.

        When you said the “canard” “is dealt with” I thought you might be talking scientifically not craptastically.

      • Now that your whining that by your speculation they might use some seemingly spurious measurements has been debunked, you’ve started whining that they should use known spurious measurements.

        You do know that 1984 was not intended as an instruction manual, don’t you?

      • CO2 from underwater volcanoes

        http://nwrota2009.blogspot.com/2009/04/getting-gas.html

        Estimated number of underwater volcanoes? 3 million+

      • Where?

        If this were anything but Plimer’s pipe dream we would see interesting profiles of various species like HCO3- in the oceans. We don’t.

  7. Arfur Bryant

    Hal,

    Let’s have a closer look at the 100-year lag idea…
    .
    The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750:
    .
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-human-and.html
    .
    ["Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750."]
    .
    So, by 1850 (100 years later) we should have been feeling the ‘full effect’ of the initial input. Thereafter, every year would see an increase in the ‘full effect’ of CO2. This would, logically, lead to an acceleration of the effect of CO2 and, according to the cAGW theory, an acceleration in global warming. So now we can look at the temperature data since 1850:
    .
    http://www.woodfortrees.org/plot/hadcrut3gl/from:1850/to:2011
    .
    The question is: Do you think that shows an acceleration? If you apply enough smoothing you may see one, but then you are adjusting how you interpret the data. The raw data graph shows no warming at all in the last 13 years (if anything a cooling), which itself casts serious doubt on any interpretation of ‘acceleration’. In addition, any reason given for the cooling – such as ‘natural variation’ – must allow that this ‘natural variation’ is powerful enough to combat not only the initial CO2 effect, but also the acceleration in that initial effect. In that case, natural factors easily outweigh any CO2 effect.
    .
    Personally, I think a more important question is: What is the contribution made by CO2 to the Greenhouse Effect? Until we can answer that question correctly, any theory/hypothesis/assertion regarding the supposed warming effect of CO2 is flawed as being based on an assumption.
    .
    I agree with hunter that the behaviour of CO2 needs to be ascertained more clearly.
    .
    Thanks for an interesting post.

    • Arfur – There is no reason necessarily to expect an acceleration simply because CO2 is increasing. That is because any warming reduces the radiative imbalance caused by the increased CO2 and thereby reduces the tendency for further warming. Depending on the rate of CO2 rise, we could see a steady warming, an acceleration, or warming at a reduced rate, although as long as CO2 is rising, the temperature trend averaged over multiple decades can be expected to be upward. Over shorter intervals, the effects of other, short-term climate drivers will modify the long term trend, as can be seen by examining the behavior of global temperature anomalies over the past 100 years, with their fluctuating ups and downs overlying an upward trend.

      The relevance of the residence time function is that it tells us how long a given CO2 concentration will continue to exert warming effects if the planet is out of balance. In general, that will be centuries.
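Fred’s verbal argument maps onto the simplest possible zero-dimensional energy-balance model, C dT/dt = F − λT: as temperature rises, the imbalance F − λT shrinks and warming decelerates even while the forcing persists. The heat capacity and feedback parameter below are illustrative assumptions, not fitted values:

```python
# Minimal energy-balance sketch: C dT/dt = F - lambda*T.
C = 8.0        # effective heat capacity, W yr m^-2 K^-1 (assumed)
lam = 1.25     # climate feedback parameter, W m^-2 K^-1 (assumed)
F = 3.7        # forcing from a CO2 doubling, W m^-2

dt = 0.1       # years; forward Euler over 200 years
T = 0.0
imbalance = []
for _ in range(int(200 / dt)):
    N = F - lam * T              # top-of-atmosphere imbalance
    imbalance.append(N)
    T += dt * N / C

print(round(T, 2))               # approaches the equilibrium F/lam
print(round(imbalance[-1], 3))   # imbalance decays toward zero
```

The equilibrium temperature is F/λ = 2.96 K here; whether the approach looks like acceleration or deceleration over any given interval depends on how the forcing itself is changing, which is Fred’s point.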

      • Arfur Bryant

        Fred,

        Not according to the IPCC…
        http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter3.pdf
        See FAQ 3.1 Fig 1 and note:
        ["Note that for shorter recent periods, the slope is greater, indicating accelerated warming."]
        .
        So it’s not just the CO2 increase, but the slight acceleration in the increase that supports my argument. If the increased CO2 leads to a radiative imbalance which leads to warming which leads to a reduction in the imbalance which leads to a reduction in warming – as you state – then what is the problem? The trend you speak of is 0.06 C per decade since 1850 and has not been increasing since 1850! That size of trend is not even mildly threatening, let alone ‘catastrophic’ (IPCC term, not mine)!
        .
        CO2 may ‘exert warming effects’ but these effects are not significant, as demonstrated by observed data. I repeat, until you know what the CO2 effect is, you are basing your argument on an assumption.
        .
        Off to bed now, will have to leave any further reply until tomorrow…
        Regards,

      • The recent temperature record is consistent with continued warming of about 0.15C/decade. Which in turn is consistent with AGW.

        That’s all there is to it. And where precisely has the IPCC used the specific phrase “catastrophic”?

      • So if the temperature falls 4C to the depths of the LIA, that’s natural, but if it recovers by 0.8C, it is man’s fault?

      • Arfur Bryant

        And using any start point other than the IPCC chosen start date of ‘accurate data’, ie, 1850, is misleading. What is your definition of ‘recent’? How about this one…?
        http://www.woodfortrees.org/plot/hadcrut3gl/from:1998/to:2011/plot/hadcrut3gl/from:1998/to:2100/trend
        .
        That’s ‘recent’, so where is your 0.15 C per decade trend there? If you move the start date to the right of 1850, you can get all sorts of trends but not one of them is the ‘overall trend’. So I just stick to the overall trend. That way, if the temperature starts to increase in an accelerative fashion, the overall trend WILL increase. In fact, the trend today is much lower than it was in 1880. Give me a call back when you can find the trend increasing above that one!
        .
        Oh, and the IPCC used the term ‘catastrophic’ here:
        http://www.ipcc.ch/publications_and_data/ar4/wg3/en/ch2s2-2-4.html

      • The 0.15C/decade trend is still there even in HadCRUT past 1998, it’s just masked by variation (solar cycle and ENSO). See: http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/

      • trend is still there
        it’s just masked

        Lol. Wot?

        how-fast-is-earth-warming?

        It isn’t. And hasn’t been since 2003.

      • There have been cooling impacts since about 2003 (2003 was solar max! there’s been a low solar minimum recently!). Plus 2003-2007 was pretty much el nino after el nino whereas since 2007 there have been 2 quite strong La Ninas.

        The earth is still warming, it’s just that since 2003 the above-mentioned cooling impacts have suppressed it.

      • Arfur Bryant

        lolwot,

        Faith is a truly powerful debating tool. If you say…
        ["There have been cooling impacts since about 2003 (2003 was solar max! there’s a been a low solar minimum recently!). Plus 2003-2007 was pretty much el nino after el nino whereas since 2007 there have been 2 quite strong La Ninas."]
        …Does that mean that there were no La Nina events between 1910 and 1940 or 1970 and 1998? What caused those warmings? You want to use CO2 to explain a warming, but claim it is overcome by natural forcings during a cooling period. That denies the likelihood that natural forcings can work both ways.

      • Arfur, any period of time which starts with El Ninos and ends with La Ninas (eg 2005 to 2011) will have a cooling bias due to ENSO that has nothing to do with the longterm warming trend.

        I mean what you are doing is little more sophisticated than saying temperature dropped from 1998 to 2000 and claiming this contradicts global warming. It does not. 1998 to 2000 was El Nino to La Nina. That’s short term noise able to overwhelm the longterm trend.

        2005 to 2011 has about a 0.1C cooling impact from falling ENSO. It also has a cooling impact from the solar cycle.

        In short that’s why the period 2005-present is kind of flat. It’s not because the longterm warming has stopped, it’s because ENSO and the solar cycle happen to line up over that period to cancel it out.
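The “correct for ENSO and the solar cycle” procedure this exchange keeps invoking amounts to a multiple regression of temperature on the known short-term drivers plus a linear trend (the approach of Foster & Rahmstorf 2011 and similar). A minimal sketch with synthetic stand-in indices, not real data:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2011, 1 / 12)            # monthly time axis
enso = np.sin(2 * np.pi * years / 3.7)           # fake ENSO index
solar = np.sin(2 * np.pi * years / 11.0)         # fake solar cycle
true_trend = 0.015                                # K/yr, assumed
temp = (true_trend * (years - years[0]) + 0.1 * enso
        + 0.05 * solar + 0.02 * rng.standard_normal(len(years)))

# Least-squares fit: temp ~ const + trend*t + b1*ENSO + b2*solar
X = np.column_stack([np.ones_like(years), years - years[0], enso, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"recovered trend: {coef[1]:.4f} K/yr")    # close to 0.015
```

The point of contention in the thread is not the regression mechanics but whether the fitted drivers really are “noise not signal”; the sketch only shows how the adjustment is computed.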

      • “any period of time which starts with El Ninos and ends with La Ninas (eg 2005 to 2011) will have a cooling bias due to ENSO that has nothing to do with the longterm warming trend.”

        Excellent point, so how much did the 30 year run of positive PDO starting with more la ninas and ending with more el ninos 1975-1998 contribute to the longterm warming trend?

      • About 0.1C warming

      • Arfur Bryant

        So, lolwot, what caused the other 0.5 C warming in that period? You are condensing your ENSO argument into a short period and appear unwilling to consider that the same natural causes you claim work against the cAGW theory in the short term can also exist during the warming periods that you attribute to CO2. If, as you state above, ENSO contributed only 0.1 C in that period, are you suggesting that CO2 is powerful enough to contribute another 0.5 deg C?
        .
        If that is your argument, what caused the warming from 1910 to 1945, and what caused the subsequent cooling?

      • “If, as you state above, ENSO only contribute 0.1C in that period, are you suggesting that CO2 is powerful enough to contribute another 0.5 deg C?”

        It may be even more powerful than that. Without aerosol emissions the amount of warming would have been greater than 0.5C
        .
        “If that is your argument, what caused the warming from 1910 to 1945, and what caused the subseuquent cooling?”

        Solar activity increased in the early 20th century. I think that plays a significant part of it. Between that and the reliability of the records back then I am not sure there is a problem there. Global temperature went just about flat after the 1940s until it started rising again during the 70s.

        lolwot,

        If the records were so unreliable back then, how do you know the increase since 1970 is unusual? The HadCRUT dataset goes back to 1850 and clearly shows three distinct warming periods. You want to discount the first two and concentrate on the latter one. Then you want to discount the levelling/cooling after the latter one. How many goalposts do the warmists want to move?

      • http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf

        “El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008) (Fig. 2.8a). The trend in the ENSO-related component for 1999–2008 is +0.08±0.07°C decade–1, fully accounting for the overall observed trend. The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00°±0.05°C decade–1, implying much greater disagreement with anticipated global temperature rise.”

      • Arfur – The IPCC site you linked to reinforces the points I made above, but if you have a question about a specific item, I’ll try to explain the reasoning behind it.

      • Arfur Bryant

        Fred,

        You could start with the bit that explains why the slight acceleration in CO2 can be construed as leading both to a reduction in warming and to an acceleration in warming, depending on how you feel at the time.
        .
        I repeat, the overall trend of 0.06 C per decade is NOT increasing. What is the problem?

      • How does a line “increase”? If you stick a line of best fit through the data from 1850 onwards then of course the line is going to be straight. That is what a line is.

      • the overall trend of 0.06 C per decade is NOT increasing

        That’s t-r-e-n-d Lolwot, not l-i-n-e

      • Of course it’s not increasing. It’s a straight line you’ve fit through all the data. What do you expect when you apply a line of best fit? That the end part will curve upwards?

        The fact is that recent warming is greater than the “overall trend of 0.06C” anyway.

      • What recent warming?

        Oh, you mean that warming, last millennium.

        :)

      • If you correct for ENSO and the solar cycle the warming has continued with no pause. You should correct for these things as they are noise not signal.

      • Arfur Bryant

        Ok lolwot, I’ll explain.

        Draw a hockey-stick curve (one that shows an acceleration in warming). Draw a straight line from the origin of the curve to a point a short way along the x-axis. Then draw successive lines, each starting from the origin, to successive points further along the x-axis. Each successive ‘line’ will be showing an increasing ‘trend’ because each successive line will be steeper than the last. Got it?
        .
        In terms of global temperature, IF the radiative forcing theory were correct, then the overall trend (drawn from the 1850 origin) would show a relatively consistent INCREASE.
        .
        It doesn’t. It’s really that simple. The observed data do NOT support the theory. The overall trend from 1850 to 2011 is 0.06 C per decade. The overall trend in 1998 was 0.067 Cpd. The overall trend in 1944 (another peak) was 0.06 Cpd and the overall trend in 1888 (the first peak) was 0.17 Cpd!
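Arfur’s origin-anchored construction is easy to make concrete: fit a least-squares trend from a fixed start year to a sequence of later endpoints. On a deliberately accelerating toy series (synthetic, not HadCRUT), each successive trend is steeper than the last, which is the signature he argues is absent from the observations:

```python
import numpy as np

years = np.arange(1850, 2012)
temp = 0.00002 * (years - 1850) ** 2      # accelerating toy curve

def trend_from_origin(end_year):
    """Least-squares slope from the fixed 1850 origin to end_year."""
    mask = years <= end_year
    slope = np.polyfit(years[mask], temp[mask], 1)[0]
    return slope * 10                      # degrees per decade

trends = [round(trend_from_origin(y), 3) for y in (1900, 1950, 2000, 2011)]
print(trends)   # strictly increasing for accelerating data
```

Whether this origin-anchored diagnostic is the right test of the theory is exactly what the thread disputes; the sketch only shows what the construction computes.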

      • Where you are wrong is that climate models don’t show your acceleration. Look at 20th century hindcasts. They simply do not show the acceleration you claim and yet I am 100% sure they show AGW.

      • Arfur Bryant

        Fine lolwot, you believe in your models and I’ll believe in observed data.

      • Fine lolwot, you believe in your models and I’ll believe in observed data.

        Humans have an intuitive sense of the way a model works. Take the place of an outfielder who is trying to catch a fly-ball. He will instantaneously model the trajectory based on minimal information. If the fielder followed your advice, he would wait for the baseball to land on the grass and then run to it. Lot of good that will do.

        And you can laugh at this analogy, but that’s the way we work. The only difference is that the practical human will innately use a model based on heuristics and acquired knowledge, whereas the scientist will use math and the corpus of scientific evidence.

      • Arfur Bryant

        You’re kidding me, right? How many fly-balls would the catcher have caught if he’d used the IPCC models instead of his own models based on his own historical evidence (ie. practice)?

      • You’re kidding me, right? How many fly-balls would the catcher have caught if he’d used the IPCC models instead of his own models based on his own historical evidence (ie. practice)?

        Well they wouldn’t catch any if they used Postma’s or the skydragon’s models.

      • Arfur Bryant

        Precisely!

        Which is why I don’t believe either set :)

      • They would be looking at where the average baseball lands and then they would try to catch the average every time. They would also overestimate the hitter’s ability due to the amount of CO2 in the air.

      • I know why they call them deniers. At every chance they try to stop another person from actively thinking and advancing an argument. The majority (with a couple of exceptions from the truly skeptical POV) will never actually help you and offer constructive criticism. Instead they just stomp around.

      • Arfur Bryant

        WHT,
        .
        [" The majority (with a couple of exceptions from the truly skeptical POV) will never actually help you and offer constructive criticism. Instead they just stomp around."]
        .
        If that is directed at me, then I beg to disagree. I have engaged with several warmists on this thread alone and I think I have done so in a respectful and reasonable manner. I will be happy to discuss CO2 (the thread subject) with you if you so wish.
        .
        How do you think it is possible for a trace gas at less than 0.04% concentration to significantly affect global temperature?
        .
        If you don’t wish to discuss this basic premise of the cAGW debate, just let me know. I will assure you of my respectfulness as long as you do the same.
        .
        Regards,

        Arfur

      • Arfur, so if the 2000s were 0.15 C warmer than the 1990s and the previous average was only 0.06 C, you don’t count that as an acceleration? Could you clarify your definition of acceleration?

      • Arfur Bryant

        Jim D,

        You are averaging out in decades? Yes, the 2000s were on average warmer. But cAGW was sold to Joe Public as an impending catastrophe on the basis of non-averaged data. If the cAGW theory were correct, the MBH98 curve would still be increasing today. The FACT that the temperature has not increased above 1998 by itself effectively disproves the theory.
        .
        You can hide all sorts of significant data if you smooth out enough. Look here:
        http://www.woodfortrees.org/plot/hadcrut3gl/from:1990/to:2010
        .
        Do you see an acceleration between the 90s and the 00s? Imagine you were a climber who walked from the start of the graph to the finish. You would have had to climb to the highest peak in 1998 but you would have spent a longer time at an averagely greater height in the 00s. By your argument of using decades, the climber would say he was ‘higher’ in the 00s, whereas history will show that he climbed the ‘highest’ in 1998.

      • We have just had the warmest decade, by 0.15 degrees. Why would that not be a sign of warming faster than your average 0.06 degrees per decade? It is an acceleration by any definition, and the projections have it accelerating further to 0.3 or more degrees per decade if the CO2 increase continues to accelerate.

      • Arfur Bryant

        Jim D,
        .
        Please read what I wrote.
        .
        Or try this… If the temperatures stay flat for the next two hundred decades (or more), you will still be able to say that each decade ‘is the (equal) highest decade’! Unfortunately, there will have been no temperature increase and no acceleration. The climber will be walking across a very large, flat and high plateau…
        .
        Of course, if the temperature increases, we may reach a point where the overall trend increases above what it is today. That will show an increased trend but not necessarily an acceleration. For an acceleration, each successive overall trend has to increase over the previous.
        .
        Using short-term, intermediate trend lines (as the IPCC did), is irrelevant, as each short-term trend can change greatly. Only the overall trend counts.

      • Arfur, my advice is to not look at anything less than a decade average when talking about climate. You can easily get confused by the up and down fluctuations of internal and short-term solar variability that are meaningless as they cancel out on the climate scale.

      • Arfur Bryant

        And my advice to you is not to believe in fairies, little green men from Mars and the genuinely ridiculous notion that a bunch of trace gases existing at a combined total of less than 0.04% has the potential to significantly affect the global temperature.
        .
        It’s a shame that no-one on the MBH98 team mentioned that the data should have had a ten-year smoothing when they sold the ‘rapid and accelerating’ idea to the politicians. Maybe a caveat that the next thirteen years would be likely to show no further warming would have given them a little more credibility?

      • I don’t understand the obsession with MBH98 here. That was in the AR3 report ten years ago now, and since then we have had an AR4 report in 2007 that overrides their time series, but anti-AGW people seem to cling to AR3 as being easier to criticize than AR4. Can you update your criticisms to AR4 at least, or don’t you have any?

      • “the genuinely ridiculous notion that a bunch of trace gasses existing at a combined total of less than 0.04% has the potential to significantly effect the global temperature”

        A 3.7 Wm-2 forcing per doubling is not insignificant. And CO2 levels being so low just makes it a hell of a lot easier to double them.

        When so many of your number so often make such ridiculous garbage arguments, it is no wonder climate “skeptics” have no credibility outside their own community.
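The 3.7 Wm-2 figure quoted here comes from the simplified forcing expression ΔF = 5.35 ln(C/C0) Wm-2 (Myhre et al. 1998, used in the TAR and AR4). Because only the concentration ratio enters, a small absolute concentration is no barrier to doubling:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2, relative to a
    pre-industrial reference concentration (280 ppm assumed)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560), 2))   # doubling from 280 ppm -> 3.71
print(round(co2_forcing(390), 2))   # ~2011 level -> about 1.77
```

Note this is a forcing (a flux perturbation), not a temperature; converting it to warming requires a climate sensitivity, which is the separate argument running through this thread.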

      • Jim D, it is one thing when people can’t agree on the facts. It is quite another when they can’t agree on the logic by which they reason about the facts. It is obvious that Arfur uses different rules of reasoning from you. As long as that remains the case I predict no resolution of your differences.

      • lolwot and Vaughan,

        The reference to MBH98 was solely for the purpose of arguing against your use of ten-year smoothing. There was no suggestion of ten-year smoothing when the graph was used to ‘sell’ cAGW to the public. There was no mention of ENSO cycles; there was just hype. To further your point on smoothing, it is interesting to note that warmists are now using the ‘smoothing argument’ to explain away the lack of warming since 1998. I wonder, if there is no further warming for the next eight years, whether you will insist on the use of a twenty-year smoothing?
        .
        As to my logic and the effectiveness of the radiative forcing theory, let’s have a closer look, shall we?
        .
        ["A 3.7wm-2 forcing per doubling is not insignificant. And CO2 levels being so low just makes it a hell of a lot easier to double it."]
        .
        Ok, so first of all, is Wm-2 a unit of heat? No. Is radiation the same as heat? No. Is the subject we are discussing called cAGW or cAGR? Well, it’s cAGW. Ok, so unless you can prove that the 3.7 Wm-2 figure is translated into some units of heat, and quantify it, your point is a pathetic attempt to whitewash the real problem. So, go on, tell me how much HEAT is attributed to CO2. I’ll ask again the question that NO warmist wants to discuss – what is the contribution of CO2 to the Greenhouse Effect?
        .
        Instead of Vaughan chipping in with his usual puerile snark, why don’t you ask yourselves why you don’t want to answer this simple question? What is it you’re afraid of? While you’re at it, maybe you could answer this question, since you seem to like the use of radiation so much – what was the Wm-2 figure in 1850, before the CO2 addition of 3.7?
        .
        Vaughan, if you can’t play nice, don’t play at all.

      • Arfur Bryant

        That last post should have started with ‘Jim D, lolwot and Vaughan…’

      • This notion of acceleration in AR4 was a blatant bit of manipulation by the IPCC. You simply can’t compare trends over different periods. And that is before asking whether they should be drawing linear trends through nonstationary data at all.

      • Arfur Bryant

        George M,
        Agreed.

    • How many humans?

      The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750

      How ‘industrialized’ was society, prior to 1950? 1900?

      Come on science deniar, think. Just this once. Criminy!

      • Arfur Bryant

        settledscience
        So, complain to the IPCC, not me!

        And I think you’ll find it’s ‘denier’, not ‘deniar’…

      • Have you forgotten the context of your own comment?

        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104326

        The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750:
        .
        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-human-and.html
        .
        ["Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750."]
        .
        So, by 1850 (100 years later) we should have been feeling the ‘full effect’ of the initial input.

        Wrong. Obviously, there has been more anthropogenic warming since 1850 because more anthropogenic greenhouse gases have been emitted since 1850 than from 1750 to 1850.

      • Well, it’s nice of you to try to sneak a post in after this thread has gone quiet, but you are the one who is wrong.

        The point IS within the context. It doesn’t matter if the contribution of humans is greater later in the century; the fact is that – using the 100-year lag theory – by 1850 we should have been seeing all the lag which started in 1750. Every year after that, and every year the anthropogenic contribution increases, we should be seeing an increasing warming effect (ie accelerating) because of that lag (if it exists).

        Unfortunately for you, and the other warmists, the lack of acceleration in the temperature datasets is a strong indication that the 100-year lag theory is wrong! If the lack of increased warming is due to ‘natural variation’, then this natural variation is not only capable of overwhelming the ‘radiative forcing’ of CO2 but also the hypothesised increase in the ‘CO2 effect’ caused by the lag!

        That was my point and it IS in context. I politely suggest it is YOU that needs to think…

        And it’s still ‘denier…’.

  8. “Stomata data on the right show higher readings and variability than excessively smoothed ice core record on the left. The stomata record aligns with the 19th century measurements as Jaworowski and Beck assert. A Danish stomata record shows levels of 333 ppm 9400 years ago and 348 ppm 9600 years ago.

    EPA declared CO2 a toxic substance and a pollutant. Governments prepare carbon taxes and draconian restrictions crippling economies for a completely non-existent problem. Failed predictions, discredited assumptions, incorrect data did not stop insane policies. Climategate revealed the extent of corruption so more people understand malfeasance and falsities only experts knew or suspected. More important, they are not rejected as conspiracy theorists. Credibility should have collapsed, but political control and insanity persists – at least for a little while longer.” ~Dr. Tim Ball

    • Tim Ball’s credibility collapsed long ago

      • AGWers do like smearing people.

      • “Climategate revealed the extent of corruption so more people understand malfeasance and falsities only experts knew or suspected.”
        Right on, man!
        “Tim Ball’s credibility collapsed long ago”
        AGWers do like smearing people.

      • ClimateGate did show corruption. When people get caught hiding declines, or making one tree in Yamal a global treemometer or keeping papers that are inconvenient out of the journals, that is corruption.

        And we get to point it out. And you get to try and cover it up as per your usual modus operandi.

        What has Tim Ball done to you Nick?

      • Right, because we don’t hear about “The Team” or “Mike Mann” on every occasion?

        The difference is Tim Ball actually deserves to be smeared

      • Chris,
        You keep missing the chance to treat your betters with civility.

      • Ha you’ve got to be kidding. I for one am sick of the likes of Ball talking utter crp and getting a free pass from you guys either because you are all so damn ignorant you don’t even see the blatant errors of your “betters” or because you guys are applying one of the most brazen double standards.

      • Mike,
        Can you delete any emails you may have had with Keith re AR4? Keith will do likewise… Can you also email Gene [Wahl] and get him to do the same? I don’t have his new email address. We will be getting Caspar [Ammann] to do likewise.
        Cheers, Phil

      • I actually don’t give a crp about emails.

        The thing is skeptics are fine at understanding errors that are mundane like Al Gore getting the Sun the wrong temperature. And they are fine at understanding complex errors in details of paleoclimate studies.

        But amazingly, and this is very hard to believe it isn’t deliberate, they utterly fail to understand the errors in stuff midway between the two, eg the errors throughout people like Tim Ball’s gishgallops.

      • lolwot,
        The great thing about true believers is how they can pretend that calls to hide data and act corruptly is OK.
        Chris,
        Why does anyone deserve smearing? Smearing is when someone is damaged by people lying about them.
        I guess smearing makes sense for believers.

      • the thing is I understand why they did those things. And it wasn’t because they were corrupt. It was because they were dealing with malicious fools and they didn’t want to give them an inch.

      • Yes Lolwot, they probably were thinking along the lines you so vividly describe.

        How scientific.

      • Dealing with “skeptics” is not about science. The skeptics dabble in rumour and spin. It’s more like politics than science. Skeptics couldn’t even interpret the CERN cloud paper correctly without spinning it for example.

        The scientists who realize how anti-science “skeptics” are simply cared not to give them any time and that’s all there is to it.

        Of course as a “skeptic” you are blinded about that.

      • As evidenced by the Climategate emails, they also ‘cared’ enough about their cause to support it with dodgy statistical methods, manipulated data and a determined effort to control the journals and the peer review process. Followed by a determined effort to delete the evidence of their malfeasance and slander the people exposing their nefarious activities.

      • The biggest skeptic lie about climategate is the idea that it was the scientists abusing the journals, when in actual fact it was the “SKEPTICS” who abused the journals and subverted peer review.

        I think in psychology they call it “projection”. Or at least it is a matter of blaming scientists for the crimes of the “skeptics”

        I mean seriously, some of the crud that skeptics push through the peer review system and promote (G&T, the Beck paper) would be embarrassing if skeptics’ double standards on the matter weren’t so reckless to human civilization.

      • Simple yes/no question for you, bloke.

        Did Stephen McIntyre have any right to the data for which he enjoined all the readers of his ‘blog to inundate CRU with FOI requests? If “yes,” then on what basis did Stephen McIntyre have any right to those data?

      • settledscience | August 28, 2011 at 12:38 pm |

        Simple yes/no question for you, bloke.

        Did Stephen McIntyre have any right to the data for which he enjoined all the readers of his ‘blog to inundate CRU with FOI requests?

        Yes

        If “yes,” then on what basis did Stephen McIntyre have any right to those data?

        On the same basis that the information commissioner used to force the CRU to release the data to another party who requested it: Freedom of information.

        I know there are lots of reasons why *some scientists* resist the principle; most of them run contrary to the scientific method. However, the law is on the side of those who wish to examine their claims.

      • tallbloke,

        you left out the part where the research was paid for with public funds extracted from us through taxes. Remember that minor bit where Jones received money from the US too?? Then there are the EU Regulations that make ALL climate data public.

      • Wrong, tallbloke. McIntyre has no right to data that is protected by a legally binding confidentiality agreement. He is also not a scientist and has no right to be treated as one.

        Regulation 12(5)(f) applies because the information requested was received by the University on terms that prevent further transmission to non-academics
        Regulation 12(1)(b) mandates that we consider the public interest in any decision to release or refuse information under Regulation 12(4). In this case, we feel that there is a strong public interest in upholding contract terms governing the use of received information. To not do so would be to potentially risk the loss of access to such data in future.
        I apologise that not all of your request will be met but if you have any further information needs in the future then please contact me.

        http://climateaudit.org/2009/07/24/cru-refuses-data-once-again/

        This all has to do with intermediate computations which McIntyre pitched such a fit about not having given to him, because he is not competent to perform the same computations on the raw data. He was allowed access to the raw data all along, but he just isn’t smart enough to figure out what to do with it.

    • Yes, Wagathon, we can all agree that Tim Ball talks nonsense a lot. I assume that was your point?

  9. Dyson makes a lot of sense, though I may have picked a name other than carbon eating plants. Land use changes have an impact that may be underestimated. After reading the reference links I can confidently say the residence time is between a few years and a few hundred years. Based on the IPCC science-by-consensus methodology, their estimate is “likely” on the high side :)

  10. Judith Curry 8/24/11, CO2 residence time

    The question of CO2 residence time reaches into a raft of IPCC errors, already well covered in Climate Etc.

    The residence time of CO2 in the atmosphere is taught in 11th year public school physics as the leaky bucket problem. The formula in IPCC’s TAR and AR4 glossaries is correct, but, alas, used nowhere in those IPCC Reports. It’s 1.5 years if you include IPCC’s leaf water, but if you ignore leaf water as IPCC does, it’s 3.5 years. Jeff Glassman’s response to Vaughan Pratt, “Slaying the Greenhouse Dragon. Part IV” thread, 8/15/11, 3:56 pm.
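
    The leaky-bucket arithmetic here reduces to residence time = stock / total uptake flux. A minimal sketch; the stock and flux values are my illustrative reading of round numbers from AR4 Figure 7.3 and the TAR leaf-water term, not figures stated in this comment:

```python
# Leaky-bucket residence time: atmospheric stock divided by gross uptake.
# All numbers are illustrative round values (GtC and GtC/yr), assumed
# from AR4 Figure 7.3 and the TAR leaf-water uptake term.

STOCK = 762.0        # atmospheric CO2 stock, GtC
LAND = 120.0         # gross terrestrial uptake, GtC/yr
OCEAN = 90.0         # gross ocean uptake, GtC/yr
LEAF_WATER = 270.0   # TAR leaf-water uptake, GtC/yr

def residence_time(stock, *uptake_fluxes):
    """Residence time in years: stock / sum of removal fluxes."""
    return stock / sum(uptake_fluxes)

tau_no_leaf = residence_time(STOCK, LAND, OCEAN)                # ~3.6 yr
tau_with_leaf = residence_time(STOCK, LAND, OCEAN, LEAF_WATER)  # ~1.6 yr
```

    With these inputs the two figures land close to the 3.5-year and 1.5-year values quoted above; the point of the sketch is only that the formula is a one-liner, not that these fluxes are exact.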

    See my response to Joel Shore, who teamed with Eschenbach on WUWT, to mistakenly propose that the residence time of a slug or pulse of CO2 was somehow different from their freshly minted lifetime of a molecule of CO2. Id., 6:24.

    For IPCC’s other formula, the Bern formula, see my response to Bruce, “Time-varying trend in global mean surface temperature” thread, 7/16/11, 12:39 pm.

    Or my response to Fred Moolten, “Energy imbalance” thread, 4/20/11, 12:33 pm re the physically unrealizable model for the uptake of CO2.

    Or my response to Pekka Pirilä, “Radiative transfer discussion” thread, 1/9/11, 11:20 am, and the ensuing, embedded discussion.

    IPCC needs a slow uptake of CO2 to make it well-mixed in the atmosphere, thereby justifying its shift of the MLO CO2 record from regional to global, and thus to attribute the 50 year CO2 bulge to man’s emissions. A fallout beneficial to IPCC’s plan to scare the public was that CO2 would acidify the surface layer, all according to the chemical equations with equilibrium stoichiometric coefficients. IPCC reported the chemical equations (AR4, Eqs. 7.1, 7.2, p. 529), and used the equilibrium coefficients for its approximate ratio CO2:HCO3^-:CO3^2- of 1:100:10, id., a solution given in the literature by the Bjerrum Plot, but, along with the coefficients, never mentioned by IPCC.

    The Bern formula outlined in these references (AR4, Table 2.14, p. 213, fn. a.) is an attempt to justify a slow uptake of CO2 in the ocean, quantifying four fates for ACO2: (1) some to the Solubility Pump, (2) some to the Organic Carbon Pump, (3) some to the CaCO3 Counter Pump, and (4) the remainder remaining in the atmosphere. AR4, ¶7.3.1, pp. 511 ff., Figure 7.10, p. 530. The formula implies the existence of partitions or channels to feed the three different processes and the null process. Partitions or channels don’t exist in the real world. Instead, all atmospheric CO2 is accessible to the surface layer via the solubility pump under Henry’s Law (omitted by IPCC). Contrary to the belief of IPCC and its supporters posting here, the surface layer is never in equilibrium. Also, the two biological pumps can’t work from CO2_g but need ionized CO2_aq. The Revelle Buffer didn’t work when Revelle and Suess first tried it in 1957, and its resurrection by IPCC is a scientific failure, but may prove a political success. In attempting to measure the Revelle Buffer, IPCC accidentally measured Henry’s Law, which the lead author in review concealed so as not to confuse the readers.

    And looking into IPCC’s model with a little more depth, one finds that IPCC has a flux of ± 90 GtC/yr or so between the atmosphere and the ocean of natural CO2, a net of zero including the terrestrial flux, while only ACO2 is subject to IPCC’s residence time bottleneck. This is another physical impossibility. The two species of gases differ only in their isotopic mix, 12CO2:13CO2:14CO2, and no assignment of absorption coefficients for these three forms of molecules can begin to satisfy IPCC’s uptake model.

    To get an intuitive feel for how long it takes water to absorb CO2, see Marshall Brain’s video, at blogs.howstuffworks.com/2010/09/17/diy-how-to-carbonate-your-own-water-and-save-big-bucks-on-club-soda/.

    Henry’s Law is instantaneous on even weather scales, much less climate scales. Henry’s Coefficients depend on temperature and pressure first, and salinity a distant third. IPCC’s notion that the coefficients depend on the carbonate state or pH of the surface layer is novel physics.
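
    The temperature dependence ranked first here can be given a rough number with the standard van ’t Hoff form of Henry’s coefficient. The reference value and the 2400 K constant below are typical fresh-water handbook figures, assumed for illustration only (seawater values differ):

```python
import math

KH_REF = 3.4e-2       # mol/(L·atm) for CO2 at 298.15 K (illustrative)
T_REF = 298.15        # K
C_VANT_HOFF = 2400.0  # K, -ΔH_sol/R for CO2 (typical handbook value)

def henry_coefficient(T):
    """CO2 solubility coefficient at temperature T (kelvin), van 't Hoff form."""
    return KH_REF * math.exp(C_VANT_HOFF * (1.0 / T - 1.0 / T_REF))

# Warming from 15 C to 16 C cuts solubility by roughly C/T^2, about 3%:
# warmer water holds less CO2 at the same partial pressure.
ratio = henry_coefficient(289.15) / henry_coefficient(288.15)
```

    The sketch says nothing about kinetics; it only quantifies how strongly the equilibrium coefficient shifts with temperature.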

    • I have described the relationship between Henry’s law and Revelle factor in this comment

      http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99650

      The claims made in the message of Jeff Glassman are not correct. The Revelle factor is a result of well known chemistry and not in contradiction with Henry’s law, which applies to undissociated CO2 in seawater, not to the total solubility including bicarbonate ions, whose share is more than 99% of the total solubility. The value of pH of the oceans is really essential and salinity has its effect through its influence on pH. The Revelle factor is not present if pH remains constant, but additional CO2 leads unavoidably to some decrease in pH and the Revelle factor is a manifestation of this change.
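
      The “more than 99%” share claimed here for the dissociated forms can be checked from the standard carbonate equilibrium expressions (the same system the Bjerrum plot solves). The dissociation constants below are assumed illustrative seawater values at roughly 25 °C, S = 35:

```python
K1 = 10.0 ** -5.86   # first dissociation constant of CO2 in seawater (assumed)
K2 = 10.0 ** -8.92   # second dissociation constant (assumed)

def speciation(pH):
    """Fractions (CO2*, HCO3-, CO3--) of total dissolved inorganic carbon."""
    h = 10.0 ** -pH
    denom = h * h + K1 * h + K1 * K2
    return h * h / denom, K1 * h / denom, K1 * K2 / denom

f_co2, f_hco3, f_co3 = speciation(8.1)  # typical surface-ocean pH
dissociated_share = f_hco3 + f_co3      # bicarbonate plus carbonate
```

      At pH 8.1 the undissociated fraction comes out well under 1%, with bicarbonate dominant, consistent with the share quoted in the comment.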

      His criticism of “the Bern formula” is also erroneous and appears to tell of not understanding the basics of the approach.

      It’s unfortunate that I don’t know good comprehensive descriptions of all essential arguments. Many sources contain valid partial descriptions, but not the whole argument. One reason for that may be that the basics have been known so long that repeating the arguments has appeared unnecessary and scientifically uninteresting. Another reason is that the knowledge of the subprocesses remains highly inaccurate. Recent publications tell clearly that very little is known with high accuracy.

      On the annual level the uncertainties are tens of percent of the average increase in atmospheric CO2, but over longer periods the relative uncertainties are reduced, as constraints based on knowledge of the reservoirs of carbon get gradually tighter. Absolute proofs may remain unachievable, but the reasons to believe that the overall picture is well understood are strong. That means in particular that the impulse response to a sudden pulse of CO2 to the atmosphere has been estimated with fairly good accuracy for periods up to 100-200 years and for levels of concentration within a factor of two from the present.

      What has been published on the long-term response over hundreds of years is less convincing, as there are fewer constraints on it from the empirical data and as it remains dependent on the poorly known dynamics of the oceans. Similarly, arguments on the reduction in ocean uptake with increasing CO2 amounts are based on less well known phenomena.

      • Pekka Pirilä 8/25/11, 4:52 am CO2 residence time

        Revelle and Suess (1957) postulated that a peculiar buffer mechanism existed in sea water that created a bottleneck against dissolution of CO2. It was not based on chemistry but was a mere assumption. R&S applied it to the annual amount of industrial CO2 added to the atmosphere, that is, the brunt of anthropogenic CO2. It was the conjecture needed to justify the Callendar effect, now AGW, by having ACO2 accumulate in the atmosphere. This was desirable in 1957 to support the Keeling Curve, just getting underway by Charles Keeling, Revelle’s protégé.

        R&S found,

        It seems therefore quite improbable that an increase in the atmospheric CO2 concentration of as much as 10% could have been caused by industrial fuel combustion during the past century, as Callendar’s statistical analyses indicate. It is possible, however, that such an increase could have taken place as the result of a combination with various other causes. The most obvious ones are the following:

        1) Increase of the average ocean temperature of 1ºC increases PCO2 by about 7%. …

        That increase due to temperature is a result of Henry’s Law, which was mentioned neither in R&S nor by IPCC in TAR or AR4.

        The paper concluded,

        Present data on the total amount of CO2 in the atmosphere, on the rates and mechanisms of CO2 exchange between the sea and the air and between the air and the soils, and on possible fluctuations in marine organic carbon, are insufficient to give an accurate base line for measurement of future changes in atmospheric CO2. An opportunity exists during the International Geophysical Year to obtain much of the necessary information.

        R&S(1957) was actually a pitch for IGY funding.

        R&S were unable to set the parameters for their buffer conjecture to satisfy their boundary conditions. When IPCC reported on attempts to measure the Revelle Buffer factor, it produced a graph showing the Revelle Buffer varied with temperature. AR4, Second Order Draft, Figure 7.3.10 (a). That graph was a linear transformation of Henry’s Law. Nicolas Gruber, a reviewer, said that the Revelle Buffer has almost no temperature sensitivity. Gruber’s distinction was that made by R&S: the increase anticipated for CO2 could have been due to temperature because of the other cause, that associated with Henry’s Law, ¶1), above. The Revelle Buffer is no more and no less than a conjecture for the modification of Henry’s Law for the absorption of CO2. When IPCC attempted to resurrect the Revelle Buffer, it rediscovered Henry’s Law. IPCC’s editor wrote,

        The buffer factor has a considerable T dependency (see Zeebe and Wolf-Gladrow, 2001). However, it is right that in the real ocean, this T dependency is overridden often by other processes such as pCO2 changes, TAlk changes and others. The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader. The text was changed. Bold added, 6/15/06.

        For a complete discussion, see On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, … or CO2: “WHY ME” by following the link at my name in the header.

        In other words, the Revelle Buffer is in dispute as to its temperature dependence, and when measured it looks like Henry’s Law rescaled. It can’t be pinned down because it is a phantom. Regardless, IPCC concealed these discrepancies in its final version of AR4.

        While reasonable doubt exists as to the validity of the Revelle Buffer conjecture, a more important issue is that R&S and IPCC applied this conjecture to ACO2 and not to natural CO2. This selective application lacks any support in physics, and must be deemed an error. Moreover, the magnitude of that error is scaled by the ratio of natural CO2 outgassing from the ocean, about 90 GtC/yr, to ACO2 emissions, about 6 GtC/yr.

        PP: The Revelle factor is a result of well known chemistry and not in contradiction with Henry’s law, which applies to undissociated CO2 in seawater, not to the total solubility including bicarbonate ions, whose share is more than 99% of the total solubility. The value of pH of the oceans is really essential and salinity has its effect through its influence on pH. The Revelle factor is not present if pH remains constant, but additional CO2 leads unavoidably to some decrease in pH and the Revelle factor is a manifestation of this change.

        (1) Notice that Pekka provides no citations. He recites from memory.

        (2) On the solubility of gases in water, and including CO2, the Handbook of Chemistry and Physics, 72d Ed., says,

        Solubilities of those gases which react with water, … carbon dioxide, … are recorded as bulk solubilities, i.e., all chemical species of the gas and its reaction products with water are included. P. 6-3.

        This formulation of gas solubility, unaffected by the postmodern physics of AGW, contradicts Pekka Pirilä’s undissociated CO2 assertion.

        (3) Pekka Pirilä could have found support for his undissociated model in Zeebe & Wolf-Gladrow’s Encyclopedia of Paleoclimatology and Ancient Environments, 2008, (available on line). However, Z&W-G define Henry’s Law only for thermodynamic equilibrium, and then in proportion to the sum of the concentrations of the two molecules of CO2_aq and H2CO3.

        (3.1) In thermodynamic equilibrium, the ratios are known by the solution to the carbonate equations along with the stoichiometric equilibrium constants. The solutions are given by the Bjerrum plot. Wolf-Gladrow, D., CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 6/24/06 (available on line), taken in part from Zeebe and Wolf-Gladrow, 2001, IPCC’s source for carbonate chemistry (AR4, ¶7.3.4.1 Overview of the Ocean Carbon Cycle, p. 528), excluding the Bjerrum plot. Since for thermodynamic equilibrium all the reaction products are in a known proportion, changing from one species, e.g., undissociated CO2, to any mix of species is just a matter of a known scale factor.

        (3.2) The surface layer of the ocean is quite turbulent, contains entrained air, and undergoes thermal exchanges with the atmosphere and the deep ocean. It is never in equilibrium, which Pekka Pirilä, IPCC, and others ignore. The undissociated form of Henry’s Law is not applicable to the real world.

        (4) CO2 is highly soluble in any water, and dissolution always occurs. It proceeds instantaneously on weather to climate scales, and is accelerated by wind. It does not wait, that is, it is not buffered, for the state of equilibrium to adjust. Solubility does not, in the first, second, or third order, depend on the chemical state of the water, meaning expressly either its pH, its alkalinity, or its DIC ratio, even though Henry’s coefficient might be estimated differently. That coefficient is the ratio of the partial pressure of CO2_g to the concentration of CO2, whether bulk, molecular, or some other mix, in the water. Pekka Pirilä’s phrase Henry’s law, which applies to undissociated CO2 in seawater would be correct if he meant that in some formulations, Henry’s law coefficient refers to the concentration of undissociated CO2 in seawater. As written, it implies that dissolution is regulated by the concentration of undissociated CO2, which is false. The ratio of dissolution has no effect on the flux between CO2_g and CO2_aq.

        PP: His criticism of “the Bern formula” is also erroneous and appears to tell of not understanding the basics of the approach. [¶] It’s unfortunate that I don’t know good comprehensive descriptions of all essential arguments.

        Pekka Pirilä is undoubtedly skilled at relating physical processes to their algebraic representations. He is just not applying that skill, and as a result is reasoning incorrectly. His missing good comprehensive description is this. The Bern formula has four coefficients, a_i, which total 1, and which together represent the mass of the pulse of CO2 put into the atmosphere. The Bern formula assigns values to those coefficients (21.7%, 25.9%, 33.8%, and 18.6%). That assignment is the algebraic equivalent of creating four reservoirs in the atmosphere to hold CO2 for the four processes. Those reservoirs do not exist. The Bern formula is incompetent.

        PP: That means in particular that the impulse response to a sudden pulse of CO2 to the atmosphere has been estimated with fairly good accuracy for periods up to 100-200 years and for levels of concentration within a factor of two from the present.

        Not by the Bern formula is it known! The Solubility Pump time constant is 1.186 years in the Bern formula, and that represents Henry’s Law uptake of CO2 from the atmosphere by dissolution. Two centuries is 168 time constants. That pulse will be reduced to 5.8E-74 of its initial value in 200 years, known with fairly good accuracy.
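
        The two numbers in dispute can be laid side by side. A sketch using the Bern coefficients and time constants as tabulated in AR4 Table 2.14, footnote (a): summing all four terms leaves roughly 30% of a pulse after 200 years, while the 1.186-year solubility-pump constant applied alone, as in the calculation above, leaves essentially nothing:

```python
import math

# Bern impulse-response parameters, AR4 Table 2.14, footnote (a):
A0 = 0.217                   # the never-decaying fraction
A = (0.259, 0.338, 0.186)    # amplitudes of the decaying terms
TAU = (172.9, 18.51, 1.186)  # their time constants, years

def bern_fraction(t):
    """Fraction of a CO2 pulse remaining after t years, Bern formula."""
    return A0 + sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

def single_exponential(t, tau=1.186):
    """Fraction remaining if the fastest Bern term acted alone."""
    return math.exp(-t / tau)

remaining_bern = bern_fraction(200.0)         # ~0.30
remaining_single = single_exponential(200.0)  # ~5.8e-74
```

        The arithmetic itself is not in dispute; the disagreement in this thread is over whether the four-reservoir partition behind the Bern sum is physical.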

      • Puff, the Magic Dragon
        Lived by the Sea.
        ===========

      • I have not studied how well Revelle and Suess succeeded in their analysis, as that’s of historical interest, not scientific interest.

        The phenomenon that is described by Revelle factor is true. It’s not new speculation or a new hypothesis, but an inescapable consequence of chemistry, more specifically of the equations of chemical balance.

      • Pekka Pirilä 8/25/11, 12:57 pm CO2 residence time

        What is significant is that the Revelle Factor or Revelle Buffer was (a) not, as you claimed, based on chemistry and (b) never worked, not originally in 1957 and not when IPCC tried to resurrect it.

        Do you have any references substantiating your claims (a) that the Revelle Buffer is true and (b) that it is based on chemistry?

      • It’s solid science. Your claims are totally without merit as is your insistence of single exponentials both for the removal of CO2 and transmission of radiation.

        I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.

      • Pekka Pirilä 8/25/11, 2:49 pm CO2 residence time

        PP: It’s solid science. Your claims are totally without merit as is your insistence of single exponentials both for the removal of CO2 and transmission of radiation.

        I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.

        You make claims like these sans references, inaccurately, and flying by the seat of your pants. This is not science, and you are not participating in a serious scientific dialog. Moreover, you manage to draw conclusions (e.g., nonsense) from this muck of errors.

        JAC: The goal of the blog is to discuss scientifically relevant issues, which we are doing. curryja 4/25/11, 9:53 am, CO2 residence time

        I doubt she meant the goal is to discuss scientifically relevant issues subjectively, off the top of the head, shooting from the hip, though that’s too often the result.

        1. I said that the solution to IPCC’s residence time formula in the AR4 Glossary was one exponential. 8/24/11, 9:48 pm. If you have a mathematics reference that says it is otherwise, please provide it. It must be postmodern math.

        2. I never wrote anything so silly as radiation transmission could be represented by a single exponential. I did write that radiation absorption was represented by a single exponential, and showed you how.

        3. Nearly a year ago, I gave you a derivation of the Beer half of the Beer Lambert law resulting in a single exponential. Radiative transfer discussion, 12/23/10, 3:40 pm. You responded,

        PP: Your derivation is mathematically the same one that everyone gives [not one citation], but it applies only to monochromatic radiation [no citation]. It does not help that you state that you do not make that assumption, as it is hidden in your derivation as well [no citation] – unless you claim that all wavelengths are absorbed equally strongly. Bold added, id., 4:48 pm.

        In the ensuing dialog, I asked you to explain how that assumption was hidden in my derivation. Id., 5:35 pm. You failed to do so.

        You did agree that a single exponential was appropriate for radiation absorption. But as shown in this quotation, you continued to say incorrectly that it was valid only for a single wavelength.

        4. Previously, I had written to you

        The Beer-Lambert Law applies an empirical coefficient. Precisely speaking, it applies to a complex spectrum of light for which all the spectral components have the same empirical coefficient, and not restricted just to the same frequency. What we agree(?) on thread, 5/27/11, 10:35 am.

        Later, I criticized the Georgia Tech course EAS 8803 for representing the Beer Lambert law incorrectly as monochrome, explaining that it was unnecessarily restrictive. Planetary energy balance, 8/20/11, 2:47 pm. Your response was this reversal of your position:

        PP: Unfortunately for your case the Beer-Lambert law is valid only, if the absorption coefficient is the same for all wavelengths present, and that requirement is not satisfied in any general case. How serious the error is depends on the case, but it may be very large, as it indeed is in the Earth atmosphere. Id., 3:00 pm.

        Not only did you, once again, supply no citations, but you failed to acknowledge my intervening teachings asserting that same position to you. You write as if you were correcting me when, as your writings make plain, it is you who has been corrected.

        You are learning, but it’s hard to dig out from your citation-free, zig zag posts. This seat-of-the-pants style from yourself and others here is a major cause of the failure of the topics on this blog to converge as Dr. Curry seems to wish.

      • My comments are based on the fact that “you showed” something that’s mathematically absolutely wrong.

        For the removal of CO2 you presented a very similar error in most basic mathematics.

        In this case you don’t accept that the existence of the Revelle factor is based on very basic chemistry. You stated also that pH doesn’t matter, although it directly determines the most important factor that influences the total solubility of CO2 in seawater, the value of Dissolved Inorganic Carbon (DIC). DIC is the value of interest for carbon storage, as its quantity is more than 100 times larger than the solubility of CO2 as gas.

        I have written myself that the Revelle factor is not easy to determine accurately, because it depends on the buffering of the seawater. It’s, however, known to be of the order of 10 and thus very important.
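
        For what “of the order of 10” looks like in practice, the Revelle factor can be computed numerically from the carbonate equilibria alone: perturb DIC at constant carbonate alkalinity and compare the relative change in pCO2. The constants, DIC and alkalinity below are assumed illustrative seawater values near 25 °C, and borate and other minor alkalinity terms are ignored:

```python
# Numerical sketch of the Revelle factor R = (dpCO2/pCO2)/(dDIC/DIC)
# at constant carbonate alkalinity.  K0, K1, K2, DIC and alkalinity are
# assumed illustrative seawater values; borate alkalinity is ignored.

K0 = 2.84e-2   # mol/(kg·atm), CO2 solubility near 25 C (assumed)
K1 = 1.38e-6   # first dissociation constant (assumed)
K2 = 1.20e-9   # second dissociation constant (assumed)

def pco2(dic, carb_alk):
    """Bisect for [H+] matching the alkalinity, then return pCO2 in atm."""
    def alk(h):
        denom = h * h + K1 * h + K1 * K2
        return dic * (K1 * h + 2.0 * K1 * K2) / denom
    lo, hi = 1e-12, 1e-4          # bracketing [H+], mol/kg
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if alk(mid) > carb_alk:   # alkalinity falls as [H+] rises
            lo = mid
        else:
            hi = mid
    h = (lo + hi) / 2.0
    co2_star = dic * h * h / (h * h + K1 * h + K1 * K2)
    return co2_star / K0

DIC, ALK, dDIC = 2.0e-3, 2.2e-3, 1e-6   # mol/kg
p0 = pco2(DIC, ALK)
p1 = pco2(DIC + dDIC, ALK)
revelle = ((p1 - p0) / p0) / (dDIC / DIC)
```

        With these inputs the sketch lands in the low teens, inside the 8-to-13 range quoted from AR4 elsewhere in this thread; whether one reads that as chemistry or as conjecture is exactly the disagreement here.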

        Your continuing dismissal of simple mathematics as well as most basic and most reliably known physics and chemistry is still incomprehensible to me. Scientific references are not needed, when the conclusions are based directly on knowledge available in every basic textbook of physics or chemistry, or on elementary properties of exponential functions and when these arguments are given in full.

        I have presented my arguments in sufficient detail and many of them several times also in my answers to your comments. If you are in denial mode, the amount of detail doesn’t make any difference. The validity of your claim that I don’t participate in scientific discussion can be judged from my various comments on this site. Sometimes the discussion has proceeded to the point, where my comments become more blunt.

      • Pekka Pirilä, 5/27/11, 9:27 am, CO2 discussion

        Your reply is as empty as your citation list. What error in mathematics are you talking about?

        PP: I have written myself that the Revelle factor is not easy to determine accurately, because it depends on the buffering of the seawater. It’s, however, known to be of the order of 10 and thus very important.

        You claim! Where is your reference? Who got to read it? We need to see it, because what you describe here doesn’t make sense. The Revelle Factor doesn’t depend on the buffering of seawater – it IS the buffering of seawater. Revelle and Suess (1957) suggested that it might be of the order of 10, under equilibrium conditions at constant alkalinity. P. 22. IPCC measured it between 8 and 13 (AR4, ¶7.3.4.2, p. 531), another large range, and in the open ocean (neither constant alkalinity nor equilibrium). IPCC measured it with a disputed, temperature dependent co-parameter, which it concealed so as not to confuse the reader.

        You claim the Revelle factor is … very important, but give no reference to where it was ever used, or to what the result of its application might have been. The RF was discussed in IPCC Reports, but never used, and its effects contradicted, as I cited previously for you.

      • You can easily find thousands of references related to the Revelle factor.

        One example of lecture material explaining it is here.

        You didn’t only criticize the Georgia Tech course based on the irrelevant comment that the Beer-Lambert law is valid also for multichromatic radiation when the absorption coefficients happen to be the same; you implied clearly that this trivial extension would essentially change the outcome. Furthermore, you claimed that changes to the presentation of the Beer-Lambert law were made to justify a somehow suspect conclusion on the behavior in the atmosphere. This proves that your point was not only irrelevant sophistry, but that you really tried to perpetuate wrong conclusions.

        If you are now willing to admit that the Beer-Lambert law does not at all work for the total IR radiation in atmosphere, that’s fine. Is that your present view?

        The controversy on the Bern model is very similar in its nature. You try to discredit it based on false arguments.

      • Mr. Pekka Pirilä, You seem to have all the answers to the infinitely complex questions dealing with the workings of planet Earth. You even were quick to model a relativity experiment using ‘small balls’. Earth’s rotational speed was put forward by you as a difficult mathematical problem that would need much thought…

        I suggested Peter’s model out of hand using the Bible as a possible proxy
        for the Earth’s rotational speed for your experiment, at: 4.167 rps…
        —————————————————————————————
        “David Wojick, Perhaps the fisherman Peter, was a scientist too.
        Yesterday…

        Pekka Pirilä | August 20, 2011 at 10:56 am | Reply
        Using scale models is a well known and useful tool in engineering. The
        models never agree fully with the real system, but in many cases it’s
        possible to construct models that behave in a very similar way. That
        requires always that several different properties are matched. In fluid
        dynamics the Reynolds number is usually the most important parameter
        that must have the same value in the model as in the real system. The
        Prandtl number is another common important parameter.

        In the example described by DocMartyn some critical numbers must also be matched to get results of any significance. Without going to the details, it appears totally clear that the small balls must be made to turn much faster than the Earth turns. I cannot tell, what would be the most appropriate length of day, but it might well be closer to 1 min than to 24 hours.

        Such an experiment tells practically nothing unless it’s supported by a
        careful theoretical analysis that tells the right combination of parameters.

        Tom | August 20, 2011 at 12:04 pm | Reply
        What if?…
        II Peter 3:8 But you must not forget this one thing, dear friends: A day is
        like a thousand years to the Lord, and a thousand years is like a day.

        1000 years, times 360 days= 360,000 night-day cycles (NDC)
        Divided by 24 heavenly hours= 15,000 NDC per HH
        Divided by 60 heavenly minutes=250 NDC per HM
        Divided by 60 heavenly seconds=4.167 NDC; as revolutions per second. for your model?

        David, what do you think of the model Peter gives us, above? Is my math
        sound? Would this rotation speed work on the ‘small balls’ representing the
        Earth? Even if we don’t have all the answers, it is helpful to know where to
        look for the answer.”
        ——————————————————————————–

        Pekka Pirilä, what do you say? If this number works for you, what would that mean for our relative ‘size’ to the observer? Fun, to think outside the box.

      • I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.

        Of these two categories, I would put Jeff’s point that “The residence time of CO2 in the atmosphere is taught in 11th year public school physics as the leaky bucket problem” in the latter. Although my high school physics didn’t get into atmospheric physics, I consider the leaky bucket analogy a helpful one for understanding what would happen if total CO2 emissions were to drop by more than 4 GtC/yr. I replied to Jeff’s dragon-part-iv post just now at

        http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-106478

        I’ll reserve judgement on the rest of Jeff’s recent posts; there are only so many arguments one can usefully engage with at one time.
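
        The leaky-bucket reading above can be made concrete with a toy simulation: excess CO2 above a baseline stock drains in proportion to the excess, with the drain rate set so that today’s net uptake matches the ~4 GtC/yr figure mentioned. The stocks and emission rates are illustrative assumptions, not numbers from the thread:

```python
# Toy leaky bucket: excess CO2 above a pre-industrial stock drains at a
# rate proportional to the excess.  Stocks in GtC are illustrative
# (~594 GtC is roughly 280 ppm); the drain constant is chosen so that
# today's net uptake is ~4 GtC/yr, per the figure quoted above.

C_EQ = 594.0              # pre-industrial atmospheric stock, GtC (assumed)
C_NOW = 830.0             # present stock, GtC (assumed)
K = 4.0 / (C_NOW - C_EQ)  # drain rate, 1/yr (~59-year time constant)

def simulate(emissions, years, c0=C_NOW, dt=0.1):
    """Euler-integrate dC/dt = E - K*(C - C_EQ); returns final stock in GtC."""
    c = c0
    for _ in range(int(years / dt)):
        c += (emissions - K * (c - C_EQ)) * dt
    return c

held = simulate(8.0, 500.0)     # emissions held at 8 GtC/yr: stock rises
stopped = simulate(0.0, 100.0)  # emissions cut to zero: stock falls back
```

        In this toy model, any drop of emissions below the current uptake rate makes the stock fall, which is the point of the “more than 4 GtC/yr” remark.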

      • Pekka Pirilä 8/25/11, 12:57 pm CO2 residence time

        A PS to my last:

        I have just re-read the AR4 §7.3.4.2 on the Revelle Factor. It concludes nothing relevant to the Factor. The section has a beautiful diagram of the distribution of the measured RF, Figure 7.11, but it’s a half measurement: the old part (a), showing its disputed temperature dependence, is gone. The conclusion to Section 7.3.4.2 Ocean Carbon Cycle Processes and Feedbacks to Climate includes this:

        Future changes in ocean circulation and density stratification are still highly uncertain. Both the physical uptake of CO2 by the ocean and changes in the biological cycling of carbon depend on these factors.

        Thus the physical uptake of CO2 by the ocean is still highly uncertain, analysis of the Revelle Factor notwithstanding. The RF got IPCC nowhere. It’s past time for it to learn about Henry’s Law, the concealed chart, and to apply it instantaneously to the surface ocean.

      • And of course you have read and thoroughly comprehended all of the references on that page as well. Right?

        I have just re-read the AR4 §7.3.4.2 on the Revelle Factor.

        Some of the required reading from just that sub-section of a section of Chapter 7 of AR4 is:

        Oceanic carbon exists in several forms: as DIC, DOC, and particulate organic carbon (POC) (living and dead) in an approximate ratio DIC:DOC:POC = 2000:38:1 (about 37,000 GtC DIC: Falkowski et al., 2000 and Sarmiento and Gruber, 2006; 685 GtC DOC: Hansell and Carlson, 1998; and 13 to 23 GtC POC: Eglinton and Repeta, 2004)…

        Seawater can, through inorganic processes, absorb large amounts of CO2 from the atmosphere, because CO2 is a weakly acidic gas and the minerals dissolved in the ocean have over geologic time created a slightly alkaline ocean (surface pH 7.9 to 8.25: Degens et al., 1984; Royal Society, 2005)… Gas exchange rates increase with wind speed (Wanninkhof and McGillis, 1999; Nightingale et al., 2000) .. (Zeebe and Wolf-Gladrow, 2001; see Box 7.3).

        That’s just from the first two paragraphs of 7.3.4.1.

        You whine like a child that Pekka won’t get into scientific details with you, but I don’t believe you have read any real climate science beyond the IPCC’s summary. For the rest of your “information” you’re just regurgitating what you’ve passively absorbed from climate science denial ‘blogs like wuwt, climateoddity and in at least 50% of its content, climateetc. But if I’m wrong about that, if you have any original thoughts on the matter, then you can pick the one paper cited in section 7.3.4 of AR4 most relevant to your claims about ocean chemistry, and you can tell us
        (1) exactly what it gets wrong
        &
        (2) what’s the right answer.

        That is, if you’re at all serious. Otherwise, just continue the hand-wavy nonsense, and keep pretending that it’s Dr. Pirilä’s burden to explain to you the chemistry of buffers to your satisfaction, as if your failure to understand and then apply first-year college chemistry makes all the legitimate scientists wrong.

      • settledscience, 8/28/11, 1:12 pm, CO2 discussion

        ∅: And of course you have read and thoroughly comprehended all of the references on that page as well. Right?

        Who are you who names himself “settledscience”, the empty set, ∅, and why do you ask unimportant questions? Are you taking a poll, writing a term paper for summer school?

        If you ever get into science, you’ll find that when researching the accuracy of a paper, reading every citation is not required.

        For your term paper, the answer in general to your question is “no”, because

        (1) most of IPCC’s references are behind a paywall,

        (2) many have proved irrelevant to IPCC’s claim for them, and

        (3) citing references without quotations violates a basic principle of scientific writing, one that holds that references should validate claims, not serve as a diversion requiring the reader to vet the paper through independent, collateral research.

        By contrast, note I made several references to ¶7.3.4.2 for the propositions stated. If you want to participate in a scientific dialog instead of just adding noise, check that reference for yourself and raise any errors you might perceive.

        ∅: Some of the required reading from just that sub-section of a section of Chapter 7 of AR4 is: [quotations] That’s just from the first two paragraphs of 7.3.4.1.

        How might you know what is or is not required on a subject for which you have expressed and can express no opinion?

        You are not on the same page with yourself.

        Those two paragraphs you cite contain the following nine references. A little analysis provides a nice object lesson in how IPCC abuses referencing.

        1. Degens, E.T., S. Kempe, and A. Spitzy, 1984: Carbon dioxide: A biogeochemical portrait. In: The Handbook of Environmental Chemistry [Hutzinger, O. (ed.)]. Vol. 1, Part C, Springer-Verlag, Berlin, Heidelberg, pp. 127–215. [$249.50; 10 pdf chapters at 24.95 per chapter.]

        2. Eglinton, T.I., and D.J. Repeta, 2004, Organic matter in the contemporary ocean. In: Treatise on Geochemistry [Holland, H.D., and K.K. Turekian (eds.)]. Volume 6, The Oceans and Marine Geochemistry, Elsevier Pergamon, Amsterdam, pp. 145–180. [$107]

        3. Falkowski, P., et al., 2000: The global carbon cycle: A test of our knowledge of Earth as a system. Science, 290(5490), 291–296. [$15]

        4. Hansell, D.A., and C.A. Carlson, 1998: Deep-ocean gradients in the concentration of dissolved organic carbon. Nature, 395, 263–266. [$32]

        5. Nightingale, P.D., et al., 2000: In situ evaluation of air-sea gas exchange parameterisations using novel conservative and volatile tracers. Global Biogeochem. Cycles, 14(1), 373–387. [Downloaded, superseded and not read.]

        6. Royal Society, 2005: Ocean Acidification Due to Increasing Atmospheric Carbon Dioxide. Policy document 12/05, June 2005, The Royal Society, London, 60 pp., http://www.royalsoc.ac.uk/document.asp?tip=0&id=3249. [Downloaded; acidification wrongly considered real to rely on Bjerrum solution.]

        7. Sarmiento, J.L., and N. Gruber, 2006: Ocean Biogeochemical Dynamics. Princeton University Press, Princeton, NJ, 503 pp. [$75]

        8. Wanninkhof, R., and W.R. McGillis, 1999: A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13), 1889–1892. [Downloaded & reviewed. Relies on pCO2 from Takahashi, which produces incorrect result.]

        9. Zeebe, R.E., and D. Wolf-Gladrow, 2001: CO2 in Seawater: Equilibrium, Kinetics, Isotopes. Elsevier Oceanography Series 65, Elsevier, Amsterdam, 346 pp. [$101].

        Are you pretending to have read these?

        Six of your own citations are behind a paywall: purchase price for all six is $587.50, which would have to be paid in the blind in the hope of answering an impudent, incompetent, and irrelevant inquiry. Major bits of reference 9 are available in a separate publication by Dieter Wolf-Gladrow, which I have read and on occasion cited. IPCC should be required under the US and UK Freedom of Information Acts to provide accessible, text readable copies of all its citations.

        The other three I have downloaded. Two are incompetent, one relying on the Takahashi method, which produces an incorrect result (see Takahashi method, rocketscientistsjournal.com, On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc., ¶5), and the other relying on the Bjerrum solution for a surface layer in equilibrium, which does not exist (see id). The third I have not read because IPCC deemed its results contradicted by ref. 9.

        ∅: I don’t believe you have read any real climate science beyond the IPCC’s summary. … But if I’m wrong about that, if you have any original thoughts on the matter, then you can pick the one paper cited in section 7.3.4 of AR4 most relevant to your claims about ocean chemistry, and you can tell us (1) exactly what it gets wrong & (2) what’s the right answer.

        1.) Why do you imagine anyone cares about your opinion?

        2.) Yes, you are wrong about that.

        3.) If any serious reader, or as improbable as it might be, yourself, is interested, I have, above, provided sufficient reasons with links to complete answers. I have no reason or desire to restrict myself to “one paper” as you ask. Short answers include

        (a) the surface layer is not in equilibrium, so the Bjerrum solution to the carbonate chemical equations for equilibrium is not applicable,

        (b) the Takahashi diagram provides a small fraction of the CO2 flux reported elsewhere by IPCC, so gets the carbon cycle wrong,

        (c) IPCC wrongly applies the imaginary CO2 buffer to ACO2 and not to natural CO2, a physical impossibility, and

        (d) the right answer is that the surface layer stores excess CO2_aq instead of the atmosphere storing excess CO2_g. And of course none of it is relevant to the basic climate question because Earth’s surface temperature follows the Sun with a simple transfer function.

        ∅: Otherwise, just continue the hand-wavy nonsense, and keep pretending that it’s Dr. Pirilä’s burden to explain to you the chemistry of buffers to your satisfaction, as if your failure to understand and then apply first-year college chemistry makes all the legitimate scientists wrong.

        I was waving bye-bye to ∅.

        I pretend nothing. Dr. Pirilä is quite wrong about the Revelle Factor, and most recently about climate science invalidating the Beer-Lambert Law. He is stuck supporting AGW, a belief system, thereby inheriting all IPCC’s many errors, and leaving himself to argue sans references.

      • Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.

        (d) … And of course none of it is relevant to the basic climate question because Earth’s surface temperature follows the Sun with a simple transfer function.

        And of course, now that I know that Jeff denies the Settled Science that is the Greenhouse Effect, I need not bother speaking to it ever again.

      • Jeff it appears does not believe that excess CO2 actually exists in the atmosphere. I say this because the title of the article he cites is called:
        “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere”.

        Do I have that interpretation right?

      • settledscience bloviates,

        “Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.”

        Here is your chance to be the big man on campus. Simply link the appropriate studies and/or papers from the literature to prove your point.

        I would suggest you reread Mr. Glassman’s post so that you actually understand what he is saying also.

      • settledscience 9/5/11, 7:47 pm, CO2 residence time

        ∅ [empty set]: Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.

        I suppose you don’t care about self-respect judging by your anonymity. But if you have any, and don’t want any longer to be seen as bloviat[ing] [thanks to kuhnkat, 9.8.11, 3:42 pm], here’s what you need to do in trying to participate in an intelligent discussion, to say nothing of science.

        (1) Make a point, e.g., describe something you think is not in accord with original research.

        (2) State your point completely, quoting from any sources as necessary. Don’t make the reader do your research for you.

        (3) Provide references, never behind a pay wall, so the reader can check your interpretation of the source.

        Otherwise, you come off as noise, and as a sycophant — someone who is trying to gain respect by association, in this case, the AGW movement.

      • WebHubTelescope 9/5/11, 10:45 pm, CO2 residence time

        WHT: Jeff it appears does not believe that excess CO2 actually exists in the atmosphere. I say this because the title of the article he cites is called:

        “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere”.

        Do I have that interpretation right?

        No.

        Johnny Carson had a routine called the Great Carsoni in which he would conjure the contents of a sealed envelope by pressing it to his turbaned head. It isn’t working for you. Why don’t you go beyond the title and actually read the article?

        The article answers the question of why the CO2 that IS in the atmosphere is not an accumulated backlog. It begins,

        The Acquittal [of Carbon Dioxide] shows that carbon dioxide did not accumulate in the atmosphere during the paleo era of the Vostok ice cores. If it had, the fit of the complement of the solubility curve might have been improved by the addition of a constant. It was not. And because the CO2 presumably still follows the complement of the solubility curve, it should be increasing during the modern era of global warming in recovery from Earth’s various ice epochs. These conclusions find support in a number of points in the IPCC reports.

        The remainder of the paper contains 18 enumerated reasons, in gory detail, including tangential considerations of the fact that CO2 does not accumulate. Read on, see if you disagree with any, and then follow my advice to skepticalscience on 9/8/11, 6:31 pm.

      • (1) Make a point, e.g., describe something you think is not in accord with original research.

        That’s all I do is original research. If it’s not original it’s not challenging and therefore not much fun.

      • WebHubTelescope 9/9/11, 3:16 am, CO2 Residence Time

        WHT: That’s all I do is original research.

        You answer criticism of your empty post with another empty post. Besides, you contradict yourself. It’s not all you do – you also submit empty posts.

        Even if what you claim were true, that you do original research, how is that relevant to anything here? History is littered with respected scientists, much less anonymous posters, whose pottery is thoroughly cracked (crazed) outside their narrow confines. Climate Etc. is surfeited with contributors, some of whom use a real name with doctor attached, who hold that AGW exists as a matter of belief and defend it tooth-and-nail, of course, like WHT, liberally spiced with ad hominem, off-point, and sans references.

      • like WHT, liberally spiced with ad hominem, off-point, and sans references.

        thanks for your support.

      • Jeff, sorry for the late contribution. I wasn’t aware of this discussion during my absence, but here is a nice empirical proof of the Revelle factor. Total carbon (DIC) is measured in seawater at several places; the longest series are in Hawaii and Bermuda. The latter can be found here:
        http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
        The period displayed is 20 years, 1983-2003.
        The atmospheric CO2 levels in that period increased about 10%; the pCO2 (that is, free dissolved CO2) of the oceans increased about 8% (which obeys Henry’s Law, with some delay), but DIC increased by only about 0.8%, roughly a factor of 10 less than the increase in free CO2 in the oceans, and even less relative to atmospheric CO2.

        As the total amount of carbon in the atmosphere is about 800 GtC and in the ocean’s surface about 1000 GtC, it is obvious that a 10% increase of CO2 in the atmosphere only gives less than a 1% increase in total carbon in the ocean’s surface.

        The CO2 exchange between the atmosphere and the upper part of the oceans is very fast (e-folding time about 1.5 years) but is limited to not more than 10% of any change in the atmosphere. The rest of the roughly 50% of the human emissions that disappears (as mass, not as individual molecules) is absorbed by much slower sinks in other reservoirs (the deep oceans, more permanent storage in the biosphere).
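        Engelbeen’s round numbers above can be checked with a few lines of arithmetic. Every figure below is taken from this comment itself (800 GtC in air, 1000 GtC in the surface layer, a ~10% atmospheric rise, a ~10:1 response ratio); nothing here is independent data:

```python
# Mass-balance check of the round figures quoted above (all assumed
# from the comment: 800 GtC air, 1000 GtC ocean surface layer,
# ~10% atmospheric rise, a factor of ~10 between pCO2 and DIC changes).
atm_c = 800.0          # GtC in the atmosphere
surface_c = 1000.0     # GtC in the ocean surface layer

atm_rise_frac = 0.10   # ~10% atmospheric CO2 increase, 1983-2003
revelle = 10.0         # pCO2-to-DIC response ratio of about 10

dic_rise_frac = atm_rise_frac / revelle        # ~1% DIC increase
added_atm = atm_c * atm_rise_frac              # 80 GtC added to the air
added_surface = surface_c * dic_rise_frac      # ~10 GtC into the surface

print(f"{added_atm:.0f} GtC in air vs {added_surface:.0f} GtC in surface layer")
```

        This reproduces the ~10:1 disparity the comment describes: an 80 GtC atmospheric increase against only about 10 GtC of extra total carbon in the surface layer.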

      • Ferdinand Engelbeen 9/9/11, 4:57 pm, CO2 Residence Time

        Your calculation about a factor of 10 for the Revelle Factor is a single calculation for a parameter that IPCC says ranges from 8 to 16 over the ocean. AR4, Figure 7.11(b).

        http://www.rocketscientistsjournal.com/2007/06/_res/F7-11.jpg

        Your link is to an anonymous article from the Bermuda Institute of Ocean Sciences (BIOS) that features a chart prepared expressly for AR4. IPCC used that chart in part, but rejected it in relevant part. See AR4, Figure 5.9, p. 404. IPCC kept data from BATS and ESTOC, added data from HOT, but rejected data from ALOHA and Hydrostation S. More importantly, IPCC rejected the BIOS curve of normalized DIC, the parameter on which you rely for your calculation.

        You have neither proof nor evidence of the Revelle factor. BIOS does not mention the Revelle factor. Where IPCC discussed the Revelle factor (AR4, ¶7.3.4, pp. 528 et seq.), it did not mention the BIOS data.

        The most complete picture of the Revelle factor appeared in AR4 Second Order draft as Figure 7.3.10.

        http://www.rocketscientistsjournal.com/2007/06/_res/F7-3-10.jpg

        The global map of the Revelle factor IPCC attributes to Sabine et al. (2004a), co-authored by Nicolas Gruber. The Revelle factor variation with temperature (Figure 7.3.10a), attributed to Zeebe and Wolf-Gladrow (2001) ($101), is a simple linear transformation of the solubility curve. Gruber objected, and his objection resulted in concealment of the temperature sensitivity curve:

        Comment: “buffer factor decreases with rising seawater temperature…” This is a common misconception. The buffer factor itself has almost no temperature sensitivity (in an isochemical situation). In contrast, the buffer factor strongly depends on the DIC to Alk ratio. The reason why there is an apparent temperature sensitivity is because of the temperature dependent solubility of total DIC (note that (a) is not isochemical, it is done with a constant pCO2, i.e. DIC will decrease with increasing temperature). In the ocean, surface ocean DIC and Alk are controlled by a myriad of processes, including temperature, so it is wrong to suggest that the spatial distribution of the buffer factor shown in Figure 7.3.10c is driven by temperature . [Nicolas Gruber (Reviewer's comment ID #: 307-70)]

        [Editor's] Notes: Taken into account. The buffer factor has a considerable T dependency (see Zeebe and Wolf-Gladrow, 2001). However, it is right that in the real ocean, this T dependency is overridden often by other processes such as pCO2 changes, TAlk changes and others. The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader. The text was changed. Bold added, AR4, Second-Order Draft.

        The Revelle Buffer was a conjecture, a mere relationship between anthropogenic CO2 parameters in the ocean and in the atmosphere. The authors, Revelle & Suess (1957), wanted to validate the Callendar Effect, a conjecture that manmade CO2 would build up in the atmosphere and cause global warming. To do so, they expressed the ratio of CO2 from fossil fuels going into the atmosphere, r, to that going into the ocean, s, as a constant factor, λ, times the ratio of total carbon in the atmosphere, A_0, to that in the ocean at equilibrium, S_0: r/s = λ*A_0/S_0. The factor λ is the Revelle buffer factor, and they guessed it might be about 10. However, they were unable to find a realistic set of parameters to produce a factor of 10, and left the problem as one for IGY funding.
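        For readers following the algebra, the quoted relation can be rearranged in a few lines. λ = 10 is the authors’ guess as stated above; the reservoir sizes A0 and S0 below are illustrative assumptions for a mixed-layer comparison, not values from the 1957 paper:

```python
# Sketch of the Revelle & Suess relation quoted above: r/s = lam * A0/S0,
# with r = fossil CO2 entering the atmosphere, s = entering the ocean.
# lam = 10 is the authors' guess; A0 and S0 are illustrative GtC values.
lam = 10.0
A0 = 600.0      # carbon in the atmosphere (illustrative assumption)
S0 = 1000.0     # carbon in the ocean mixed layer (illustrative assumption)

r_over_s = lam * A0 / S0                         # partition ratio
airborne_fraction = r_over_s / (1.0 + r_over_s)  # r / (r + s)

print(r_over_s, round(airborne_fraction, 3))
```

        With these numbers, six parts of new CO2 would stay in the air for every part taken up by the mixed layer; change A0 or S0 and the partition moves accordingly, which is why the choice of ocean reservoir matters so much to the argument.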

        IPCC tried to rehabilitate the buffer for the same reason. If the ratio proved to be even approximately a constant over the ocean, meaning that r and s were correlated, then a physicist might discover a cause and effect relationship. The ratio is not even approximately constant, but varies by ±50%. Furthermore, one set of experts, Zeebe & Wolf-Gladrow, say the buffer is no different than solubility, while another, Gruber, denies it.

        The Revelle factor is useless. All sorts of ratios can be postulated, and some might someday be meaningful. The Revelle buffer factor is not one of them.

        The analyses from Revelle & Suess; Sabine, et al; Gruber; IPCC, Zeebe & Wolf-Gladrow, and BIOS all rely on thermodynamic equilibrium in the surface layer. Surface layer equilibrium is ludicrous on its face. The relationship between the carbonate components when not in equilibrium is unknown.

        Lastly, although this is not the end of the nonsense, the idea that the ocean buffers against dissolution of anthropogenic CO2 at 6 GtC/yr but not at all against natural CO2 at 90 GtC/yr is equally ludicrous.

        The ocean absorbs CO2 according to Henry’s Law. The absorption depends on the partial pressure of CO2_g and SST, and somewhat on salinity, according to an unquantified solubility curve. It does not depend on surface layer pH or ionization, and it does not accumulate in the atmosphere.

      • Jeff, that is a long exposition. It seems that there still is a lot of discussion about the exact height of the Revelle factor. But that is not the point in discussion (it doesn’t matter if the factor is 8 or 16, it matters that there is a factor).
        I haven’t looked at the AR4 references yet, but regardless of the reasons why some results were rejected, nDIC or DIC without normalizing is measured in many places and all show a much slower increase than pCO2(aq) or pCO2(atm). According to you that should be impossible, as DIC should increase at the same rate as pCO2 in air and water…

        But the main points of discussion are in your last paragraphs:

        Lastly, although this is not the end of the nonsense, the idea that the ocean buffers against dissolution of anthropogenic CO2 at 6 GtC/yr but not at all against natural CO2 at 90 GtC/yr is equally ludicrous.

        You are misinterpreting the facts: The 90 GtC/yr is the effect of temperature on CO2 solubility: partly causing continuous CO2 releases from the tropic oceans (mainly the Pacific deep ocean upwelling) and continuous CO2 sinks near the poles (mainly the THC sink in the NE Atlantic); partly the warming and cooling of the mid-latitude oceans over the seasons. The net effect of this is zero CO2 change over a full seasonal cycle at zero average temperature change over a year (16 microatm in the water or ~16 ppmv in the atmosphere for 1°C temperature change).
        In contrast, any increase in the atmosphere pushes more CO2 in average in the ocean surface at the same temperature. That is where and when the Revelle factor is working.

        The ocean absorbs CO2 according to Henry’s Law. The absorption depends on the partial pressure of CO2_g and SST, and somewhat on salinity, according to an unquantified solubility curve.

        That is right and wrong: the ocean absorbs CO2 according to Henry’s Law. But Henry’s Law only is about free CO2 in the waters. For a 10% increase of CO2 in the atmosphere and the same temperature, the amount of free CO2 in the ocean surface waters will increase by 10%. If there were no following reactions, that increase of free CO2 would result in an increase of total carbon in solution of 0.1%, as free CO2 is less than 1% of total carbon in solution.

        But there are following reactions. Free CO2 is converted into carbonic acid that dissociates into bicarbonate and carbonate ions and hydrogen ions. Thus if you add more CO2 to the ocean waters, the pH lowers. But a lower pH means that the dissociation reaction runs the other way, back to bicarbonate and free CO2. In other words, the amount of total CO2 in solution increases beyond the increase from the ~1% free CO2 alone, because of further dissociation reactions, but not to the full extent of the increase in the atmosphere, because of the counteraction caused by the lowering of the pH. See further:
        http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
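        Engelbeen’s buffering argument can be sketched numerically under the equilibrium assumption this thread is debating. The constants K0, K1, K2 and the alkalinity TA below are rough, generic surface-seawater values chosen for illustration, not taken from any of the cited papers:

```python
# Numerical sketch of the buffering argument above: hold total
# alkalinity TA constant, raise pCO2 by 10%, and see how little DIC
# rises. K0, K1, K2, TA are rough illustrative seawater values
# (mol/kg, atm), assumed for this sketch.
import math

K0 = 3.3e-2   # Henry's law solubility, mol kg^-1 atm^-1 (assumed)
K1 = 1.4e-6   # first dissociation constant of carbonic acid (assumed)
K2 = 1.1e-9   # second dissociation constant (assumed)
TA = 2.3e-3   # carbonate alkalinity, mol/kg, held constant (assumed)

def carbonate_state(pco2_atm):
    """Solve TA = [HCO3-] + 2[CO3--] for [H+]; return (free CO2, DIC)."""
    co2 = K0 * pco2_atm          # Henry's law gives free CO2(aq)
    b = K1 * co2                 # [HCO3-] = b / h
    a = 2.0 * K1 * K2 * co2      # 2[CO3--] = a / h^2
    # TA*h^2 - b*h - a = 0, take the positive root:
    h = (b + math.sqrt(b * b + 4.0 * TA * a)) / (2.0 * TA)
    hco3 = b / h
    co3 = K2 * hco3 / h
    return co2, co2 + hco3 + co3

co2_a, dic_a = carbonate_state(350e-6)         # ~350 uatm
co2_b, dic_b = carbonate_state(350e-6 * 1.10)  # +10% pCO2

d_co2 = (co2_b - co2_a) / co2_a   # 10% by construction (Henry's law)
d_dic = (dic_b - dic_a) / dic_a   # much smaller, because pH drops
print(f"free CO2 +{d_co2:.1%}, DIC +{d_dic:.2%}, ratio ~{d_co2/d_dic:.0f}")
```

        With these particular constants the ratio of fractional changes lands inside the 8 to 16 range discussed above. The point is only to make Engelbeen’s mechanism concrete; it does not settle the equilibrium objection Glassman raises.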

        It does not depend on surface layer pH or ionization

        Simple proof that you are wrong: make a solution of soda or bicarbonate and add some acetic acid. Lots of CO2 bubbles up, because the amount of free CO2 in the solution gets far beyond Henry’s Law’s “normal” concentration.

        it does not accumulate in the atmosphere

        The increase in the atmosphere is measured; the human emissions are double that. But if you assume that some natural cause tracks the human emissions at such an incredibly fixed ratio, while the human emissions themselves disappear somewhere without leaving a trace in the atmosphere, that is your opinion, not based on any observed facts.

      • Ferdinand Engelbeen, 9/10/11, 7:06 pm, CO2 Residence Time

        Did I say welcome back to the discussions? Well, welcome back.

        FE: It seems that there still is a lot of discussion about the exact height of the Revelle factor. But that is not the point in discussion (it doesn’t matter if the factor is 8 or 16, it matters that there is a factor)

        One could make a ratio out of any two random variables, but the fact that dividing one by the other produces a number is meaningless. Sometimes RVs will appear correlated, suggesting to an investigator that measuring their correlation might be fruitful. That is not the case with the fossil fuel CO2 emissions, because they cannot be measured separately from the natural CO2. Of course, if anthropogenic CO2 were directly measurable, no one would need the Revelle factor.

        FE: nDIC or DIC without normalizing is measured in many places and all show a much slower increase than pCO2(aq) or pCO2(atm).

        and previously,

        FE: The atmospheric CO2 levels in that period increased about 10%; the pCO2 (that is, free dissolved CO2) of the oceans increased about 8% (which obeys Henry’s Law, with some delay), but DIC increased by only about 0.8%, roughly a factor of 10 less than the increase in free CO2 in the oceans, and even less relative to atmospheric CO2. 9/9/11, 4:57 pm.

        The parameter pCO2(aq), which I assume is the same as your pCO2 … of the oceans, doesn’t exist. CO2 has no partial pressure when dissolved. Instead, pCO2(g), g for the gas in thermodynamic equilibrium with the water, is deemed to be pCO2(aq). I have not reviewed the measurements or methods by which investigators have estimated pCO2(aq). I do know, however, that they report a difference between pCO2(atm) and pCO2(aq), and that that difference, coupled with wind speed, has been taken as the cause of the air-sea flux of CO2. Takahashi used these data to produce his map of CO2 flux across the globe. Unfortunately, the sum of his positive and negative fluxes is about an order of magnitude less than the positive and negative fluxes estimated by other means and reported by IPCC and others. For a full discussion, see ¶5, and especially Figures 1 and 1A, and Equation (1), On Why CO2 Is Known Not To Have Accumulated in the Atmosphere …,

        http://rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html

        Takahashi’s results from application of pCO2(aq) deflate any confidence one might assign to pCO2(aq). His map depends on the differential pressure of pCO2(g) and pCO2(aq), where the latter does not exist except as it might be created in the laboratory procedures.

        FE: You are misinterpreting the facts: The 90 GtC/yr is the effect of temperature on CO2 solubility: partly causing continuous CO2 releases from the tropic oceans (mainly the Pacific deep ocean upwelling) and continuous CO2 sinks near the poles (mainly the THC sink in the NE Atlantic); partly the warming and cooling of the mid-latitude oceans over the seasons.

        I agree with your geography, and in fact I think the role of the THC in CO2 was first postulated in The Acquittal of Carbon Dioxide in October, 2006. If you have an earlier reference, please let me know.

        However, I don’t put any stock in the seasonal effects, nor in the warming and cooling, with the attendant breathing of CO2, in the ocean gyres. These would have a net effect of zero in the +92 GtC/yr and -90 GtC/yr annual air/sea fluxes. The upwelling caused primarily by Ekman pumping appears to be the exit of the THC, where deep, cold, CO2-saturated waters are dumped on the surface to warm and outgas.

        FE: The net effect of this is zero CO2 change over a full seasonal cycle at zero average temperature change over a year (16 microatm in the water or ~16 ppmv in the atmosphere for 1°C temperature change).

        For air-sea CO2 flux, I model surface oceanography as the superposition of three components: seasonal effects, gyres, and the year-long transport of surface waters from the exit of the THC to its headwaters at the poles. As I said, the first two should contribute a net of zero to the annual air-sea CO2 fluxes of about 90 GtC/yr. The mechanisms of outgassing and recharging are quite different, though both depend on Henry’s Law. That outgassing occurs continuously in bulk at the Eastern Equatorial Pacific and at a couple of other hot spots. The recharging occurs distributed over the entire ocean surface due to the cooling in the transport current, the return path of water for the THC. This action causes a high volume river of CO2 to spiral around the globe. This river cannot be represented successfully by the net uptake of 2.2 GtC/yr, nor by parceling that net over the globe.

        FE: In contrast, any increase in the atmosphere pushes more CO2 in average in the ocean surface at the same temperature. That is where and when the Revelle factor is working.

        First, any increase in atmospheric CO2 would cause more CO2 to be dissolved in the surface layer according to Henry’s Law. However, ACO2 is only about 6 GtC/yr, and if it weren’t for the flap adopted by IPCC that AGW exists, the anthropogenic contribution would be negligible. It contributes only about 3% to the total CO2 flux from land and ocean into the atmosphere, and about half that if you include the leaf water that IPCC introduced and dropped – all this in a model that has yet to estimate global warming to within the first order of magnitude, and that is turning out to be invalid with respect to climate sensitivity based on satellite data.

        The AGW model raises the art of relying on small difference between large numbers to a new height by neglecting effects and putting aside any noise masking the signals, both through fantastic assumptions. The danger in relying on the small difference between large numbers used to be taught in grade school; now they don’t even teach grammar in the US. (Schooling here is in its fourth generation of teaching the Delicate Blue Planet Model, environmentalism, and self-esteem trumping everything else.)

        Examples of the incredible assumptions include these: (1) Cloud cover can be successfully parameterized as a statistical constant. (2) Shortwave and longwave radiations tend to balance. (3) Radiative transfer will produce sufficiently accurate longwave global results if only the atmosphere could be correctly modeled. (4) The surface layer is in thermodynamic equilibrium. (5) Amplification of solar variations does not exist. And with regard to your claim that the Revelle factor is working, (6) the ocean buffers against dissolution of anthropogenic CO2 but impedes natural CO2 not at all.

        FE: Henry’s Law only is about free CO2 in the waters.

        You are correct enough (because the surface layer is never in equilibrium), and that contradicts the Revelle factor. Formally, Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-). On the other hand, Henry’s Coefficients are only tabulated for thermodynamic equilibrium, in which case the ratio would be known by the Bjerrum solution to the carbonate equations. However, CO2 is always highly soluble in water, equilibrium or not. In the dynamic situation, the ratio could be anything (neither IPCC’s 1:100:10 nor Dr. King’s 1:175:21), with the formation of HCO3(-) being almost instantaneous but that of CO3(2-) lagging.
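        For what it’s worth, the equilibrium speciation ratios mentioned above follow directly from the relations [HCO3(-)]/[CO2] = K1/[H+] and [CO3(2-)]/[HCO3(-)] = K2/[H+]. With assumed, generic surface-seawater constants the numbers land near Dr. King’s; the exact ratio depends entirely on the K1, K2, and pH one picks:

```python
# Carbonate speciation ratio CO2 : HCO3- : CO3-- from the equilibrium
# relations [HCO3-]/[CO2] = K1/[H+] and [CO3--]/[HCO3-] = K2/[H+].
# K1, K2 and the pH are assumed, generic surface-seawater values.
K1 = 1.4e-6             # first dissociation constant (assumed)
K2 = 1.1e-9             # second dissociation constant (assumed)
h = 10.0 ** -8.1        # [H+] at an assumed pH of 8.1

hco3_per_co2 = K1 / h                  # bicarbonate per free CO2
co3_per_co2 = hco3_per_co2 * K2 / h    # carbonate per free CO2

print(f"1 : {hco3_per_co2:.0f} : {co3_per_co2:.0f}")
```

        A slightly smaller K1 or a lower pH moves the result toward IPCC’s 1:100:10. The point is only that the ratio is a function of the chosen constants and pH, not a fixed number, and that all such ratios presuppose the equilibrium being disputed here.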

        FE: For a 10% increase of CO2 in the atmosphere and the same temperature, the amount of free CO2 in the ocean surface waters will increase with 10%. If there were no following reactions, that increase of free CO2 results in an increase of total carbon in solution with 0.1%, as free CO2 is less than 1% of total carbon in solution. But there are following reactions. Free CO2 is converted into carbonic acid that dissociates into bicarbonate and carbonate ions and hydrogen ions. Thus if you add more CO2 to the ocean waters, the pH lowers. But a lower pH means that the dissociation reaction is going the other way out, back to bicarbonate and free CO2. With other words, the amount of total CO2 in solution increases beyond the increase in 1% free CO2, because of further dissociation reactions but not to the full extent of the increase in the atmosphere, because of the counteraction caused by a lowering of the pH. Bold in original.

        Your rationale reads like IPCC’s. AR4, ¶7.3.4.1, p. 528, and Box 7.3, p. 529. IPCC relies on equations and on the Revelle factor (AR4, ¶7.3.4.1, p. 531), both attributed to Zeebe & Wolf-Gladrow (2001) ($101). Revelle’s original factor was a conjecture, the ratio of two fractions: the numerator was the fraction of new fossil fuel emissions going into the atmosphere divided by the fraction going into the ocean, and the denominator was the ratio of CO2 in the atmosphere to that in the ocean, i.e., as I wrote earlier, λ = (r/s)/(A_0/S_0).

        ZWG formulated the Revelle buffer differently. The numerator was the ratio of the change in [CO2] to the total [CO2], where [CO2] is the concentration of un-ionized CO2 in sea water, and the denominator was the ratio of the change in DIC to total DIC, i.e., (Δ[CO2]/[CO2])/(ΔDIC/DIC). One of the differences in the ZWG and IPCC formulation was that it removed the restriction to anthropogenic CO2 in R&S’s formula, making the Revelle factor applicable to both natural and anthropogenic CO2 (ACO2). Then IPCC, and by implication ZWG, applied the Revelle factor only to ACO2 and NOT to natural CO2. Silly assumption (6), above.
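        In the ZWG form, the factor is a one-line computation. Plugging in the percentage changes Engelbeen quoted earlier in the thread (about 8% for free CO2, about 0.8% for DIC over 1983-2003) recovers his factor of about 10:

```python
# The ZWG form of the Revelle factor as written above:
#   R = (d[CO2]/[CO2]) / (dDIC/DIC)
def revelle_factor(dco2_frac, ddic_frac):
    return dco2_frac / ddic_frac

# Engelbeen's quoted fractional changes, 1983-2003: ~8% free CO2, ~0.8% DIC.
print(revelle_factor(0.08, 0.008))
```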

        Moreover, ZWG’s formulation required Total Alkalinity to be constant, a condition IPCC ignored. IPCC also confused terms by changing DIC to [DIC] (the concentration of a concentration) and by calling Δ[CO2]/[CO2] the fractional change in seawater pCO2. As ZWG say, “Doubling of pCO2 –> doubling of [CO2]”, NOT “Doubling of pCO2 ⊧ doubling of [CO2]”. Wolf-Gladrow, D., CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 6/24/06, p. 56. ZWG is careful not to confuse pCO2 and [CO2(aq)].

        Most importantly, in all formulations, whether from Revelle & Suess, ZWG, or IPCC, the equations apply only in equilibrium. See silly assumption (4), above. IPCC specifies “after re-equilibration.” AR4, ¶7.3.4.2, p. 531. It continues, saying,

        Due to the slow CaCO3 buffering mechanism (and the slow silicate weathering), atmospheric pCO2 will approach a new equilibrium asymptotically only after several tens of thousands of years (Archer, 2005; Figure 7.12). Id.

        So according to IPCC, the Revelle factor applies only after several tens of thousands of years. This is all quite inane. As I have repeated and you have answered below, pCO2 (which is only atmospheric) and Henry’s Law are unaware of the sequestration of CO2 by the two biological pumps. Those two pumps are NOT connected to the atmosphere as IPCC shows here:

        http://www.rocketscientistsjournal.com/2007/06/_res/10AR4F7-10.jpg

        Those biological pumps feed on ions from the surface layer, not on airborne gaseous CO2. By never being in thermodynamic equilibrium, the surface layer acts to isolate Henry’s Law from ocean chemistry. The surface layer need not be in equilibrium, which IPCC assumes (a) in order to create a bottleneck, (b) creating the bulge in MLO CO2, which still (c) is inadequate to cause the observed global warming, so IPCC (d) makes CO2 initiate warming, releasing the potent greenhouse gas, water vapor, which (e) fortunately for AGW, has no effect on cloud cover. Fiction has a snowballing effect.

        And the ultimate inanity perhaps, the surface layer was never in equilibrium and so cannot and will not re-equilibrate.

        Ferd, your calculations require you first to establish that the surface layer is in equilibrium so that you can rely on the implications of the stoichiometric equilibrium constants (ZWG), and after that you need to show the sensitivity of your calculations to whatever assumptions or calculations you adopt for Total Alkalinity.

        As ZWG says,

        [CO2], [HCO3(-)], [CO3(2-)], and pH can be calculated from DIC and TA. Quoted in Wolf-Gladrow, 6/24/06, p. 50.

        In other words, you cannot calculate [CO2] from DIC without TA.

        FE: See further: http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf

        Your link at Warwick University is to 2004 class notes by Dr. G. P. King. He says,

        In contrast to nitrogen and oxygen, most CO2 of the combined atmosphere–ocean system is dissolved in water (98%). This is because CO2 reacts with water and forms bicarbonate (HCO3(-)) and ca[r]bonate (CO3(2-)) ions. King, p. 2.

        The dissolution of CO2 in water compared to N2 and O2 has to do with their respective solubility coefficients and not because of the chemical equations of carbonates.

        King derives an expression for the Revelle Factor as RF = [DIC]/[CO3(2-)], an uninteresting ratio on the right hand side. See King, eqs. (25) and (26), p. 6. Because that ratio is uninteresting, King has found no inherent property or use for the RF. He merely made RF appear algebraically after a number of assumptions.

        In his derivation, he says Total Alkalinity is approximately constant (Eq. (19), p. 5, which is close enough), and substitutes [DIC] for DIC (Eq. (12)) after assuming [CO2] is negligible (Eq. (20)). He differentiates the result (Eq. (21)), effectively assuming that because [CO2]_ml (ml for mixed layer) is small, its differential must be small! That is not sound mathematics.

        After assuming [CO2] in the ocean (his [CO2]_ml) is small, King says:

        Returning our attention to (18) and taking differentials and dividing by [CO2]_ml it is easily shown that … [Eq. (24)]. Bold added, King, p. 6.

        In this step, the author divides by something he assumed to be negligibly small! This is an incredibly delicate step, and without justification is unsound mathematics.

        The uptake factor expresses the increase in the concentration of total CO2 (i.e., DIC) in seawater corresponding to an increase in the concentration of CO2 (or partial pressure of CO2). See Fig 7.14 in the attached pages. King, p. 7.

        King seems to know pCO2 refers to the gas state. However, the attached pages are missing.

        It is known as the Revelle factor after Roger Revelle, who was among the first to point out the importance of this sensitivity for the oceanic uptake of anthropogenic CO2. King, p. 7.

        Just preceding this sentence, King derives the Revelle factor without regard to the species of CO2. King, ¶2.2.3-¶2.2.4, Eqs. (12) to (26), pp. 5-6. Now he arbitrarily attributes his derivation to ACO2. King is silent about why the Revelle factor might apply to 6 GtC/yr of ACO2 but not to 90 or 210 GtC/yr of natural CO2.

        Dr. King is an American physicist lecturing in climate to undergraduates in a school of engineering in the UK.

        FE: >>It does not depend on surface layer pH or ionization

        Simple proof that you are wrong: make a solution of soda or bicarbonate and add some acetic acid. Lots of CO2 bubbles up, because the amount of free CO2 in the solution gets far beyond Henry’s Law “normal” concentration.

        What you have created is the chemistry of a water bottle bomb, also known as a bubble bomb, among other names. The acid working on the baking soda produces salt water and carbon dioxide that wants to outgas. This can explode a closed household plastic bottle because the pCO2 generated will far exceed one atmosphere per Henry’s Law. Your experiment does NOT show that Henry’s coefficient depends on pH or ionization, as I denied. Henry’s coefficient was the same before and after your experiment, a tabulated, known value if only the bottle were in thermodynamic equilibrium.

        As your own authority, King, says

        α is the solubility of CO2, which is a function of temperature and salinity. King, p. 7.

        That is, Henry’s Coefficient is not known to be a function of pH or ionization. If it is dependent on one of these parameters, it is a third order effect, and a fourth order effect in Henry’s Law after pressure.
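        For concreteness, Henry’s Law as discussed here reduces to a single proportionality; in this sketch the solubility K0 is an assumed round number for fresh water near 25°C, not a claim about any particular dataset:

```python
# Minimal illustration of Henry's Law for CO2: [CO2(aq)] = K0 * pCO2,
# where the solubility K0 depends on temperature and salinity but NOT on pH.
# K0 ~ 0.034 mol/(L*atm) is an assumed round value for fresh water near 25 degC.

K0 = 0.034  # mol/(L*atm), illustrative assumption

def co2_aq(pco2_uatm, k0=K0):
    """Dissolved CO2 (umol/L) in equilibrium with a pCO2 given in microatm."""
    return k0 * pco2_uatm  # uatm x mol/(L*atm) -> umol/L

conc = co2_aq(390.0)  # on the order of 13 umol/L at pCO2 = 390 uatm
```

Note that pH never enters: changing the ionic speciation of DIC changes how much total carbon the water holds, not the solubility coefficient itself.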

        FE: >>it does not accumulate in the atmosphere

        The increase in the atmosphere is measured, the human emissions are double that. But if you assume that some natural cause follows the human emissions at such an incredible fixed ratio, but the human emissions disappear somewhere without leaving a trace in the atmosphere, that is your opinion, not based on any observed facts.

        During the time that the atmospheric CO2 increased 15%, my dog’s breath got 30% more foul. That coincidence does not mean that half the stink in my dog’s breath had anything to do with the CO2 increase.

        I make no such assumption as you suggest. Your incredible fixed ratio does not exist anywhere in the real world. If it did, IPCC would have had evidence it was desperate to find for the MLO bulge to be manmade.

        Instead, IPCC manufactured other evidence to show that the increase in CO2 was caused by human emissions, using two methods other than yours. First, it tried to show that the increase in CO2 paralleled the decline in O2, which was supposed to indicate that fossil fuel combustion reduced O2 by the same amount as it increased atmospheric CO2. Second, IPCC tried to show that the isotopic lightening measured in the increase of CO2 paralleled the rate of emissions from fossil fuel, which was supposed to carry an extra-light isotopic ratio because ancient vegetation preferred 12C over 13C. See Fingerprints on SGW, by clicking on my name. IPCC’s claims are in this chart:

        http://www.rocketscientistsjournal.com/2010/03/_res/AR4_F2_3_CO2p138.jpg

        which is the product of chartjunk. If the right hand ordinates had been honestly drawn, the curves would have looked like these:

        http://www.rocketscientistsjournal.com/2010/03/sgw.html#III_

        and

        http://www.rocketscientistsjournal.com/2010/03/_res/CO2vO2.jpg

        IPCC’s visual correlation implied by parallel traces is incompetent on several levels. Rule 1: never rely on visual correlation. Rule 2: quantify results, here with a mass balance analysis, which IPCC never produced. Rule 3: even if the results tracked one another according to the mass balance analysis, that is merely correlation, which does not establish cause and effect any more than my dog’s breath did.

        Ferd, you need to do a mass balance analysis before you announce the discovery of an incredible fixed ratio, and then establish a cause and effect by showing that (1) the MLO data are global, and (2) eliminating all possible natural causes. Ultimately, (3) you will need to do the same thing once again, because even if ACO2 were the cause of a global pCO2 increase, that doesn’t show that CO2 causes the observed global warming. For that, you will also have to establish (4) that the Sun was not the cause. Lots of luck.

        IPCC’s results are either incompetent or out-and-out fraud, and I’m sorry to say that I don’t think the authors were all that incompetent.

      • Jeff Glassman | September 12, 2011 at 7:34 pm |

        Jeff, thanks for the welcome. And sorry for the delay in reply. Again you have a long exposition of arguments. I will try to respond only to those points where I think there are problems…

        About pCO2(aq):
        First an explanation of how pCO2(aq) is observed: seawater is continuously sprayed into a small amount of air, so that the two rapidly reach equilibrium with each other. CO2 levels are measured (semi-)continuously in the air. That is deemed pCO2(aq). This method was and is used on (currently lots of commercial) sea ships and fixed stations worldwide.

        If pCO2 in the atmosphere is higher than pCO2(aq), then the CO2 flux is net from air to water, or the reverse if pCO2(atm) is lower. One can discuss the local and total fluxes (which depend on wind/mixing speed), but the direction anyway is fixed by the pCO2(aq-atm) difference.
        E.g., for the longer Bermuda series, pCO2(aq) is higher than pCO2(atm) in high summer; the rest of the year the Atlantic Ocean around Bermuda (and most of the Atlantic Gyre) is a net sink for CO2.

        About seasonal effects:
        If you look again to the BATS graph at
        http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
        (BTW that is composed by Bates, from BATS) you will see that there is a huge seasonal variability in all variables. Although difficult to see in the scale of the graphs, the seasonal trend is:

        spring to fall: decrease in pH, increase in pCO2, decrease in DIC
        fall to spring: increase in pH, decrease in pCO2, increase in DIC

        The decrease in DIC during summer months is caused by increased biolife, which uses bicarbonate for shells and CO2 for organics. But at the same time CO2 is set free by the bicarbonate-to-carbonate shells reaction, which lowers the pH and increases pCO2.
        Here you see that pCO2 and DIC are decoupled by biolife, even to the effect that pCO2 increases and DIC decreases at the same time.
        Further, temperature increases pCO2(aq) by about 16 microatm/°C. If that leads to a sea-air flux, that will decrease DIC too, because the CO2 must come from somewhere: pCO2(aq) remains high, as that is temperature dependent, while bicarbonates and carbonates are far less so (as long as carbonates are not saturated), but they will supply CO2 to maintain free CO2 levels at the elevated temperature.

        Anyway, you shouldn’t underestimate the fluxes back and forth caused by the seasons from the mid-latitude oceans. Based on the d13C decline, which is about 1/3rd of what can be expected from fossil fuel use, the continuous CO2 exchange via the THC down/upwelling is about 40 GtC/year, thus leaving about 50 GtC/year back and forth from seasonal changes in ocean temperature.

        About the contribution of humans:
        It only contributes about 3% to the total CO2 flux from land and ocean into the atmosphere, and about half that if you include the leaf water that IPCC introduced and dropped.
        You make the same logic error as many before you: the flux is a back and forth flux, where inputs and outputs are near equal. That doesn’t contribute to any change in atmospheric CO2 levels, as long as total influx and total outflux are equal. It doesn’t matter if the total influx is 100 or 1,000 or 10,000 GtC/yr, only the difference between the influx and outflux is important. And that is easily calculated:

        increase in the atmosphere = natural inputs + human input – natural outputs
        or
        increase in the atmosphere – human input = natural inputs – natural outputs
        or
        4 GtC/yr – 8 GtC/yr = natural inputs – natural outputs = – 4 GtC/yr

        Thus at this moment, and in the past 50 years, the natural outputs are/were 1-7 GtC/yr larger than the natural inputs, including natural variability no matter what the real heights of the natural inputs and outputs were, see:
        http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg

        Thus we know the difference between total CO2 inputs and outputs quite exactly, including the noise. Both are smaller than the human emissions.
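        The budget arithmetic above can be written out explicitly; the figures are the round numbers used in this comment, and only the net natural term is inferred:

```python
# FE's mass-balance argument in arithmetic form. Only the human emission and
# the observed atmospheric increase are taken as known; the net natural term
# is whatever closes the budget, regardless of how large the gross natural
# fluxes are. Figures in GtC/yr, as used in the comment.

human_input = 8.0    # fossil-fuel emissions
atm_increase = 4.0   # observed rise in the atmosphere

# increase = natural_inputs + human_input - natural_outputs
# => natural_inputs - natural_outputs = increase - human_input
net_natural = atm_increase - human_input  # -4 GtC/yr: nature is a net sink
```

The sign of `net_natural` is the whole argument: as long as the observed increase is smaller than the human input, the natural fluxes must net to a sink, whatever their gross sizes.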

        About free CO2 in water:
        Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-).

        Jeff, here you are completely wrong. Henry’s Law is only about the same species in air and water. In this case that is CO2 in air and CO2 in solution. Not bicarbonate and carbonate. Those are different species, which have no intention of escaping from the solution at all. Only CO2 in solution exchanges with CO2 in the atmosphere. That leads to a ratio between the two in steady state, depending on the temperature; that is what Henry’s Law is about. In fact that is what the pCO2(aq) observations show.

        Thus pCO2(aq) is a direct measure for [CO2] in the surface water, if the temperature is known, but that doesn’t tell us anything about DIC.
        That means your discussion of the Revelle factor is based on this erroneous assumption, among some others.

        The ratio between the observed changes, (d[CO2]/[CO2]) vs. (dDIC/DIC), indeed is the Revelle factor. For BATS, indeed a factor of about 10 in change rate.
        That will be further elaborated in the household (bi)carbonate experiment…

        The Revelle factor is applicable to all CO2, anthro or not, but the IPCC and most other people involved (myself included) assume that near all the increase in the atmosphere and thus surface waters is caused by the human emissions…

        So according to IPCC, the Revelle factor applies only after several tens of thousands of years.

        Sorry, but that is a misinterpretation. The dissociation reaction from CO2 to HCO3(-) and CO3(2-) is very fast (fractions of seconds to seconds). That has only an indirect connection with the precipitation out of solution by the formation of CaCO3/MgCO3 by coccoliths, which is a slow process (as an overall net process: much is formed, but much is dissolved again). That is what the IPCC is referring to, nothing to do with the Revelle factor, except that this process influences the ratio between the different carbon species and the pH.

        About the carbonate / acid experiment

        Your experiment does NOT show that Henry’s coefficient depends on pH or ionization as I denied. Henry’s coefficient was the same before and after your experiment, a tabulated, known value if only the bottle were in thermodynamic equilibrium.
        and
        That is, Henry’s Coefficient is not known to be a function of pH or ionization.
        or as expressed by King:
        α is the solubility of CO2, which is a function of temperature and salinity.

        Indeed the solubility of CO2 (that, again, is about free CO2 gas in the liquid) is a function of temperature and salinity, not of pH. But DIC is strongly influenced by pH. That is what the experiment shows. After you have added an acid, 95 to 99% of all carbonate and/or bicarbonate disappeared to the atmosphere as CO2; thus DIC was reduced by some 94 to 98% (depending on the strength of the acid), while the solubility of CO2 is still the same (after temperature re-equilibration), according to Henry’s Law.

        That simply shows that Henry’s Law is about the solubility of CO2 as gas in water and not about the rest of the carbon (as bicarbonate and carbonate) in solution.

        About the mass balance

        As said before, humans emit 8 GtC/year nowadays. The increase in the atmosphere is 4 +/- 2 GtC/year. Because of the law of conservation of matter, and no carbon species escapes to space, nature as a whole is a net sink for CO2. It is that simple. The real, net addition from nature to the atmosphere is zero, nada, nothing.
        No matter how large the contributions from oceans, vegetation, volcanoes,… within a year are, all natural sinks combined were larger over the past 50 years than all natural sources combined. Thus even without knowledge of any individual natural flow, we know that humans are the cause of the increase of CO2 over at least the past 50 years.
        All other observations add to this knowledge and all alternative explanations fail one or more observations.

        About the O2 balance and d13C balance

        Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline

        Not completely right: that only holds for pure carbon (C + O2 -> CO2), not for oil (CnH2n + 3/2·n O2 -> n CO2 + n H2O) and not for natural gas (CH4 + 2 O2 -> CO2 + 2 H2O).
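        Those O2:CO2 ratios follow directly from balancing each combustion reaction; a small sketch:

```python
# Moles of O2 consumed per mole of CO2 produced for the three fuel classes
# named above: pure carbon, a CH2-type oil, and methane. The ratios follow
# from the balanced reaction CcHh + (c + h/4) O2 -> c CO2 + (h/2) H2O.

def o2_per_co2(c_atoms, h_atoms):
    """O2 consumed per CO2 produced when burning CcHh completely."""
    return (c_atoms + h_atoms / 4.0) / c_atoms

coal_like = o2_per_co2(1, 0)    # C + O2 -> CO2               : ratio 1.0
oil_like = o2_per_co2(1, 2)     # (CH2)n + 3/2 n O2 -> ...    : ratio 1.5
natural_gas = o2_per_co2(1, 4)  # CH4 + 2 O2 -> CO2 + 2 H2O   : ratio 2.0
```

So a fossil-fuel mix consumes somewhere between 1 and 2 moles of O2 per mole of CO2 emitted, depending on its hydrogen content.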

        The d13C level is decreasing by the addition of 13C depleted fossil fuels, but the decrease is diluted by the deep ocean exchange with the atmosphere: what goes down in the deep is the current 13C/12C mix, but what is upwelling is the deep ocean mix, which is higher in d13C than the current atmosphere. Based on that difference, one can calculate the deep ocean – atmospheric exchanges:
        http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg

        Some more calculations with O2 and d13C, which show how much CO2 is sequestered by vegetation and how much by the oceans:
        http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf

        Finally…

        Ferd, you need to do a mass balance analysis before you announce the discovery of an incredible fixed ratio, and then establish a cause and effect by showing that (1) the MLO data are global, and (2) eliminating all possible natural causes. Ultimately, (3) you will need to do the same thing once again, because even if ACO2 were the cause of a global pCO2 increase, that doesn’t show that CO2 causes the observed global warming. For that, you will also have to establish (4) that the Sun was not the cause. Lots of luck.

        (1) The MLO data are representative of 95% of the atmosphere, which is global enough I suppose. See the data for lots of stations, air flights, ships, buoys and nowadays satellite data, like here:
        http://www.esrl.noaa.gov/gmd/ccgg/iadv/
        Only in the first few hundred meters over land, CO2 levels are far too variable to be of interest (except for people interested in individual fluxes).
        (2) I have calculated the mass balance, which shows that nature is a net sink for CO2. That effectively eliminates nature as a source.
        (3) That humans are the cause of the CO2 increase doesn’t say anything about its effect on temperature. I suppose that CO2 has some effect, but less than the minimum range of what the current climate models “project”.
        (4) I am pretty sure that current climate models underestimate the role of the sun.

        Points (1) and (2) are proven beyond doubt, but there can be a lot of discussion between “warmers” and “skeptics” about (3) and (4).

      • Ferdinand Engelbeen, 9/17/11, 11:44 am, CO2 Residence Time

        FE: About pCO2(aq) … .

        The method you describe is not unfamiliar to me, and it employs what is called an equilibrator. http://bloomcruise.blogspot.com/2011/07/sampling-co2-in-air-and-sea.html

        JG: Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-).

        to which you responded

        FE: Jeff, here you are completely wrong. Henry’s Law is only about the same species in air and water. In this case that is CO2 in air and CO2 in solution. Not bicarbonate and carbonate. That are different species which have no intention to escape from the solution at all. Only CO2 in solution does exchange with CO2 in the atmosphere. Which leads to a ratio between the two in steady state, depending of the temperature, that is what Henry’s Law is about. In fact that is what the pCO2(aq) observations show. [¶] Thus pCO2(aq) is a direct measure for [CO2] in the surface water, if the temperature is known, but that doesn’t tell us anything about DIC.

        1. So if I am not wrong about some things, but instead completely wrong, I must be wrong about the constitution of DIC.

        IPCC: The marine carbonate buffer system allows the ocean to take up CO2 far in excess of its potential uptake capacity based on solubility alone, and in doing so controls the pH of the ocean. This control is achieved by a series of reactions that transform carbon added as CO2 into HCO3– and CO32–. These three dissolved forms (collectively known as DIC) are found in the approximate ratio CO2:HCO3–:CO32– of 1:100:10 (Equation (7.1)). AR4 Box 7.3 Marine Carbon Chemistry and Ocean Acidification, p. 529.
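        IPCC’s quoted 1:100:10 ratio can be checked against textbook carbonate equilibria; the dissociation constants below (pK1 ≈ 6.0, pK2 ≈ 9.1 for seawater) and the pH of 8.1 are assumed round values, not measurements from any cited dataset:

```python
# Rough check of the quoted CO2:HCO3:CO3 ratio of about 1:100:10 in seawater,
# from the two carbonic acid dissociation equilibria:
#   [HCO3]/[CO2]  = K1 / [H+]
#   [CO3]/[CO2]   = K1*K2 / [H+]^2
# pK1 ~ 6.0 and pK2 ~ 9.1 (seawater scale) and pH ~ 8.1 are assumed round values.

def carbonate_ratios(ph, pk1=6.0, pk2=9.1):
    """Return ([HCO3]/[CO2], [CO3]/[CO2]) at the given pH."""
    h = 10.0 ** (-ph)
    k1 = 10.0 ** (-pk1)
    k2 = 10.0 ** (-pk2)
    return k1 / h, k1 * k2 / h ** 2

hco3, co3 = carbonate_ratios(8.1)  # roughly 126 and 13: order 1:100:10
```

With those round constants the ratio comes out near 1:126:13, the same order as IPCC’s 1:100:10.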

        IPCC: Equation (7.3), relating the fractional change in seawater pCO2 to the fractional change in total DIC after re-equilibration: Revelle factor (or buffer factor) = (Δ[CO2]/[CO2]) / (Δ[DIC]/[DIC]) (7.3). Citations deleted, AR4, ¶7.3.4.2, p. 531.

        So what I did was substitute CO2(aq) for CO2 on p. 529, or for [CO2] on p. 531, and I think this agrees fully with your last sentence. This is reinforced by IPCC’s text in which the left hand side (necessarily) of Eq. (7.3) is identified as the fractional change in seawater pCO2. Thus IPCC refers to [CO2] as the pCO2 in seawater, which is no different than CO2(aq) as I was using the term. Also David Archer, Contributing Author to AR4, Chapter 7, confirms,

        DA: For total CO2, the exchanging species is CO2(aq), which constitutes about 0.5% of the total dissolved CO2 concentration. Archer, D., Daily, seasonal, and interannual variability of sea surface carbon and nutrient concentration in the Equatorial Pacific ocean, p. 10 of 17.

        So my terminology is not wrong. However the Archer citation about CO2(aq) concentration needs to be read in context with his much earlier statement:

        DA: The pCO2 of sea water is determined by its alkalinity, CO2, temperature, and salinity through carbonate equilibrium chemistry. Archer, id., p. 7 of 17.

        that is, pCO2 is not measured. And IPCC’s reference to [DIC] is erroneous, a reference to the concentration of a concentration.

        2. Your view about Henry’s Law and that carbonate and bicarbonate ions have no intention to escape from the solution at all shows a lack of understanding of the physics involved in the distinctly different processes of dissolution and chemical kinetics with their widely separated time scales. And of course, no one claims ions ever escape into the atmosphere.

        Those ions, created within a fraction of a microsecond when a pulse of CO2(aq) displaces the equilibrium, return, left undisturbed, to equilibrium in a matter of a few milliseconds. Mitchell, M.J., et al., A model of carbon dioxide dissolution and mineral carbonation kinetics, 12/11/09, p. 1274, Figure 1. The process should be much faster with agitation of the solution, but the states of dynamic equilibrium would be unrecognizable, difficult to estimate, and perhaps impractical to generalize.

        Dissolution on the other hand proceeds according to the gas transfer velocity, which does account for agitation by wind and surface layer turbulence (not equilibrium). Velocity measurements lie in the region of about 4 to 29 cm/hr, which is equivalent to a one-meter depth of surface water exchanging its gas in about 3 to 25 hours. See Wanninkhof, id., p. 7374.
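        The arithmetic behind that 3-to-25-hour range is simply depth divided by the gas transfer (piston) velocity; a sketch under that interpretation:

```python
# A gas transfer (piston) velocity of k cm/hr means a water column of depth d
# exchanges its dissolved gas on a timescale of roughly (100*d)/k hours.
# Wanninkhof's 4-29 cm/hr range then reproduces the quoted 3-25 hour range
# for a one-meter surface layer.

def exchange_time_hours(piston_velocity_cm_per_hr, depth_m=1.0):
    """Hours for the transfer velocity to sweep a column of the given depth."""
    return depth_m * 100.0 / piston_velocity_cm_per_hr

slow = exchange_time_hours(4.0)   # 25 hours at the low end of the range
fast = exchange_time_hours(29.0)  # about 3.4 hours at the high end
```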

        Within the time scale of Henry’s Law, the surface layer can for all practical purposes instantaneously buffer CO2(aq), not to its equilibrium value, but to the limits of DIC, whether for uptake or outgassing. That is so even constraining the layer to the sluggish, hypothetical state of equilibrium. Similarly, Henry’s Law proceeds effectively instantaneously with respect to diurnal or longer processes involved in weather, climate change, or climate.

        3. I’ll ignore the fact that your method doesn’t measure alkalinity, along with your concerns about biolife and seasonal effects. What no one can ignore is that the input seawater is not in thermodynamic equilibrium. So its ratio of CO2:HCO3(-):CO3(2-) is unknown. The measuring method changes that inlet ratio by equilibrating the sample. You don’t know if the open sea CO2(aq) is 0.5% as Archer says, or 0.1%, or 10%, for that matter. You only know CO2(aq) in the open sea is different than the estimated CO2(aq), otherwise the equilibrator would be a waste of time and money.

        Of course, my critique would be wrong should the surface layer actually be in thermodynamic equilibrium. Thermodynamic equilibrium is a ludicrous assumption, even if explained to a science illiterate, and one of the fatal errors in IPCC’s AGW model. As it stands, arguing as you and IPCC do that pCO2(aq) exists as an alternate theory to Henry’s Law is a corollary to the same false assumption.

        4. On the matter of a partial pressure difference, you are in agreement with others when you say,

        FE: If pCO2 in the atmosphere is higher than pCO2(aq), then the CO2 flux is net from air to water or reverse if pCO2(atm) is lower. One can discuss the local and total fluxes (which depend of wind/mixing speed), but the direction anyway is fixed by the pCO2(aq-atm) difference.

        You give no references, but your notion about CO2 flux is consistent with these sources:

        IPCC: So long as atmospheric CO2 concentration is increasing there is net uptake of carbon by the ocean, driven by the atmosphere-ocean difference in partial pressure of CO2. Bold added, TAR, Ch. 3, Executive Summary, p. 185.

        IPCC: Estimates (4º x 5º) of sea-to-air flux of CO2, computed using 940,000 measurements of surface water pCO2 collected since 1956 and averaged monthly, together with NCEP/NCAR 41-year mean monthly wind speeds and a (10-m wind speed) dependence on the gas transfer rate (Wanninkhof, 1992). Footnote deleted, bold added, AR4, Figure 7.8 caption, p. 523.

        RW: Gas transfer of CO2 is sometimes expressed as a gas transfer coefficient K. The flux equals the gas transfer coefficient multiplied by the partial pressure difference between air and water:
        F = K(pCO2_w – pCO2_a) (A2)
        where K = kL and L is the solubility expressed in units of (concentration/pressure).
        Citations deleted, bold added, Wanninkhof, R., Relationship Between Wind Speed and Gas Exchange Over the Ocean, Appendix, p. 7379.

        Here pCO2_w is pCO2(aq) and pCO2_a is pCO2(atm).
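        The Wanninkhof flux formula quoted above reduces to a sign convention; a minimal sketch in which the transfer coefficient K is an arbitrary illustrative number, not a fitted value:

```python
# The bulk flux formula quoted from Wanninkhof: F = K * (pCO2_w - pCO2_a).
# The sign of the partial pressure difference fixes the direction of the net
# flux (positive = sea to air). K = 0.05 is an arbitrary illustrative value.

def co2_flux(pco2_water_uatm, pco2_air_uatm, k=0.05):
    """Net sea-to-air CO2 flux; units arbitrary for this sketch."""
    return k * (pco2_water_uatm - pco2_air_uatm)

outgassing = co2_flux(420.0, 390.0)  # positive: net flux from sea to air
uptake = co2_flux(360.0, 390.0)      # negative: net flux from air to sea
```

Whether the pCO2_w term is a measurable pressure or merely a “deemed” equilibrator value is exactly the point in dispute above; the formula itself only encodes the sign convention.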

        But as you correctly state, what is measured

        FE: is deemed pCO2(aq). Bold added.

        It has to be deemed to be a partial pressure because it doesn’t exist. It can’t be measured. It can’t exert a pressure to resist pCO2(atm). As logical as it might seem to attribute a flux to a difference in pressure, the analogy is lost when one term is only deemed to exist.

        Besides, this theory that the flux depends on a partial pressure difference is not in accord with Henry’s Law. That Law explicitly depends on pCO2(atm), and says nothing about an alleged pCO2(aq). Henry’s Law does not make the mistake of giving reality to a parameter deemed to exist, especially by bootstrapping a parameter derived from the same law.

        5. Your reference to the isotopic ratio is imaginary, compounding IPCC’s fraud on this so-called fingerprint.

        FE: Based on the d13C decline, which is about 1/3rd of what can be expected from fossil fuel use … .

        I think when pinned down, you just make up data out of whole cloth. Prove me wrong with a reference. When IPCC tried to rely on δ13C, it had to manufacture fraudulent data. See SGW, Part III, Fingerprints, and especially Figure 27 (AR4, Figure 2.3, p. 138).

        6. You make unsupported claims about the physics of solubility and of the thermohaline circulation.

        FE: … the THC down/upwelling is about 40 GtC/year, thus leaving about 50 GtC/year back and forth from seasonal changes in ocean temperature.

        I claim the THC accounts for the 90 GtC/yr given by IPCC and others. I may have been the first to have the THC transport any amount of CO2. I think you just made up the 50/40 split. Prove me wrong by providing a reference.

        7. You disassemble huge carbon mass flows, scattering them into many small differences between large numbers. This is a conjecture essential to IPCC’s model in which natural forces are in balance, a model in which the alleged Revelle Factor magically buffers only manmade CO2 emissions. This conjecture makes man’s CO2 emissions appear far more significant than their insignificant 1.5% to 3% contribution. Your argument is

        FE: Anyway, you shouldn’t underestimate the fluxes back and forth caused by the seasons from the mid-latitude oceans. … You make the same logic error as many before you: the flux is a back and forth flux, where inputs and outputs are near equal.

        You, as well as others, have repeatedly tried to characterize the annual fluxes in the carbon cycle as distributed into a large number of small, nearly zero fluxes. It is, for example, the essence of Takahashi’s beautiful map of faux air-sea fluxes. I am willing to accept that model with respect to land-air fluxes, which are in fact distributed. But you have no rational physical model for the air-sea fluxes of about ± 91 GtC/yr. Your model fails physics.

        Prof. Robert Stewart, Texas A&M University, maintains a commendable, open source, online textbook, Oceanography in the 21st Century. He shows the surface ocean to atmosphere flux as +90/-92 GtC/yr. Part II, ch. 5, p. 2 of 4. He explains:

        Carbon dioxide dissolves into the ocean at high latitudes. CO2 is carried to the deep ocean by sinking currents, where it stays for hundreds of years. Eventually mixing brings the water back to the surface. The ocean emits carbon dioxide into the tropical atmosphere. This system of deep ocean currents is the marine physical pump for carbon. It help[s] pump carbon from the atmosphere into the sea for storage. Stewart, id., p. 3 of 4.

        His air-sea fluxes agree with IPCC, and they are not the sum of small increments. This CO2 flux is a massive river, with an input and an output, contradicting the distributed model. Stewart’s model supports mine in which the flux is controlled by the thermohaline circulation, also known as the MOC. However, because of the underlying poleward cooling currents, physics demands that the CO2 dissolving into the ocean occur over the entire surface, not just at high latitudes as Stewart states. It is at high latitudes that DIC is carried to the deep ocean, as Stewart does say.

        What you refer to as mid-latitude … back and forth flux and as seasonal changes in ocean temperature exist, but add to zero, and are quite negligible. They are also due in major part to the ocean gyres. These flux variations are second order effects that contribute nothing to the first order effect of the +90/-92 GtC/yr pair of air-sea fluxes. You still need to account for them.

        On another point, Stewart’s model is not yet mine because he says,

        I define the deep circulation as the circulation of mass. Of course, the mass circulation also carries heat, salt, oxygen, and other properties. Stewart, id., Ch. 13, Deep Circulation in the Ocean, p. 1 of 7.

        Stewart has omitted the major feature of the transport of CO2, and a major component of the THC/MOC mass. The total MOC flow is 31 Sv. AR4, Box 5.1, p. 397. That’s a potential to outgas over 530 PgC/yr if it were 100% efficient and warmed from 0ºC to 35ºC.

        8. You misunderstand what I wrote about the Revelle factor.

        JG: So according to IPCC, the Revelle factor applies only after several tens of thousands of years.

        FE: Sorry, but that is a misinterpretation.

        If you wanted to be accurate, you might have said that the Revelle statement was wrong. However, the error is IPCC’s, not mine. I summarized IPCC’s words literally and correctly.

        9. You have failed to grasp the error in your model for acid changing solubility.

        FE: But DIC is strongly influenced by pH. That is what the experiment shows. After you have added an acid, 95 to 99% of all carbonate and/or bicarbonate disappeared to the atmosphere as CO2; thus DIC is reduced by some 94 to 98% (depending on the strength of the acid), while the solubility of CO2 still is the same (after temperature re-equilibrium), according to Henry’s Law.

        You ignore the error I pointed out in your experiment to talk around it. You added baking soda and acid, and that is what produced the CO2. It was not released by a change in Henry’s coefficients as you claimed originally, and now try to rehabilitate.

        10. You next quote me accurately but out of context to change IPCC’s alleged combustion fingerprint for your own purposes. Here is the statement from me with your omissions in bold:

        JG: IPCC’s argument is that the decline in O2 matches the rise in CO2 and therefore the latter is from fossil fuel burning. Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2, so the traces should be drawn identically scaled in parts per million (1 ppm = 4.773 per meg (Scripps O2 Program)). SGW, III, Fingerprints.

        My complete statement is about IPCC’s argument. That’s why the sentence you saved says should, not “does”. Having distorted what I said, you say:

        FE: Not completely right: …

        What’s not right is your dishonest quotation. What’s not completely right is IPCC’s argument.

        11. You believe that MLO CO2 concentrations are global because they agree with other stations, not recognizing that investigators intentionally calibrate the other stations to agree with MLO.

        FE: The MLO data are representative for 95% of the atmosphere, which is global enough I suppose. See the data for lots of stations, air flights, ships, buoys and nowadays satellite data, like here: http://www.esrl.noaa.gov/gmd/ccgg/iadv/ .

        Your link to the CO2 measuring station network is a nice resource. It’s an update to the version blessed by IPCC. TAR, Figure 3.7, p. 212. MLO should indeed be representative, not because it IS representative, but because IPCC has seen fit to calibrate the network into agreement:

        The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically <1 ppm. Resolution of such a small signal (against a background of seasonal variations up to 15 ppm in the Northern Hemisphere) requires high quality atmospheric measurements, measurement protocols and calibration procedures within and between monitoring networks (Keeling et al., 1989; Conway et al., 1994). Bold added, TAR, ¶3.5.3, p. 211.

        And

        To aid in interpreting the interannual patterns seen in Figure 4, and in derived CO2 fluxes shown later, we identify quasi-periodic variability in atmospheric CO2 defined by time-intervals during which the seasonally adjusted CO2 concentration at Mauna Loa Observatory, Hawaii rose more rapidly than a long-term trend line proportional to industrial CO2 emissions. The Mauna Loa data and the trend line are shown in Figure 5 with vertical gray bars demarking the intervals of rapid rising CO2. Data from Mauna Loa Observatory were chosen for this identification because the measurements there are continuous and thus provide a more precisely determined rate of change than any other station in our observing program. Also, the rate observed there agrees closely with the global average rate estimated from the nearly pole to pole data of Figure 4 (plot not shown). Bold added, Keeling, et al., Exchanges of Atmospheric CO2 and 13CO2 with the Terrestrial Biosphere and Oceans from 1978 to 2000. I. Global Aspects, SIO Ref. 01-06, June, 2001.

        Your ESRL reference says under Measurement Details, Carbon Dioxide,

        Because detector response is non-linear in the range of atmospheric levels, ambient samples are bracketed during analysis by a set of reference standards used to calibrate detector response.

        That should be sufficient. To bracket: to place within. Thefreedictionary.com.

        MLO is representative of global CO2 measurements by necessity for the AGW conjecture, then by assumption, followed by so-called calibration procedures. You might note that IPCC has not made the calibration data available.

        12. Conclusion: Because your belief system, AGW, is a fiction, it must be supported by a seemingly unending string of fictions, and a concealment of reality.

        Ferd, you have been bamboozled, not once but many times just in this one post – • by the phantom of pCO2(aq) only deemed to exist, • by the unwarranted assumption, both overt and hidden, of thermodynamic equilibrium, • by the broken link between open ocean CO2 measurements and the ocean surface, • by IPCC’s fraudulent reliance on δ13C, • by an imaginary partial pressure difference substituting for ordinary solubility, • by reports of the existence of the failed Revelle factor, • by your own hypothetical experiment to add CO2 and confuse it with Henry’s Law outgassing, • by a network of CO2 stations calibrated into agreement.

        When you put aside the powerful evidence that both Earth’s climate and its climate change are determined by the Sun, and that surface temperatures in both Earth’s cold and warm states are regulated by albedo, to adopt the belief that man is the cause of climate change, you put yourself in the box of having to defend a raft of exceptions to physics and fudged data.

  11. Judith,
    You should have a good answer to Hal’s question. It’s a point of elementary physical chemistry, well set out by your Skeptical Science link:
    “Individual carbon dioxide molecules have a short life time of around 5 years in the atmosphere. However, when they leave the atmosphere, they’re simply swapping places with carbon dioxide in the ocean. The final amount of extra CO2 that remains in the atmosphere stays there on a time scale of centuries.”

    They could also have included exchange with the biosphere, which is significant on the 5 year scale. Isotope exchanges measure that individual molecule scale. What counts is the time taken for the CO2 excess to go.

    98% of the atoms in a human body are exchanged every year. Residence time is a matter of months. That’s what isotopes would measure. We stay around a lot longer.
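    The individual-molecule versus bulk-excess distinction can be put in numbers. A minimal sketch in Python, using assumed round figures (~750 GtC atmosphere, ~150 GtC/yr gross exchange, and an illustrative 100-year e-folding time for the excess; none of these are fitted values):

```python
import math

# Toy two-box numbers (assumed, not fitted): gross exchange sets the
# residence time of an individual molecule; a much slower net uptake
# sets the adjustment time of the bulk excess.
M_atm = 750.0     # GtC in the atmosphere
F_gross = 150.0   # GtC/yr gross outflux to ocean + biosphere
residence_time = M_atm / F_gross   # ~5 years per molecule

k_adjust = 1.0 / 100.0   # assumed net decay rate of an excess (1/yr)

t = 20.0                 # years after a 100 GtC pulse
labelled_left = 100.0 * math.exp(-t / residence_time)  # original molecules left
excess_left = 100.0 * math.exp(-k_adjust * t)          # bulk excess left

print(residence_time)         # 5.0 years
print(round(labelled_left))   # 2  -> almost all original molecules swapped out
print(round(excess_left))     # 82 -> most of the excess still airborne
```

    After twenty years nearly all the original molecules are gone, but most of the excess remains: the two timescales measure different things.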

    • CO2 is a trace gas in a large atmosphere.
      Does Skeptical Science or anyone else mention how much CO2 is in the atmosphere?
      Wiki:
      “Carbon dioxide in earth’s atmosphere is considered a trace gas currently occurring at an average concentration of about 390 parts per million by volume or 591 parts per million by mass. The total mass of atmospheric carbon dioxide is 3.16×10^15 kg (about 3,000 gigatonnes).”

      So various natural processes are emitting hundreds of gigatonnes; plants, even though they consume hundreds of gigatonnes, also emit about 100 gigatonnes per year. The largest absorber of CO2 is the weathering of rocks by rainfall, i.e. CO2 and limestone. The dissolved minerals end up in the ocean, where they are used in the biological processes of all life and also to make shells, which can deposit on the ocean floor and after millions of years form sedimentary rock [such as limestone], which can end up back on the land again and be dissolved by CO2. Rinse and repeat.
      The ocean also absorbs and emits CO2.
      The total amount of CO2 processed per year from biology, weathering, warmer ocean water emitting CO2 and cooler ocean water absorbing CO2 is unknown, though there are rough estimates. But in total it’s on the order of 1000 gigatonnes per year. And tens of gigatonnes are emitted due to human activity per year. So you have 1000 gigatonnes added to a 3,000-gigatonne atmosphere and 1000 gigatonnes removed each year. So roughly a 1/4 of the CO2 emitted by human activity is absorbed each year. The total global CO2 increase per year is about 1/2 the amount of human emission per year [roughly].
      So you have two different answers: about 1/2 of human emission “looks” like it’s being absorbed per year. The second answer comes from looking at the huge pot of CO2 to which human emission is added and how much of the total CO2 is being recycled each year: roughly 1/4 of all CO2 in the atmosphere is turned over yearly.
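      The turnover-versus-net-change arithmetic above can be sketched directly, using the comment’s own round numbers (all values illustrative; with these figures the gross turnover actually works out to roughly a third of the pool):

```python
# Round numbers from the comment above (Gt CO2, all illustrative):
atmosphere = 3000.0   # total atmospheric CO2
gross_flux = 1000.0   # CO2 cycled in AND out by natural processes per year
human = 30.0          # annual human emissions ("tens of gigatonnes")
airborne = 0.5        # observed fraction of human emissions that "stays"

turnover_fraction = gross_flux / atmosphere  # share of the pool swapped yearly
annual_increase = human * airborne           # net growth of the pool

print(round(turnover_fraction, 2))  # 0.33 -> roughly a third turned over yearly
print(annual_increase)              # 15.0 Gt CO2/yr net rise
```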

    • Stop eating for a year, and there wouldn’t be much of you left. You have to intake new mass, and without that new mass to exchange the old mass with in the first place, there would be no turnover; only a slow leaching into decay.

      The same is true for CO2. If a molecule leaves the atmosphere its mass has to be replaced, and where does a replacement come from? The ocean? Then how do you get a rise or decline of bulk CO2 in the first place if exchange is always 1 to 1? That’s the problem. If the residence time of a CO2 molecule is only 5 years (that is the kinetic rate of the sinks), then the response rate of the bulk CO2 amount is also 5 years, as any imbalance in input and output from the atmosphere will change the bulk no slower than the slowest kinetic rate of source or sink. Realize too that this rate can be affected by concentration: the more CO2 is in the atmosphere, the shorter the residence time could become, or put another way, the faster the kinetic rate for uptake by sinks. This depends on the rate law that governs the CO2 equilibria, be it zero order, first order, second order or so forth. Some of these rate laws, like zero order, are independent of concentration and fixed, while others, like first and second order, are concentration-dependent and variable. Which is CO2?

      So, to show that elevated CO2 levels stay elevated, you have to show the kinetic rate and equilibrium constants for all the sources/sinks, and why additional CO2 would suddenly change those rates as would be necessary to say elevated CO2 can last centuries when residence time is 5 years.

      I fear there is a huge lack of understanding for kinetics in these debates, and just what equilibrium actually means.

    • The size of a mixed-layer (upper 50-100m) ocean CO2 reservoir is approximately equal to that of the entire atmosphere, so we can expect the rate of increase in atmospheric CO2 to be half that of emission. And incidentally, that’s what happens. Approximately half of the 14 Gt of CO2 is going somewhere. If it goes into the mixed-layer ocean, it means that the time for the mixed layer to equilibrate with the atmosphere is on the order of weeks to months. How can they claim then that individual molecule residence time is 5 years? Are they nuts?

      • “The size of a mixed-layer (upper 50-100m) ocean CO2 reservoir is approximately equal to that of the entire atmosphere, so we can expect the rate of increase in atmospheric CO2 to be half that of emission.”
        STOP DAMN IT STOP.

        The 14CO2 from the atmospheric H-bomb tests had a t1/2 of 15 years and a t1/4 of 30 years. We know this to be true. It is a fact.
        Now, if the ocean CO2 reservoir were the same size as the atmospheric CO2 reservoir, then it could NEVER drop below half max. If the ocean CO2 reservoir were 10 times bigger than the atmospheric CO2 reservoir, then the 14CO2 end point would be 10% higher than before the bomb tests.
        The end point for 14CO2 is within the noise, 1-2%, of the starting point.
        We therefore KNOW that the ocean CO2 reservoir which is in exchange with the atmosphere is >30 times greater than the atmospheric CO2 reservoir.
        This is true. This is not speculation. This is not pulling figures out of my ass. This is reality. This is basic dilution. The 14CO2 spike from the 50’s until 1965 increased the 14C background by a factor of 10. 45 years later (three t1/2’s), the 14CO2 is back to the background level it was before. So all the 14CO2 generated by the bomb tests has been diluted.
        The dilution due to the increase in atmospheric CO2, from 320 ppm to 390 ppm, explains only 18% of the disappearance; ocean buffering and the biota have taken the rest.
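        The dilution bound being argued here is simple arithmetic: a spike shared with an exchanging reservoir R times the atmosphere’s size settles, once fully mixed, at 1/(1+R) of its peak. A sketch (the function name is ours, for illustration):

```python
# Dilution bound: a spike shared with an exchanging reservoir R times the
# atmosphere's size settles, at full mixing, at 1/(1+R) of its peak.
# (Function name is ours, for illustration.)
def leftover_fraction(reservoir_ratio):
    return 1.0 / (1.0 + reservoir_ratio)

print(round(leftover_fraction(1), 2))   # 0.5  -> equal-size reservoir never drops below half
print(round(leftover_fraction(10), 3))  # 0.091 -> ~9% residual elevation
print(round(leftover_fraction(30), 3))  # 0.032 -> ~3%, near the 1-2% noise floor
```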

    • She does have a good answer to Hal’s question, and she knows it, and it is exactly the source you noted. But she doesn’t like the right answer, so she presents it with a lot of absolutely wrong, uninformed and willfully misinformed gobbledygook, to dredge up page hits by pretending that propaganda from oil- and coal-financed doubt merchants are legitimate scientists or have anything to add to scientific knowledge. In fact, the likes of Hal Doiron and Jack Schmitt do nothing but subtract from the sum of human knowledge, and by lending the credibility of her academic stature to their ilk, Curry aids and abets their fraud.

  12. The Earth, mainly the oceans, has been absorbing about 50% of human emissions in recent decades, so doesn’t it follow from this that if human emissions were to halve, and the 50% rate were maintained, CO2 concentrations would stabilise?

    And in the very unlikely event that they ceased completely, doesn’t it also follow that CO2 concentrations would fall at approximately the same rate that they have risen?

    • “so doesn’t it follow from this that if human emissions were to halve, and the 50% rate were maintained, CO2 concentrations would stabilise?”
      No, if the airborne fraction remains at 50%, CO2 concentrations would rise at 50% of the former rate.

      “doesn’t it also follow that CO2 concentrations would fall at approximately the same rate that they have risen?”
      No, on that logic they would then stabilise. The rate at which they rose was determined by the rate at which we mined and burnt the carbon. That won’t be reflected in any natural redistribution process. Excess CO2 would be absorbed by the sea, but on a different and much longer timescale.
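      A constant airborne fraction makes the first point easy to see in numbers. A sketch with assumed figures (390 ppm start, 4 ppm/yr of emissions, fraction 0.5; all illustrative):

```python
# Constant-airborne-fraction sketch: halving emissions halves the growth
# rate rather than stopping it. Starting 390 ppm and 4 ppm/yr are assumed.
def co2_path(c0, emissions_ppm_per_yr, airborne_fraction, years):
    c = c0
    for _ in range(years):
        c += airborne_fraction * emissions_ppm_per_yr
    return c

full = co2_path(390.0, 4.0, 0.5, 10)  # emissions unchanged for a decade
half = co2_path(390.0, 2.0, 0.5, 10)  # emissions halved for a decade

print(full)  # 410.0 -> rising 2 ppm/yr
print(half)  # 400.0 -> still rising, at 1 ppm/yr
```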

      • Why does CO2 at Mauna Loa fluctuate by as much as 2.3ppm in 7 days, and quite regularly by 1ppm ore more?

        Why would that be a sign of man-made Co2?

      • How about 2ppm over 1 day?

        http://cdiac.ornl.gov/ftp/trends/co2/Jubany_2009_Daily.txt

        Isn’t this supposed to be a well mixed gas?

      • Nick and tempterrain

        The answer to tempterrain’s question is: we don’t know.

        IF (the BIG word) the CO2 half-life in our climate system is really 120 years (upper end of range suggested by Zeke Hausfather at a recent Yale climate forum), this means that the annual decay rate (at the beginning of the decay curve) is 0.58% of the atmospheric concentration, or around 2.2 ppmv today.

        This represents a bit less than half of the CO2 emitted by humans today.

        This tells me that IF (there’s that BIG word again) the increase in atmospheric CO2 is caused primarily by human emissions, and IF we were to reduce these emissions to around half what they are today, the net in and out would be in balance, and the CO2 concentration would stabilize.

        But, hey guys, there’s a lot of IF there.

        IF Professor Salby is right, it all has very little to do with human emissions.

        More IF.

        Max
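        The 120-year half-life arithmetic above can be checked directly (assuming a simple exponential decay toward equilibrium):

```python
import math

# Check of the comment's numbers: a 120-year half-life implies an initial
# decay rate of ln(2)/120 per year, applied to today's ~390 ppmv.
half_life = 120.0
k = math.log(2.0) / half_life   # fractional decay per year
ppm_removed = k * 390.0

print(round(100.0 * k, 2))    # 0.58 (% per year), as stated above
print(round(ppm_removed, 1))  # 2.3 ppmv/yr, close to the ~2.2 quoted
```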

      • Nick,

        No. If human CO2 emissions were reduced by 50% the Earth wouldn’t actually “know” that’s what had happened. Unless you believe in Gaia that is! It’s just coincidental that, at present atmospheric concentrations of CO2, the natural absorption rate is approximately half of emissions. Therefore, if emissions are halved the concentration of CO2 should stay constant.

        By the same logic, neither would the Earth be “aware” that CO2 emissions had stopped completely. In the unlikely event of that happening, CO2 concentrations would start to fall at approximately the same rate as they would have otherwise risen.

      • No it’s simpler than that – essentially Henry’s Law. The Earth “knows” we’re emitting because pCO2 goes up. The top layer of the sea in response takes in CO2 to reach an equilibrium concentration. Then CO2 is transported downward, more absorbed at the surface, and so on. All driven by our addition of CO2. If we add half as much, the driving pCO2 is less, and so are the fluxes.

        Henry’s Law is for equilibrium, and overall this isn’t equilibrium. But the partitioning has worked out to be fairly stable at 50% of added CO2, and that would be the starting point for estimating the effect of an emission reduction.
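        One standard way to see why the mixed layer alone cannot be doing the 50% is ocean carbonate buffering. A sketch with assumed round numbers (Revelle factor ~10, mixed-layer carbon ~900 GtC; both illustrative):

```python
# Buffer-factor sketch (round numbers, all assumed): carbonate chemistry
# means mixed-layer DIC rises only ~1/Revelle as fast, fractionally, as
# atmospheric pCO2, so an equal-size mixed layer cannot take ~50%.
M_atm = 750.0    # GtC, atmosphere
M_mix = 900.0    # GtC, mixed-layer dissolved inorganic carbon
revelle = 10.0   # assumed buffer factor

ocean_share = (M_mix / M_atm) / revelle           # dM_mix per dM_atm at equilibrium
fraction_taken = ocean_share / (1.0 + ocean_share)

print(round(fraction_taken, 2))  # 0.11 -> mixed layer alone takes ~10%;
                                 # transport to depth must do the rest
```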

      • Nick,

        Any argument that relies on the Earth “knowing” anything is a bit suss, IMO.
        If you aren’t convinced by my few lines of argument, maybe you should take a look at:

        http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-ts.pdf

        I’m not saying anything different to what the IPCC have already said. They make the point that to reduce CO2 concentrations, human CO2 emissions have to be reduced by more than half.

  13. Does the residence time even make any difference to anything? Is a CO2 molecule from a fossil fuel substantially different in behavior from a CO2 molecule from a breathing human or animal, or a CO2 molecule from a non-fossil fuel?

    Surely what matters are the quantity of CO2 produced by all means vs the quantity of CO2 consumed by all means and the quantity of CO2 sequestered in the oceans. There are multiple factors and no single “forcing” will necessarily dominate unless the mechanisms to consume and sequester CO2 are overwhelmed.

  14. Norm Kalmanovitch

    The question that needs to be asked is how much of the annual increase of 2ppmv/year is due to humans. There is no published literature that demonstrates conclusively that this is any more than 5% with at least the other 95% being naturally sourced.
    http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
    The seasonal variation in atmospheric CO2 concentration is in the order of 6ppmv over the course of a year and since this is entirely natural and three times the year to year increase of 2ppmv it seems reasonable that using 95% for the amount of increase in CO2 being naturally sourced is likely valid.
    If this is the case then the annual contribution from humans would be no more than 5% of 2ppmv/year or just 0.1ppmv/year.
    Over the past ten years there has been no detectable increase in global temperature on all five global temperature datasets (NCDC, HadCRUT3, GISS, RSS MSU, and UAH MSU) in spite of a 20ppmv increase in atmospheric CO2, so it is highly unlikely that 200 years of human emissions of 0.1ppmv/year producing the same 20ppmv increase in atmospheric CO2 will have any detectable temperature effect.
    One must remember that the global temperature is an absolute temperature value and not the temperature anomaly value that is used in climate discussions. The IPCC 2001 Third Assessment Report stated that the global temperature increase was estimated at 0.6°C +/- 0.2°C per century. This is only 0.006°C/year.
    The absolute global temperature from NCDC plotted by Junk Science
    http://junksciencearchive.com/MSU_Temps/NCDCabs.html
    shows that the annual seasonal variation in global temperature is in the order of 3.9°C/year, or 650 times greater than the year to year increase of just 0.006°C/year.
    Since this seasonal variation is entirely natural and due to the seasonal effect from the significantly larger Northern Hemisphere Landmass, it would only take a change of 1/650 in the completely natural seasonal variation to account for the entire annual change of 0.006°C without invoking any change to the greenhouse effect from the annual 0.1ppmv increase in CO2 from fossil fuel emissions.
    To carry the argument one step further, the increase in atmospheric CO2 concentration from 337ppmv in 1979 to the 390ppmv concentration today should, according to the CO2 forcing parameter of the IPCC climate models, cause a reduction in OLR of precisely 0.782 Watts/m^2, or just 0.025 Watts/m^2/year.
    The measurement of OLR (www.climate4you.com under the heading “global temperatures”, titled “Outgoing longwave radiation Global”) shows that the annual seasonal variation in OLR is in the order of 10 Watts/m^2, which is 400 times what the IPCC states would be the effect from the 2ppmv increase per year and 8000 times greater than the portion attributed to the 0.1ppmv/year human contribution.
    What makes the whole thing totally ridiculous is that even in spite of the mostly natural 2ppmv/year increase in CO2 and the 57.1% increase in CO2 emissions over the past 31 years, there is no detectable decrease in OLR; in fact the OLR has increased over these 31 years, proving conclusively that there has been absolutely zero enhanced greenhouse effect from CO2 increases, human-sourced or otherwise.
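    For what it’s worth, the forcing figure quoted above can be reproduced from the simplified CO2 forcing expression dF = 5.35 ln(C/C0) (Myhre et al. 1998):

```python
import math

# Reproducing the quoted forcing from the simplified expression
# dF = 5.35 * ln(C/C0) (Myhre et al. 1998), 337 -> 390 ppmv:
dF = 5.35 * math.log(390.0 / 337.0)

print(round(dF, 2))         # 0.78 W/m^2, matching the quoted 0.782 to 2 decimals
print(round(dF / 31.0, 3))  # 0.025 W/m^2 per year over the 31 years
```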

    • Man is responsible for 4 ppm/yr based on Gt of fossil fuel burning, so it looks quite easy to conclude who is responsible for the 2 ppm/yr.

      • Jim D

        A bit too “easy”, I’d say. [For an alternate "conclusion", check the suggestion by Murry Salby.]

        Max

      • Max,

        You’re making the same mistake as Nick Stokes in thinking the Earth “knows” that humanity is responsible for a 4ppmv/yr contribution, and has somehow decided to help us out by absorbing 2ppmv/yr!

        The absorption rate is determined by pCO2, as Nick rightly says. That’s just another way of saying that the natural absorption rate is proportional to atmospheric CO2 concentrations and not the rate of human emissions.

        Concentrations would not change that much, in the course of one year, if human CO2 emissions were halved. The natural absorption rate would stay approximately constant, and so would CO2 concentrations. As Jim D says, “it looks quite easy to conclude who is responsible for the 2 ppm/yr.”

        There are really no prizes for getting the answer right to that one!

      • tempterrain,

        If humans add 4 ppmv/year and one measures an increase of 2 ppmv/year, then nature is a net sink of 2 ppmv/year. Thus humans are fully responsible for the increase. If humans were to reduce their emissions to 2 ppmv/year, then we would see stable levels in the atmosphere, but until now humans have emitted twice the amount measured as increase over the past 150 years, with a slightly exponential increase per year over the years. That is the reason that there is no leveling off of the increase in the atmosphere and that the sinks also follow the emissions at a near constant rate.
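        The mass-balance argument above reduces to one line of arithmetic (round ppmv/yr numbers as used in the comment):

```python
# The mass-balance argument, in one line of arithmetic (ppmv/yr, round
# numbers as used in the comment above):
human_emissions = 4.0   # added by fossil fuel burning
observed_rise = 2.0     # measured increase in the atmosphere
natural_net = observed_rise - human_emissions  # all other fluxes combined

print(natural_net)  # -2.0 -> nature is a net sink of 2 ppmv/yr
```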

      • The question that arises is why did the sink efficiency increase by a factor of 3? eg Sarmiento 2010.

        Except for interannual variability, the net land carbon sink appears to have been relatively constant at a mean value of −0.27 PgC yr−1 between 1960 and 1988, at which time it increased abruptly by −0.88 (−0.77 to −1.04) PgC yr−1 to a new relatively constant mean of −1.15 PgC yr−1 between 1989 and 2003/7 (the sign convention is negative out of the atmosphere). This result is detectable at the 99% level using a t-test.

      • Mr. Engelbeen,

        how long do you think we will be able to continue putting 4ppm into the atmosphere even without moronic politicians interfering in drilling and usage?

        Would you estimate even 1000ppm total?? I personally would be very happy if we could get the CO2 level on earth to 1000ppm. Unfortunately the capacity of the earth to use CO2 is not going to allow it, unless you have suggestions for how we can process CaCO3 or other compounds economically to release the CO2?

      • kuhnkat, with the newest drilling techniques (horizontal shale fracking) gas reserves have increased by many decades of supply, and oil is going in the same direction. Thus it looks like there is still cheap energy available for the foreseeable future, including for expanding countries like China, India, Brazil,…

        Thus as long as the emissions are slightly exponentially increasing, I expect that the airborne fraction in the atmosphere remains a fixed percentage of the emissions, thus also slightly exponentially increasing. If the year by year emissions don’t increase, we may expect a fixed level in the atmosphere somewhere in the future (when sinks equal emissions). If we halve our emissions (not very likely) at the current sink rate of 4 GtC/year, then the increase in the atmosphere would be zero, etc…
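        That a slightly exponential emissions curve plus a linear sink yields a near-constant airborne fraction can be shown with a toy integration (both rates below are assumed, chosen only so the analytic answer r/(r+k) comes out to 0.5):

```python
import math

# Toy integration: emissions growing at rate r, linear sink of rate k on
# the accumulated excess. Both rates assumed, chosen so the analytic
# airborne fraction r/(r + k) equals 0.5.
r, k = 0.02, 0.02   # 1/yr
dt = 0.01           # yr
E0 = 1.0
excess, t = 0.0, 0.0
airborne_fraction = 0.0
for _ in range(int(300 / dt)):  # integrate 300 years, long enough to converge
    E = E0 * math.exp(r * t)
    d_excess = (E - k * excess) * dt
    excess += d_excess
    t += dt
    airborne_fraction = d_excess / (E * dt)

print(round(airborne_fraction, 2))  # 0.5 -> constant fraction, as argued above
```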

      • kuhnkat, with the newest drilling techniques (horizontal shale fracking) gas reserves have increased by many decades of supply, and oil is going in the same direction. Thus it looks like there is still cheap energy available for the foreseeable future, including for expanding countries like China, India, Brazil,…

        The alternative natural gas deposits show large initial production rates but deplete rapidly. Oil has the same problem; Bakken deposits have a short production life. And where we do find large deposits, such as the tar sands, these require significant amounts of natural gas to achieve an energy return on investment. Hard oil shale is the worst.

        The cheap and plentiful fossil fuel energy is still coal, and it is getting dirtier with time as we continue to access lower grades of coal.

      • A good example of lower grade of coal is lignite, which is classified somewhere between ancient peat moss and carbonaceous mud.
        When the future is exploiting lignite, we know prospects are bleak:
        http://arkansasnews.com/2011/05/16/lawmaker-sees-promise-in-lignite-as-alternative-fuel/

      • Mr. Engelbeen,

        glad to hear you are not a peak oil fan!! 8>)

      • Webby,

        “When the future is exploiting lignite, we know prospects are bleak:”

        I hear ya. The most successful renewable energy user in Europe counts burning wood chips as renewable and produces more useable energy from them than their enormous wind capacity!!!

      • Norm Kalmanovitch

        If you check the CO2 emissions data you will see that because of the skyrocketing oil price CO2 emissions decreased from 1979 to 1980, and again from 1980 to 1981, and again from 1981 to 1982.
        If you check the CO2 data from Mauna Loa Observatory, this had no effect on the year to year increase in atmospheric CO2 concentration.
        If a decrease in CO2 emissions from fossil fuels three years running does not produce even a deflection in the atmospheric CO2 concentration curve, it must be concluded that the observed increase is definitely not primarily from CO2 emissions from fossil fuels.
        Oceans are a very large depository for CO2, containing far more CO2 than the atmosphere.
        The Argo Buoys deployed in 2003 show a slight overall cooling in the sea surface but an overall increase in the heat content of the oceans.
        Oceans are saturated in CO2, and this saturation is controlled by temperature and pressure, with the deep ocean containing virtually all of the CO2 because of the high pressures. The overall increase in heat content of the oceans leads to increased outgassing of CO2 as the saturation point gets lowered by the increased heat. This is the primary source for the increase in atmospheric CO2 concentration.
        A close look at the infamous Al Gore demonstration of the 650,000 years of ice core data showing warming and cooling cycles and changes in CO2 will show that temperature leads CO2 by about 800 years. This is because it takes time to heat the oceans, which then outgas CO2, creating this 800 year lag.
        There is plenty of physical evidence to demonstrate this; do you have any actual physical evidence to demonstrate your conjecture that “Man is responsible for 4 ppm/yr based on Gt of fossil fuel burning”?

      • 25 Gt CO2 annually emitted from fossil fuels is nearly 1% of what is in the atmosphere, hence 4 ppm/yr. 6 Gt CO2 from the US alone. These are known numbers you can find anywhere for yourself.

      • Norm Kalmanovitch

        The atmosphere contains approximately 2800 Gt of CO2, and each year approximately 750 Gt is added and approximately 750 Gt of CO2 is removed. The annual difference between what is added and what is removed results in an increase or decrease in the year to year concentration. The question is whether changes to this 750 Gt from changes in the 33.158 Gt (2010) human portion of it are primarily responsible for the observed near linear average increase in CO2 concentration of just over 2ppmv/year for the past dozen years. If your contention is correct then year to year changes in CO2 emissions should be seen as year to year changes in CO2 concentration. The devil is in the details, so I have listed the details that cover the data for recent years.
        Current levels of CO2 emissions from fossil fuels for recent years are: 33.158 Gt (2010), 31.3387 Gt (2009), 31.915 Gt (2008), 31.641 Gt (2007), 30.667 Gt (2006) and 29.826 Gt (2005). (BP Statistical Review 2011)
        http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
        Shows that the annual average concentration for these years has been:
        389.78ppmv (2010), 387.36ppmv (2009), 385.57ppmv (2008), 383.72ppmv (2007), 381.86ppmv (2006) and 379.78ppmv (2005).
        If your conjecture that fossil fuel emissions are the dominant source for the increase in atmospheric CO2 concentration is correct, then year to year changes in CO2 emissions should match precisely the year to year changes in CO2 concentration.
        From 2009 to 2010 emissions increased by 1.820 Gt and CO2 increased by 2.42ppmv.
        From 2008 to 2009 emissions decreased by 0.577 Gt but CO2 increased by 1.79ppmv.
        From 2007 to 2008 emissions increased by 0.274 Gt and CO2 increased by 1.85ppmv.
        From 2006 to 2007 emissions increased by 0.974 Gt and CO2 increased by 1.86ppmv.
        From 2005 to 2006 emissions increased by 0.841 Gt and CO2 increased by 2.08ppmv.
        The last two values show that a smaller increase in emissions of 0.841 Gt produced a larger increase of 2.08ppmv, while the greater increase in emissions of 0.974 Gt only produced a 1.86ppmv increase in atmospheric CO2 concentration. This would not happen if CO2 emissions from fossil fuels were the dominant source for observed increases in atmospheric CO2 concentration.
        Far more obvious is the demonstration that the 1.86ppmv increase in concentration with a 0.974 Gt increase in CO2 emissions from fossil fuels is very similar to the 1.79ppmv increase in CO2 concentration from 2008 to 2009, yet during this period there was a year to year reduction in CO2 emissions from fossil fuels of 0.577 Gt!!
        If CO2 emissions from fossil fuel were in fact the prime source for the increase in atmospheric CO2 concentration, a decrease in emissions would cause a proportionate decrease in CO2 concentration, and since this is not the case it is an absolute certainty that some other source within the 750 Gt annual addition of CO2 to the atmosphere is the primary source for the observed average 2.046ppmv increase in atmospheric CO2 concentration of the past six years.
        Don’t make statements that you can’t back up with hard physical evidence!

        If your conjecture that fossil fuel emissions are the dominant source for the increase in atmospheric CO2 concentration is correct, then year to year changes in CO2 emissions should match precisely the year to year changes in CO2 concentration.

        I don’t think you understand how response functions work. To first-order you have to convolve the emissions amount with the CO2 impulse response function. This will smooth out the atmospheric concentration much like a signal processing filter works.
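        As a concrete illustration of the convolution point above, here is a minimal sketch in Python. The impulse response and emissions figures are invented for illustration, not a fitted carbon-cycle model; the point is only that the concentration reflects all accumulated past emissions, so a one-year dip in emissions barely dents it and the concentration keeps rising.

```python
# Sketch: concentration anomaly as a convolution of past emissions with an
# impulse response. All parameters below are illustrative assumptions.
import math

def impulse_response(t):
    """Assumed fraction of an emitted pulse still airborne after t years."""
    return 0.2 + 0.8 * math.exp(-t / 50.0)

def concentration_anomaly(emissions):
    """Convolve yearly emissions (GtC) with the impulse response."""
    return [sum(emissions[k] * impulse_response(n - k) for k in range(n + 1))
            for n in range(len(emissions))]

# Emissions dip in year 10, yet the accumulated concentration rises every year:
emissions = [8.0] * 10 + [7.4] + [8.0] * 9
anomaly = concentration_anomaly(emissions)
growth = [anomaly[i + 1] - anomaly[i] for i in range(len(anomaly) - 1)]
```

Even in the dip year the annual growth stays well above zero, matching the observed 2008-2009 behavior the thread is arguing about.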

        Don’t make statements that you can’t back up with hard physical evidence!

        That is not necessarily the way that research works. A theorist can spend lots of time coming up with a great explanation for some behavior. Since he may be talented at that aspect but perhaps not at running experiments, the researchers who are better at experimental work take on the challenge of proving or disproving the theorist’s assertions.
        More importantly, I would add that one shouldn’t make claims without having a basic understanding of the physics.

      • Norm Kalmanovitch

        There is a seasonal variation in atmospheric CO2 concentration of approximately 6 ppmv over the course of a year. This is easily seen in the detailed measurements from MLO http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
        If you look closely at this graph, or at the actual monthly data, you will see that the variation is not a smooth sinusoid, demonstrating that MLO can pick up changes over time periods as short as a month; year-to-year changes are therefore well represented.
        Simply put, this means that if CO2 emissions from fossil fuels change from year to year, this should produce at least some detectable change in the year-to-year CO2 concentration; since this is not the case, changes in something much larger than the CO2 emissions from fossil fuels must be the primary source of the observed increase.
        If you average the data and smooth out the variations, all you do is remove the evidence that CO2 from fossil fuels is not the prime contributor.
        By the way, there is a three-year period from 1979 to 1982 during which there were consecutive year-to-year decreases in CO2 emissions from fossil fuels with no visible change in the rate of increase of atmospheric CO2 concentration, proving once again that CO2 emissions are not the prime source of the observed increase.
        If you want some more proof the rate of increase in CO2 emissions changed slope around the year 2000 with the previous 10 years having a significantly lower slope than the later 10 years. If you take the slope of the first ten years and compare it to the increase in CO2 concentration you should be able to calculate a rate of increase in concentration per Gt of emissions increase. If CO2 emissions from fossil fuels are the prime driver this same relationship should hold for the ten years after 2000; if it doesn’t then emissions from fossil fuels are definitely not the primary source for the observed increase. I will leave it up to you to do the exercise and try and prove yourself correct.
        If you do not want to go to that trouble, just look at the long-term record.
        The increase in CO2 emissions from fossil fuels was essentially zero prior to 1850, but the CO2 concentration was already increasing at a slowly accelerating rate. How do you explain a hundred years of CO2 concentration following the same accelerating curve, leading to the current asymptotic trend of 2 ppmv/year, when that curve was in place before there were increasing CO2 emissions from fossil fuels?
        I can demonstrate this half a dozen different ways, but if you are a true scientist you only need one to abandon this error-ridden concept and research the actual source of the CO2 concentration increase. If you are not a true scientist but merely a researcher with a preconceived notion, then you will simply dismiss physical evidence and attempt to justify what is clearly false.
        The basic physics is that the CO2 molecule is linear and symmetrical and therefore doesn’t have the permanent dipole moment necessary to allow interaction with all wavelengths radiated by the Earth, and is limited to just a single resonant wavelength band centred on 14.77 microns. Because clouds and water vapour account for well over 90% of the Earth’s 33°C greenhouse effect, the remaining 3.3°C greenhouse effect possibly attributable to CO2 represents at least 80% of the energy within this 14.77 micron band already being accessed, leaving only 20% of 3.3°C further possible effect from CO2 regardless of how large the concentration becomes. This is basic physics, and basic physics trumps fabricated computer models and unfounded estimates of the source of observed CO2 increases.

      • “Simply put this means that if CO2 emissions from fossil fuels change from year to year this should present at least some detectable change in the year to year CO2 concentration”

        There would be change in the year to year CO2 concentration even if CO2 emissions from fossil fuels did not change from year to year. Which may very well make it impossible to detect what you are trying to detect.

      • Norm, you are kind of all over the map here. Especially at the end you bring up CO2 absorption characteristics which has nothing to do with its residence time.
        The small sinusoidal ripple is very easy to fit as it is modeled as a steady state response to a long-term natural variation. In other words, according to signal processing theory, a response function applied to a sinusoidal function in the steady state will result in a scaled version of the original sinusoidal signal. This is one of those mechanisms that is so common that an engineer or scientist should not even bat an eye in doing the analysis.

        From the Fourier transform of the impulse response, one can accurately predict the scale of the periodic excursions of the natural CO2 emissions prior to the filter being applied. Read the amplitude response at the natural frequency and that is the filtering scale factor.
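        The steady-state scaling claim above can be checked numerically. This sketch uses a simple first-order low-pass filter as an invented stand-in for the CO2 response function (the 5-year time constant is an assumption for illustration): a unit sinusoid fed through it comes out as a sinusoid of the same frequency, scaled by the amplitude response |H(f)|, just as the signal-processing argument says.

```python
# Sketch: a linear filter in steady state scales a sinusoid by |H(f)|.
import math

tau = 5.0   # assumed filter time constant (years); illustrative only
f = 1.0     # seasonal forcing: one cycle per year

# Amplitude response of a first-order low-pass filter at frequency f
gain = 1.0 / math.sqrt(1.0 + (2 * math.pi * f * tau) ** 2)

# Numerically filter a unit sinusoid: dy/dt = (x - y) / tau (forward Euler)
dt, t, y = 0.001, 0.0, 0.0
samples = []
while t < 30.0:                       # run well past the transient (~5 tau)
    x = math.sin(2 * math.pi * f * t)
    y += dt * (x - y) / tau
    t += dt
    if t > 29.0:                      # collect one full period at steady state
        samples.append(y)

amplitude = max(abs(s) for s in samples)
# amplitude matches gain to within numerical error
```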

        As far as the noise is concerned, knock yourself out trying to figure out what causes it. It could be measurement noise after the response occurs which would make it problematic to distinguish from random natural events.

      • Simply put this means that if CO2 emissions from fossil fuels change from year to year this should present at least some detectable change in the year to year CO2 concentration and since this is not the case changes in something much larger than the CO2 emissions from fossil fuels is the primary source for the observed increase.

        Norm, I’m on your side on this one: CO2 emissions from fossil fuels do indeed change from year to year, and we should indeed see the effects of these changes within a year, or even six months.

        But let’s sit down and do the math here. The atmosphere weighs 5140 teratons (aka exagrams), and a mole of air weighs 28.97 g, so the atmosphere contains 5140/28.97 = 177 examoles of air, agreed? A millionth of that is 177 teramoles. So a fluctuation of 6 ppmv is 6*177 teramoles, or about one petamole of CO2.

        Since the carbon in a mole of CO2 weighs 12 g, a petamole represents 12 petagrams, aka gigatons, of carbon.

        Now the annual carbon emissions from fossil fuel in 2009 amounted to around 9 gigatons (this year it should hit 10 GtC). So in order to compete with the annual 6 ppmv CO2 fluctuation, mankind would have to suspend all fossil fuel emissions for the year, and even then that would only get you a 4.5 ppmv reduction.
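        The unit conversion underlying this comment can be reproduced in a few lines (figures as given above: 5140 teratons of air, 28.97 g/mol for air, 12 g of carbon per mole of CO2):

```python
# Checking the ppmv-to-GtC conversion used in the comment above.
ATMOSPHERE_MASS_G = 5.140e21   # ~5140 teratons of air, in grams
MOLAR_MASS_AIR = 28.97         # g/mol
MOLAR_MASS_C = 12.0            # g of carbon per mole of CO2

moles_air = ATMOSPHERE_MASS_G / MOLAR_MASS_AIR          # ~1.77e20 mol
gtc_per_ppmv = moles_air * 1e-6 * MOLAR_MASS_C / 1e15   # ~2.13 GtC per ppmv

# The 6 ppmv seasonal swing expressed in gigatons of carbon (~12-13 GtC):
seasonal_swing_gtc = 6 * gtc_per_ppmv
```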

        I don’t know what fluctuations you had in mind, but if you look at the record of global fossil-fuel CO2 emissions maintained by the US Department of Energy’s Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory, you can easily see that any departures from a smoothly growing curve are less than 100 megatons or 0.1 gigaton.

        I fully agree with you that the CO2 level at Mauna Loa should fluctuate with fluctuating fossil fuel emissions. This fluctuation will however be less than 1% of the annual 6 ppmv fluctuation, or at most .06 ppmv.

        Since the month-to-month fluctuations at Mauna Loa are much greater than this, what you’re looking for will be completely masked by random noise.

      • Vaughan,

        Is this a strawman? The change in the year to year CO2 concentration is the annual growth rate. It looks like this:
        http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf

        Does it look like the growth is caused by anthropogenic emissions? Do the math.

      • Edim – are you a strawman? Let me suggest you open an Excel worksheet. Column B will be the years 1960 to 2010. Column C should start at 2.8, and increase by 0.025 a year. Column D should be =2*RAND()-3. E1 should be C1+D1, E2 should be =E1+C2+D2.

        Here, C is meant to represent human emissions, D to represent ocean and ecosystem uptake, and E to represent concentrations. If you then make a column F which is E2-E1, this represents the annual mean growth rate. Plot F versus B. You’ll get something that looks a lot like the annual mean growth rate plot that you link to.

        Now, does it look like the growth in CO2 is due to column C (human emissions)? Do the math.

        -M
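        For readers without Excel handy, the same toy model can be sketched in Python, with the same assumed numbers as the recipe above (emissions starting at 2.8 and growing by 0.025 a year, uptake drawn uniformly between -3 and -1):

```python
# Toy model: steadily growing emissions plus noisy uptake produce a noisy
# annual growth rate that nevertheless trends upward with emissions.
import random

random.seed(0)  # fixed seed so the illustration is reproducible

years = list(range(1960, 2011))                            # column B
emissions = [2.8 + 0.025 * i for i in range(len(years))]   # column C
uptake = [2 * random.random() - 3 for _ in years]          # column D: in (-3, -1)

concentration = []                                         # column E (cumulative)
level = 0.0
for e, u in zip(emissions, uptake):
    level += e + u
    concentration.append(level)

growth = [concentration[i + 1] - concentration[i]
          for i in range(len(concentration) - 1)]          # column F
```

Plotting `growth` against `years` gives the noisy-but-rising shape of the Mauna Loa annual mean growth rate plot, which is the point of the exercise.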

      • Does it look like the growth is caused by anthropogenic emissions? Do the math.

        Yes, let’s resolve this by doing the math. The year-to-year fluctuations in that graph are on the order of 0.5 ppmv, or approximately 1 GtC. Annual fossil fuel emissions are 9 GtC. Nature’s contribution is around 210 GtC, so the total natural and human CO2 emissions come to around 220 GtC.

        So even before looking at anthropogenic emissions, these half-ppmv fluctuations in that graph represent 1/210 < 0.5% of the total natural emissions.

        Do I understand you to be telling nature she’s not allowed to fluctuate by 0.5% of her total annual emissions?

        I don’t know what you’re seeing in that graph that proves your point, but what I’m seeing there is a 0.5% noise level in natural CO2 emissions. Though small in the big picture, this is nevertheless large enough to dwarf the fluctuations Norm is looking for. The signal is simply too noisy to allow him to tell whether those fluctuations are present.

      • Norm, you have echoed the Salby argument about correlations. The data on CO2 rise fit the anthropogenic emissions extremely well when you take into account that the ocean/biosphere sink becomes less efficient when it is warmer. In warm years CO2 rises faster even if emission stays fairly constant because a warmer ocean can’t take up so much.

      • Jim D,

        You need to prove CO2 has reached or near saturation in the ocean in order to make such a statement ‘a warmer ocean can’t take up so much’. The ocean is far from CO2 saturation. I thought AGWers say warmer induce more water evaporation and more precipitation. More precipitation will absorb more CO2 in the air.

        The ocean doesn’t need to reach “saturation” to lower its rate of CO2 uptake. You’re applying only chemical stoichiometry, but failing to consider chemical kinetics (rates of chemical reactions).

      • settledscience,

        I hope you digest what I said and what Jim D said. Did I mention it did not change rates? Do you know the ocean CO2 absorption rates for a 1°C SST difference? If you knew, you would not make such a statement.

      • Sam NC, it is not saturation, it is the equilibrium ratio, that depends on temperature. Warmer temperatures favor keeping more CO2 in the atmosphere versus in the ocean (much like water vapor in this way).

      • Digest this, Sam.

        The ocean doesn’t need to reach “saturation” to lower its rate of CO2 uptake. You’re applying only chemical stoichiometry, but failing to consider chemical kinetics (rates of chemical reactions).

        Jim D told you this too.
        “Sam NC, it is not saturation, it is the equilibrium ratio, that depends on temperature.”

        The equilibrium ratio changes as the rate of emission exceeds the rate of absorption.

        Finally, your claim that due to warming, higher levels of “precipitation will absorb more CO2 in the air” is irrelevant because it is not quantified.

        I thought AGWers say warmer induce more water evaporation and more precipitation. More precipitation will absorb more CO2 in the air.

        Do you believe that absorption is higher than the amount emitted? Based on what study?

        Nothing. You just pulled it out of your arse.

      • Jim D,

        Ah, so now you realize that the CO2 in the atmosphere is not entirely man-made.

      • Sam NC, maybe you just realized that. Everyone else knew that 280 ppm was the natural level, and this cycles between the atmosphere and ocean.

      • Norm,

        Nice analysis. AGWers are having a difficult time responding adequately.

      • Quite the opposite: the calculations above show that the fluctuations at Mauna Loa due to the fossil fuel emission fluctuations Norm asked about, which are on the order of 100-150 megatons of carbon (= 350-550 megatons of CO2), can be responsible for at most 1% of the annual oscillation in the Keeling curve. The noise in the Keeling curve is substantially more than 1%, making a 150-megaton variation in annual carbon emissions essentially invisible.

        Incidentally one thing I forgot to take into account in my calculations is that only half the emissions remain in the atmosphere. So my 1% should have been 0.5%. That only makes fuel fluctuations even less visible.

        Norm is talking about something too small to be observable in the Mauna Loa data.

      • Strawman. Norm is talking about the annual growth rate of atmospheric CO2 and the rise of anthropogenic CO2 emissions (~3 Gt in 1960 and 10 Gt now, according to your link).

      • Yes, Norm is indeed talking about those, as am I. How does that make my calculations a strawman? Are you saying I made a calculation error somewhere?

      • It seems that you are talking about anthropogenic emissions fluctuations (100-150 MtC). He’s talking about the anthropogenic CO2 increase (~4 GtC in 1960 and ~9 GtC now, according to your link). He compares it with the annual atmospheric CO2 growth rate. Does it look like human emissions are causing the atmospheric CO2 growth?
        http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf

      • Edim

        It looks like CO2 annual mean growth varies with temperature.

        Who’d have thunk it?

        Cheers

      • Does it look like human emissions are causing the atmospheric CO2 growth?

        Yes it does, on the assumption that nature is drawing down 45% of fossil fuel emissions (not all of our emissions remain in the atmosphere). Let’s look at the emissions for 2005 since that’s the middle of the rightmost decade in this graph. That was 7.97 GtC. The 55% of that left in the atmosphere is 4.38 GtC. Now 1 ppmv of atmospheric CO2 equals 5.140/28.97*12 ≈ 2.13 GtC. So we should have seen 4.38/2.13 ≈ 2.06 ppmv for 2005.

        Looks like that to me. Are you sure we’re looking at the same graph? Or have I misunderstood what you meant?
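        The arithmetic above is easy to reproduce (the 55% airborne fraction and the 2005 emissions figure are taken from the comment itself):

```python
# Reproducing the comment's back-of-the-envelope: expected 2005 CO2 growth
# in ppmv from fossil fuel emissions and an assumed 55% airborne fraction.
emissions_2005 = 7.97                        # GtC
airborne_fraction = 0.55                     # assumed, per the comment
gtc_per_ppmv = 5.140e3 / 28.97 * 12 / 1e3    # ~2.13 GtC per ppmv

expected_growth = emissions_2005 * airborne_fraction / gtc_per_ppmv
# ~2.06 ppmv/yr, matching the observed annual growth rate in the graph
```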

      • It looks like CO2 annual mean growth varies with temperature.

        I think CH wins this one. I lined up the NH temperature graph from WoodForTrees for 1960-2011 with Edim’s CO2 graph and put the result up at http://thue.stanford.edu/TempCO2corr.JPG .

        While the CO2 isn’t perfectly tracking temperature (after all there are other natural contributors to CO2 such as volcanoes that would throw off any such correlation), it’s still a pretty impressive correlation.

        One very noticeable place where the temperature rises while CO2 goes down is 1990-1991. The cataclysmic explosion of Pinatubo on June 15, 1991 in the Philippines, 5000 miles west of Mauna Loa, might be relevant. In 1992 the NH temperature declines, consistent with a massive ash cloud lasting a couple of years. The CO2 then rises in 1993. Hard to tell what’s going on there. The rest of the half-century doesn’t seem to have as big a reverse correlation as 1990-1991.

  15. Little busy, but have a look at this paper; especially Figure 3.

    http://www.whoi.edu/cms/files/Zhang_et_al_12Feb08_final_DSRII_34850.pdf

    Marine photosynthetic microorganisms fix more than 10x their body mass each year. They move up and down the ‘mixing layer’ during a full seasonal cycle. Moreover, the feces of the animals which eat them fall as ‘snow’ and are oxidized all the way down to the bottom.
    Treating the oceans as a chemical onion misses the whole oceanic biosystem.

  16. If you could follow an isolated CO2 molecule through the atmosphere, ocean, biosphere, etc you would find that it would exchange reservoirs rapidly (of order years or less). However, exchange does not necessarily imply net CO2 drawdown.

        The relevant piece of information for climate change is rather the perturbation timescale of the excess CO2 (i.e., if we throw an extra slug of 100 ppm of CO2 into the atmosphere, how long before it decays back to pre-perturbation levels?). A single number as an answer to this question is not terribly useful, because removal processes act on multiple timescales, ranging from decades to hundreds of thousands of years, and the governing processes range from ocean chemistry to silicate weathering. In fact, it would take many millennia to completely draw down all the excess CO2, as indicated in carbon cycle models and past analogs (e.g. the PETM, which took some 150,000 years to recover). A recent NAS report was dedicated to precisely this issue, looking at the long-term impacts of excess CO2. This page summarizes the relevant timescales and removal processes.
    http://www.nap.edu/openbook.php?record_id=12877&page=75 (and following pages)

    Another review of this topic is in

    David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz,Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil-fuel carbon dioxide., Annual Reviews of Earth and Planetary Sciences 37:117-134, doi 10.1146/annurev.earth.031208.100206, 2009.

        The timescale commonly cited (~100 years) by Fred and others is in fact a poor representation of the carbon cycle, based largely on an application of linear kinetics that tells only part of the story. Equilibration with the ocean takes a couple of centuries (depending also on the size of the perturbation), while something around 25% of the excess CO2 is fated to be removed by slower chemical reactions with CaCO3 and igneous rocks. This is especially relevant for slow feedbacks like ice sheet responses.

        On the extreme short side, the “5 year” timescale for CO2 lifetime is often meant to imply that if we stopped emitting CO2 today, we’d return to 280 ppm in a few years. None of this is in line with anything we know about carbon cycle physics, nor can it be reconciled with observed records of CO2 levels. It’s just as nonsensical as the “anti-greenhouse effect” stuff played up by Claes Johnson or Postma. In fact, a substantial fraction of anthropogenic CO2 will persist in the atmosphere for much longer than a century.

    This, and the underlying long-term temperature response was also the subject of a very good paper by Matthews and Caldeira a couple of years ago, on why it takes near zero emissions to stabilize atmospheric CO2
    https://www.see.ed.ac.uk/~shs/Climate%20change/Data%20sources/Matthews_Caldeira_%20Instant%20zero%20C%20GRL2008.pdf

    • Chris – I haven’t cited a time scale of 100 years, but rather emphasized the multiple trajectories with their different timescales, including the very long tail of the distribution. However, for readers interested in a single number, we have to come up with something, and a number on the order of about 100 years is not unreasonable to convey a sense of the long residence time, even though it has no formal mathematical meaning. No single number can do that, but 100 years may not be too far off from the time it would take about half of the excess to be absorbed, even though that is not a “half life” in the exponential decay sense, and it understates the slowness with which the remaining half would disappear. I don’t think there is much disagreement about the actual nature of the reduction in CO2, as your comments, plus those of some others of us below, make clear.

      • However, for readers interested in a single number, we have to come up with something, and a number in the order of about 100 years is not unreasonable to convey a sense of the long residence time.

        I cringe when I read things like this. It gives science a bad name to overstate its confidence like that. If science doesn’t have a number it shouldn’t make one up.

        The problem with a made up number is that people may start repeating it and pretty soon you’ve got everyone agreeing that that must be the correct number, for no better reason than that everyone says it is.

      • It is not a matter of overstating confidence. It is a matter of what is the best value to characterize a highly non-exponential process. I.e., the main problem is not a lack of knowledge but a lack of a way to express that knowledge by a single number.

      • Agree with Fred and Joel. To sate my curiosity, I created a coupled set of differential equations described with a diagram below in this thread.

        Fat tails are not well described by conventional statistics, as they need not possess finite conventional moments such as the mean and variance. The best I have been able to come up with is presenting the shape of the curve along with a characteristic time that is usually related to a median value. Again this is less than ideal, because you might find a characteristic time of 20 to 30 years while the fat tail still dominates. So we need a canonical representation of the impulse response.
        I would suggest a hyperbolic function as an impulse response curve. Previously I took the fossil fuel emission curve, convolved it with this impulse response, and found that it tracked the atmospheric CO2 over the industrial age.

        From oil discovery to oil production to emission and to sequestering, everything is a compartment model, and it is all connected. It’s kind of magical that the math works on a statistical scale but not surprising.
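        A minimal sketch of that convolution idea, with a hyperbolic (fat-tailed) impulse response. The 30-year characteristic time and the exponentially growing emissions series are invented for illustration, not the commenter's fitted values:

```python
# Sketch: convolving an emissions series with a hyperbolic impulse response.
# A fat tail means even the earliest pulses still contribute appreciably.

def hyperbolic_ir(t, tau=30.0):
    """Assumed fraction of a pulse still airborne after t years: 1/(1 + t/tau)."""
    return 1.0 / (1.0 + t / tau)

def convolve(emissions, ir):
    """Airborne total in year n = sum over past pulses still remaining."""
    return [sum(emissions[k] * ir(n - k) for k in range(n + 1))
            for n in range(len(emissions))]

# Exponentially growing emissions, loosely mimicking the industrial era
emissions = [0.1 * 1.03 ** i for i in range(150)]
airborne = convolve(emissions, hyperbolic_ir)
```

Even at year 150, the hyperbolic kernel retains 1/6 of a 150-year-old pulse, which is the fat-tail behavior the comment describes.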

      • Vaughan,

        do you ever have nightmares of people in white smocks surrounding your home with burning torches chanting??

      • I know the kind you’re thinking of, kuhnkat, but I seriously doubt enough of them could afford the bus fare to California to be much of a nuisance.

        Anyway we’re used to that sort of thing here around Christmas. After a couple of chants we hand out cookies and they move on to the next house.

        However if by some remote chance Ottawa’s ferocious Greenfyre were to come upon the two of us conspiring to confuse the public, I wouldn’t have to be able to run faster than him, only faster than you.

      • Not a problem. I am protected by the Second Amendment. Have fun leading them on a Long Beach or San Francisco Marathon.

      • Dr. Pratt, I feel your pain. I cringe when people use control theory and global averages with simplified radiative physics, neglecting the temporally and spatially varying responses of conduction and convection impacted by pseudo-cyclic perturbations. :)

      • You must be on an appointments and promotions committee, Dallas, you sound like you’ve read one tenure case folder too many.

        Athletes get tendonitis, A&P committee members get tenuritis. Who can afford to pay attention to the rule that the candidate’s case be presented in language that the Board of Trustees can understand? Most institutions would have a hard time maintaining a reasonable senior-junior faculty ratio if they took that rule seriously.

      • I have never been on a blog comment thread for this long where the insight keeps coming. And it really doesn’t matter if it is good insight or misdirection – like entropy and disorder getting in the way, one has to be able to reason around the perturbations.

      • Sorry, I was attempting a pithy summary. :)

      • I am agreeing with you. You have some good ideas, and so keep them coming.

    • In fact, it would take many millennia to completely draw down all the excess CO2, as indicated in carbon cycle models and past analogs (e.g. PETM, which took some 150,000 years to recover).

      Chris, how do the carbon cycle models account for the 4 GtC/yr increase in natural removal of CO2 from the atmosphere, offsetting our 9 GtC/yr by nearly 50%? And how confident are they of their accounting?

      Also is PETM a good analogy to the present? The current rate of increase is over 2 ppmv/yr, and the drawdown is tracking that increase at around 1 ppmv/yr. Unless PETM witnessed something remotely resembling that rate of increase it may also not have experienced the rapid drawdown we’re currently in the middle of.

      We may be entering a time when nothing in the past is a reliable analogy. For all we know a speedy rise may be followed by a speedy decline.

      Or not, but absent suitable precedents it’s hard to say one way or the other. This is a relatively speculative corner of climate science, unlike better understood things like the rate of onset of CO2 and temperature.

      • Vaughan: around 20 to 30% of the CO2 we emit will stay in the atmosphere until slow processes like sedimentation, weathering, and carbonate formation can act – that’s tens of thousands of years.

        The other 70-80% will eventually end up in the oceans and ecosystems, but even that takes time, and is controlled by diffusion rates in the ocean and convective currents – so even to get down to that 20-30% takes decades to centuries.

        I did a rough back-of-the-envelope calculation once, and I think I decided that we’ve already emitted enough CO2 that if we stopped emitting today, we’d slowly relax back to a lower limit of 310-350 ppm CO2 (the low end assumes that land-use change emissions are reversible and only 20% stays in the atmosphere; the upper end includes land-use change emissions and a 30% persistence). The initial drop would be at the current rate of natural uptake, but would slow as it asymptotically approaches that lower limit over centuries… and then after tens of thousands of years it would start dropping below that lower limit as it presumably would eventually return to the 280 ppm preindustrial level.

        -M

      • M, the fallacy in any reasoning that talks about “20 to 30% of the CO2 we emit” is that, as soon as it’s emitted, it is indistinguishable from the 210 GtC nature emits in parallel with our 9 GtC/yr contribution.

        For that reason it only makes sense to talk about 1-1.5% of atmospheric CO2, not about the 20 to 30% of what we no longer own. It doesn’t exist as “ours” any more.

        Nature is currently removing 214 GtC, up 2% from what it was a century ago, and if that were to increase by another 0.5% for some reason, right there we’d see the rate of increase decline by 1 GtC/yr or around 20% of the 5 GtC/yr that carbon is currently increasing by.

        Columbia’s Klaus Lackner, as well as Exxon, have been working on directly extracting CO2 from the atmosphere. Leveraging Nature’s 214 GtC/yr removal program strikes me as a potentially powerful alternative to Lackner’s very labor-intensive approach.

        … if we stopped emitting today … the initial drop would be at current rate of natural uptake,

        Apologies for continuing to contradict you, I’m starting to feel like a wet blanket. However if you have a human pouring 300 tons/sec into a leaky bucket that has a natural leak of 100 tons/sec, and the human turns off the hose, the rate of increase does not drop by 100 tons/sec. It drops by 300 tons/sec because the rate of increase went from +200 to -100.

        If the hose output is reduced suddenly, the filling rate drops initially by whatever the hose rate dropped by, up to and including all of it. Nature has no say in that.

        we’d slowly relax back to a lower limit of 310-350 ppm CO2

        Assuming exponential decay, that’s two parameters: the asymptotic limit (which you gave) and the rate it is approached (which you didn’t). It would be very interesting to see how you arrived not only at the number you gave but also the one you didn’t. (Error bars would be even better but I’ll settle for just the two numbers for starters.)

        For all I know you may have a compelling analysis. I don’t like trying to second guess these things.
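        The leaky-bucket arithmetic above, spelled out with the rates as given (300 tons/sec from the hose, a constant 100 tons/sec leak):

```python
# The bucket example in numbers: turning off the hose changes the fill rate
# by the full hose rate, from +200 t/s to -100 t/s.
inflow, leak = 300.0, 100.0   # tons/sec, as in the comment

rate_with_hose = inflow - leak        # +200 t/s: bucket filling
rate_hose_off = 0.0 - leak            # -100 t/s: bucket draining

drop_in_fill_rate = rate_with_hose - rate_hose_off   # 300 t/s, the hose rate
```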

      • ” if we stopped emitting today … the initial drop would be at current rate of natural uptake,”

        I think you misread me, and we actually agree on this point. What I meant was that if we see 5 GtC natural uptake per year today (along with our 10 GtC emissions), that if we eliminate human emissions, atmospheric loading of CO2 will drop by 5 GtC per year, because the rate of uptake is controlled by the difference between the atmospheric concentration and the concentrations in the various ecosystem, soil, and ocean reservoirs, so, to a first order, the uptake rate in any given year is independent of human emissions in that year (though it does depend on how much was emitted in the previous decades, which controls how far out of equilibrium the atmosphere is).

        “M, the fallacy in any reasoning that talks about “20 to 30% of the CO2 we emit” is that, as soon as it’s emitted, it is indistinguishable ”

        I’ll disagree with you here. Dollars are indistinguishable, but if my bank account is approximately in a steady state (income in equals rent plus food plus entertainment money out), and then my grandmother gifts me $100, I can perfectly well note that 20% of that $100 will end up permanently increasing my retirement account, but the other 80% will go towards more entertainment and food (rent being fixed). There’s a fixed quantity of labile carbon in the ocean/atmosphere/ecosystem/soil system. If I dig up coal and burn it, I have increased that total amount of carbon. I can calculate how that extra carbon will distribute itself between the reservoirs at equilibrium. Therefore, I can talk about how 20% of that extra carbon will remain in the atmosphere after the system reaches its new equilibrium.

        “Nature is currently removing 214 GtC”:
        I also disagree with the way this is formulated: I’d argue that Nature is currently removing only a few GtC. The rest of it is in balance – plants breathe in and breathe out, leaves grow, fall, and decay, carbon enters the ocean and leaves the ocean. An increase in atmospheric concentration disturbs the balance a bit – a little more enters the ocean than leaves, plants grow a little bigger than they would have otherwise – but only a bit. Yes, you can calculate a Gross Primary Productivity of the ecosystem of 120 GtC per year, but I think the net is more informative than the gross for most purposes. Especially when you are talking about the 70 GtC going into (and out of) the ocean. It might be possible to change this, but only at great expense: iron fertilization of the oceans probably won’t actually work (and would be ecologically disruptive), and we are already doing a bunch of terrestrial ecosystem harvesting and storage (eg, making wooden buildings), but the sheer volumes of matter we’d have to store/bury would be… well, in the gigatons per year to make a difference. That’s a lot of trees buried. I’m also unconvinced that Lackner’s work will ever be practical due to thermodynamic energetic constraints as well as the volumes of material the process would create. (I like Caldeira’s thinking: any carbon-capture system that can compensate for our emissions will have to be at least as large as our current fossil infrastructure, if not 3 times as large if you have to deal with the CO2 and not just the carbon)

        “It would be very interesting to see how you arrived not only at the number you gave but also the one you didn’t.”

        My “20-30%” comes from Archer et al. 2009 (http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf). The fossil & land-use emissions I used come from CDAIC (http://cdiac.ornl.gov/). I assumed the unperturbed CO2 concentration was 280 ppm. So, 280 ppm + (347 GtC)*0.2/(2.12GtC/ppm) equals about 310 ppm (lower bound) (upper bound uses 0.3 and included land-use change emissions). I didn’t give an asymptotic approach rate: Archer says “2 to 20 centuries”, but probably looking at the actual model results in his paper will give you a better feel for those rates.
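
        M’s lower-bound arithmetic can be checked in a few lines; every number below is taken from the comment above, nothing is independently sourced:

```python
# Back-of-envelope check of the figures quoted above. The 347 GtC cumulative
# fossil emissions, the 20% long-term airborne fraction (lower bound from
# Archer et al. 2009), and the 2.12 GtC/ppm conversion are all taken from
# the comment; none of them is independently sourced here.
GTC_PER_PPM = 2.12       # atmospheric carbon mass per ppm of CO2
BASELINE_PPM = 280.0     # assumed unperturbed pre-industrial concentration
FOSSIL_GTC = 347.0       # cumulative fossil emissions used in the comment

lower_bound = BASELINE_PPM + FOSSIL_GTC * 0.2 / GTC_PER_PPM
print(round(lower_bound))  # 313, i.e. the "about 310 ppm" lower bound
```

        The upper bound works the same way, with 0.3 in place of 0.2 and land-use emissions added to the cumulative total.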

        -M

      • I think you misread me, and we actually agree on this point. What I meant was that if we see 5 GtC of natural uptake per year today (along with our 10 GtC of emissions), then if we eliminate human emissions, the atmospheric loading of CO2 will drop by 5 GtC per year. The rate of uptake is controlled by the difference between the atmospheric concentration and the concentrations in the various ecosystem, soil, and ocean reservoirs, so, to first order, the uptake rate in any given year is independent of human emissions in that year (though it does depend on how much was emitted in the previous decades, which controls how far out of equilibrium the atmosphere is).

        So far I’m not convinced we agree, but perhaps you can persuade me otherwise. Continuing with my 300 tons/sec bucket example, I believe you’re saying that turning off the 300 tons/sec when there’s a leakage of 100 tons/sec will take 200 tons/sec off the loading. That is, we were adding 200 tons/sec and now we’re not.

        Where we disagree is on the importance of the 100 tons/sec continuing to leak. Whereas you don’t want to count that, I do.

        Let me stop for a second and see whether you still think there’s no difference between our respective viewpoints.

      • Okay. We have a bucket with 100 stones in it. Every day, I add 10 stones, and nature takes away 5 stones. So, under current conditions, the bucket increases by 5 stones per day (eg, it would be at 105 stones tomorrow). If I stop adding stones, nature is still taking away 5 stones per day, so the “loading” of the bucket is decreasing by 5 stones per day (eg, it would be at 95 stones tomorrow).

        Eventually, nature’s uptake will drop below 5 stones per day, because it turns out that the natural uptake is controlled by the difference between bucket 1 (with 100 stones) and bucket 2 (with 50 stones), and once the two buckets are equalized, nature will stop taking stones out of the first bucket. But, to first order, nature’s stone removal tomorrow is not dependent on whether or not I’ve added another 10 stones today: nature will take 5 stones out either way.
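
        The stone-bucket argument can be put in a few lines of code; the rate constant is an illustrative assumption chosen so that a 50-stone difference gives 5 stones per day of uptake:

```python
# Toy version of the two-bucket stone argument above. Natural uptake is
# proportional to the difference between bucket 1 and bucket 2, so the stones
# nature removes on a given day do not depend on whether I add my 10 stones
# that day. K is an illustrative assumption, not a fitted value.
K = 0.1  # fraction of the inter-bucket difference removed per day (assumed)

def one_day(bucket1, bucket2, added):
    """Advance one day: nature removes K*(b1-b2), then `added` stones go in."""
    uptake = K * (bucket1 - bucket2)
    return bucket1 - uptake + added, bucket2 + uptake, uptake

# Nature's removal is 5 stones/day whether I keep emitting or stop:
_, _, uptake_emitting = one_day(100.0, 50.0, 10.0)
_, _, uptake_stopped = one_day(100.0, 50.0, 0.0)
print(uptake_emitting, uptake_stopped)  # 5.0 5.0
```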

        I do think we’re still in agreement: to use your 300 ton/sec minus 100 ton/sec, we’re going from +200 ton/sec to net -100 ton/sec: the 100 ton uptake is constant (in the short term), regardless of whether we’re adding 300 tons or 0 tons. Right?

      • Good information, thanks.

        When I first looked at the IPCC Bern profiles, I thought the CO2 impulse response appeared as a diffusive fat-tailed curve. From what I have learned from you and Bart and Manacker, I decided to bite the bullet and create a mesh of first-order rate equations to model the diffusion to the deeper sequestering sites.
        http://img534.imageshack.us/img534/9016/co250stages.gif
        This model goes on for about 50 stages, with the steady state showing an equal amount of carbon at each stage. The interesting feature is the shape of the atmospheric CO2 curve; this indeed shows a 1/(1+k*sqrt(t)) dependence which is very close to the IPCC Bern model.
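
        A minimal sketch of such a staged mesh, assuming illustrative rate constants rather than the values behind the linked figure:

```python
# Staged slab model of the kind described above: one atmosphere box exchanging
# quickly with the top slab, then a chain of slabs exchanging slowly with each
# other (pure diffusion, no drift). K_TOP and K_SLAB are assumed illustrative
# rates; the stiffness comes from K_TOP being much larger than K_SLAB.
N_SLABS = 50
K_TOP = 1.0      # fast atmosphere <-> top-slab exchange (per year, assumed)
K_SLAB = 0.05    # slow slab <-> slab diffusion (per year, assumed)
DT = 0.01        # forward-Euler step (years), kept small because of stiffness
RATES = [K_TOP] + [K_SLAB] * (N_SLABS - 1)   # one rate per adjacent interface

def airborne_fraction(t_end):
    """Fraction of an initial atmospheric pulse still airborne after t_end years."""
    boxes = [1.0] + [0.0] * N_SLABS          # the whole pulse starts in the air
    for _ in range(int(t_end / DT)):
        flux = [0.0] * (N_SLABS + 1)
        for i, k in enumerate(RATES):
            # each adjacent pair exchanges in proportion to its difference
            f = k * (boxes[i] - boxes[i + 1])
            flux[i] -= f
            flux[i + 1] += f
        boxes = [b + DT * df for b, df in zip(boxes, flux)]
    return boxes[0]

# Rapid initial partitioning, then a slow fat tail, not a single exponential:
print(airborne_fraction(1.0), airborne_fraction(100.0))
```

        The qualitative behavior is the point: a quick drop as the pulse equilibrates with the top slab, then a long diffusive tail as carbon random-walks down the mesh.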

        I also now agree with Bart and Vaughn and a few others who think that the definitional residence time of the CO2 in the atmosphere is a bit of a red herring. The residence time in the atmosphere may be 10 years, but when one factors in the interchange of CO2 across the boundaries of the system, the actual impulse response does acquire these fat tails.
        I think it explains much of what is happening.

      • The interesting feature is the shape of the atmospheric CO2 curve; this indeed shows a 1/(1+k*sqrt(t)) dependence which is very close to the IPCC Bern model.

        This would be very reassuring if it included estimates of (i) any recent increase in vegetation biomass, and (ii) how quickly that vegetation will throw in the towel upon commencement of a program of starving it of CO2.

        If the answer to (i) is “negligible” I would be more comfortable with the estimates people are coming up with based on exponential decays. However in my experience life is not into exponential decay, quite the opposite in fact according to Malthus. (The Black-Scholes-Merton partial differential equation for the price of a financial asset is similarly opposite to what happens in physics.) And not just for human populations but all populations of living creatures including vegetation.

        This is why I’m suspicious of these arguments about residence time: they start from the premise that there is no life on Earth at all, let alone intelligent life. This may be the impression created by Internet blogs, but for my money some vegetables are pretty intelligent. Those Venus flytraps can outsmart flies for example, and generally speaking flies seem smarter than many bloggers, present company excluded of course.

      • (The Black-Scholes-Merton partial differential equation for the price of a financial asset is similarly opposite to what happens in physics.)

        Black-Scholes is derived from the Fokker-Planck equation, the so-called “master equation” used quite often in physics. Financial returns random-walk about some central value. The problem is that Black-Scholes, like many formulas, doesn’t work when game-theoretic strategies and human psychology are involved.

        But that’s beside the point. The set of equations I solved was precisely the Fokker-Planck formulation with drift removed. It is a purely diffusional process set up as a staged slab model. It is only odd in that I have the top layer with different rates than the diffusional slab layers.

        And I agree with you about the residence time. What I devised was a set of somewhat stiff equations, and the stiffness is due to a faster rate interfacing the atmosphere than the slower rate between the slab layers.

        This is a fascinating problem to model and if we don’t do it, the engineers will. If any engineering is done with respect to sequestration they will likely do it, because it is all about solving problems that their bosses lay in front of them. In other words, engineers use models out of necessity and wanting to keep getting paid. I think we are here because we see it as a challenge.

      • WHT, your diagrams are a tad cryptic but I get some of the idea.

        Have you looked into reconciling your modeling with results such as those in the Domingues et al paper I cited earlier?

        The connection I intended between Malthus and Black-Scholes is that both are simply time-reversed versions of what we can observe in simple physical situations, respectively exponential decay and diffusion (but you knew that).

      • Thanks for the ref. The paper does talk about doing detailed balance, which is what I am trying to do. Master equations at the most fundamental level are about doing mass balance and accounting for conservation of material.
        Domingues also includes sequences of volcanic events as disturbances, which means that they are doing more than a single impulse response, and they are looking at more than just the CO2 concentration change. Also, if they are looking at the heat content of the oceans, then that involves inferences a couple of steps further down the road.

        Somebody else mentioned pH with ocean depth, which would tell us about CO2 diffusion through the slabs. That would seem a better comparison, as CO2 would random walk down from the atmospheric disturbances.

      • “I would be more comfortable with the estimates people are coming up with based on exponential decays”

        It is important to remember that the real carbon cycle models are NOT based on exponential decays. The four term exponential Bern cycle approximation is NOT the Bern cycle model: it is a four term exponential FIT TO the real Bern cycle model, for a case where the system is in equilibrium and a pulse of carbon is being added. Real carbon cycle models have plant types which respond to increased CO2 concentrations by changing the modeled stomatal conductance (basically, they can expend less energy, and use less water, to fix the same amount of carbon). Sophisticated models also include nitrogen in the modeling, so that the fertilization effect from increased CO2 drops when nitrogen becomes a limiting factor; even more sophisticated models include the fact that as temperature increases, the rate of decay in leaf litter increases, which makes more nitrogen available. The oceans are modeled at a similar level of detail, with mixed layers, diffusion rates, thermohaline circulation currents into the deep oceans, and biological pumps where organisms suck carbon out of the upper mixed layers and then sink (upon death) into deeper waters, where they are turned back into dissolved inorganic carbon as they decay.

        This is like claiming that IPCC models are wrong because they don’t account for the diurnal cycle because all the Sky Dragon people have looked at are the 1-D approximation equations. No, the real models are actually quite sophisticated – not perfect, but reasonably physically realistic – and it is only the teaching tools that are stripped down to their bare minimums…

      • It is important to remember that the real carbon cycle models are NOT based on exponential decays. The four term exponential Bern cycle approximation is NOT the Bern cycle model: it is a four term exponential FIT TO the real Bern cycle model, for a case where the system is in equilibrium and a pulse of carbon is being added.

        That is a good point. Since most of the papers are vague about the origins of the Bern model, I probably got the impression that the fat-tails were caused by a superposition of a few exponentials. But after looking at it from the perspective of a diffusional slab model going into the earth interacting with a faster steady-state balanced flow between the surface and atmosphere, I can see why they made up that heuristic. It is all a matter of matching some simpler expression to the detailed model.

        Consider that my slab model has 50 layers below the surface and so includes 100 rate flows between the layers, all balanced out in a master equation. Of course this would create an ugly analytical expression for the CO2 impulse response. So instead of selecting a few exponentials to match the response profile, it makes a lot of sense to fit the curve to the natural controlling factor. From the model, this factor is diffusion into the more permanent layers, so we can try something from the 1/sqrt(t) family. This heuristic certainly works well to explain both the Bern impulse response curve and the impulse response from the multiple layer model.

        In my opinion, the exponentials are still there but they are buried in a mesh of layers. The mesh is regular but that doesn’t make the approximating heuristic any easier to derive.
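
        The matching exercise can be illustrated with a crude grid search, assuming from-memory coefficients for the four-term exponential approximation (treat them as placeholders, not authoritative values):

```python
import math

# A Bern-style four-term approximation: a0 + sum of three exponentials.
# These coefficients are quoted from memory of the AR4 Table 2.14 footnote
# and should be treated as assumptions, not authoritative values.
TERMS = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def bern(t):
    """Airborne fraction of a pulse after t years, per the exponential fit."""
    return sum(a * (1.0 if tau is None else math.exp(-t / tau)) for a, tau in TERMS)

def heuristic(t, k):
    """The 1/(1 + k*sqrt(t)) family suggested by the diffusive slab picture."""
    return 1.0 / (1.0 + k * math.sqrt(t))

# Crude grid search for the k minimizing the worst-case mismatch over 1-200 yr.
times = range(1, 201)
target = {t: bern(t) for t in times}

def worst(k):
    return max(abs(target[t] - heuristic(t, k)) for t in times)

best_k = min((k / 1000.0 for k in range(1, 501)), key=worst)
print(round(best_k, 3), round(worst(best_k), 3))
```

        With these assumed coefficients the single-parameter square-root family tracks the sum of exponentials to within a few percent over the first couple of centuries, which is the sense in which the diffusive heuristic "explains" the fat tail.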

  17. Just a few quick points:

    (1) The concept of a single decay time for CO2 is not very good because we use such decay times to characterize exponential decays. The decay of a slug of CO2 is highly non-exponential: almost half of it partitions into the other easily-accessible reservoirs (ocean mixed layer, biosphere) in a matter of months or a few years at most, but the rest decays very slowly…and very non-exponentially, so that even after, say, 1000 years there is still expected to be something like 25% of the original amount (I think “the original” here means the amount after the initial rapid partitioning). You need to read David Archer’s book or papers to understand this.

    (2) Jeff Glassman, up to his usual tricks, tries to complain that something that even a hardened skeptic like Willis Eschenbach (and Hans Erren) agree with is somehow unfathomable: Why does a CO2 pulse decay less rapidly than CO2 exchanges between the atmosphere and the hydrosphere / biosphere? It is really not that complicated. The point is that the ocean mixed layer, the biosphere, and the atmosphere form a subsystem where the CO2 rapidly exchanges back and forth between these reservoirs. This means that when you add a new pulse of CO2 to the atmosphere, it doesn’t take long until it has partitioned between these three reservoirs. However, from then on, the decay rate out of this subsystem is governed by slower processes like exchange between the ocean mixed layer and the deep ocean or absorption by the lithosphere.

    (3) An analogy is useful: Imagine you have three connected containers containing water (and let’s have some sort of pump that mixes the water around between the containers just to make even the exchange of water molecules between containers reasonably rapid). If you add water to one of the three containers, what happens is that the level in all 3 containers goes up. Now, what Jeff would want you to believe is that if you measure the residence time of the molecules in the container that you added water to to be, say, 3 minutes, then the level of the water will decay back down with a characteristic decay time of that value. However, in reality, the level of the water in this example won’t decay at all…except at the much slower rate determined by evaporation of the water out of the containers!
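
    The analogy can be turned into a toy two-reservoir sketch, with assumed illustrative rates, separating the fate of individual molecules from the fate of the level:

```python
# Toy two-reservoir version of the container analogy: fast exchange moves
# individual molecules out of the atmosphere box A quickly, while the level
# of the excess decays only at the slow rate carbon leaks out of the whole
# subsystem. All rate constants are illustrative assumptions.
K_EX = 0.5    # fast A <-> B molecular exchange (per year, assumed)
K_OUT = 0.01  # slow leak from B into the deep reservoir (per year, assumed)
DT = 0.01     # forward-Euler step (years)

a_tagged = 1.0          # pulse molecules that have never yet left box A
a, b = 1.0, 1.0         # total excess level in A and in B
for _ in range(int(10 / DT)):                # integrate ten years
    da = K_EX * (b - a)                      # net exchange barely moves the level
    db = K_EX * (a - b) - K_OUT * b          # only the slow leak drains the system
    a_tagged -= DT * K_EX * a_tagged         # but tagged molecules leave A fast
    a, b = a + DT * da, b + DT * db

# After ten years nearly all the original molecules have cycled out of A,
# yet A's excess level has barely dropped:
print(round(a_tagged, 3), round(a, 2))
```

    This is the whole dispute in two numbers: the molecular residence time (set by K_EX) is a few years, while the decay of the level (set by K_OUT) is on the order of a century.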

    • Joel- The multiple trajectories by which excess CO2 declines toward an equilibrium baseline have been analyzed by David Archer, as you mention. The different rates have been estimated by a variety of models, as the linked article indicates. However, it is easy to assign a minimum lifetime for excess CO2 from a simpler set of observations (Archer mentions this). The current CO2 concentration, slightly exceeding 390 ppm, is about 110 ppm above the baseline for climates of previous centuries. From observational data we know that current emissions, if not absorbed by sinks, would add about 4 ppm per year, but the observed rise has only been about 2 ppm, with the remaining 2 going into sinks. Since the absorption into sinks is a response to the 390 ppm (the sinks don’t care whether they are absorbing old or new CO2), a linear return to baseline would require 110/2 = 55 years. The true rate is of course considerably slower due to the asymptotic nature of the approach to baseline values and to various climate feedbacks. It therefore appears that an “average” value of about 100 – 200 years is a reasonable estimate, as long as it is realized that the true rate involves a long “tail” that declines over many thousands of years.
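
      The back-of-envelope figures above are easy to reproduce, and the same numbers show what first-order (rather than linear) uptake implies:

```python
import math

# Fred's back-of-envelope: 110 ppm of excess removed at a constant 2 ppm/yr
# vanishes in 55 years. If uptake is instead proportional to the remaining
# excess (first-order decay calibrated to today's 2 ppm/yr), those same
# numbers make 55 years the e-folding time, with ~37% of the excess still
# present at year 55 rather than zero.
excess_ppm = 390.0 - 280.0   # excess above the assumed baseline
uptake_ppm_per_yr = 2.0      # current net removal by sinks
linear_years = excess_ppm / uptake_ppm_per_yr
remaining_at_55 = excess_ppm * math.exp(-55.0 / linear_years)
print(linear_years, round(remaining_at_55, 1))  # 55.0 40.5
```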

      Interestingly, of the six links in Dr. Curry’s original post, plus the link to the Dyson/May dialog she added, there is really no substantial disagreement about this long lifetime. Rather, some of the links focus instead on the exchange rate of individual CO2 molecules, which involves a much shorter lifetime, some on the decline of an excess concentration as discussed above, and in the case of both Essenhigh and Dyson, both phenomena are acknowledged and distinguished from each other.

      • “a linear return to baseline would require 110/2 = 55 years”

        That is just awful reasoning. The effective gain is independent of the time constant.

      • What are you referring to? 110/2 = 55 years is simply the arithmetic that would describe the time for all the excess to disappear if its disappearance rate were 2 ppm/year, which is about the current rate. No-one argues that to be the case, but it’s easy to argue that the disappearance rate won’t be faster.

      • The quandary: to explain and have you not understand, or blow it off and allow you to imagine you are anywhere close to being right?

        It’s too late to worry about tonight…

      • Yep. Given realistic emissions scenarios, it looks like we’re heading for 800ppm in a century or so. Dyson has a proposal to mitigate that, one that does not depend on stifling industrial society but rather on a worldwide planting program. His proposal relies on the short residence time of a single CO2 molecule in order to achieve its results, and for his purposes, that’s the correct residence time to use.

        The long-term residence of “net” CO2 is only the “relevant” number if your only policy for mitigation is shutting down fossil-fuel burning. The insistence that this longer duration is correct and the shorter one is incorrect is a sign of ingrained policy bias. If the world environmental community had launched a crusade for planting lots more carbon-sequestering plants, and tried to get an international treaty to set targets for that, etc., then the short-term number would be the “orthodox” answer given to the public.

        Optional Study Question: Since Dyson’s proposal acts fairly quickly, is reversible, and doesn’t require the abandonment of high-density energy sources (and the concomitant impoverishment of the world), why isn’t there more emphasis on figuring out how to do it practically?

      • Where is he gonna go stick it?

      • The insistence that this longer duration is correct and the shorter one is incorrect is a sign of ingrained policy bias.

        I didn’t realize that CO2 follows the rules of political science more closely than it does physics and chemistry.

      • Your sarcasm is misplaced. Read the argument–Dyson’s response to his critic–and then reread what I wrote.

        No one disputes that a given CO2 molecule circulates out of the atmosphere in about five years. The dispute is whether that residence time is relevant, and it is indeed not the relevant time if we want to know how quickly a cutback in CO2 emissions will take effect on the atmosphere.

        But exactly which laws of physics are relevant depends on the proposed mitigation policy. Dyson’s proposal for carbon-eating plants makes the five-year molecular residence time the relevant one. If that were the “baseline” policy proposal on the table, then that would be the “standard” answer to the CO2 residence question. Your inability to understand this point on the first pass bespeaks your intellectual entrenchment in a single policy position.

    • Joel Shore 8/24/11, 9:48 pm, CO2 residence time

      JS: (1) The concept of a single decay time for CO2 is not very good because we use such decay times to characterize exponential decays. The decay of a slug of CO2 is highly non-exponential: almost half of it partitions into the other easily-accessible reservoirs (ocean mixed layer, biosphere) in a matter of months or a few years at most, but the rest decays very slowly…and very non-exponentially, so that even after, say, 1000 years there is still expected to be something like 25% of the original amount (I think “the original” here means the amount after the initial rapid partitioning). You need to read David Archer’s book or papers to understand this.

      You suggest some arbitrary representation, when the result is from first year physics:

      The formula for the residence time from any reservoir, M, at a rate S, is T = M/S. AR4, Glossary, Lifetime, p. 8. Since S = -dM/dt, the formula provides a differential equation for M: dM/M = -dt/T. The solution is M = M_0*exp(-t/T).

      The decay of a slug is exponential and it has one decay time.

      This formula conflicts with the Bern formula also used by IPCC.

      I have read Archer’s work. He appears to have been responsible for the material on carbonate chemistry in AR4, Chapter 7, where he was a Contributing Author. He erred in relying on the equilibrium in the surface layer, but he was facilitating AGW so it would buffer against dissolution, making the MLO bulge anthropogenic, and making the atmospheric CO2 concentration with its weak greenhouse effect sufficient to cause a calamity in just the right time frame.

      Archer’s Chapter 7 only said that 20% may remain in the atmosphere for many thousands of years, but the Technical Summary said as much as 35,000 years was needed for atmospheric CO2 to reach equilibrium(!). But Archer’s paper addressed the time to completely[!] neutralize and sequester anthropogenic CO2. … The mean lifetime of fossil fuel CO2 is about 30-35 kyr. Archer, D., The fate of fossil fuel CO2 in geological time, 1/7/05, p. 11. Earth’s climate is divorced from carbon sequestration, but IPCC incorrectly converted the ocean sequester time into atmospheric residence time.

      JS: (2) Jeff Glassman, up to his usual tricks, tries to complain that something that even a hardened skeptic like Willis Eschenbach (and Hans Erren) agree with is somehow unfathomable: Why does a CO2 pulse decay less rapidly than CO2 exchanges between the atmosphere and the hydrosphere / biosphere? It is really not that complicated. The point is that the ocean mixed layer, the biosphere, and the atmosphere form a subsystem where the CO2 rapidly exchanges back and forth between these reservoirs. This means that when you add a new pulse of CO2 to the atmosphere, it doesn’t take long until it has partitioned between these three reservoirs. However, from then on, the decay rate out of this subsystem is governed by slower processes like exchange between the ocean mixed layer and the deep ocean or absorption by the lithosphere.

      What was unfathomable was you and Eschenbach both insisting over at WUWT that the lifetime of a molecule of CO2 was somehow different than the lifetime of a slug of CO2 molecules. The former is only deduced from the latter.

      Now you’ve confused yourself. You are correct about the pulse of CO2 rapidly distributing into the three reservoirs. But that’s the end of the problem — you should have stopped there. The question in the introduction is

      How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes? Bold added.

      You’re trying to answer a different question: how long does the slug of CO2 take before it is sequestered, before it reaches its final fate. The answer, as I have given previously, is 1.5 years with leaf water and 3.5 years without. Obviously, those times include uptake to the land. However, the IPCC model discusses four different time constants for the lifetime of a pulse of CO2 only in the context of the air-sea flux:

      Consistent with the response function to a CO2 pulse from the Bern Carbon Cycle Model (see footnote (a) of Table 2.14), about 50% of an increase in atmospheric CO2 will be removed within 30 years, a further 30% will be removed within a few centuries and the remaining 20% may remain in the atmosphere for many thousands of years (Prentice et al., 2001; Archer, 2005; see also Sections [¶7.3.4.2 Ocean Carbon Cycle Processes and Feedbacks to Climate] 7.3.4.2 and 10.4) Bold added, 4AR, §7.3.1.2, p. 514.

      I didn’t say anything as foolish as you suggest with your unrecognizable analogy. Kindly quote what you want to critique.

      • Jeff,

        As usual, you have added nothing useful to the discussion…just confusion and obfuscation. I think the posts of Chris, myself, Fred, and the others who are here to enlighten rather than to obfuscate stand on their own.

        It is sad to see people so wedded to their ideology that they are willing to sacrifice science on the altar.

      • Joel Shore 8/25/11, 9:00 pm CO2 residence time

        JS: As usual, you have added nothing useful to the discussion…just confusion and obfuscation. I think the posts of Chris, myself, Fred, and the others who are here to enlighten rather than to obfuscate stand on their own.

        It is sad to see people so wedded to their ideology that they are willing to sacrifice science on the altar.

        Unable or unwilling to defend his ideology as science, Dr. Shore, physicist, takes refuge in an unsupported, vitriolic attack. This response once again reveals how he, along with a few others, some of whom he is willing to out, post here to enlighten, to Spread the Word, according to the Gospel of IPCC and the tracts in advocacy journals, and to bring Salvation to the Unwashed. They are here neither to defend their new religion, nor to engage in any substantial debate over it.

        Uncomfortable in his compromised position, Dr. Shore tries to strengthen it by analogy, comparing criticism to the attacks on evolution by fundamentalism. Just in a single thread, he wrote:

        JS: You really haven’t a clue what you are talking about. You are just throwing around phrases … that seem to be as ignorant as when a Young Earth creationist says that evolution violates the Second Law. Slaying the Greenhouse Dragon. Part IV, Joel Shore, 8/14/11, 10:31 pm.

        JS: While there are some legitimate scientific issues … , most of this is not about science at all. It is simply an attack on science that is inconvenient for some people to accept, just as is the case for evolution… . Id., 8/15/11, 10:22 pm.

        JS: [I]mportant discussions occur in the scientific literature. It is understandable that when bad science, be it challenging evolution or challenging AGW, fails in those venues (or is so bad it could never even get into those venues), the proponents try to take their case directly to the public. It is a way to replace public policy based on science with public policy based on ideologically-inspired nonsense. Id. 8/16/11, 4:32 pm

        The parallel is quite the reverse of what Dr. Shore imagines. Fundamentalism (belief) is to Evolution (science) as skepticism (science) is to AGW (belief). Like the other AGWers, he learned his physics without learning science.

      • Joel Shore 8/25/11, 9:00 pm CO2 residence time

        Errata: The last paragraph should read:

        The parallel is quite the reverse of what Dr. Shore imagines. Evolution (science) is to Fundamentalism (belief) as skepticism (science) is to AGW (belief). Like the other AGWers, he learned his physics without learning science.

      • The parallel is quite the reverse of what Dr. Shore imagines. Evolution (science) is to Fundamentalism (belief) as skepticism (science) is to AGW (belief).

        Yeah…Right. With evolution, you have the National Academy of Sciences and all the other academies and the various scientific professional societies on one side … and for AGW, the same thing is true. Alas, it is not in the direction that you claim it to be. So, the question is: Whose interpretation of the science are you going to believe, these societies or a few ideologues like Jeff who want you to believe that their obvious Right-wing views have nothing to do with their conclusions and that all of the scientific societies have somehow been corrupted and only he and his fellow travelers can see the light!?!

        Oh, and it doesn’t help your case that one of the few “skeptical” scientists who is not talking complete nonsense is on record as saying “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism.” ( http://www.ideasinactiontv.com/tcs_daily/2005/08/faith-based-evolution.html )

      • Joel Shore 8/27/11, 5:39 pm, CO2 residence time

        JS: Yeah…Right. With evolution, you have the National Academy of Sciences and all the other academies and the various scientific professional societies on one side … and for AGW, the same thing is true. Alas, it is not in the direction that you claim it to be. So, the question is: Whose interpretation of the science are you going to believe, these societies or a few ideologues like Jeff who want you to believe that their obvious Right-wing views have nothing to do with their conclusions and that all of the scientific societies have somehow been corrupted and only he and his fellow travelers can see the light!?!

        Oh, and it doesn’t help your case that one of the few “skeptical” scientists who is not talking complete nonsense is on record as saying “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism.” ( http://www.ideasinactiontv.com/tcs_daily/2005/08/faith-based-evolution.html )

        We already figured out that Dr. Shore believes in consensus science, and to the exclusion of science. Slaying the Greenhouse Dragon. Part IV. 8/16/11, 10:15 pm. Because it’s important to him, he should keep on repeating the mantra.

        For other readers, science has nothing to do with consensus forming, voting, or endorsements — nor anything to do with personal foibles of supporters or detractors of one model or another. It has nothing to do with political orientations, left or right, toward models or modelers. Science is about models of the Real World that (1) violate no facts in their domain, and which (2) make predictions for fresh facts. The AGW model fails on both counts.

        Advancements in science always evolve from one person with one idea and no support.

        All the societies and professional journals in the world, every school of science or otherwise, and every media outlet and commentator could be unanimous to a man about a model, and fall, defeated by one person pointing out one error. In AGW the task is complicated only by which fault to choose among a dozen or so, first magnitude, fatal errors. Click on my name in the header, and follow the links to IPCC Fatal Errors and SGW.

        But AGW supporters, committed to a scientific-like model as a matter of belief, find themselves heirs to a multitude of errors from the model owner, IPCC. Constitutionally and professionally unable to admit their mistakes, they respond to scientific challenges not with technical point and counterpoint, but with digressions, irrelevancies, and insults.

      • Jeff: Your dichotomy between “consensus science” and “science” is a false one. You are right that one person can overturn a consensus. However, in order to do so, that person must convince his or her fellow scientists of their point of view. And, until such time, it is imperative that public policy be decided by what the scientists judge to be the best science. The only other alternative is to have science politicized as each political group believes their own “pet” scientists who support their ideologically-driven point-of-view.

        Of course, the fact is that the overwhelming majority of AGW “skeptics” are not really trying to convince scientists, probably because they know that their arguments are too weak to be convincing to scientists…In some cases, like the Slayers, Postma, and arguments that man is not responsible for the current CO2 increase, they are ridiculously so! So, instead, they (you) try to take their case to the public where the techniques of sophistry compete much better against science!

        Let’s face it, your arguments are losing badly in the scientific community (for good reason!), which is why you are trying to go the way of all pseudoscience and say that the public should accept your view of the science and discard the view of the scientific community. And, they should ignore the fact that your ideology is much, much stronger than your science.

      • Joel Shore 8/25/11, 9:00 pm CO2 residence time

        JS: “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism” followed by a link.

        Dr. Shore doesn’t say that his citation is from an article posted on the blog Ideas in Action with Jim Glassman (no relation), dated 8/8/05, which acquired no comments. The author was Roy Spencer.

        Check this:

        “Those who cavalierly reject the theory of evolution,” writes Spencer, “as not adequately supported by facts seem quite to forget that their own theory is supported by no facts at all.” Scopes Trial Transcript, 1925, p. 262, quoting Herbert Spencer, 1852.

        If Roy is descended from Herbert, we have evidence that evolution has no preferred direction.

      • Dr. Shore doesn’t say that his citation is from an article posted on the blog Ideas in Action with Jim Glassman (no relation), dated 8/8/05, which acquired no comments.

        And, that’s relevant or changes how the statement should be interpreted how exactly?

      • Joel Shore 8/28/11, 2:29 pm CO2 residence time

        JS: You are right that one person can overturn a consensus.

        I didn’t say that, and wouldn’t. Who cares what fiction the people believe? I addressed the failure of the model believed by the consensus.

        JS: However, in order to do so, that person must convince his or her fellow scientists of their point of view.

        Where did you get such a notion? Do you know of any example, and take the most famous of all, where the overturning scientist had to go around convincing his fellow scientists? Poppycock.

        JS: And, until such time, it is imperative that public policy be decided by what the scientists judge to be the best science. The only other alternative is to have science politicized as each political group believes their own “pet” scientists who support their ideologically-driven point-of-view.

        You not only believe in consensus science, but in technocracy! What a horrible, and thoroughly discredited, idea! Publicly funded academics in charge of public funds. A little healthy skepticism, augmented by a little science literacy, is enough to move the swing Policymakers, those in the middle, to reject AGW.

        JS: Of course, the fact is that the overwhelming majority of AGW “skeptics” are not really trying to convince scientists, probably because they know that their arguments are too weak to be convincing to scientists…In some cases, like the Slayers, Postma, and arguments that man is not responsible for the current CO2 increase, they are ridiculously so! So, instead, they (you) try to take their case to the public where the techniques of sophistry compete much better against science!

        You make the case for why we don’t vote in science. You also show no skill with distinguishing signal from noise.

        JS: Let’s face it, your arguments are losing badly in the scientific community (for good reason!), which is why you are trying to go the way of all pseudoscience and say that the public should accept your view of the science and discard the view of the scientific community. And, they should ignore the fact that your ideology is much, much stronger than your science.

        There you go again, keeping imaginary score. Because I reason against your simplistic, left wing financial catastrophe, foisted on “Policymakers” to prevent a fantasized calamity, you assume I am following some right wing agenda. I follow no agenda, nor do I deny my reasoning to any, left, right, up, or down. Let science and objectivity prevail.

  18. Chris and Joel give very good explanations for the CO2 residence time.

    The long residence time is a fat-tail effect, very close in temporal behavior to what you would find in radioactive waste decay. The collection of different rates of decay leads some people to believe that removal is fast, due to the initial high-slope decay. However, the fat-tail response is where the lengthy decay takes precedence. (That’s why Fukushima radiation went down fast but will be hanging around for hundreds of years – not the same physics, but a similar temporal response.)

    The key math is when you do a convolution of a CO2 emission forcing function with a fat-tail impulse response. The result will show this peculiar lag that will generate a continual increase in CO2 concentration, long after the forcing function is removed. That is what has everyone spooked — even if we can immediately remove CO2 emissions, the CO2 will continue to increase for years.
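    A minimal numerical sketch of that convolution (the fat-tail kernel here is illustrative, not a fitted carbon-cycle response): emissions ramp up for a century and then stop, and the airborne excess under the fat-tail kernel lingers long after the thin-tail (exponential) excess has collapsed.

    ```python
    import numpy as np

    years = np.arange(300)
    # Emissions ramp up for 100 years, then stop abruptly (arbitrary units).
    emissions = np.where(years < 100, 1.0 + 0.02 * years, 0.0)

    # Two candidate impulse responses: an illustrative fat tail versus a
    # thin-tail single exponential with a ~10-year residence time.
    fat_tail = 1.0 / (1.0 + np.sqrt(years / 10.0))
    thin_tail = np.exp(-years / 10.0)

    # Airborne excess = convolution of the emission forcing with each kernel.
    excess_fat = np.convolve(emissions, fat_tail)[:len(years)]
    excess_thin = np.convolve(emissions, thin_tail)[:len(years)]

    # 50 years after the cutoff the thin-tail excess has collapsed, while
    # the fat-tail excess is still a large fraction of its peak value.
    print(excess_fat[150] / excess_fat[99], excess_thin[150] / excess_thin[99])
    ```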

    • WHT – It is hard to see a reason for CO2 to increase much or for long if emissions cease – see for example Climate Change Commitment . I can envision perhaps a brief increase if the warming from the residual forcing (before it disappears) causes some efflux from sinks, but even that is likely to be minimal. Warming itself should not last long before transitioning to a very gradual cooling that maintains an elevated temperature for millennia.

      On the other hand, if all anthropogenic emissions cease, we do have reason to expect a persistent increase in temperature for a while due to the reduction in cooling aerosols – see Climate Change Commitment 2.

      • Alexander Harvey

        Fred,

        I think that I am fairly consistent in my saying that if we can do little else we should do something about the risk of an aerosol overhang.

        If we face the bizarre prospect of having to use renewable energy to pump sulphates up chimneys then that needs fixing.

        Alex

      • Fred,
        You are looking at the temperature response with that link. That is not the same as the CO2 response. The CO2 response is very latent and has an inertia against change, just check against Archer’s papers. The link you provided also mentions Wigley, and his CO2 curves are shown here:
        http://www.globalwarmingart.com/wiki/File:Carbon_Stabilization_Scenarios_png
        You can definitely see the CO2 concentration continues to increase even if the fossil fuel emissions are cut back. If the temperature is not as sensitive to CO2 then that will not show as strong a latency.

        I have done the calculations myself, and this is what I get if I keep emissions constant.
        http://img39.imageshack.us/img39/9001/co2dispersiongrowth.gif
        The chart also shows first-order kinetics for CO2 sequestering, demonstrating the difference between classical exponential decay and a fat-tail response. Wigley shows pretty much the same thing but he does not have a constant emission scenario.

    • It’s important to distinguish between zero emissions and stabilized emissions.

    • Webby,

      “The result will show this peculiar lag that will generate a continual increase in CO2 concentration, long after the forcing function is removed. That is what has everyone spooked — even if we can immediately remove CO2 emissions, the CO2 will continue to increase for years.”

      Ghosts always spook superstitious people. Where is the empirical evidence to suggest that CO2 would have a fat-tail impulse response? Pekka didn’t even pull citations to support the Revelle Buffer Myth!!

      • Where is your evidence that it is thin-tailed?

        Another gas that is showing a hockey stick rise, Methane, is thin-tailed because it is exothermic and will decompose in a few years, 2 to 10 years is the quoted number I see.
        In comparison, CO2 is relatively inert and is endothermic in breaking down, so it needs special pathways to sequester out of the system. The skeptics quote CO2 residence times also at 2 to 10 years, same as Methane.

        My mind sees an inconsistency here. I would expect CO2 to have a much higher residence time than Methane. Perhaps that CO2 might be closer to the residence time of a relatively inert gas like Nitrous Oxide (N2O) which is quoted anywhere from 5 to 200 years. That will make it a fat-tail because of the large uncertainty in the mean.

        I would guess that the empirical evidence comes from historical forcing functions, such as volcanic events, that generated large impulses of CO2 into the atmosphere. Sample records would show a slow decline over time. I don’t know of any citations off-hand though. So shoot me.

      • Webby,

        I make no claim. YOU claim it is fat tailed. Let’s have something other than arm waving.

      • Chief Hydrologist

        kuhncat,

        This fellow is nothing but arm waving. He assumes a function – applies it in his imagination – and lo and behold there it is exactly as predicted with a fat head – I mean fat tail.

        He guesses that we can find out from volcanic emissions. I think he is a candidate for being thrown in the volcano.

        Cheers

      • CHIEF HYDROLOGIST said this:

        I think he is a candidate for being thrown in the volcano.

        When the death threats start, I know I am touching some nerves.

      • Not really – it is an in joke about not throwing virgins into the volcano to placate the climate gods. Useless gits however qualify.

        You flatter yourself that I would give a rat’s arse about any of your pointless and distracting comments. My only concern is whether I am getting too bored to continue with this nonsense. Well I certainly am – but it is a question of whether the field should be left to a few noxious individuals intent on playing a spoiling role.

        Just now the Numbnut gang is dominating the threads with almost 100% of the recent comment between them. Most of it simply nonsense and insults. I don’t know what the solution is – but it is getting to be uncomfortable scrolling through the large number of rude and abusive posts.

      • I make no claim. YOU claim it is fat tailed. Let’s have something other than arm waving.

        There are two options, thin-tailed and fat-tailed. You asked me to prove that something is fat-tailed, while the thin-tailed advocates haven’t demonstrated any agreement with data. It is all indirect hand-waving evidence by the thin-tail advocates.

      • No – the natural variability is big enough to swallow anthropogenic emissions tail and all.

      • Ah, so now you believe in natural variability.
        So, say you have a single CO2 molecule in the atmosphere. The thin-tail theory is that this molecule only has a 2% chance of still being in the atmosphere after 20 years (given a 5 year residence time). In other words, it is 98% likely to be sequestered out.
        The fat-tail version is that it will have a 50% chance that it will be removed after 20 years. But in the first few years the fat-tail distribution could have a faster apparent sequestering rate.
        It is an interesting premise that the CO2 could bounce around the atmosphere, occasionally reaching the surface resulting in a wide dispersion and natural variability in residence times.
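        The two numbers in that comparison can be checked directly. The exponential (thin-tail) survival follows from a 5-year residence time; the hyperbolic form below is just one hedged example of a fat-tail survival curve, chosen to reproduce the 50% figure rather than derived from carbon-cycle data.

        ```python
        import math

        # Thin tail: probability a given CO2 molecule is still airborne after
        # 20 years, given exponential decay with a 5-year residence time.
        thin_survival = math.exp(-20 / 5)   # ~0.018, i.e. about 2%

        # One hypothetical fat-tail survival curve: hyperbolic, 1/(1 + t/a),
        # with a = 20 years so that half the molecules remain at t = 20.
        def fat_survival(t, a=20.0):
            return 1.0 / (1.0 + t / a)

        print(f"thin: {thin_survival:.1%}, fat: {fat_survival(20):.0%}")
        # At long times the contrast is stark:
        print(f"t=100: thin {math.exp(-100 / 5):.2e}, fat {fat_survival(100):.0%}")
        ```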

      • Chief Hydrologist

        ‘And at some point I didn’t believe in natural variability?’ You are the one who assumes a steady state – it is not true even remotely in biological or other complex systems. You follow up with made-up numbers.

        http://www.nature.com/embor/journal/v9/n1/full/7401147.html

        ‘Feldman remembers watching the first dramatic example of SeaWiFS ability to capture this unfold. The satellite reached orbit and starting collecting data during the middle of the 1997-98 El Niño. An El Niño typically suppresses nutrients in the surface waters, critical for phytoplankton growth and keeps the ocean surface in the equatorial Pacific relatively barren.

        Then in the spring of 1998, as the El Niño began to fade and trade winds picked up, the equatorial Pacific Ocean bloomed with life, changing “from a desert to a rain forest,” in Feldman’s words, in a matter of weeks. “Thanks to SeaWiFS, we got to watch it happen,” he said. “It was absolutely amazing — a plankton bloom that literally spanned half the globe.”‘ http://www.sciencedaily.com/releases/2011/04/110404131127.htm

        You need to understand some science – and not just make it up as you go along.

        You need to understand some science – and not just make it up as you go along.

        I am learning more as I go along, that’s for sure.

        In my comment I was just talking about the unlikelihood of a specific CO2 molecule being conclusively removed from the atmosphere after some lengthy time. I have an alternative derivation that only incorporates a statistical mechanics POV (which I believe is real science), read this
        comment in a thread below.

      • Webby,

        again you make an unsupported statement. That it has to be either fat or thin. No support, just assertion. Bye.

    • Fanney, please read mine, Joel’s, Fred’s, and Nick Stokes comments above. This isn’t even close to getting a cigar.

      • There he goes on about that tobacco industry again. :)

        The answer is that they used 100 years or more to confuse and mislead people. Now they are trying to back track.

      • Chris,
        I think part of the problem with the skeptics is that they only use the tools in their toolbox. So they immediately think response times have to be exponentially damped, and so look at the initial decay, plot that on semi-log paper and come up with residence times on the order of a few to 10 years. The reality as you and others indicate is that the fat-tails completely screw up the classic extrapolation. It’s that guy Segalstad who originally plotted these short residence times (here http://www.co2web.info/ESEF3VO2.htm) and really the embarrassment is on him and the scientists that he listed.
        The concept of fat-tail statistics is not something that you immediately pick up on in school and from shrink-wrap tools, but it comes with experience, and from watching how disorder and randomness plays out in the real world.
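        A hedged sketch of how that extrapolation goes wrong (the fat-tail curve below is illustrative, not a fitted carbon model): fitting only the first few years on a semi-log basis yields a short apparent residence time, yet the true curve at 100 years sits orders of magnitude above the exponential extrapolation.

        ```python
        import numpy as np

        t = np.arange(200)
        # Illustrative fat-tail decay (not a fitted carbon-cycle response).
        fat = (1.0 + t / 5.0) ** -0.5

        # Naive semi-log fit over the first 6 years, as if the decay were
        # a single exponential exp(-t/tau).
        slope = np.polyfit(t[:6], np.log(fat[:6]), 1)[0]
        apparent_tau = -1.0 / slope
        print(f"apparent residence time ~ {apparent_tau:.1f} years")

        # The extrapolated exponential vastly underestimates the real tail.
        print(fat[100], np.exp(-100 / apparent_tau))
        ```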

    It appears this C3 link is also confused about the so-called missing carbon emissions and where they have gone. I did my own convolution-based modeling while carefully accounting for the actual fossil-fuel carbon emissions and discovered that very little CO2 has gone missing since the industrial age started. To get a really good fit, I did have to raise the CO2 baseline to 294, but that may actually be the correct value.
      The writeup originally appeared on my blog:
      http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html

      Could it be that the missing carbon is caused by the assumption of an incorrect residence time on their part? With a long residence time, the impulse response will store much of the CO2 in the atmosphere, which is only slowly sequestered over time.

  19. Essenhigh’s abstract is from: Potential Dependence of Global Warming on the Residence Time (RT) in the Atmosphere of Anthropogenically Sourced Carbon Dioxide Robert H. Essenhigh, Energy Fuels, 2009, 23 (5), pp 2773–2784, DOI: 10.1021/ef800581r, April 1, 2009

    using the combustion/chemical-engineering perfectly stirred reactor (PSR) mixing structure or 0D box for the model basis, . . . With the short (5−15 year) RT results shown to be in quasi-equilibrium, this then supports the (independently based) conclusion that the long-term (100 year) rising atmospheric CO2 concentration is not from anthropogenic sources but, in accordance with conclusions from other studies, is most likely the outcome of the rising atmospheric temperature, which is due to other natural factors.

    Such quantitative combustion/chemical engineering approaches including ALL the sources and sinks are essential to get a grasp of the overall fluctuations and consequent net differences.

    Essenhigh references Tom Segalstad http://www.co2web.info/
    Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the “Greenhouse Effect Global Warming” dogma.

    The apparent annual atmospheric CO2 level increase, postulated to be anthropogenic, would constitute only some 0.2% of the total annual amount of CO2 exchanged naturally between the atmosphere and the ocean plus other natural sources and sinks. It is more probable that such a small ripple in the annual natural flow of CO2 would be caused by natural fluctuations of geophysical processes.

    See also Tom V. Segalstad: Correct Timing is Everything – Also for CO2 in the Air
    Referencing Essenhigh see:
    Ryunosuke Kikuchi, External Forces Acting on the Earth’s Climate: An Approach to Understanding the Complexity of Climate Change, Energy & Environment, Volume 21, Number 8 / December 2010

    The Intergovernmental Panel on Climate Change defines lifetime for CO2 as the time required for the atmosphere to adjust to a future equilibrium state, and it gives a wide range of 5-200 years; however, a number of published data show a short lifetime of 5-15 years. This implies that anthropogenic emissions of CO2 are sequestrated more easily than expected,

    Alan Carlin, A Multidisciplinary, Science-Based Approach to the Economics of Climate Change Int. J. Environ. Res. Public Health 2011, 8, 985-1031; doi:10.3390/ijerph8040985
    Carlin summarizes Essenhigh & Segalstad etc.

    SOURCES AND SINKS OF CARBON DIOXIDE by Tom Quirk, Icecap.us

    The results suggest that El Nino and the Southern Oscillation events produce major changes in the carbon isotope ratio in the atmosphere. This does not favour the continuous increase of CO2 from the use of fossil fuels as the source of isotope ratio changes. The constancy of seasonal variations in CO2 and the lack of time delays between the hemispheres suggest that fossil fuel derived CO2 is almost totally absorbed locally in the year it is emitted.

    Fred Haynie posts detailed models of CO2 driven by natural causes, especially polar fluctuations, not anthropogenic. His analysis of the different shapes between Arctic, tropics and Antarctic is thought provoking as the primary CO2 drivers. http://www.kidswincom.net/climate.pdf

  20. Isn’t CO2 residence time a property of the system rather than an inherent property of CO2 molecules? If so, then it will vary and can be made to vary.

  21. Chief Hydrologist

    It is a simple stocks and flows problem – one that could in principle be modelled with commercial software such as STELLA.

    The sources are:

    ‘80.4 GtC by soil respiration and fermentation (Raich et al., 2002)
    38 GtC and rising by 0.5 GtC per annum by cumulative photosynthesis deficit(Casey, 2008)
    by post-clearance deflation (See Eswaran, 1993)
    7.8 GtC (IPCC, 2007 – Needs peer reviewed reference)
    2.3 GtC by process of deforestation (IPCC, 2007; Melillo et al., 1996; Haughton & Hackler, 2002)
    0.03 GtC? by Volcanoes
    by Tectonic rifts
    by multi-cellular Animal Respiration
    by multi-cellular Plant Respiration

    The sinks are:

    120 GtC by Photosynthesis (Bowes, 1991)
    By Ocean Carbonate Buffer

    Source: Wikipedia – it seems a reasonable list

    I added multi-cellular to distinguish between complex and simple organisms. The animal/plant distinction applies to both complex and simple organisms, as it is based on the distinction between trophic (food) sources: food from photosynthesis or from eating other organisms – autotrophs and heterotrophs.

    You would have to make some fairly heroic assumptions about how these things change with time, temperature and silicate weathering. One thing is for sure – this is not a problem of average decay rates or long tailed statistics. It is – as I said – a problem of stocks and flows.

    Consider the negative feedback of carbonic acid in rainwater – it increases silicate weathering and therefore carbonate buffering in the oceans – decreasing atmospheric CO2 concentration and deep sequestration at the same time – reducing carbonic acid in rainwater etc. Hydrogeologically speaking, the time scale could be significant.

    There seem to be quite a few back-of-the-envelopes blowing through this thread – but I doubt that the quality of the data supports a number, low or high, to the level of certitude shown for the insubstantial fluff seen in this post.

    As far as I am concerned this is more angels on pinheads discourse by the usual suspects. Perhaps that should be pinheads on angels – as I obviously have yet to renounce my general distemper at these proceedings.

    • Quantum Gravity Treatment of the Angel Density Problem

      http://improbable.com/airchives/paperair/volume7/v7i3/angels-7-3.htm

      • Chief Hydrologist

        ‘The only reason angels can fly is that they take themselves so lightly’ – so I would dispute the mass-of-angels assumption. But then I guess I’m being generally disputative.

      STELLA can’t solve this problem because it doesn’t allow fat-tailed impulse responses and has no way to add dispersion.

      One thing is for sure – this is not a problem of average decay rates or long tailed statistics. It is – as I said – a problem of stocks and flows.

      That is essentially what is known as compartment modeling. The two usual approximations in compartment modeling are either to assume a constant flow or to assume a flow proportional to the amount remaining (the stock). If CO2 is modeled as being fairly inert to sequestering, it ends up showing the fat tails, which is neither constant nor proportional flow. If users of STELLA could dial in one of these lengthy impulse responses and then run a simulation, they would likely see a non-converging solution. Those same users would probably scream at the results and then toss STELLA in the garbage, not realizing that it is the correct solution.
      The point is that you can’t blindly use these commercial tools unless you have a good understanding of the underlying physics model. They will blithely sweep all the interesting behavior under the rug.
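      A minimal two-box stock-and-flow sketch shows the point both sides are circling: even with simple proportional flows (the rate constants below are illustrative placeholders, not calibrated values), the atmospheric stock does not decay to zero but relaxes toward an equilibrium partition, giving a long plateau that a single-exponential fit misses.

      ```python
      # Two compartments: atmospheric excess and an ocean/biosphere sink.
      # Rate constants are illustrative placeholders, not calibrated values.
      k_out = 0.05   # fraction of atmospheric excess absorbed per year
      k_back = 0.01  # fraction of the sink stock re-released per year

      atm, sink = 100.0, 0.0   # initial excess (arbitrary units)
      history = []
      for year in range(500):
          down = k_out * atm     # flow proportional to the remaining stock
          up = k_back * sink
          atm += up - down
          sink += down - up
          history.append(atm)

      # The atmosphere relaxes toward the equilibrium partition
      # k_back / (k_out + k_back) of the total stock, not toward zero.
      print(history[0], history[-1])
      ```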

      • Chief Hydrologist

        No – you would need to specify changes in flux with temperature and silicate weathering as I said. To do that – you would have to know what the rate of change of these variables was over time – so we are not talking physics at all but biology, hydrogeology, chemistry, geology etc – those processes are determined outside of a simple stock and flow model and only a simple rate function, look up table, spreadsheet, whatever is plugged into a stock and flow model such as STELLA.

        The actual software is irrelevant – and I mention STELLA only as an example of the type of model I am thinking of – one that has stocks (of CO2 for instance) and flows (CO2 for instance) on some time increment.

        What is specified is the flux based on the physical, chemical and biological constraints. The answer is the changes in the stores – atmosphere, terrestrial vegetation, ocean – over time. These can’t diverge from anything because they are based on physical and scientific realities and not just numbers pulled out of your arse. If the results were unrealistic – you would check assumptions and data.

        The point is that you are thinking of this as a radiation decay type problem – and it clearly is not. It is a stock and flow problem – in which there are compartments and flows between them. You need to think simply about the problem – what are the sources and sinks and what influences the flow between them. This shows what data is required (and the deficit of data) to make good estimates of rates of flow and therefore changes in stocks.

        All of the various processes could be plugged into a more complex model of course – but modelling will not help if the data is incomplete or wrong. We would still need to plug in a rate constant, look up table, whatever – representing actual real world processes and not just the product of physicists pulling numbers out of their fundament.

      • Dear Chief Hydrologist,
        From your title, you must know what breakthrough curves are. You also must have run across the idea of dispersive transport in porous media. The point is that material transport is often very disordered in its natural state and that you really can use some clever stochastic math to understand how long it takes material to get from point A to point B along a tortuous path. Do a literature search on this topic and you will find that the data shows that it’s a fat-tail effect, and the tail is fatter the more disordered the media (underground) or pathway (i.e. runoff) is. Bear in mind that you will find a lot of the civil engineers and geologists studying this will look like they are scratching their butts trying to figure out what’s happening. But then again you have to remember that they are civil engineers and geologists after all :)=

      • Chief Hydrologist

        My title derives from Cecil (he spent four years in clown school – I’ll thank you not to refer to Princeton like that) Terwilliger, Springfield’s Chief Hydrological and Hydraulical Engineer.

        But I am trained both in hydrological engineering and environmental science and have decades of experience running computer models of various types and in analysis of complex real world environmental problems. People find it funny – but I started programming with punch cards and computers that filled a room.

        ‘Breakthrough Curve: A plot of column effluent concentration over time. In the field, monitoring a well produces a breakthrough curve for a column from a source to the well screen.’ It generally has a Gaussian normal distribution. They can be used for instance in tracing pollution sources in groundwater. Groundwater movement is commonly modelled using partial differential equations that conserve mass across finite grids.

        This has no application to the problem at all – there is not even a fat tail in any of that, which is commonly understood to involve a preponderance of rare and extreme events. Rainfall distribution, for instance, has a skewed power-function distribution – commonly a log-Pearson.

        Essentially you are just using names for things that you seem hardly to have any understanding of, and suggesting that some stochastic maths (mathematicians pulling numbers out of their arses) can solve the problem of CO2 residence times without any reference to scientific data on how these physical Earth systems actually work.

        I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.

        I assume it has worked elsewhere for you – but I assure you that you are playing with the big boys now.

      • ‘Breakthrough Curve: A plot of column effluent concentration over time. In the field, monitoring a well produces a breakthrough curve for a column from a source to the well screen.’ It generally has a Gaussian normal distribution.

        Wrong. They rarely have a Gaussian profile and almost always have a significantly asymmetric tail that drops off slowly with time. Instead of going to the Yahoo Answers response, you might want to dig a bit deeper.
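        The asymmetry is easy to reproduce with a toy Monte-Carlo breakthrough curve (parameters are illustrative, not tied to any real aquifer): walkers advect with a mean drift and disperse randomly, and the distribution of first-arrival times comes out skewed, with the mean dragged above the median by the slow tail, whereas a Gaussian would have mean ≈ median.

        ```python
        import random

        random.seed(1)

        # Walkers advect with mean drift and Gaussian dispersion; record the
        # first-arrival time at distance L (a synthetic breakthrough curve).
        drift, L, n = 0.1, 50.0, 2000
        arrivals = []
        for _ in range(n):
            x, t = 0.0, 0
            while x < L:
                x += drift + random.gauss(0.0, 1.0)
                t += 1
            arrivals.append(t)

        arrivals.sort()
        median = arrivals[n // 2]
        mean = sum(arrivals) / n
        # Skewed arrival-time distribution: mean > median.
        print(median, mean)
        ```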

        I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.

        I think you are confusing me with Claes or Oliver or some other crackpot that comments on this blog.

        Essentially you are just using names for things that you seem hardly to have any understanding of, and suggesting that some stochastic maths (mathematicians pulling numbers out of their arses) can solve the problem of CO2 residence times without any reference to scientific data on how these physical Earth systems actually work.

        Yes, identifying a crackpot is in the eye of the beholder. Lots of us are in this together and we get insight from each other and especially from those who have a slightly different perspective. The trick is to figure out who the crackpots are and who have genuinely unique insight.

        I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.

        The picture is confusing and can continue to get confusing until it starts to clear up. Of course tribalism will work its course as we gravitate toward supporting other commenters that are providing insight. That is the way that citizen-based science works.

        I assume it has worked elsewhere for you – but I assure you that you are playing with the big boys now.

        Yes, indeed there are quite a few commenters who try to invoke flowery and ornate dialog as if this were some sort of Shakespearean literary outpost, but that is just fluff and in the end we are just trying to hammer home some logic and pragmatism. I am a fan of Jaynes, who has justifiably said that “probability theory is the logic of science”.

      • Chief fancies himself quite a poet.

      • As proven by comments on a blog.

      • Both when he is and when he isn’t insulting people – discussing his TV viewing habits, things he does with his laptop, and how he cleans it up afterwards (no, really).

        Chief Hydrologist | August 22, 2011 at 1:53 am |
        Mike

        I feel qualified to answer this – I saw a bit of a documentary last night in between bits of CSI:NY and doing unspeakable things with my laptop.

        [...]

        Hope this helps – and if you have any other questions I can clean up my laptop and see whatever other ‘documentaries’ are available.

      • When I’m not being a comedian. Here’s the whole post – in which I am making fun of myself.

        http://judithcurry.com/2011/08/19/week-in-review-81911/#comment-103189

      • Good bye Joshua – it has been a distinctly creepy experience engaging with you.

      • Chief – don’t forget to take your ball with you.

        If you’re going to dish it out, Chief, you should be able to take it.

        Anytime you want to engage in mutually respectful posts, I’m more than game, but I’m afraid that would require you to leave your “pissant leftist”-type insults out of the discussion. I won’t take offense if you tee off on “numbnut,” and you don’t take offense when I laugh at Bruce.

        Anytime you’re ready. As much as I enjoy trading snark, I enjoy trading respectful dialogue more.

      • Now let me think – what is the best response to the Numbnut gang of pissant progressives? The best they can organise is an electrical engineer who cannot put an idea together on physical oceanic and atmospheric systems without numbskull ideas about stochasticity, breakthrough curves, groundwater diffusion and ‘maximum entropy’. The rest of the poor lot cannot string any idea of any substance together at all. Incoherency is us – but feel free to drop in with a distracting and pointless troll any time at all.

        ‘Anytime you want to engage in mutually respectful posts, I’m more than game, but I’m afraid that would require you to leave your “pissant leftist”-type insults out of the discussion. I won’t take offense if you tee off on “numbnut,” and you don’t take offense when I laugh at Bruce.’

        Feel free to take offense whenever you like – but there is no licence to laugh at anyone. Perhaps my first instinct was right – it is wrong to leave natural born bullies to prevail.

      • Well – that didn’t last long did it, Chief.

        Like putty in my hands.

        The best they can organise is an electrical engineer who cannot put an idea together on physical oceanic and atmospheric systems without numbskull ideas about stochasticity, breakthrough curves, groundwater diffusion and ‘maximum entropy’.

        And your knee-jerk response to everything is chaotic systems? For making progress, that is as fine a strategy as punting on first down.

      • WebHubTelescope, Do you ever wonder, if the progressives will even get around to building a ‘Strategic Hamlet’.

      • It is a bell curve with an exponential rise of a solute in soil water and an exponential decrease – and still irrelevant to carbon dioxide.

      • The mathematics of dispersion still applies. That approach to math modeling is not taught in schools.

      • Chief Hydrologist

        What – precisely – do you mean by dispersion? Plume dispersion commonly is modelled by the Reynolds averaged Navier-Stokes non-linear partial differential equations. That is taught in engineering school – and is the standard method for floods, storm surge, tsunami, pollution plumes and groundwater movement. They are used as well in atmospheric and oceanic simulations.

        Non-linearities might emerge – but in that case you adjust the time step for the numerical solution because an unstable solution for these things in the real world is useless.

        The Earth system as a whole has multiple negative and positive feedbacks with little-understood thresholds and is in fact non-linear at a number of time and spatial scales. The correct model here is forcing with non-linear responses at critical thresholds – just like earthquakes.

        Now we can define a power function for just about anything – which I assume is what you are talking about – but we do need real world data to make it meaningful and not just some assumed function.

        I can for instance fit a log-Pearson type 3 power function to 100 years of hydrological data and use it to calculate a 1 in 10,000 year storm. How accurate that is is a matter of conjecture – but it gives at least an agreed starting point which is all that matters in engineering public safety. It is a number that seems adequate from the experience and judgement of myself and my peers.
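
        The procedure described here can be sketched in a few lines. This is a generic frequency-factor calculation, not the Chief's actual analysis: it fits a log-Pearson Type III distribution by method of moments on the log-flows and uses the Wilson-Hilferty approximation for the Pearson III frequency factor, on a synthetic record standing in for real gauge data.

```python
import math
import numpy as np

def standard_normal_quantile(p):
    """Invert the standard normal CDF by bisection (stdlib math.erf only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lp3_quantile(annual_max, return_period):
    """T-year event from annual maxima via a log-Pearson Type III fit:
    method of moments on the log-flows, with the Wilson-Hilferty
    approximation for the Pearson III frequency factor K_T."""
    y = np.log10(np.asarray(annual_max, dtype=float))
    n, m, s = len(y), y.mean(), y.std(ddof=1)
    # bias-corrected sample skew of the log-flows
    g = (n / ((n - 1.0) * (n - 2.0))) * np.sum(((y - m) / s) ** 3)
    z = standard_normal_quantile(1.0 - 1.0 / return_period)
    if abs(g) < 1e-8:
        k = z  # zero skew: Pearson III reduces to the normal quantile
    else:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)

# a synthetic 100-year record (assumed, for illustration only)
rng = np.random.default_rng(1)
annual_max = rng.lognormal(mean=5.0, sigma=0.5, size=100)
q10000 = lp3_quantile(annual_max, 10_000)

# the 1-in-10,000-year estimate sits far above typical observed values
assert q10000 > np.percentile(annual_max, 90)
```

        As the comment says, how accurate such an extrapolation is remains conjecture – the fit is controlled by 100 points, the answer by the far tail.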

        I have my doubts as extreme events – ‘dragon-kings’ – at times of chaotic bifurcation are unlikely to fit the power function distribution. I very much doubt – 99% – that there is enough data to adequately define a power function for CO2 residence in the atmosphere. Basic science is required – rather than as I put in my crude Australian way – pulling a number out of your arse.

      • Take the ordinary Navier-Stokes equation and vary the values of the diffusion coefficients and convection/drift terms. That produces the degree of dispersion that I am talking about, and it doesn’t have to show turbulent behavior.

        The ordinary Navier-Stokes equation is also known as the Fokker-Planck equation. Solving these for disordered systems is a hobby of mine and it does show some very interesting agreements with experimental evidence. As far as I know, no one is working at it from the same angle I am.

        Basic science is required – rather than as I put in my crude Australian way – pulling a number out of your arse.

        Why do you continue to think that I am not applying a rigorous scientific analysis? All you have to do is take a gander at the link in my comment handle. Again, I thought this blog is partly about coming up with some potentially concise and neat ways of thinking about environmental phenomena.

      • Chief Hydrologist

        Archer et al keeps being raised – but this is a crude model study substituting crude values for parameters for which data is not merely inadequate but entirely lacking.

        I don’t run hydrological models without calibration against real world data. There is a diversity of results amongst the Archer models as one would expect – but is this reflective of the actual range of variation in natural systems? How do we know what natural variability is without good long term data – or indeed for the most part without any data at all? How could you calibrate these models at all except through crude ensemble methods that might owe more to groupthink than anything real? How would you know what the results of sensitivity analysis might be? It is all hopelessly stupid.

        Without a deep discussion of the data and the limitations thereof – and of the assumptions made and the limits of error – and of the limitations of the models – there is every reason for scepticism. Models are not science and you seem to assume that science can proceed by modelling with broad assumptions and without data at all. It is not true – and is profoundly unscientific.

      • Without a deep discussion of the data and the limitations thereof – and of the assumptions made and the limits of error – and of the limitations of the models – there is every reason for scepticism. Models are not science and you seem to assume that science can proceed by modelling with broad assumptions and without data at all. It is not true – and is profoundly unscientific.

        Perhaps this intersects with the decision making under ignorance post, but assuming disorder is always a conservative approach. What is completely unscientific are those analysts who empirically guess at a residence time of 2 to 10 years by assuming that it follows an exponential decline.
        Look, I laid out a rigorous propagation of uncertainty analysis downthread that can explain significant dispersion in the CO2 residence times. Do those guys who generate the 2 to 10 year values do this? No freaking way!

      • Chief Hydrologist

        I should really say that models are not analytical science at all – but are examples of synthesis and their proper use can only be understood in that framework.

        Where there is a lack of an analytical underpinning – they are almost guaranteed to be wrong.

        ‘Assuming disorder is always a conservative approach.’

        A ‘rigorous propagation of uncertainty analysis downthread that can explain significant dispersion in the CO2 residence times.’

        You discussed a simple function of some sort – I can’t imagine what you mean by disorder. I don’t know what you could possibly mean by a ‘rigorous propagation’ of uncertainty. I can only imagine from some of your other posts that it is not science at all that is under discussion – but some fervid incarnation of the climate wars. Oh – I forgot – I should try to be less literate.

        ‘Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.’
        http://eprints.lse.ac.uk/24569/

      • You discussed a simple function of some sort – I can’t imagine what you mean by disorder.

        Entropy is generally considered as a measure of the amount of disorder in a system. A well-ordered subsystem such as a crystal lattice will have low entropy because of the predictable arrangement of the atoms. It becomes disordered if that lattice melts and the information entropy increases. There are accepted ways of characterizing this disorder — I don’t understand why you have difficulty with accepting this concept. If you mix a dye in a bucket of water, the disorder increases until it completely disperses through the water. This is modeled by what is called a uniform probability density function. Do you have difficulty even accepting this borderline pedantic example? Disorder can occur in spatial or in temporal terms, and even if you don’t accept it mathematically, there are certainly intuitive notions that you can apply.
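
        The bucket-of-dye intuition can be checked numerically. The following is a minimal sketch (the grid size and the mixing weights are arbitrary choices, not anything from the thread): a spike of dye spreads under a simple random-walk mixing step, and the Shannon entropy of the concentration profile rises monotonically toward the fully mixed, uniform limit.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (0 log 0 taken as 0)."""
    q = p[p > 0]
    return -np.sum(q * np.log(q))

# the dye starts concentrated in one cell of a 1-D "bucket"
p = np.zeros(100)
p[50] = 1.0

# one mixing step: each cell keeps half its mass and shares the rest
# with its two neighbours (a doubly stochastic, entropy-raising map)
entropies = [shannon_entropy(p)]
for _ in range(200):
    p = 0.5 * p + 0.25 * np.roll(p, 1) + 0.25 * np.roll(p, -1)
    entropies.append(shannon_entropy(p))

# disorder rises monotonically toward the uniform limit log(100)
assert all(b >= a - 1e-12 for a, b in zip(entropies, entropies[1:]))
assert entropies[-1] <= np.log(100.0)
```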

      • Yes I know what entropy is – but fail to see any particular correspondence with a bucket of water and Earth systems. The dye is mixed by simple Brownian motion – the simplest of the stochastic processes. But asserting an application of this to complex systems – and one significantly mediated by biology and therefore negative entropy – is crazy BS.

        You are either deliberately misleading or obsessively unbalanced – either way I don’t give a rat’s arse.

      • There is no such thing as negative entropy. Entropy is defined by the negative log of a probability. Since probabilities are strictly between 0 and 1, entropy can never go negative.

      • Entropy is defined by energy flow in the 2nd law of thermodynamics. Maximum entropy is the state where every point has the same potential energy and thus there is no more energy flux. The probability of this state occurring is unity. Alternatively there is a 100% chance of entropy happening on average outside of a very specific circumstance.

        The very special circumstance is life – life is self organising. The act of getting and eating my breakfast – tea and crumpets – for instance is an example of negative entropy. It is an example of energy flowing into a more highly organised system.

        I expend energy to gain more energy (negative entropy – or negentropy) and avoid dying (maximum entropy).

      • OK, the negative entropy is just a relative or differential entropy where a negative sign is slapped in front of the actual positive entropy.

      • you wrote: People find it funny – but I started programming with punch cards and computers that filled a room.

        I don’t find that funny. There are a lot of us out here that did that.

      • HEH,

        you were the smart ones. I just wired the sorters and collators to sort and rearrange your cards and clean up the mess when you dropped a box or two!! 8>)

      • Chief,
        Please take a look at limnology, and some recent papers on both the number of freshwater bodies and their part in the carbon cycle.

      • Hi Hunter,

        Limnology was a favourite topic many years ago. I love water. I love being on it, in it and under it. Biogeochemical cycling is my speciality and the carbon cycle because of the importance in trophic networks and in diverse chemical reactions is complex.

        I didn’t distinguish between fresh and salt above – and I am quite out of date. I will have a look.

        Cheers

      • Chief,
        You may well find this of great interest then.
        Dr. John Downing was interviewed recently and I was able to hear it.
        He thinks freshwater systems have been neglected in important ways.
        Here is his homepage:
        http://www.public.iastate.edu/~downing/
        He made some very interesting points about recent research results.
        Here is a transcript of the interview:
        http://www.loe.org/shows/segments.html?programID=11-P13-00033&segmentID=2
        I am not able to find the specific paper he is referring to. I think it might have some important implications, however.

    • David L. Hagen

      Chief Hydrologist & Webhubtelescope
      Please drop swords and explore how to model/distinguish causes.
      Chief – please review Webhub telescope’s models, especially peak oil. I think I know enough math that those models look significant.

      Webhubtelescope.
      Please explain further on fat tails and what might drive them vis-à-vis CO2 with dominant natural sinks/sources >> anthropogenic causes.

      What if we have nonlinear bio feedback with increasing CO2.
      e.g. some trees and plants show very rapid increases in growth rates with increased CO2. How is that modeled?

      What if Anthropogenic CO2 is rapidly absorbed in plants?
      What if most CO2 comes from temperature changes.
      e.g. see arctic pulsing.
      If clouds dominate temperature and cosmic rays & solar modulation control clouds, then natural causes could dominate temperature changes and thus CO2 from the ocean.

      How would you statistically model, test and distinguish such causes?

      cf Tom Quirk and Fred Haynie and Roy Spencer.

      I see differences in pulsing shapes for the Arctic, tropics and Antarctic.
      Different bio response, terrestrial, No. hemisphere vs. So. hemisphere.
      Different bio responses, ocean biomass, No. hemisphere vs. So.
      Different fossil emissions, No. hemisphere vs. So.

      Differences in clouds and cosmic rays/solar; during solar cycle; Forbush events

      (PS Forget 52 pickup. With punch cards, I seem to remember not wanting to play 520 pickup.)

      • David,
        I think those are good directions to push forward in.

        I have a very simple model that works to explain fat-tails in a number of situations. The fat-tail is mainly with respect to some quantity observed over time. The general idea is that the observable changes transiently in response to some stimulus. As a premise, suppose that the response is largely driven by a velocity or rate, r, that follows an exponential decline: F(t | r) = exp(-r*t)
        Next consider that the rate is actually a stochastic variate that shows a large natural deviation. It will follow an exponentially damped PDF if the standard deviation is equal to the mean:
        p(r) = (1/R) * exp(-r/R)
        If we then integrate out the rate to obtain the marginal:
        F(t) = ∫ F(t | r) * p(r) *dr
        this results in F(t) = 1/(1+R*t)
        I consider this a general dispersive result that demonstrates how a measurement which will normally show an exponential decline will switch to a 1/t decline with sufficient disorder. The 1/t dependence is a power-law decline, categorized as a fat-tail because it only shows a median value and no mean value. (This is OK because the physical variate, the rate, r, has a well characterized mean and the higher order moments are also bounded. The lack of moments is a common characteristic of reciprocated variates and what causes endless consternation to the statisticians. NN Taleb dances around this topic in his book The Black Swan).
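
        The marginalization above can be checked numerically. A minimal sketch follows; the value R = 0.2 is an arbitrary illustration, not a fitted rate.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration along the last axis
    (written out to avoid NumPy version differences)."""
    return np.sum((y[..., 1:] + y[..., :-1]) * 0.5 * np.diff(x), axis=-1)

R = 0.2                                  # mean rate, illustrative only
t = np.array([1.0, 5.0, 20.0, 100.0])

# marginalize F(t | r) = exp(-r t) over p(r) = (1/R) exp(-r/R)
r = np.linspace(1e-9, 50.0 * R, 200_001)
p_r = np.exp(-r / R) / R
F_num = trapezoid(np.exp(-np.outer(t, r)) * p_r, r)

# closed form from the comment: F(t) = 1 / (1 + R t)
F_closed = 1.0 / (1.0 + R * t)
assert np.allclose(F_num, F_closed, rtol=1e-3)

# the dispersive tail dwarfs a pure exponential exp(-R t) at large t
assert F_closed[-1] > 1e6 * np.exp(-R * t[-1])
```

        With R = 0.2 the closed form gives F(100) ≈ 0.048, where a pure exponential with the same mean rate would have decayed to exp(−20) ≈ 2×10⁻⁹ – the fat tail being described.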

        If you look at the IPCC residence time impulse response curves or Archer’s curves, they are even more strongly fat-tail, fitting to a curve that looks more like 1/(1+k*√t) — which is the reciprocal of the square-root of time. My feeling is that the square-root of time dependence comes from a diffusional growth law mixed in with the dispersion. Fickian growth laws are not linear but grow slowly as a square root of time due to diffusion. It also could be due to a chemical rate equation that proceeds more slowly than first-order kinetics. In either case, the value of k generates the dispersion that smears the dynamics.

        Now what the short-residence-time skeptics gloss over is the quick transient that one will see if this curve is plotted. Yes it does drop down relatively quickly, but the tail is significant and it won’t fall off anywhere near as fast as an exponential.
        Comparisons to the Bern SAR response curves

        So that is an applied mathematics argument as to how a behavior can change from a thin-tail exponential decline to a fat-tail power-law decline through the introduction of disorder or randomness. What this is not is any kind of critical phenomenon that most scientists ascribe to power-law behaviors. Research scientists tend to drool over critical phenomena and become disappointed when it gets explained to them that garden-variety disorder can lead to the same result.

      • Your comments on the almost ubiquitous prevalence of power laws remind me of Benoit Mandelbrot even more than Nassim Taleb. My own thinking accepts that only in part. I agree fully that the Normal distribution with its very thin tails is overused and that tails are often fatter over a wide range of values.

        I’m, however, much more doubtful on the generic value of any specific alternative formula and any specific class of distributions. The empirical evidence is usually reasonably accurate over a narrow range of values, which allows fitting it with many different fat-tailed distributions. Formulations presented by Mandelbrot or by you provide useful parametrizations over ranges where they have been verified empirically, but have little predictive power outside that range.

        As an example, Mandelbrot has looked at several distributions that are known to have a strict extreme limit finding a power law that “predicts” breaking that limit with non-negligible probability. In such cases we know that the power law will fail, when the limit is approached, but we don’t know, where the failure starts to be significant. Finally we can ask, do we really have any power law in the actual distribution at all, or is the power law only consistent with data as long as the accuracy requirement is very modest.

        Similarly I don’t have any reason to doubt the practical value of your approaches, but I’m doubtful as soon as the approach is assumed to have a precise theoretical basis. It’s rather a good rule of thumb that fat tails are very common and can be parametrized with some power law over a limited range. There are many reasons for that. Many of the reasons have a nature similar to what you describe, i.e, the overall distribution can be considered as a combination of many different distributions, some of which have very large variances. The combination may be formed in various ways, one of which applies to the persistence of CO2 in atmosphere.

        More specifically any agreement in far tails with results of Archer is likely to be dependent on how Archer did his work rather than on the real world facts, because the empirical support for any conclusions on the far tails is statistically extremely weak. Relevant empirical data exists only from the very distant past, and the results on far tails are determined by the methods used to extract information from them. A very small change in the methods makes the outcome totally different.

      • Pekka,
        What people miss is that just taking the reciprocal of an underlying stochastic variate is enough to cause a power-law fall off distribution. That is what I tried to show with my short derivation. Mandelbrot would be the last person to want to admit that something this straightforward could explain the ubiquity of fat-tails, as it would tend to marginalize the mystery behind his fractal story (it doesn’t matter anymore since he recently died).

        Like Normal distributions and other well-accepted approaches, these have significant predictive power, but are curiously rarely applied. If you want to find more, you can look up the term ratio distributions in the applied mathematics literature.
        http://en.wikipedia.org/wiki/Ratio_distribution
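
        The reciprocal-variate point is easy to demonstrate by simulation. This is a sketch, not the author's own analysis, with R = 1 chosen arbitrarily: the rates are thin-tailed (exponential), but their reciprocals have a power-law survival function.

```python
import numpy as np

rng = np.random.default_rng(42)
R = 1.0                                  # mean rate, illustrative
rates = rng.exponential(R, size=1_000_000)
times = 1.0 / rates                      # the reciprocated variate

# survival function of 1/r: P(1/r > t) = P(r < 1/t) = 1 - exp(-1/(R t)),
# which behaves like 1/(R t) for large t -- a power-law tail
for t in (10.0, 50.0, 200.0):
    empirical = np.mean(times > t)
    theory = 1.0 - np.exp(-1.0 / (R * t))
    assert abs(empirical - theory) < 2e-3

# the median exists (1/ln 2 for R = 1) even though the mean diverges,
# matching the "median but no mean" property noted upthread
assert abs(np.median(times) - 1.0 / np.log(2.0)) < 0.01
```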

      • I agree that Mandelbrot has been careful to avoid presenting anything like simple derivations for power law tails, but I don’t think that he has been as careful in commenting on the empirical evidence on the existence of power laws. Many of his examples have been rather poor from the empirical point of view, i.e. the power law is seen on too narrow a range of values and with insufficient accuracy to tell whether it’s really a power law at all or just some other fat tail.

      • Agreed, Mandelbrot and Taleb are horrible at actually trying to match their ideas to any empirical evidence. They are pretty sly at not being caught in a situation of having to defend some assertion (Taleb in particular in his books relies on hand-waving).

        In many cases it just takes time to accumulate enough data to show a power-law dependence. The physicist/applied statistician Cosma Shalizi wrote an article some ten or more years ago claiming that web-link statistics did not show a power-law dependence. Over the ensuing years as more evidence has become available, the actual statistics have converged to a power-law over many orders of magnitude and a very straightforward interpretation (IMO). That said, even if you say that it is power-law over 8 orders of magnitude, the wise-guy will point out that it is not provable because it doesn’t extend to 9 orders…

        Again the subliminal tendency by physicists is to reserve power-laws for critical phenomena and they don’t necessarily want to ascribe it to a pedantic explanation. That is my conspiratorial rationale, otherwise I can’t explain the dismissive attitude that many of them display.

      • Webhubtelescope
        Thanks for clarifying issues.
        What if we have multiple chemical, bio and human emissions?
        e.g. temperature-co2 from arctic pulsations
        Temperature-CO2 from the ocean.
        Ocean algae – about 50% of total biomass net productivity.
        Temperature & soil moisture together for:
        Tropical vegetation & agriculture.
        Temperature vegetation, trees
        Temperature annular agriculture.
        Then weight temperate into No. vs So. by land etc.
        Then anthropogenic CO2.
        The anthropogenic fuel consumption in turn varies by resource/country with growth/depletion curves and economic development. e.g. see
        Hook et al. Descriptive and predictive growth curves in energy system analysis Natural Resources Research, Volume 20, Issue 2, June 2011, Pages 103-116, http://dx.doi.org/10.1007/s11053-011-9139-z

        There should be some predictability there to fit CO2 and distinguish natural annular cyclic, vs natural solar/cosmic cyclic vs anthropogenic.

        However, each would have stochastic disorder components on top of overlapping variable physical causes.
        Would those together give the fat tail distributions you note above?
        What then the impact of the “fatness” on the long term increase/decrease of atmospheric CO2?

      • Would those together give the fat tail distributions you note above?
        What then the impact of the “fatness” on the long term increase/decrease of atmospheric CO2?

        Anything that shows dispersion or significant variation in rates can lead to fat-tails. The fat-tail is in those paths that show a slow uptake in CO2 or a long time to get to the sequestering points.
        The impact of the fatness long-term is that the slow pathways continue to build up over time. That is what the convolution model reveals.
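
        The convolution picture can be sketched as follows. The emissions profile and the parameters k and R below are invented for illustration and are not fitted to any data: a fat-tail 1/(1+k√t) impulse response is convolved with an emissions history and compared against a thin-tail exponential response.

```python
import numpy as np

years = np.arange(300)
# illustrative emissions: constant for 100 years, then zero
emissions = np.where(years < 100, 1.0, 0.0)

# two candidate impulse responses for a unit emission (assumed parameters)
k, R = 0.3, 0.1
fat = 1.0 / (1.0 + k * np.sqrt(years))   # dispersive, fat-tail response
thin = np.exp(-R * years)                # simple exponential response

# airborne excess = convolution of emissions with the impulse response
fat_excess = np.convolve(emissions, fat)[:300]
thin_excess = np.convolve(emissions, thin)[:300]

# long after emissions stop, the slow (fat-tail) pathways keep the
# accumulated excess elevated, while the exponential case has vanished
assert fat_excess[299] > 10 * thin_excess[299]
```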

      • Chief Hydrologist

        David,

        Production curves for oil have been discussed since the 1950’s – http://en.wikipedia.org/wiki/Hubbert_curve

        The issue is that it is a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution. For instance – I commonly use a 10% ethanol blend in my Mercedes SLS AMG. It is made from Queensland sugar residue, is cheaper (not subsidised) and works better. The evidence is that substitution is happening as the oil price remains high.

        There are multiple substitutions possible at some price point. For instance we could with cheap enough electrical energy take CO2 from the atmosphere, blend it with hydrogen from water and make a liquid fuel.

        There are no fat tails just a trailing skinny in an imaginary function.

        Your greenhouse carbon example occurs with unlimited nutrients, light and water. That does not happen in the wild. Terrestrial plants are commonly water limited. They respond to elevated carbon in the atmosphere by reducing the number and size of stomata – limiting gas exchange but also water loss. This is not necessarily a hydrologically or ecologically good thing.

        Most of the recent warming happened in ‘climate shifts’ in 1976/77 and 1997/1998. NASA/GISS says that most of the rest was caused by cloud changes – especially in the tropics. Jim Hansen doesn’t believe them – but it fits into a pattern of decadal observations of cloud in the Pacific in particular.

        I’m not sure about cosmic rays and clouds – difficult to distinguish from other solar modulated changes in e.g. the NAO, SAM, ENSO, PDO, QBO, PNA.

        There are as well plenty of cloud nucleation sites over oceans in dimethyl sulphide emissions from phytoplankton – which in turn respond to changes in upwelling of frigid and nutrient rich water at various locations around the globe – but prominently in the eastern Pacific.

        Dropping your punch cards is not a recommended SOP.

        Cheers

      • The issue is that it is a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution.

        No, oil discovery and production is a stocks and flows problem, which is exactly how I laid it out in the analysis I call the oil shock model.

      • Chief
        Re: ” a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution.”

        May I encourage looking at both “multi-Hubbert” curves AND economic substitution.
        Economics: Globally we see long term shifts from wood to coal to oil to gas over centuries to decades.
        Peak response: M. King Hubbert modeled US and world oil production as a logistic curve. This fits pretty well for each geographic area. e.g. see Tad Patzek’s multi-Hubbert curves for US production.

        For such Hubbert analysis, we need to think of a given resource for a given technology within a given economic regime. Then apply multi-Hubbert analysis by summing across multiple regions.
        Constraints on availability raise prices which can push to substitution. Webhubtelescope then adds economic shock impacts.

        With sufficient high demand and prices, that justifies development of alternative resources. However, each region/resource/technology still follows the Hubbert type model pretty well.
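
        The logistic (Hubbert) form under discussion can be written down in a few lines. The URR and peak year below loosely echo the Nashawi et al. figures, but the width is an assumed value and the whole curve is illustrative, not a reproduction of their model.

```python
import numpy as np

def hubbert(t, urr, peak_year, width):
    """Hubbert production curve: the derivative of a logistic
    cumulative-production curve (URR = ultimate recoverable resource)."""
    x = np.exp(-(t - peak_year) / width)
    return urr * x / (width * (1.0 + x) ** 2)

t = np.arange(1900, 2101)
p = hubbert(t, urr=2140.0, peak_year=2014, width=15.0)  # width assumed

# production peaks at peak_year, and the curve integrates to ~URR
assert t[np.argmax(p)] == 2014
assert abs(np.sum(p) - 2140.0) < 20.0
```

        Multi-Hubbert analysis then sums several such curves, one per region/resource/technology, each with its own URR, peak and width.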

        For a good overview, see: Tad Patzek in Peaks Everywhere

        Forecasting World Crude Oil Production Using Multicyclic Hubbert Model
        Ibrahim Sami Nashawi, Adel Malallah and Mohammed Al-Bisharah Energy Fuels, 2010, 24 (3), pp 1788–1800

        … once new data are available. The analysis of 47 major oil producing countries estimates the world’s ultimate crude oil reserve by 2140 BSTB and the remaining recoverable oil by 1161 BSTB. The world production is estimated to peak in 2014 at a rate of 79 MMSTB/D.

        When Kuwaitis get very similar results, it is time to wake up and take notice.

    • So why don’t you just run along and do that?

      It is a simple stocks and flows problem – that could in principle be modelled with commercial software such as STELLA.

      You say you can do quite a lot, but you haven’t shown anything you claim to be legitimate.

      Compliments on your irony, though.

      As far as I am concerned this is more angels on pinheads discourse by the usual suspects. Perhaps that should be pinheads …

      Yes, you are.

  22. Hal asks:

    How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes?

    This question presupposes that the IPCC “mainstream” view is correct, i.e. that Earth’s natural CO2 cycle is in “equilibrium”, which is perturbed over longer periods only by human CO2 emissions.

    IOW, it presupposes that the recent suggestion by Professor Murry Salby regarding natural causes for changes in atmospheric CO2 levels is false.

    But back to Hal’s question: how long does CO2 “remain in the atmosphere”?

    Tom Segalstad has compiled several independent estimates of the residence time of CO2 in the atmosphere, arriving at an average lifetime of around 5 to 7 years.
    http://folk.uio.no/tomvs/esef/ESEF3VO2.htm

    In addition [to geologic processes] there is a short-term carbon cycle dominated by an exchange of CO2 between the atmosphere and biosphere through photosynthesis, respiration, and putrefaction (decay), and similarly between aqueous CO2 (including its products of hydrolysis and protolysis) and marine organic matter (Walker & Dreyer, 1988).

    Is this “residence time” of CO2 in the atmosphere as reported by Segalstad equal to the “residence time in our climate system”? The mainstream view (IPCC) does not believe so.

    The Lam study cited by Judith states:
    http://www.princeton.edu/~lam/TauL1b.pdf

    There exists no observation data to validate this value. In fact, it is not possible to experimentally measure the value of τL (and the claim of its constancy) unless reliable data taken over many centuries (with constant emission rate) are available.

    τL ≈ 400 years is the consensus value of all the published IPCC models.

    As Willis Eschenbach has pointed out on other threads, model outputs are only as good as the assumptions, which were programmed in.

    Since “there exists no observational data to validate this value”, it is meaningless as empirical scientific evidence.

    But it does show the basis for the IPCC model assumptions.

    IOW, if the actual value turned out to be significantly different from 400 years, then the IPCC model projections for future temperature rise, based on various assumed model scenarios and storylines (assuming these are correct), would be too high or too low.

    The first IPCC assessment report (Houghton et al., 1990) gives an atmospheric CO2 residence time (lifetime) of 50-200 years [as a "rough estimate"]

    Table 1 of the IPCC TAR WG1 report says that the CO2 lifetime in the atmosphere before being removed is somewhere between 5 and 200 years with the footnote:

    No single lifetime can be defined for CO2 because of the different rates of uptake by different removal processes.

    This appears rather “loosey-goosey” to me. Yet the IPCC models have apparently used a “lifetime” of 400 years as the assumed value, as is suggested by the Lam study cited above.

    Zeke Hausfather presented data at a recent Yale Climate Forum, which points to a half-life of CO2 in the climate system of around 80-120 years, with a suggested long tail: i.e. ~80% gone within 300 years and a small residual remaining thousands of years.
    http://www.yaleclimatemediaforum.org/pics/1210_ZHfig5.jpg

    In an earlier exchange, Fred Moolten cited a model-based study by David Archer et al., which points to pretty much the same conclusion as reached by the ZH data, also pointing out that the CO2 lifetime has a long “tail”.
    http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf

    Let’s ignore the long tail for now and let’s assume that the half-life of CO2 in the climate system is 120 years, or at the upper end of Zeke’s curve.

    In actual fact, no one knows what the residence time of CO2 in our climate system really is, because there are no empirical measurements to substantiate this, as Lam has stated.

    Now let’s switch from model estimates to actual physical observations and check out the ZH estimate.

    The first thing we observe is that there is absolutely no statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission (over the years this has varied from 15% to 88%, with the balance “missing”).

    [In fact, there is a much better correlation with the change in average annual global temperature from that of the preceding year than with the human emission, which would suggest that the conclusion reached by Professor Murry Salby (that it is natural and temperature-related) is valid, but let’s ignore that for now.]

    Since the annual human emissions show no correlation with the annual change in atmospheric concentration, we have to look at a longer-term average.

    Over the past 10 years, humans have emitted around 305 GtCO2 from all sources (fossil fuels, cement production, deforestation, etc.).
    [data from various sources: be glad to provide links if anyone is interested]

    The mass of the atmosphere is 5,140,000 Gt.

    So, if we assume that CO2 is a “well mixed GHG” and that the Earth’s entire carbon cycle is in “equilibrium” except for human emissions (as IPCC does), we should have seen an increase in atmospheric CO2 concentration of:

    305 * 1,000,000 / 5,140,000 = 59.3 ppm(mass)
    = 59.3 * (29 / 44) = 39.0 ppmv

    Yet over this same time period we only saw
    389.1 – 370.3 = 18.8 ppmv

    IOW 18.8 / 39.0 = 48% of the CO2 emitted by humans “remained” in the atmosphere, and the remaining 52% (or 2.2 ppmv/year) is “missing”.

    This would suggest that the half-life estimate of Hausfather is correct, and that the “missing CO2” is leaving our climate system. At a half-life of 120 years, this would represent an annual decay rate of 0.58% of the concentration or around 2.2 ppmv, which happens (by coincidence?) to be equal to the amount of “missing” CO2 today (based on the data cited above).

    The same calculation holds for the period from 1958 (when CO2 measurements at Mauna Loa were first recorded) to today, with a calculated 49% of the emitted CO2 “missing”.
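For anyone who wants to check the arithmetic, Max's mass balance above (and the 120-year half-life cross-check) can be reproduced in a few lines. This is a sketch: all input figures are the ones quoted in the comment, and 29 g/mol is assumed for the mean molar mass of air.

```python
import math

# Inputs as quoted in the comment above (assumptions, not independently sourced)
emitted_gt_co2 = 305.0           # GtCO2 emitted by humans over the past 10 years
atmosphere_gt = 5_140_000.0      # total mass of the atmosphere, Gt
m_air, m_co2 = 29.0, 44.0        # approximate molar masses, g/mol

# Expected rise if all emitted CO2 had stayed airborne
ppm_mass = emitted_gt_co2 * 1e6 / atmosphere_gt    # ppm by mass, ~59.3
ppmv_expected = ppm_mass * m_air / m_co2           # ppm by volume, ~39

# Observed rise at Mauna Loa over the same decade
ppmv_observed = 389.1 - 370.3                      # 18.8 ppmv

airborne_fraction = ppmv_observed / ppmv_expected  # ~48% "remained"
print(f"expected {ppmv_expected:.1f} ppmv, observed {ppmv_observed:.1f} ppmv, "
      f"airborne fraction {airborne_fraction:.0%}")

# Cross-check of the 120-year half-life: implied annual decay rate, and the
# corresponding removal at a ~385 ppmv concentration
decay_per_year = math.log(2) / 120                 # ~0.58% per year
print(f"decay {decay_per_year:.2%}/yr -> {decay_per_year * 385:.1f} ppmv/yr removed")
```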

    So this raises the basic question, which is as yet unanswered: is the “missing CO2” disappearing from the climate system and, if so, where is it going?

    The question concerning the “missing CO2” remains a mystery, if we believe that human CO2 emissions have been the primary cause of increased atmospheric CO2 levels. [Of course, if it turns out that the postulation by Salby is correct, then this is all a rhetorical discussion.]

    But let’s assume that the mainstream view on this is correct.

    Where is the “missing” CO2 going?

    No one really knows because there are no physical measurements enabling the calculation of a real material balance.

    Some hypothesize that the ocean is absorbing most of it and it is being buffered by the carbonate/bicarbonate cycle there.

    Others believe that much of it may be converted by increased photosynthesis, both from terrestrial plants and marine phytoplankton.

    Since photosynthesis absorbs around 15 times as much CO2 as is emitted by humans, one can see that a slight increase in photosynthesis resulting from higher concentrations could well absorb a significant portion of the human emission.
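As a rough sanity check on the "around 15 times" figure, here is a sketch: the ~110 GtC/yr net primary production value comes from the Behrenfeld paper cited later in this thread, and the emissions figure is the decade total quoted above. Both are round numbers, not precise inventories.

```python
# Rough check of the "around 15 times" figure; both inputs are the round
# numbers used elsewhere in this thread, not precise inventories.
npp_gtc_per_year = 110.0                  # global net primary production, GtC/yr
human_gt_co2_per_year = 305.0 / 10        # ~30.5 GtCO2/yr, from the decade total above
human_gtc_per_year = human_gt_co2_per_year * 12 / 44  # convert mass of CO2 to mass of C

ratio = npp_gtc_per_year / human_gtc_per_year
print(f"photosynthesis uptake is roughly {ratio:.0f}x human emissions")
```

Depending on the exact inputs this comes out around 13 to 15, consistent with the figure used in the thread.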

    On an earlier thread Pekka Pirilä mentioned the exchange rate between the upper ocean and the immense CO2 reservoir of the deep ocean as a possible long-term “hiding place”.

    This reservoir is so vast that the increase from the “missing CO2” would hardly be noticeable, even if human emissions continued over centuries. In addition, there is a finite amount of carbon in all the fossil fuels on Earth; even if these were all burned and the CO2 ended up in the deep ocean, it would barely be noticeable.

    So, in summary, irrespective of the Salby postulation (which, if validated, would make this a rhetorical question), I believe we are unable to answer the question of the “missing CO2” today, despite several alternate hypotheses.

    And as a result, we are unable to answer Hal’s question.

    If anyone here has any hard data to refute this conclusion, I would be delighted to see it.

    Max

    • “Since photosynthesis absorbs around 15 times as much CO2 as is emitted by humans”

      I have not previously seen that figure (my ignorance, I suppose)

      Do we know this for reasonable certainty ?

    • Max, nice post, thanks very much for this. Rob

    • Nicely written argument, but several points need to be made. For a fat-tail response, any notion of a mean needs to be reconsidered. As I indicated in a comment upthread, power-law distributions do not show a mean or, for that matter, any higher-order moments. However, they can mimic exponentials over the initial transient, and a “half-life” number does appear, but it has nothing to do with conventional notions of exponential decline. That is where Segalstad made his mistake.

      Also significant is this point you made:

      The first thing we observe is that there is absolutely no statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission (over the years this has varied from 15% to 88%, with the balance “missing”).

      I would change this to state “The first thing we observe is that there is almost perfect statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission”.

      If you do the convolution of the actual emissions with a fat-tail impulse response, the agreement is stunning. I have done this myself and don’t expect the skeptics to do the same because they don’t believe in the fat-tail residence time. (I also argued this upthread)
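For readers who want to try the convolution WebHubTelescope describes, a minimal sketch follows. The 2%/yr emissions growth and the 1/(1 + t/tau) response shape are illustrative assumptions chosen only to show the mechanics, not fitted values.

```python
import numpy as np

# Convolve a hypothetical emissions series with a fat-tail impulse response.
years = np.arange(100)
emissions = 1.02 ** years                 # emissions in arbitrary units, growing 2%/yr

tau = 50.0
response = 1.0 / (1.0 + years / tau)      # fat-tail (power-law-like) impulse response

# The atmospheric excess in year t is the sum over past emissions weighted by
# how much of each year's pulse still remains: a discrete convolution.
excess = np.convolve(emissions, response)[:len(years)]
print(f"excess after {len(years)} years: {excess[-1]:.1f} (arbitrary units)")
```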

      Other than that, good try.

    • Manacker, missing CO2 is your own invented uncertainty. There is no debate in the science community, or even the blogosphere, about missing CO2. Ocean uptake and acidification, of course, account for it. The carbon cycle explains the flows and balances well. It would actually be more surprising if none of the emitted CO2 went into the ocean, because an equilibrium is maintained at the air/water interface, and from reading what you wrote I don’t think you know about this part.

      • Sorry, JimD, there are lots of hypotheses but no empirical data that verify your statement that the “mystery” of the “missing CO2″ has been solved. [If you have such data, please correct me.]

        WebHubTelescope

        I’m less interested in the “fat tail residence time” (a rather hypothetical discussion IMO) than in the current “rate of decay”. If, indeed, the 120-year half life is correct, as postulated by the data presented by Zeke Hausfather, which I cited, this would point to a theoretical annual reduction equivalent to 0.58% of the concentration, which just happens (by coincidence?) to be the average annual level of “missing” CO2.

        The observation that the amount “remaining” in the atmosphere seems to correlate on an annual basis with the change in global temperature from the previous year (more “remains” in atmosphere if temperature has warmed, less if temperature has cooled) would point to the suggestion that there is a temperature correlation and that the ocean is absorbing more or less depending on temperature changes (Salby?)

        There are apparently still a lot of unknowns on CO2 residence time in our atmosphere.

        Max

      • Manacker, you need to point to someone who sees the carbon budget as not being closed. Where are people discussing this mystery you speak of?

      • Manacker, check my analysis on CO2 rise and you can see where it fits in. This is the result:
        trend

    • Max,

      I will now skip entirely our disagreement on the reliability of the mainstream view on the persistence of CO2 in the atmosphere over the first 100 or 200 years. I only note that I keep to what I have written earlier.

      Concerning the role of the deep ocean, I’m still looking for any analysis of its real effective size and of the reliability of the estimates of the rate of exchange of CO2 with the deep ocean.

      One comment on its size: the total amount of carbon in the deep ocean is given as approximately 40,000 GtC, or more than 50 times that of the present atmosphere. In comparison, the total amount of fossil fuels underground is given in IPCC material as 4,000 GtC. This is certainly very inaccurate, and whether that much can ever be extracted and burned is questionable. If the Revelle factor of the deep ocean is 10, then 4,000 GtC added to the deep ocean would be in balance with an atmosphere that has doubled its concentration from the level in balance with 40,000 GtC, most probably the preindustrial level. That, however, is almost certainly a gross overestimate of the ultimately achievable extraction of fossil fuels; even half of that is a high estimate.
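Pekka's Revelle-factor balance can be written out explicitly. This is a sketch: the 40,000 GtC reservoir, the 4,000 GtC addition and R = 10 are the figures from the comment, all acknowledged there as uncertain.

```python
# Revelle-factor balance: the relative change in atmospheric pCO2 at
# equilibrium is roughly R times the relative change in dissolved inorganic
# carbon (DIC). All inputs are the uncertain figures from the comment above.
deep_ocean_gtc = 40_000.0     # carbon in the deep ocean, GtC
added_gtc = 4_000.0           # all fossil carbon, if extracted, burned and absorbed
revelle_factor = 10.0         # assumed buffer factor of the deep ocean

relative_dic_change = added_gtc / deep_ocean_gtc            # 10%
relative_pco2_change = revelle_factor * relative_dic_change
print(f"equilibrium atmospheric pCO2 rises by {relative_pco2_change:.0%}")
```

This reproduces the comment's conclusion that a 10% rise in deep-ocean carbon, with R = 10, balances a doubled atmospheric concentration.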

      Whether the Revelle factor of deep ocean is 10 or significantly different is unknown to me. Estimating that would require fair knowledge on the chemistry (specifically pH and buffering) of deep ocean.

      An additional question is the speed of transfer of CO2 to deep ocean, which is likely to be slow with an effective time “constant” of hundreds to thousands of years, but again not well known.

      • Pekka

        Let’s address your points one at a time.

        I doubt that we have a disagreement on the “persistence of CO2 in atmosphere over the first 100 or 200 years”. I simply stated that there are no empirical data to substantiate this, as the Tam study concluded (and you have been unable to show me any such empirical data).

        The role of the deep ocean as a sink for future carbon emissions could be immense and the chemical and biological processes involved largely unknown or poorly quantified. We apparently agree.

        We disagree on the amount of carbon remaining in all the optimistically estimated fossil fuels inferred to be in place (whether extractable or not).
        I have cited 2010 estimates from the World Energy Council, which tell me that this could reach a maximum atmospheric CO2 level of around 1,000 ppmv – you have cited other estimates, which put this at a higher level.

        On the other points, we apparently agree.

        Max

      • Max,

        My point was not an actual disagreement, but I wanted to emphasize that the amount of carbon the deep ocean will absorb, given enough time, is poorly known. It’s not as huge as the present amount of carbon would suggest when the Revelle factor is not taken into account, but it’s certainly very large even so. When a low value is used, the share of CO2 remaining in the atmosphere in balance with the oceans may be around 20%; for a larger estimate, the atmospheric share would be smaller.

        I would like to find scientific work that has studied this question, but I haven’t seen such. The answer also influences the reasonableness of proposals to store captured CO2 in the deep ocean. This idea was discussed widely at one stage, but not recently, and I have failed in my search for a proper justification for this change. Underground storage alternatives are certainly preferred at first, but the suitability of the deep ocean for storage is an interesting question as well.

  23. Every year, from May to September/October there’s a 5 – 6 ppm drop in atmospheric CO2 (in dry air). The efflux from the atmosphere is very large during this period.

  24. Nick,

    Any argument that relies on the Earth “knowing” anything is a bit suss, IMO.
    If you aren’t convinced by my few lines of argument , maybe you should take a look at:

    http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-ts.pdf

    I’m not saying anything different to what the IPCC have already said. They make the point that to reduce CO2 concentrations, human CO2 emissions have to be reduced by more than half.

    • Atmospheric CO2 concentration will be reduced according to the change of climatic factors. When the cooling really gets going (SST decrease, sea ice increase), CO2 will be reduced. Human CO2 emissions are almost irrelevant.

  25. Edim,

    You’re obviously of the school of thought that carts push horses rather than horses pull carts. Your opinion is noted but I think we’ll have to agree to differ on that point.

    • Yes, we will agree to differ on that point.

      By the way, in your analogy temperature is the horse and CO2 is the cart.

  26. Dr Curry: Between them Nick Stokes (August 24, 2011 at 6:08 pm) and Fred Moolten (10.39 pm) seem to have answered your question, at least to a ballpark figure. If annual anthropogenic CO2 emissions are sufficient to increase atmospheric content by, say, 3 ppm per year, but the actual increase is, say, 1.5 ppm, then the remainder is being absorbed by sinks at a rate of 1.5 ppm per year. If this process is occurring when CO2 is at around 390 ppm, while the background before notable anthropogenic emissions was about 280 ppm, then it will take (390-280)/1.5 years (so 73 years) for the anthropogenic addition so far observed to be absorbed by sinks. However, as Fred has pointed out, if emissions ceased now it would take longer than that to revert to the background value, as the decline would likely be exponential, with uptake slowing as the excess shrinks.
    With the above scenario, if emissions were halved now, the atmospheric level would stabilise at the present value.
    The mean residence time of CO2 in the atmosphere is scientifically interesting, but a separate issue. It may be about 5 years.
    There are plenty of weaknesses in the IPCC account of climate science, but this issue is not favourable ground for climate skeptics to attack the orthodox position.
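The ballpark above reduces to a few lines of arithmetic. This is a sketch using the round numbers from the comment; note the caveat that real uptake would scale with the remaining excess.

```python
# The ballpark from the comment, as explicit arithmetic. All figures are the
# round numbers used above.
emissions_ppm_per_year = 3.0    # rise if nothing were absorbed
observed_rise_ppm = 1.5         # actual observed rise
uptake_ppm_per_year = emissions_ppm_per_year - observed_rise_ppm  # 1.5 absorbed

excess_ppm = 390.0 - 280.0      # anthropogenic excess over the background
years_to_absorb = excess_ppm / uptake_ppm_per_year
print(f"~{years_to_absorb:.0f} years at constant uptake")

# Caveat from the comment: real uptake scales with the remaining excess, so
# the decline is closer to exponential and full relaxation takes longer.
```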

    • You continue to make the impossible assumption that sources and sinks aren’t variable. You persist in simple calcs – simple linear concepts – for complex systems. It is all a very silly charade.

      • Have you actually done the convolution calculation yourself?

        The concept that you and others seem to miss is that probability and statistics in the large can often simplify complex systems.

        You continue to make the impossible assumption that sources and sinks aren’t variable.

        Using probability models allows us to model this variability. You seem to be very conflicted in your stance.

      • Chief Hydrologist

        The mad theory that you perpetuate is that data is needed to define probabilities and statistics.

        I had to look this up.

        ‘In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions.’
        http://en.wikipedia.org/wiki/Convolution

        No it is just nonsense – to think that we could apply probabilities without experience has no possible meaning. The probability that it varies x is this y that and z a herd of giraffe – great use for the infinite improbability generator.

        Why do you think you are entitled to bother me with nonsense?

      • Chief Hydrologist

        Definitely annoyed and incoherent –

        The mad theory that you perpetuate is that data is not needed to define probabilities and statistics…

        I have had enough of your insanity as well.

        It may not be your fault that you haven’t been exposed to the approach. I have noticed that certain branches of engineering and science never introduce the topic of convolutions. I am not exactly sure why this is, but it is certainly short-sighted. Convolutions form the basis for everything from the Central Limit Theorem (applied probability) to linear and nonlinear response functions. The fact that it gets used to explain both deterministic and stochastic behavior makes it a very general approach.

        “It is difficult to exaggerate the importance of convolutions in many branches of mathematics.” — William Feller, “An Introduction to Probability Theory and Its Applications”

        You may want to bone up on the topic. Climate scientists use the approach all the time because they realize that forcing functions are not idealized delta impulse functions and thus they need to convolve the input with the response function to model the result.

        You might want to be a little less condescending to people like Pekka.

        ‘More specifically any agreement in far tails with results of Archer is likely to be dependent on how Archer did his work rather than on the real world facts, because the empirical support for any conclusions on the far tails is statistically extremely weak. Relevant empirical data exists only from the very distant past, and the results on far tails are determined by the methods used to extract information from the far tails. A very small change in the methods makes the outcome totally different.’

        ‘In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of random variables is the convolution of their corresponding probability mass functions or probability density functions respectively.’ A simple concept involving the sum of two or more nominally random variables.

        The key term however is having empirical support for observations of what Pekka calls the far (as opposed to fat) tail. Without observation there is no rational basis for applying probabilities.

        Show me first the variability of the specific CO2 compartments – and then the sum of those in the atmosphere? Rather – don’t bother because I won’t believe you without more detailed data than we have available. We can fit a curve to anything – or make one up out of blue sky – but it is just maths and not science.

      • The Taylor series approximations for exp(-k*t) and a power law like 1/(1+k*t) are exactly the same to first order. That explains why the short-residence crowd gets the time constant wrong.
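The first-order equivalence, and the long-time divergence, of the two forms is easy to verify numerically. A sketch; the value of k is arbitrary.

```python
import math

# Compare exp(-k*t) with the power law 1/(1 + k*t). Both expand as
# 1 - k*t + O((k*t)^2), so they agree at small k*t; the power law's fat
# tail then decays far more slowly.
k = 0.1
for t in (0.5, 1.0, 5.0, 50.0):
    exp_decay = math.exp(-k * t)
    power_law = 1.0 / (1.0 + k * t)
    print(f"k*t = {k * t:>4}: exp = {exp_decay:.4f}, power law = {power_law:.4f}")
```

At k*t = 0.05 the two agree to about a tenth of a percent; at k*t = 5 the power law retains roughly 25 times as much as the exponential.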

        The next issue is that the convolution of random variables can go to one of two classes of stable distributions as a central limit: one is thin-tail and the other fat-tail. The Normal distribution is considered a thin-tailed stable distribution, and it requires thin-tail distributions for the input variates. The other analytic stable distributions are the Lévy and Cauchy, both of which have fat tails and require fat-tail inputs. The Cauchy PDF has a power-law tail which follows an inverse square dependence.

      • Is this a game? Throw in a few inapplicable functions and pretend it is meaningful? Fool the sceptics because they are idiots and won’t know any different?

      • Is this a game? Throw in a few inapplicable functions and pretend it is meaningful? Fool the sceptics because they are idiots and won’t know any different?

        Listen buddy, I am serious as a heart attack about this stuff. I have no idea what your deal is, but you quote something and I can’t tell if you are referring to some imaginary friend. I do the best I can to respond.

      • Passionate about applying functions to a problem with no empirical base. One of them might fit – but how would you know?

        I keep saying that and you refer somewhere else to volcanic emissions – without reference to anything – but they are so small and the problem remains is the old one of too many unknowns in too few equations. We need more knowns and not more functions.

        Now there is no solution in what you say – just another theorist driven mad by the climate wars with a line of enquiry that no one appreciates.

      • Passionate about applying functions to a problem with no empirical base.

        What are you talking about? The empirical base is all around us. We are soaking in it. Egad man, do you not see the record of CO2 increase? Do you just want to close your eyes and stick fingers in your ears?

        One of them might fit – but how would you know?

        That is the role of a probability analysis, as one tries to understand the situation with limited information.

        I keep saying that and you refer somewhere else to volcanic emissions – without reference to anything – but they are so small and the problem remains is the old one of too many unknowns in too few equations.

        You do not understand how impulse response functions work. I surmised this when you had to look up what a convolution was. The records of a volcanic event can be used to judge the response function independent of the size of the event. Maybe you can also learn something about signal processing.

        Now there is no solution in what you say – just another theorist driven mad by the climate wars with a line of enquiry that no one appreciates.

        It must make you very angry that I am a citizen scientist. My interest in the environment is broad reaching and this is confirmed in the fact that I spent only 17 pages of my 750 page book on energy on the topic of CO2 rise and residence time (I didn’t write anything on AGW temperature rise since I am still learning). You like to cast aspersions my way but all I can say is that I have a serious interest in this subject and am not beholden to anyone’s agenda. I am also a fan of human behavior and I find it interesting to see how wound up some people can get when another person just talks.

      • ‘That is the role of a probability analysis, as one tries to understand the situation with limited information.’

        I think I am getting a clue as to the role of ‘probability analysis’ – it is filling in the gaps with imaginary data and then believing it with absolute certainty.

      • Just getting up to speed, eh? Nowadays weather forecasting is 100% probability analysis. All insurance is probability analysis. All betting is probability analysis. These all use limited information because they are trying to anticipate something that will happen in the future, and no behavior will repeat in exactly the same way.

      • Weather forecasts are based on initialised models – accurate for a week at most. Long range forecasting of rain is sometimes expressed as a broad probability – if we have a La Nina there is a probability of higher than average rainfall. This is based on correlation with persistent patterns of SST. Insurance is based on demographics and the law of large numbers. Gambling is either a zero sum game or the house takes a slice.

        You have a single variable – the concentration of CO2 in the atmosphere – that you think you can divine by means of an ontological argument alone. You think that what it looks like can be described by a function of some sort – and that it must be right because it is based on what you think it should look like.

        I can call black 17 – but it doesn’t mean I am right. That’s called gambling?

      • OK, so it looks like you do believe in the utility of probability.
        Ontologically speaking, the environment is ruled by disorder. Tell me something that doesn’t show disorder in the environment and we can stop using probabilities.

      • No – weather and climate are not predicted using probability at all. You are conceptually all over the place in trying to defend a nonsensical, impractical, misguided and deeply unscientific approach. You are defending solutions without data.

        The world is ruled by cause and effect – entropy may occur but it tells you nothing of how energy or matter moves through the world. There is nothing random even in the fall of the roulette ball – it is all deterministically chaotic. We define the statistics of events where we cannot identify cause and effect and it works in simple applications – climate is far from simple.

        No – weather and climate are not predicted using probability at all. You are conceptually all over the place in trying to defend a nonsensical, impractical, misguided and deeply unscientific approach. You are defending solutions without data.

        For as long as I can remember, weather forecasts are presented with probabilities. For example, probability of precipitation (POP) is routine. This is all so common sense to me and so I started looking into this and yes, some years ago “The American Meteorological Society endorses probability forecasts and recommends their use be substantially increased.”
        http://www.ametsoc.org/policy/enhancingwxprob_final.html
        AccuWeather has a thing they call AccuPOP and forecasters seem to use ensemble averaging.

        The world is ruled by cause and effect – entropy may occur but it tells you nothing of how energy or matter moves through the world. There is nothing random even in the fall of the roulette ball – it is all deterministically chaotic. We define the statistics of events where we cannot identify cause and effect and it works in simple applications – climate is far from simple.

        You do not seem to know of the origins of the master equation, the Fokker-Planck equation, the Navier-Stokes equation and the entire class of problems that introduce the concept of diffusion. All of these are at their core based on probability and on conservation of mass, i.e. through the analysis of divergence and gradients. You seem not to realize that a Monte Carlo simulation needs to draw from a probability density function to even work. I suppose you are going to tell me that no one uses Monte Carlo, egad.

        Once again you are engaging in a moot philosophical discussion. Yes, unless you get to the quantum level, actions are deterministic. Most physicists and scientists realize this and then move on and apply practical ensemble approaches.

        I can engage in my own philosophy and find that most of the most ardent determinism followers are also religious dogmatists who believe everything is preordained and there are no shades of gray. I’ve worked with these people and gone to school with them, and I find that there is no reasoning with them. I kind of doubt this applies to you, but you never know.

    • That would imply every other non-human CO2 process is identical from year to year. Hard to believe.

  27. http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf

    The annual CO2 growth rate varies between 0.3 and 3 ppm. In 1998 it was 2.98 ppm and it was the maximum growth (el nino). In 1999 it was only 0.9 ppm. I hope nobody hides the coming decline.

    year ppm/year
    1998 2.98
    1999 0.90
    2000 1.76
    2001 1.57
    2002 2.60
    2003 2.30
    2004 1.55
    2005 2.50
    2006 1.73
    2007 2.24
    2008 1.64
    2009 1.89
    2010 2.42
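A quick summary of the growth rates listed above; a sketch, with the values transcribed directly from the table.

```python
# Mauna Loa annual growth rates from the table above, ppm/yr (1998-2010)
rates = [2.98, 0.90, 1.76, 1.57, 2.60, 2.30, 1.55,
         2.50, 1.73, 2.24, 1.64, 1.89, 2.42]

mean = sum(rates) / len(rates)
print(f"mean {mean:.2f} ppm/yr, min {min(rates)}, max {max(rates)}")
```

The year-to-year spread (0.9 to 3.0 ppm) is larger than the mean itself, which is exactly the variability being argued over in this thread.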

    • Thanks, Edim, for these figures. The observed pattern seems reasonable: short-term fluctuations (due to regular seasonal and less regular changes in the balance of CO2 sinks and sources) superimposed on a steadier, larger-amplitude background trend (the addition of new CO2). It’s like waves which modify the water level by small amounts in the short term, while the larger, longer-term change is due to the rise or fall of the tide.
      And unless the Mauna Loa data are fudged, there is extra CO2. The important thing is (as Tall Bloke suggests below): does it matter?
      Skeptics and critics of the climate orthodoxy might be better employed in assessing whether the increased CO2 is reflected in worryingly – or even measurably – increased surface temperatures.

  28. Tomas Milanovic

    I cannot believe that people want to tackle this extremely complex and hitherto poorly understood problem with primitive equilibrium models, Henry’s laws and “boxes” with time constants.
    Consider just this paper: http://www.ess.uci.edu/~jranders/Paperpdfs/2001ScienceBehrenfeld.pdf.

    We learn there that the NPP (Net Primary Production of oceanic and terrestrial plants) is around 110 billion tons of carbon per year, which is about 14 times the CO2 emissions.
    We learn further that the NPP is not only highly variable, by amounts roughly equivalent to the CO2 emissions, but also sensitive to oceanic oscillations.
    We have here nothing that “averages out”, and this is no simple physics with Henry’s law.
    About half of it is phytoplankton. This is a huge living carbon storage governed by biology and metabolism, depending on parameters like light availability (e.g. cloudiness), nutrient availability (e.g. oceanic currents), salinity and temperature.
    This absolutely cannot be treated as a “box” with one time constant.
    It is a huge, poorly understood stock-and-flow system with no conservative parameter, chaotically fluctuating on large time scales (biennial, decadal and more) at orders of magnitude equivalent to the variable we are talking about.

    Of course one must take this paper with a huge grain of salt. The observations cover only a short time period and the models don’t agree precisely with each other.
    My purpose is merely to tell to interested readers that the CO2 fluxes and their dynamics especially for larger (decadal and multidecadal) time scales are very far from naive 2 or 3 “independent box” models each described by one time constant.

    • I agree with Tomas Milanovic’s assessment of the situation.
      http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104528

      In any case, before we get worried about the residence time of CO2 in the atmosphere, we need to work out whether changes in CO2 levels are worrying at all.

      The magnitude of the total energy contained in the ocean is such that minor fluctuations in radiative balance between the components of the longwave flux are pretty much irrelevant except on extremely long time scales.

      http://judithcurry.com/2011/08/19/planetary-energy-balance/#comment-104533

    • The problem with your argument is that the biological carbon storage of oceans is actually very small, only of the order of 0.5% of the carbon of the atmosphere and almost negligible compared to the inorganic carbon of surface ocean. Because it’s so small, it cannot contribute to the changes in atmospheric CO2.

      The biologic processes that remove carbon from the surface ocean and move it down to deeper ocean are important, but the uncertainty in the net transfer is not so large that it could change the overall conclusions.

      The biosphere of land areas is much larger and interacts strongly with the carbon in soil. How much these reservoirs may have changed can be estimated, and limits obtained such that conclusions cannot differ much from the mainstream views. On an annual level the variations are large, but over longer periods they cannot be more than a small fraction of the increase in atmospheric carbon.

      • Tomas Milanovic

        Pekka you are being silly now.
        The variation of the yearly fluxes is of the order of the total CO2 emissions and the storage is 15 times that amount.
        You are using in the other direction another silly argument which says that 2 ppm emission per year is so small compared to the carbon in the atmosphere that it can’t possibly matter.
        I am sure that you agree that the second argument is silly so now you only need to realize that the first is equally so.

        I hope you DO realise that a process that removes 15 times as much CO2 from the atmosphere as what you put into it, and is highly variable, will definitely have an influence on the RATE at which the CO2 concentration in the atmosphere changes.
        And how variable it is – just read the paper I linked. It is not so long.

      • The continuous exchange has a large volume, but over longer periods it cannot move more than the reservoirs can take. The ocean’s 3 GtC of biological carbon is really small. Its total amount is recirculated in less than a month. Thus it cannot have much influence even on a seasonal level, and really no effect on annual or longer periods.

        The variations in the net uptake of the continental biosphere and soil appear to be the main reason for year-to-year variability, but that variability cannot continue so strongly in the same direction for long, because the related reservoirs do not change that much.

        I may have a different opinion on, who is silly and leaves essential factors out of consideration.

      • “Oceans 3 GtC of biological carbon” “It’s total amount is recirculated in less than a month.”

        I would be interested in reading your references if you happen to have them handy.

      • This is interesting. The 105 petagrams is 105 gigatons of carbon, or about 385 gigatons of carbon dioxide equivalent, drawn down by photosynthesis per year. The oceans release a net 50 gigatons of carbon per year. So this is headed toward the downwelling debate issue, where it’s the net that counts. :)

      • Plankton savor the
        Sultry ultry-violet.
        Layers of turtles.
        =========

      • The uv and shorter wavelength impact on the deeper ocean is interesting. But that small percentage, tail if you will, is evidently not as popular as other tails.

      • Pekka Pirilä

        You write to Tomas Milanovic:

        The problem with your argument is that the biological carbon storage of oceans is actually very small, only of the order of 0.5% of the carbon of the atmosphere and almost negligible compared to the inorganic carbon of surface ocean. Because it’s so small, it cannot contribute to the changes in atmospheric CO2.

        It is correct that the carbon sink (storage) in the oceans is estimated to be only a fraction of that on land (including vegetation and soils), but the rate of CO2 absorption (which impacts the global carbon balance and hence the changes in atmospheric CO2) is roughly equal.

        See:

        http://www.pnas.org/content/100/17/9647.full

        A rich diversity of marine phytoplankton, found in the upper 100 m of oceans, accounts only for ≈1% of the total photosynthetic biomass, but this virtually invisible forest accounts for nearly 50% of the net primary productivity of the biosphere ( 1).

        http://www.sciencemag.org/content/281/5374/237.full

        Integrating conceptually similar models of the growth of marine and terrestrial primary producers yielded an estimated global net primary production (NPP) of 104.9 petagrams of carbon per year, with roughly equal contributions from land and oceans.

        Max

      • Max,

        Your quotes confirm what I have written.

      • It might confirm what you say. It rather depends on how quickly it is incorporated into the food chain. The carbon is still locked in biomass regardless of where on the food chain it lies.

      • Pekka

        Right.

        It basically confirms that photosynthesis (terrestrial and marine) removes roughly 15 times the annual emissions from humans.

        Max

      • But it also tells that this cannot affect the CO2 content of the atmosphere nearly as much. In particular the amount of carbon in the marine life cannot vary much, because it’s always so small that there’s almost nothing to vary.

        How can it be that the gross rates are brought up time after time, although only net rates matter, and net rates cannot change without corresponding changes in the amount of stored carbon?

        Understanding this cannot be so difficult.

      • Pekka

        Gross rates?

        Net rates?

        What counts is the rate of CO2 conversion on an annual basis (or over any other multi-seasonal time scale).

        This rate of conversion is estimated to be roughly 15 times the rate of human emissions over the same time frame.

        If this rate of conversion shifts (as a result of higher atmospheric or oceanic CO2 concentrations or any other factor), it can become significant.

        What then happens to the terrestrial plants or phytoplankton is another story. Do the phytoplankton end up in the marine food chain, in some cases re-creating CO2 which is absorbed in the ocean’s buffering process or released to the atmosphere or in other cases going into sea shells which end up eventually sinking to the ocean bottom?

        Who knows?

        I’m afraid that the unknowns here exceed the knowns.

        Max

      • I can only repeat:

        It cannot be so difficult to understand.

      • There may be a climatic impact through UV interaction with stratospheric ozone. There may be a climatic impact through UV interaction with oceanic phytoplankton. If both exist, might not they interact in concert with the UV variability?

      • Hi Pekka,

        You are in my world now. The sequestration of carbon in oceans is by two processes – the so called biological and chemical pumps. http://earthguide.ucsd.edu/virtualmuseum/climatechange1/06_3.shtml

        The chemical pump is rate limited by the supply of calcium from rock weathering – which is influenced by temperature, groundwater moisture, carbonic acid in rain and mechanical breakdown by roots and fungi. There is no compelling reason to suggest that this is static.

        The biological pump is limited by macro- and micro-nutrients – including (important for carbon sequestration) silicate for diatoms and carbonate for coccolithophores. The ecosystem is light limited – reliant on phytoplankton for primary production as the basis of the food chain. The difference between this soup of micro-organisms on the surface of the oceans and the terrestrial plants can be explained in terms of life cycle. Some terrestrial plants can live for 1000 years – when they die, most of the carbon is returned to the atmosphere through the respiration of microscopic heterotrophs. The life cycle of oceanic plankton is a matter of days – whereby some sinks into the depths, where it forms thick organic, silicate and carbonate layers at the ocean bottom. The micro-organisms with silicate and carbonate shells sink relatively quickly.

        The abundance of these organisms varies especially as nutrients are returned to the surface in deep ocean upwelling. The major area for this is in the region of the Humboldt Current off the coast of South America.
        Upwelling does change considerably – firstly in the ENSO cycle, leading to short-term booms and busts in oceanic and terrestrial biology, but also on the decadal, centennial and millennial timescales that we know of. The well-known Pacific Decadal Oscillation involves decadal changes in upwelling in the north-east Pacific. The frigid and nutrient-rich upwelling is super-saturated with carbon dioxide – but also leads to blooms in carbon-fixing organisms.

        All in all – I suspect that an assumption that these systems don’t change over time cannot be substantiated.

      • Rob,

        While I’m certainly not an expert in your field, I’m aware of what you describe, and I have tried to formulate my statements so that they are valid over a wide range of the variability and uncertainties of the type you describe.

      • Chief Hydrologist

        Hi Pekka,

        It is a pleasure to hear from you. I hope you are well and happy and take care to stay so.

        Cheers

      • Hi Rob,

        I’ll try to maintain my physical and mental health better next week, which may lead to little or no contribution to these discussions over that period.

        I hope that you’ll also take care.

      • Chief,
        I believe that when you read up on what limnologists are doing, you will find a new dimension to the carbon sink question.

      • Dang, Chief, I wanted to put my question here about the possible interactive climate effect of UV on oceanic phytoplankton and stratospheric ozone.
        ========

      • kim, Right on.

      • Chief Hydrologist

        Hi Kim,

        The micro-organisms release dimethyl sulphate into the atmosphere – and so provide their own sunscreen and are thus impervious to UV. Then it becomes very cloudy and rains a lot, blocking IR and causing the stratospheric ozone to cool, in turn inducing a runaway cooling effect with NAO and PDO feedbacks and the inception of the next glacial – due in about a week. Don’t forget to rug up.

        Cheers

      • The micro-organisms release dimethyl sulphate

        Do you mean Dimethyl Sulphide?

        BTW, this has a very short residence time.

      • Pekka
        That is a large assumption you are making.

      • Thank you for your reply above

        When I read the 15x figure, I knew it would cause angst

      • Depends on the type of phytoplankton.

        “Until now, it was thought that all the photosynthetic algae and bacteria living in the ocean drew carbon dioxide out of the air and used it to build sugars and other carbon-rich molecules to use as fuel. But two new studies by researchers at Stanford and the Carnegie Institution show that Synechococcus, a type of cyanobacteria (formerly called blue-green algae) that dominates much of the world’s oceans, has evolved a mechanism that short-circuits photosynthetic carbon-dioxide fixation while still producing energy. The alternate approach is found in regions of the ocean where some of the ingredients necessary for traditional photosynthesis are in short supply.

        “The amount of carbon dioxide being drawn down by the phytoplankton in nutrient-poor oceans might turn out to be significantly lower than we thought,” said Shaun Bailey, a postdoctoral researcher working in the Carnegie Institution’s Department of Plant Biology with Arthur Grossman, a staff scientist at the institution and a professor, by courtesy, in Stanford’s Biology Department.”

        http://news.stanford.edu/news/2008/april2/plant-040208.html

    • Thanks Tom, a good starting point for actually discussing the issues, except Chief has already tried to bring in the stocks-and-flows model with variable rates, but it appears that some do not want to discuss these issues. One you left off your list is the acidic effect of releasing nutrients that tends to increase the rate-limiting variable, for a net effect of increased flux, which would indicate that a static constant for the fat-tailed Bern model tends to overestimate the residence time. And being fat-tailed means that small changes can greatly increase or decrease the residence time. Nor do some seem to appreciate that such stiff equations, in a model well past our ability to assume or measure, mean that such claims are speculative, neither verified nor validated.

  29. Here is Railsback’s take on residence time. It is certainly an interesting one.

    http://www.gly.uga.edu/railsback/Fundamentals/AtmosphereCompV.jpg

    The range of 2-10 years is reasonable considering the unknowns involved.

  30. This thread is better described as one discussing the current understanding of the lifetime of CO2 in the atmosphere. Salby’s paper, while dismissed by many, has not actually been read. Until it is available, it is not reasonable to claim the issue is resolved.

  31. Even Phil Jones of CRUgate was forced to admit that there has been no significant global warming since 1995. After all of the shenanigans, involving data corruption and data gone missing, the global warming house of cards collapsed in the UK.

    We then learned that the raw data for New Zealand had been manipulated; and, NASA’s data is the next CRUgate: satellite data shows that all of the land-based data is corrupted by the Urban Heat Island effect.

    Manipulation of the data is so bad that the recent discovery concerning a weather station in the Antarctic, where the temperature readings were actually changed from minus signs to plus signs to show global warming, almost comes as no surprise.

    And then, there was a peer-reviewed study showing the ‘tarmac effect’ of land-based data in France where only thermometers at airports–in the winter–showed any warming over the last 50 years. Since then, the problem of data corruption due to continual snow removal during the winter at airports where thermometers are located–while all of the surrounding countryside is blanketed in snow–has been shown to extend far beyond the example in France (e.g., Russia, Alaska).

    In reality, there essentially has been no significant global warming in the US since the 1940s. The only warming that can be ferreted out of the temperature records is in the coldest and most inhospitable regions on Earth, such as in the dry air of the Arctic or Siberia where going from a -50 °C to a -40 °C at one small spot on the globe is extrapolated across tens of thousands of miles and then branded as global warming.

    Warming before 1940 accounts for 70% of the warming that took place after the Little Ice Age ended in 1850. However, only 15% of the greenhouse gases that global warming alarmists ascribe to human emissions came before 1940. Obviously, the cause of global warming both before and after 1940 is the same: solar activity during that period was inordinately high. It’s the sun, stupid. Now we are in a period where the sun is anomalously quiet; and now we are in a period of global cooling, and have been for almost a decade.

    And what about the measurement of atmospheric CO2? We learned that the CO2 readings are based on measurements taken on the site of an active volcano (Mauna Loa) and have been completely fabricated out of whole cloth by a father and son team who have turned data manipulation into a cottage industry for years. (e.g., “Time to Revisit Falsified Science of CO2.” by Dr. Timothy Ball)

    • Wagathon,
      While there is good reason to question the amounts of CO2 in the atmosphere and the accuracy with which they are measured, and even to question the consensus view regarding long-term CO2 residence times, I think attributing motives to the Mauna Loa group is out of place.
      Additionally, CO2 is measured at several points and these results seem to concur with Mauna Loa.
      Skeptics do not need to argue against the GHG/Tyndall theory to point out that the idea of a climate crisis caused by CO2 fails.
      The lack of crisis – the lack of any meaningful trend lines in climate events – makes that argument.
      The juicy bits – climategate, Mann’s ‘fudge factor’ hockey stick, the lack of OA, the obvious political bias of many AGW promoters, the utter lack of any actual mitigation policy/technology, the profiteering, etc. – are all icing on the cake.
      If Salby’s paper, when finally available for actual review, holds up, then great.
      But that will not change the basic point, whether Salby’s paper survives or not: the world is not facing a climate crisis, and the AGW community has offered nothing in the realm of reality to mitigate the claimed crisis in the first place.

      • hunter

        Thanks for a very concise summary of the situation.

        Max

      • Do you appreciate by how many parts per million the atmospheric CO2 levels at Mauna Loa can vary–in a single day?

      • Wagathon,
        Yes, but if it trends up or down, the daily fluctuations are not important.
        Last time I checked, the other CO2 stations around the world corroborate the trend, but that has been a while.
        Now, that does not directly address the change in CO2 measurement regimes from the 19th century to the present. That may be worth revisiting.
        And the persistent underlying problem of data-source credibility that you bring up is also important.
        But think on this: even if the climatocracy is getting it wrong, and even more are indulging in the noble-cause corruption that climategate showed, what have they actually demonstrated?
        Nothing that is actually outside the historical range of climate.
        If there is systemic book-cooking in the AGW promotion industry, it will be found out over time. Climategate certainly gives us some pretty strong hints. But be patient. It will come out in the end.
        But even before then, AGW still fails based on simply critically reviewing what is claimed.

      • Facts are facts. “Carbon dioxide is 0.000383 of our atmosphere by volume (0.038 percent). Only 2.75 percent of atmospheric CO2 is anthropogenic in origin. The amount we emit is said to be up from 1 percent a decade ago. Despite the increase in emissions, the rate of change of atmospheric carbon dioxide at Mauna Loa remains the same as the long term average (plus 0.45 percent per year). We are responsible for just 0.001 percent of this atmosphere. If the atmosphere was a 100-story building, our anthropogenic CO2 contribution today would be equivalent to the linoleum on the first floor.” ~Joseph D’Aleo

      • Wagathon,
        That is why I think that the carbon cycle story is not well understood yet.
        And I believe that the climate is demonstrating a significant capability to handle forcings of different sorts with few problems.
        After all, we have been changing land use and vegetation, massively urbanizing, moving surface water around, etc., for a very long time, with a climate that has pretty much ignored us.
        I was at a meeting in my lovely drought-stricken city last night with some very educated people, many of whom deeply believe in AGW.
        When the panelists, who included the Chairman of the State water planning board, were asked why the current drought was not being included in the planning, the bottom-line answer was that this famous drought is nowhere near as bad as the one we had ~60 years ago.
        Each and every AGW scare ends with that same whimper: Katrina, the Russian heatwave, the Pakistan floods, the Australian drought and flood, etc.

  32. Hunter

    Well-summarised. Scepticism at its healthy best.

  33. The problem with the prevailing paradigm is something which is unobservable to the mainstream of climate science, because they are focusing on deterministic systems with deterministic forcings. But, nature is not deterministic. We are dealing with a stochastic system here.

    One year, there will be a spate of volcanic eruptions. Other years, there will be massive forest fires. Then others, regions will become deserts, while others will bloom. All kinds of variations can be associated with ocean dynamics. Small random events, perhaps. But over time, they accumulate. And, an accumulation of independent random events begets a random walk, whose dispersion from an initial condition grows with the square root of time.

    The notion of a long residence time means that, over a substantial portion of that timeline, the system behaves essentially as a pure accumulator. So, in addition to accumulating any and all excess emissions (distributing them over all reservoirs), it will accumulate any random events as well, and it will show increasing dispersion from the initial condition approximately increasing as the square root of time, and topping out at a level proportional to the dominant time constant in roughly that timeline.

    The variability, the crinkles and so forth you see in the data, should increase within an approximately square root envelope from a given initial time when viewed over a timeline associated with the residence time. Moreover, the one-sigma upper bound should be approximately equal to the year-to-year variability multiplied by the time constant. What we actually see is a variation from the long-term quadratic factor of at most about +/- 1 ppm. If we assume half of that variation is going into the land and ocean reservoirs as posited by the reigning paradigm, then we might be able to argue that there is +/- 2 ppm variability. A 100-year time constant would then mean that the year-to-year variability is at most on the order of about 0.02 ppm, or 20 ppb. That appears to me to be an absurdly small value.

    What we see is actually a tight regulation of CO2 levels in the high frequency regime. This is characteristic of a high bandwidth (short residence time) system. In such a tightly regulated system, any large excursion necessarily comes about from a very strong input, or a change in the balance of forces which establish the equilibrium.

    This, IMHO, is what is happening. It could be deep ocean upwelling from the distant past which is driving an increase in atmospheric partial pressure. Or, it could be a temperature dependent swing due to the temperature rise since the LIA. This latter possibility is consistent with the observed interannual variation in CO2 due to global temperature variation if the dominant time constant is on the order of perhaps ~30 years.
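
The square-root dispersion claim in this comment is easy to check numerically. Below is a minimal sketch (the ensemble size, horizon and unit step are arbitrary illustrative choices, not figures from the comment): a pure accumulator of independent annual shocks shows an ensemble spread that grows as the square root of elapsed time.

```python
import math
import random

def random_walk_spread(n_paths=2000, n_years=100, sigma_step=1.0, seed=0):
    """Accumulate i.i.d. annual shocks along many paths and return the
    ensemble standard deviation at each year; for a pure accumulator
    the spread grows like sqrt(t)."""
    rng = random.Random(seed)
    paths = [[0.0] * (n_years + 1) for _ in range(n_paths)]
    for path in paths:
        for t in range(1, n_years + 1):
            path[t] = path[t - 1] + rng.gauss(0.0, sigma_step)
    spread = []
    for t in range(n_years + 1):
        vals = [p[t] for p in paths]
        mean = sum(vals) / n_paths
        spread.append(math.sqrt(sum((v - mean) ** 2 for v in vals) / n_paths))
    return spread

spread = random_walk_spread()
# spread[100] should sit near sqrt(100) = 10 step-sigmas,
# and spread[25] near sqrt(25) = 5.
```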

    • I can actually understand what you are saying. Have you considered simulating the envelope via a model and then plotting the result?

      • That’s great! You may be the first one.

        However, in recalculating the effect, I must note that I was wrong to say “A 100 year time constant would then mean that the year-to-year variability is at most on the order of about 0.02 ppm”. The proper factor is the square root of the time constant, so this would increase to 0.2 ppm. Perhaps that is not quite so absurd. This, I will have to think about…
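
The corrected square-root factor can be checked with a one-box (first-order) sketch, assuming a 100-year time constant driven by unit white noise; none of the parameters below come from the thread. Analytically, the steady-state output standard deviation of such a system is about sqrt(tau/2) times the input standard deviation, the same order as the square-root-of-the-time-constant factor discussed above.

```python
import math
import random

def ar1_amplification(tau=100.0, sigma_in=1.0, n_steps=200_000, seed=1):
    """Drive x[t+1] = (1 - 1/tau) * x[t] + noise and return the ratio of
    the steady-state output std-dev to the input std-dev.  For large tau
    this ratio approaches sqrt(tau / 2)."""
    rng = random.Random(seed)
    decay = 1.0 - 1.0 / tau
    x, samples = 0.0, []
    for t in range(n_steps):
        x = decay * x + rng.gauss(0.0, sigma_in)
        if t > 10 * int(tau):            # skip the spin-up transient
            samples.append(x)
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return math.sqrt(var) / sigma_in

ratio = ar1_amplification()              # roughly sqrt(100 / 2), i.e. ~7
```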

  34. Bart,

    “The problem with the prevailing paradigm is something which is unobservable to the mainstream of climate science, because they are focusing on deterministic systems with deterministic forcings. But, nature is not deterministic. We are dealing with a stochastic system here.”

    I think I must be allergic to the word “paradigm”! It usually crops up in these kinds of sentences, which look like the output of one of those gobbledygook generators, and it brings me out in a hot flush!

    BTW, what the hell does this mean, anyway?

    • I had expected I might need to review Elementary Signals & Systems with some commenters. I have to admit I didn’t prepare to field questions about English 101.

      • Bart,

        The ultimate solution has to involve systemised logistical and optional incremental contingencies using parallel 21st Century modular capability and knowledge-based exploratory innovation. At base level, this just comes down to four-dimensional third-generation concepts…….

        Look, you’ve got me at it now!

        For all your talk of “square root envelopes” and “one sigma upper bounds”, what I think you mean in your post is that mainstream science is overconfident in its ability to make accurate predictions. But, because some events are random, such as volcanic eruptions, and therefore associated risks have to be treated statistically, this may well be misplaced.

        It’s a convoluted variant of the “they can’t predict next week’s weather, so how can they predict the next century’s climate” type of argument.

        PS I’ve just noticed, in your post, you’ve used the word ‘paradigm’ twice!

      • No. You aren’t following the argument at all. It has nothing to do with prediction. It has to do with how natural processes evolve, and what properties they tend to exhibit.

      • Emission of large amounts of carbon from burning fossil fuel is not a natural process but it does force the natural system to respond. Understanding how it responds is the same thing as prediction.

      • Again, this is beside the point. I am talking about the dispersion properties from natural variations which should be observed if the residence time were long.

    • It’s inflation. Originally it was “a penny for your thoughts.” Then it went up to “just my two cents.” Kuhn and Lakatos raised the ante to twenty cents.

    • Tempterrain

      You question the meaning of the word “paradigm” when discussing climate science. Let me see if I can give you my take on that.

      Wiki has a fair definition of the word “paradigm”, and of its meaning in the scientific sense.

      The classical description of how paradigms work in science is given by:
      Kuhn, Thomas S. The Structure of Scientific Revolutions, 3rd Ed. Chicago and London: Univ. of Chicago Press, 1996. ISBN 0-226-45808-3

      A paradigm is neither “good” nor “bad”.

      It can keep people from “re-inventing the wheel” over and over again, but it can also keep people from “thinking outside the box”.

      In the worst case, as described by Kuhn, it can lead to important data points, which lie outside the “paradigm”, being ignored or written off as meaningless “outliers”. This has been described as “paradigm paralysis”.

      The current “paradigm” in climate science is the (so-called) “consensus” view on AGW, as promoted by IPCC.

      As was shown on other threads here, some of this “consensus” may well have been contrived via a corrupted IPCC process.

      If something totally new comes along, which falsifies an existing “paradigm”, this can lead to a “paradigm shift”, whereby the new finding eventually becomes the new “paradigm”.

      A “paradigm shift” usually does not occur smoothly, as individuals who have invested in the old “paradigm” will tend to defend it against new threats from the outside.

      All of this makes sense to me in many fields outside climate science (including business), and I see no reason why it should not apply to climate science equally well.

      Max

  35. Since only about half of our annual CO2 emissions are contributing to the increasing atmospheric CO2 level, where is the other half going?

    If we abruptly stopped emitting CO2 altogether, presumably the half of our emissions that has been accumulating would stop accumulating.

    But would the half that is being removed somehow continue to be removed, or would it equally abruptly stop being removed?

    I don’t see any possible mechanism for the latter, which is surely driven by the level of atmospheric CO2 and not by the rate of our emissions. How could the mechanism removing that half tell that we’d suddenly stopped emitting?

    But if it’s the former, that would imply that when we stopped emitting, CO2 would not remain steady but would decline at 2 ppmv per year, at least in the short term, this being the rate at which the removed half is currently being removed. Or even more if the removed portion is actually 60% rather than 50%, which it may well be.

    That rate of decline would presumably slow down as the level decreased, but would it slow to zero as the traditionally held 280 ppmv level is approached, or would it overshoot?

    I’m not trying to defend any sort of skeptic position here, I’m just doing the obvious math and asking the obvious questions.
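
The arithmetic behind these questions can be made concrete with a toy one-box model, assuming, as the question does, that removal is driven by the atmospheric level rather than by the emission rate. All numbers below are round illustrative figures (390 and 280 ppmv, ~2 ppmv/yr of current uptake), not results from any study.

```python
def drawdown_after_cutoff(c0=390.0, baseline=280.0, uptake_now=2.0, years=50):
    """Toy one-box model: natural uptake is proportional to the excess of
    CO2 over a pre-industrial baseline.  The rate constant is calibrated
    so that uptake today is ~2 ppmv/yr (half of ~4 ppmv/yr of emissions);
    emissions are then switched off and the decline is tracked."""
    k = uptake_now / (c0 - baseline)   # ~0.018 per year
    c, trajectory = c0, [c0]
    for _ in range(years):
        c -= k * (c - baseline)        # no emissions, uptake continues
        trajectory.append(c)
    return trajectory

traj = drawdown_after_cutoff()
# The first-year drop equals today's uptake (~2 ppmv); the decline then
# slows as the excess over the baseline shrinks toward zero, so in this
# toy model the level approaches 280 ppmv without overshooting.
```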

    • “But would the half that is being removed somehow continue to be removed, or would it equally abruptly stop being removed?”

      Under the assumption that residence time is long, it would essentially stop abruptly, and very slowly drain away. Under the assumption that the residence time is short, and other processes are responsible for the rise, it would effectively change nothing, or at least very little.

      “But if it’s the former, that would imply that when we stopped emitting, CO2 would not remain steady but would decline at 2 ppmv per year…”

      No, because it would mean that the rise was not a result of human forcing, so the termination of our contribution would change nothing, or at least very little.

      • Under the assumption that residence time is long, it would essentially stop abruptly, and very slowly drain away.

        Since “residence time” is a meaningless concept in other contexts like your bank account, what is different about CO2 that makes “residence time” meaningful in the atmosphere?

        If for the sake of financial equilibrium in your life you balance your monthly expenditures with your monthly income, what is the “residence time” of the money that flows into your account? And is the answer any different if you balance your annual expenditures with your annual income?

        In any cycle that has large flows in and out, the only meaningful numbers for predicting future levels, whether of your bank balance, or of CO2 in the atmosphere, or of heat accumulation in the atmosphere, are net flows.

        For those who believe “residence time” is a well-defined concept independently of the straightforward logic of analyzing net flows in terms of flows in and out, the possibility of CO2 dropping precipitously as a result of abruptly terminating human CO2 emissions will make no sense whatsoever. For me it is residence time that makes no sense.

        No, because it would mean that the rise was not a result of human forcing, so the termination of our contribution would change nothing, or at least very little.

        No, because forcing is a radiation concept. Our understanding of forcing has nothing to do with our understanding of how much CO2 is entering and staying in the air.

        We know that we’re currently adding a little over 9 GtC (gigatons of carbon) to the atmosphere each year, based on fuel records, gas flares in oil fields, cement production, etc.

        And we’re monitoring the CO2 in the atmosphere with excruciating care at Mauna Loa and observing that it is rising by only 4.9 GtC a year. So the rate of accumulation is only 4.9/9 ~ 55% of the rate of emission.

        The other 45% has to be going somewhere. Unless it’s going into outer space, it has to be going into the oceans and land including vegetation and other consumers of CO2. If we reduce our 100% of emissions to 0%, is that 45% that’s currently going into the surface and vegetation going to stop instantly too? How could that happen? Would the planet notice we’d suddenly stopped?

        I don’t see how it could.
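
The bookkeeping in this comment is simple enough to verify directly; the only number below not taken from the comment is the standard conversion factor of roughly 2.13 GtC per ppmv of atmospheric CO2.

```python
# Check the emissions bookkeeping from the comment above.
GTC_PER_PPMV = 2.13    # standard conversion, GtC per ppmv of CO2

emitted = 9.0          # GtC/yr, from fuel records, gas flares, cement...
accumulating = 4.9     # GtC/yr observed to remain in the atmosphere

airborne_fraction = accumulating / emitted      # ~0.54, i.e. ~55%
sink_uptake = emitted - accumulating            # ~4.1 GtC/yr into ocean + land
rise_ppmv = accumulating / GTC_PER_PPMV         # ~2.3 ppmv/yr, close to the
                                                # ~2 ppmv/yr seen at Mauna Loa
```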

      • “…what is the “residence time” of the money that flows into your account?”

        The time it takes for banking fees to drain it.

        “No, because forcing is a radiation concept.”

        I am speaking of “forcing” in a general sense, in this case, as the CO2 input from human activity.

        “The other 45% has to be going somewhere.”

        It is going into the oceans and land. Some gets re-emitted into the atmosphere in a cycle, some gets sequestered (at least semi-) permanently. The question before us is, how quickly is the sequestration process occurring?

        The current mainstream thinking is, very slowly. If the answer is, in fact, quite rapidly, then this demands another mechanism for rapid replenishment into the system to replace, or even surpass, that which is being taken out.

        “If we reduce our 100% of emissions to 0%, is that 45% that’s currently going into the surface and vegetation going to stop instantly too?”

        You have to look at the entire set of reservoirs, atmosphere, oceans, and land. That 100% is divided amongst them. If there is very little sequestration and replenishment going on, as the reigning paradigm holds, then it will just stay in those repositories.

        Under the reigning paradigm, vegetation has negligible net effect, because rotting vegetation is releasing CO2 as quickly as growing vegetation soaks it up. (However, even if that is true to a significant extent, the flows are so large that, even a fairly small deviation would upset all the calculations.)

        Again, under the reigning paradigm, the main repository is the oceans. This reservoir soaks up CO2 in proportion to the partial pressure in the air. If that partial pressure stops increasing, then the accumulated CO2 in the oceans stands pat in equilibrium.

        I hope that clears things up a bit. Keep in mind that in explaining the reigning paradigm to you, I am not endorsing it.

      • The current mainstream thinking is, very slowly.

        That would be the part of the mainstream that can’t subtract 5 GtC from 9 GtC. If you have a big bucket like the atmosphere and you pour 9 GtC into it and 5 GtC remains, 4 GtC has left the bucket.

        Of course by some definitions 4 GtC/yr is “very slowly,” in case that’s what you meant. However any bucket with a 4 GtC/yr leak in it will continue to leak 4 GtC/yr when you suddenly stop pouring in 9 GtC/yr. Instead of the level in the bucket continuing to rise at 5 GtC/yr, it will immediately start falling at 4 GtC/yr.

        I have been trying to visualize a scenario in which what you describe happens. The only way I can do it is if I imagine some mechanism kicking in to replenish the 4 GtC after we stop our 9 GtC. Such a mechanism might be operated by invisible pink unicorns, not sure what else.

        If all the world except me is right on this one, I have very suddenly suffered an unexpected aphasia and had better get someone to drive me to the hospital since I wouldn’t be safe on the road. My brain must be severely damaged!

        I just can’t imagine a rational world in which people can think the way you describe.

      • You’re still not seeing it. Think of it this way. You’ve got three buckets labelled Earth, Wind, and Water, respectively. The Earth bucket is relatively small in diameter, but the Wind and Water buckets are roughly the same diameter in a square root of 55/45 ratio. They’re all connected together with a pipe near the bottom.

        You start pouring water into the Wind bucket. Because of the pipes between them, they all start gaining water. You stop and look, and see that each one holds water at the same height, but you’ve only got roughly 55% of what you poured in sitting in the Wind bucket. But the volume in all three buckets together is the same as the total you poured in.

        That’s how it all works. A more apt analogy would have holes in the Earth and Water buckets, which drain water slowly in the “long residence time” case, and quickly in the “short residence time” case. And, there is an additional external water feed, which adds additional water quickly or slowly, respectively, as needed to maintain the buckets at a particular level.

      • er… And, there is an additional external water feed, which adds additional water slowly or quickly, respectively, as needed to maintain the buckets at a particular level.
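
Bart’s connected-buckets picture can be sketched numerically. This is only an illustration: the cross-sections (Earth small, Wind and Water in a 55:45 area ratio), the pipe constant and the step sizes are arbitrary choices, not figures from the thread.

```python
def equilibrate(areas, poured_into, volume, k=1.0, dt=0.01, steps=20000):
    """Connected-buckets toy: water moves through a bottom pipe between
    each pair of buckets at a rate proportional to their height
    difference, until all levels are equal.  Pairwise transfers conserve
    the total volume exactly."""
    vols = [0.0] * len(areas)
    vols[poured_into] = volume
    for _ in range(steps):
        heights = [v / a for v, a in zip(vols, areas)]
        for i in range(len(areas)):
            for j in range(i + 1, len(areas)):
                flow = k * dt * (heights[i] - heights[j])
                vols[i] -= flow
                vols[j] += flow
    return vols

# 'Earth' is small; 'Wind' and 'Water' cross-sections are in a 55:45 ratio.
areas = [5.0, 55.0, 45.0]
vols = equilibrate(areas, poured_into=1, volume=100.0)
# At equal heights the volumes are proportional to the areas, so the
# Wind bucket ends up holding 100 * 55/105, a bit over half the total.
```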

      • It’s not necessary to speculate on all the alternatives, because we know something about what’s going on.

        The atmosphere is almost instantly in balance with the surface ocean. From the point of view of temporal development it’s thus better to look at the combination of the atmosphere and the surface ocean. Unfortunately that’s not well defined, as the boundary between the surface ocean and the rest of the ocean is not precise. Taking the Revelle factor into account, the surface ocean may rapidly take 10-15% of the variation. Thus the other uptake processes take perhaps 40%, and this part would continue at the same speed for a while even if emissions stopped totally. Reducing emissions to 40% of the present level should keep the atmospheric concentration constant as long as the other processes of uptake continue to be as effective as they are now.

        Even emissions at the level of 40% of the present would gradually fill the ultimate potential of uptake. The biosphere cannot grow forever, and even the soil has its limits, although they are highly dependent on the state of the biosphere. The same applies to the deep ocean (the surface ocean was assumed to stay in balance with the atmosphere and wouldn’t change in this scenario).

      • “Residence time” is not a meaningless concept in financial contexts. Days of inventory on hand or days working capital are useful metrics, for example. But those figures are most useful when they change; an adverse change can be an early warning of fiscal problems.

        I agree that the absolute estimate of average residence time for CO2 is not very useful…unless it’s changing.

    • Vaughan Pratt

      Let me give you my answer to your question, which is somewhat different from Bart’s.

      Currently, the equivalent of around 4 ppmv of CO2 enters the atmosphere each year from human emissions.

      IF (the big word) we assume that human emissions are the only factor causing change in a global carbon cycle, which is otherwise at equilibrium, the atmospheric concentration should increase by 4 ppmv/year.

      However, it does not.

      It only increases at around 2 ppmv/year.

      Residence time estimates range all over the map (from 5 years to 400 years – see earlier post), but one set of studies puts the half-life of CO2 in our climate system at 80 to 120 years.

      IF (the big word again) 120 years is the half-life of CO2 in our climate system, then the “decay rate” is 0.58% of the atmospheric concentration, which turns out to be around 2 ppmv/year.

      IOW, the current observations would tend to validate a CO2 “half-life” in our atmosphere of 120 years, IF (the big word) humans are the principal cause for the increase.

      IF one assumes a “half-life” of 120 years, and no further additions into the system, it is easy to calculate how long it would take until CO2 levels came back down to pre-industrial values. But this is a purely hypothetical calculation, which has absolutely no real importance, primarily because OTHER FACTORS have not been considered.

      Max
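Max's arithmetic checks out in a few lines; the 390 ppmv concentration below is an assumed round figure for the period under discussion, not a number from the comment:

```python
import math

half_life_years = 120.0       # the assumed half-life from the comment
concentration_ppmv = 390.0    # assumed atmospheric CO2, a round figure

decay_rate = math.log(2) / half_life_years      # fraction removed per year
removal_ppmv = decay_rate * concentration_ppmv  # ppmv removed per year

print(f"decay rate {decay_rate:.2%}/yr, removal ~{removal_ppmv:.1f} ppmv/yr")
```

This reproduces the 0.58%/yr "decay rate" and roughly 2 ppmv/yr removal quoted above.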

      • Max, as you point out we know reasonably precisely how much CO2 we’re responsible for emitting into the atmosphere, and even more precisely how much is staying.

        In any other system of checks and balances where we know the flows in and out, we know very accurately what would happen as a result of a large abrupt change to any flow.

        The mere fact that we are clueless as to the “residence time” of a quantity whose fluxes we know accurately should be one hint that it is a meaningless quantity. Another hint should be that it is an undefined quantity.

        If you define it in terms of how long it takes the level of CO2 to change in response to a change in emissions, you cannot then turn around and claim that knowledge of residence time allows you to calculate how long it will take CO2 to decline, because that’s how you defined residence time in the first place! It would be circular logic.

        The only concept worth understanding here is how fast the CO2 level would change in response to a significant change in emissions. There is no such thing as residence time other than that concept, which is fundamental to standard banking practice as well as to any analysis of network flows.

        No one talks about “residence time” of the money in their bank account because it’s meaningless unless you define it in terms of response to change in flows. But then that’s the concept you should be using, not “residence time.”

      • Vaughan

        I can agree that there are so many very basic unknowns surrounding our planet’s carbon balance that trying to guess at “residence time” of the small piece emitted by humans seems like looking for a needle in the haystack. [The discussions on this sort of remind me of the heated debates regarding how many angels can dance on the head of a pin.]

        I only wrote that IF we assume that human emissions and natural decay are the ONLY variables, then the current observation would lead to a suggested half-life of CO2 in our climate system of around 120 years.

        But don’t forget the IF.

        That’s the BIG word here.

        Max.

      • This “residence time” stuff is proof positive that if you talk about something as though it has a real existence long enough, regardless of whether there’s any logical sense to it, people start to believe in it.

        That’s how religions get started: if everyone around you is talking about God as a real entity, with us made in His image, it’s hard to resist the impression that God is a real being like us. Only when you try to reason logically about God do you run into worries about whether he’s black or white, etc.

        Augustus De Morgan was far more logical than his colleagues at Cambridge University, who insisted on adherence to theological dogma and gave him a hard time about his atheism. He was therefore very glad to escape to London, where he joined the faculty of the newly started London University, now University College London.

        When it comes to the doctrine of residence time, I’m a devout atheist. The concept makes no sense to me, other than to the extent that it can be defined in elementary bookkeeping terms. Paraphrasing Bertrand Russell, “I have no need of that axiom.”

      • The only need I can think of for a “residence time” is estimating how effective biofuels may be. For that the 5 to 15 year estimate is fine, indicating that older-growth trees used as fuel would not be as effective at reducing carbon as rapid-growth trees or plants. Fifteen years with an uncertainty fudge implies twenty-year-old growth is more effective left growing, or converted to long-term carbon storage commodities.

      • Max, your reasoning is right, except that you take the total CO2 in the atmosphere as the decaying quantity, i.e. as if CO2 levels could go to zero. The real excess is 100 ppmv above equilibrium, which was 290 ppmv. That gives a half-life of 36 years at the current sink rate.

        The huge tails considered in different models (Bern, Archer) are for enormous emissions, from 3000 GtC to 5000 GtC, far beyond the few hundred GtC humans have burned in the past 160 years. In that case the deep oceans are also affected, which is hardly the case so far. If you have several decay rates (as is the case), the combined decay rate is faster than the fastest, until saturation happens; that is the case for the ocean mixed layer at 10% of the atmospheric increase, but by far not (yet) for the deep oceans and long-term sequestering by vegetation.
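Ferdinand's figure follows from a first-order sketch of the excess alone, using the 100 ppmv excess and 2 ppmv/yr sink given above; the result rounds to roughly the 36 years quoted:

```python
import math

excess_ppmv = 100.0       # excess above the assumed 290 ppmv equilibrium
sink_ppmv_per_yr = 2.0    # current net sink rate from the comment

tau_years = excess_ppmv / sink_ppmv_per_yr   # e-folding time if first-order
half_life_years = tau_years * math.log(2)

print(f"e-folding time ~{tau_years:.0f} yr, half-life ~{half_life_years:.0f} yr")
```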

    • Vaughan,

      I think you’ve understood the problem pretty well. I wouldn’t say there is any danger, or even the remotest possibility, of ever getting below 280 ppmv in the foreseeable future. It’s much more likely that some damage will have occurred to the Earth’s ecosystem, the spring has been overstretched, so to speak, and so the CO2 level wouldn’t go back to the pre-industrial level in the same timescale as it has risen.

      • Assuming perfect elasticity, I’d envisage an exponential decay back to preindustrial CO2 (280 ppmv if that’s the value) if CO2 emissions suddenly dropped to zero, with the initial rate of decay being 9 – 5 = 4 GtC/yr and slowing at some unknown rate but enough not to go below 280.

        And yes, if some elastic limit or yield point had been reached and plastic flow of some kind had occurred, then the decay would be back to some higher value, but still starting at 4 GtC/yr.

        I suppose if there were such a thing as “complete” plastic flow it would result in virtually no decay at all, the system would just remain at the level we’d brought it to. The 4 GtC/yr decrease would still happen in the beginning, but decay to a zero rate of decrease extremely quickly.

        But even with perfect elasticity the response can depend on inertia, with different components having different inertias and hence different response times to sudden changes.
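Vaughan's scenario of decay back toward 280 ppmv at an initial 4 GtC/yr can be sketched as a single-exponential relaxation. The 2.13 GtC-per-ppmv conversion and the 390 ppmv starting level are assumptions not stated in the comment, and a real response would involve multiple time constants:

```python
# A single-exponential relaxation of atmospheric CO2 back toward a
# preindustrial equilibrium after a hypothetical sudden stop of emissions.
# Assumptions not stated in the comment: 2.13 GtC per ppmv of CO2, and a
# 390 ppmv starting concentration.
GTC_PER_PPMV = 2.13
c_eq = 280.0                 # assumed equilibrium level, ppmv
c = 390.0                    # assumed starting level, ppmv
initial_uptake_gtc = 4.0     # initial net uptake from the comment, GtC/yr

# Pick the rate constant so the initial decay matches 4 GtC/yr.
k = (initial_uptake_gtc / GTC_PER_PPMV) / (c - c_eq)   # per year

dt = 0.1                     # years per step
n_steps = 500                # 50 years
for _ in range(n_steps):
    c -= k * (c - c_eq) * dt

print(f"after 50 years: {c:.0f} ppmv (decay slows as c approaches {c_eq})")
```

The decay slows as the excess shrinks, so the level approaches but never crosses the assumed 280 ppmv floor.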

    • Vaughan, the idea that the 2 ppm/yr decline would continue if emission of CO2 stopped is incorrect. The 2 ppm/yr uptake only occurs on condition of a rising amount in the atmosphere. It maintains an equilibrium ratio of CO2 between the ocean (mostly; the biosphere partly) and the atmosphere. This equilibrium is maintained on a fast time scale (5 years), which is why the ocean responds so quickly. You can see that if CO2 emission stopped, the ocean surface would already be close to equilibrium and would not absorb any more. The only way to get additional absorption by the ocean is to bring up water to the surface that has less CO2 in it, which happens on the time scale of the ocean circulation, maybe decades to centuries.

      • No disagreement there, that could well happen. The important point is to have numbers for these various fluxes and not try to turn them into some notion of “residence time.”

        Incidentally where did you get the 5 year response time for the ocean? This seems to assume a good understanding of what happens all the way to the ocean bottom. I wasn’t aware we had such an understanding. For all I know the ocean time constants could be 20 years or more, I don’t know any way of inferring 5 years.

      • I have described my understanding of the situation in a message of this thread

        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104904

      • The 5 years is the atmospheric response time to the ocean. I believe it is derived as the CO2 in the atmosphere divided by the downward (or upward) sea-surface flux of CO2. The ocean response is harder to determine because it involves its mixing rate and circulation. This means that in 5 years the CO2 has been able to directly interact and equilibrate with the ocean surface. I believe the actual chemical equilibration is faster at that surface, so the 5 years is the time scale for replacing atmospheric with ocean CO2 as it comes to the new equilibrium.
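The division described here (reservoir size over gross exchange flux) is easy to sketch. The round numbers below (750 GtC in the atmosphere, 150 GtC/yr of combined gross exchange) are illustrative assumptions; note this turnover time is distinct from the adjustment time of an emissions pulse, which is the point contested throughout the thread:

```python
# Turnover time of atmospheric CO2: reservoir size divided by the gross
# exchange flux.  Round illustrative numbers, not measurements:
atmosphere_gtc = 750.0         # carbon in the atmosphere, GtC
gross_exchange_gtc_yr = 150.0  # combined ocean + biosphere gross flux, GtC/yr

turnover_years = atmosphere_gtc / gross_exchange_gtc_yr
print(f"turnover time ~{turnover_years:.0f} years")
```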

      • I think you would find that ocean upwelling is much richer in CO2 than surface water and has a much lower pH – it is a result of both respiration by organisms at depth and volcanic venting.

      • If that was generally true of upwelling it removes one of the hopes for long-term sinks to absorb this new CO2, so it only makes it worse.

      • Why would you doubt and not simply look it up?

        ‘Our technology uses kinetic wave energy to bring up higher-nutrient deep water. In the presence of sunlight, and assuming appropriate ocean environmental conditions, the enhanced nutrients generate blooms of phytoplankton which absorb dissolved CO2 and generate oxygen through the process of photosynthesis. When the phytoplankton are consumed by higher trophic levels such as zooplankton and fish, or when the phytoplankton die, some of the absorbed CO2 as well as other biochemical contents sink. Some of this is remineralized and suspended in mid ocean depths, some sinks to the ocean floor, and some is sent back up to the surface by natural upwelling events (currents, storm-generated upwelling, heating/cooling cycles such as El Nino, etc.). This “biological pump” is the principle physical process responsible for the higher concentrations of nutrients, and CO2, which are found beneath the upper sunlit zone (typically 50 to 80 meters) of the ocean. Within the upper ocean’s sunlit zone, however, the nutrients are quickly consumed, with the result that phytoplankton blooms diminish until upwelling brings up more nutrients.

        Until recently, conventional wisdom regarding limits to phytoplankton productivity in the upper sunlit zone of the ocean cited the Redfield Ratio as the limiting factor to how much net benefit could accrue from wave-driven ocean pumps. The Redfield Ratio limits the amount of carbon that each phosphate atom can recycle. For the average of the whole ocean it is 106 carbon atoms for every phosphate atom. If CO2 recycling efficiency is limited by phosphate, and deeper water contained proportional concentrations of nitrate, phosphate and dissolved CO2, then net additional absorption from upwelling of phosphate would be balanced by the higher concentrations of CO2 brought upward – at best a zero sum game.’ http://www.atmocean.com/sequestration.htm

        The ‘Atmocean’ geo-engineering proposal involves encouraging nitrogen limited micro-organisms and thus increasing the net carbon sink. These nitrogen fixing organisms are typically very toxic – so I would have my doubts.

        Someone hoping CO2 deficient sub-surface water would rise and scrub the atmosphere?
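The Redfield-ratio bookkeeping in the quoted passage reduces to a one-line calculation; the quantities below are arbitrary illustrative units:

```python
# Redfield-ratio bookkeeping from the quoted passage: each phosphorus atom
# supports roughly 106 carbon atoms of new organic matter.  Units arbitrary.
REDFIELD_C_PER_P = 106.0

p_upwelled = 1.0                              # mol P brought to the surface
c_fixed = REDFIELD_C_PER_P * p_upwelled       # mol C a new bloom can fix
c_brought_up = REDFIELD_C_PER_P * p_upwelled  # mol C upwelled alongside it
                                              # (assumed Redfield-proportional)
net_drawdown = c_fixed - c_brought_up
print(f"net drawdown: {net_drawdown:+.0f} mol C (the 'zero sum game')")
```

If the upwelled water carries CO2 in the same 106:1 proportion, the carbon brought up exactly cancels what the bloom can fix, which is the quoted "zero sum game".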

      • The question is whether the average CO2 in the deep ocean that will eventually surface exceeds the CO2 in contact with a high-CO2 atmosphere, especially as CO2 doubles. I assumed it wasn’t going to match that surface layer CO2, in which case it would be a further sink, but with a long time scale.

      • The CO2 is supersaturated when it upwells – it bubbles out of solution as the pressure and temperature decline. The surface in oceans is nowhere near saturation at most times – although biological activity at times will increase carbon dioxide levels at some locations.

      • You are giving a mechanism by which the ocean returns carbon to the atmosphere, but it is just part of the cycle, not a net source or sink, since the ocean ultimately got this carbon from the atmosphere.

      • No, I was addressing your misapprehension of aspects of the carbon cycle – but if you want to move the goalposts to something else, feel free.

        The ocean is a large net sink – through both the biological carbonate and organic C pathways. Some of it is returned to the surface in upwelling.

        You had this simple concept whereby deep ocean water was not exposed to high atmospheric concentrations of CO2 and therefore had lower concentrations. I was simply very politely correcting you without making much of a fuss about it – but if you don’t want correct information and simply want to argue from some preconceived position and then to subtly disparage me – then I cannot be bothered with you either.

  36. How about the chicken or the egg question…..
    Does warming raise atmospheric CO2 levels – by what rate?
    Does raising atmospheric CO2 cause warming – by what rate?

    How about some laboratory experiments to settle this?

    • BLouis79

      Does warming raise atmospheric CO2 levels – by what rate?
      Does raising atmospheric CO2 cause warming – by what rate?

      How about some laboratory experiments to settle this?

      Experimental data show that sea water can dissolve more CO2 at lower than at higher temperature. This gives solubility at various salinities and temperatures:
      http://www-naweb.iaea.org/napc/ih/documents/global_cycle/vol%20I/cht_i_09.pdf

      The amount of CO2 dissolved is also related to the atmospheric concentration or partial pressure of CO2. Some more stuff on this:
      http://cdiac.ornl.gov/oceans/co2rprt.html#co2sysinsea
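The temperature dependence can be illustrated with the commonly cited Weiss (1974) solubility fit; the coefficients below are as usually tabulated, but treat them as a transcription to be checked against the linked references before relying on them:

```python
import math

def co2_solubility(temp_c, salinity=35.0):
    """Weiss (1974) fit for CO2 solubility K0 in mol/(L*atm), as commonly
    tabulated; verify the coefficients against the linked references."""
    t = temp_c + 273.15
    ln_k0 = (-58.0931 + 90.5069 * (100.0 / t)
             + 22.2940 * math.log(t / 100.0)
             + salinity * (0.027766 - 0.025888 * (t / 100.0)
                           + 0.0050578 * (t / 100.0) ** 2))
    return math.exp(ln_k0)

cold, warm = co2_solubility(0.0), co2_solubility(25.0)
print(f"K0 at 0 C: {cold:.4f}; at 25 C: {warm:.4f} mol/(L*atm)")
```

Cold seawater dissolves roughly twice as much CO2 per atmosphere of partial pressure as warm seawater, which is Max's point.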

      AFAIK, there are no empirical data based on reproducible experimentation or physical observation providing evidence that higher atmospheric CO2 concentrations will lead to higher temperatures; these estimates come from model simulations backed largely by theoretical deliberations grounded in GH theory.

      Max

      • Regardless of what’s causing the temperature to rise, we know that it’s risen about half a degree in the past four decades. We can therefore ask whether the oceans emitted additional CO2 as a result of that warming, without having to bring up the scarlet letter A for anthropogenic.

        The estimate of 9 GtC of human emission of CO2 includes things like cement production (about 4.5% of the 9 GtC) and gas flaring (about two-thirds of a percent). However it does not include increased CO2 emission from the oceans.

        I’d be inclined to put that on the ledger as a negative component of the CO2 nature is taking down from the atmosphere. That is, subtracting the known increase of 5 GtC/yr from the known human emissions of 9 GtC/yr gives 4 GtC/yr as downtake, but if a warmer ocean is emitting say 1 GtC/yr more than at its 1970 temperature then the downtake, net of that extra ocean emission, is really 5 GtC/yr. But since ocean temperature changes only slowly, for prediction purposes only the 4 GtC/yr figure is relevant to the question of what would happen if the 9 GtC of our annual emissions dropped suddenly to zero. We’re actively producing the 9 GtC in the sense that it would drop to close to zero if aliens moved us all to a zoo, but we’re not actively causing the ocean to emit extra CO2 in that sense.
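The ledger arithmetic in the paragraph above, using its round numbers (the 1 GtC/yr of extra ocean outgassing is the hypothetical figure given there):

```python
# Vaughan's ledger with the comment's round numbers, all in GtC/yr.
human_emissions = 9.0        # fossil fuel + cement + gas flaring
atmospheric_increase = 5.0   # observed rise in the atmosphere
extra_ocean_emission = 1.0   # hypothetical warming-driven extra outgassing

apparent_downtake = human_emissions - atmospheric_increase
downtake_net_of_ocean = apparent_downtake + extra_ocean_emission

print(f"apparent downtake {apparent_downtake} GtC/yr; "
      f"net of extra ocean emission {downtake_net_of_ocean} GtC/yr")
```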

      • but not the last decade

  37. The eco-whacko CO2 fearmongering by the secular, socialist neo-communist bureaucracy has finally ‘jumped the shark’ with the UN-approved EPA science authoritarians’ proposed ‘poisonous ‘ CO2 controls.

  38. The concept of residence time is perhaps the strangest in the entire, circus-like, debate. It involves the following:
    Instant cessation of human emissions, which is impossible.
    The resulting behavior of all natural sources and sinks for the next 100 to 1000 years, which is unknowable.
    The assumption that the CO2 increase is all anthro, which is controversial.
    A prediction which is untestable.

    Where is the science in this impossible, unknowable, untestable, controversial conjecture? Nowhere, but the propaganda value is high so we read about it all the time. This is the sad state of the science.

    • Talk about exaggerating uncertainty

    • Instant cessation of human emissions, which is impossible.

      It’s certainly impossible for scientists to do the experiment, but that’s true of a lot of climate science.

      It is not impossible however that consuming carbon based fuels more efficiently while also cutting over to non carbon based fuels could reduce CO2 emissions by say 10% or 30% or some number. Calling this impossible is defeatist.

      It is then interesting to know in advance by how much the CO2 level will change.

      The general belief as I understand it is that CO2 will continue to rise at the same rate as when our CO2 emission was at that level previously.

      I would project a quite different outcome: that it would drop considerably faster than expected.

      As I understand things, that puts me in neither camp: I’m neither a skeptic in the usual denier sense nor a “warmist” in the doom-and-gloom sense.

      I do however foresee doom and gloom if the defeatists among us win out.

      • (Just to be clear about “doom and gloom,” I’m only referring to the projected 2 °C rise by 2100 assuming business as usual. I don’t have any strong views as to the benefits and downsides of a warmer planet, which is a lot harder than projecting something simple like temperature.)

      • Peter Davies

        The general state of pessimism that is evident in both camps is symptomatic of the recessionary thinking now current worldwide, which is affecting consumer behaviour and business confidence, with the notable exceptions of China and India.

        While I am not over-optimistic that entrenched attitudes can be changed within the next few years, I believe that once the economic cycle turns the corner once more, the pessimists will be overridden by the pragmatic and sensible.

        I would personally prefer to see the gradual replacement of dirty technology with cleaner and more sustainable processes, and a better understanding by western cultures that we need to reduce our environmental footprint.

      • I am not pessimistic. I am happy that the greens are losing their ideological war. As Macaulay put it, every political movement ultimately expires from an excess of its own principles. Climate change was that excess.

      • Peter,
        As soon as windmills are seen for the waste of space and resources they are, food is no longer wasted as fuel, and the CO2 obsession passes – and that day appears to be coming sooner rather than later – we can indeed make great progress in improving our stewardship of the planet.

      • Windmills will remain the sad relics of one of the greatest accomplishments of man, the erection of the monumental political and financial structure on the massively inadequate foundation of CO2’s radiative effect on climate. That the financial bubble and the ungrounded power grab were so magnificent is testament to the greatness of man, and to the greatness of his folly.
        =============

      • we don’t need to reduce our environmental footprint. We need to keep our environmental footprint clean. We need to know what is harmful and what is helpful. CO2 happens to be helpful. The best times in earth’s history for plants and animals were during times of higher CO2. Double CO2 and life would get better. Cut it in half and life as we know it would not be sustainable.

      • Herman, you can’t prove that double CO2 will be good any more than anyone can prove double CO2 will be bad. I can make a better case that double CO2 will increase the chance of a new ice age.

      • Vaughan, I did not say that a, for example, 30% reduction over 40 years was impossible, although I certainly think it unwise and unlikely. I am addressing the state of the science, which on this residence issue is mere speculation, spanning a huge and irreducible range of possible values.

        Speculation is important to science because that is where new ideas come from, but speculation per se is not science. Science is the process whereby speculation is turned into knowledge. In this case there is no knowledge to be had, so claims of knowledge by the IPCC and others are a clear case of false confidence. False confidence is the crime against science that pervades CAGW. This so-called residence issue is the worst case of false confidence that I know of, in the general debate. That is my point.

  39. A contributor to my blog has performed an empirical experiment to determine the degree to which back radiation slows the rate of cooling of the ocean surface.

    http://tallbloke.wordpress.com/2011/08/25/konrad-empirical-test-of-ocean-cooling-and-back-radiation-theory

    • The “experiment” didn’t test that, TB. It wasn’t clear to me why the “IR reflector” should have affected cooling rate much or in which direction. I would have predicted little or no effect, and the small differences recorded may simply have represented experimental variability unrelated to the independent variables. Clearly, back radiation was substantial with or without the “reflector”, and without it (e.g., in a very cold room), the cooling would have been faster, although it would have been hard to separate the roles of conduction, radiation, and latent heat transfer.

      There is no easy but decisive home-made experiment for back radiation, although a number of better ones have been discussed in previous threads.

      • For clarity, my point was that without as much back radiation (as in a very cold room), cooling would have been faster, even if one corrected for conduction/convection and latent heat transfer differences.

      • There is no easy but decisive home-made experiment for back radiation, although a number of better ones have been discussed in previous threads.

        That’s great Fred, which of them have been conducted and had their results published?

      • See the most recent SkyDragon thread and the Postma thread, where I believe you’ll find examples. I know Pekka described one device which utilized back radiation for a substantial heating effect (commercial, though, not home made). I think Eli Rabett described others, and even more were described by other participants.

      • Fred, they are both massive threads. Needle in a haystack doesn’t cover it. But surely a more sophisticated experiment has already been done and published by climate scientists?

      • How do experiments on 200 ml Tupperware containers extrapolate to ocean-sized ones?

        One might expect the response times for a 200 ml Tupperware container to be very different from those for a Tupperware container as deep as the ocean. It will take time for the heat to reach the bottom, and that time will depend on how convection works in the ocean, which might be different from an ocean-depth Tupperware container unless it was also ocean-wide and subject to similar tidal and wind forces etc.
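A rough scaling makes the point: the e-folding time of a well-mixed water layer goes as its heat capacity per unit area over a radiative restoring coefficient. The 5 W/(m^2 K) restoring value below is an illustrative assumption:

```python
# Rough e-folding time of a well-mixed water layer: heat capacity per unit
# area divided by a radiative restoring coefficient.  The 5 W/(m^2 K)
# restoring value is an illustrative assumption.
RHO = 1000.0   # water density, kg/m^3
CP = 4186.0    # specific heat of water, J/(kg K)
LAMBDA = 5.0   # assumed restoring coefficient, W/(m^2 K)

def response_time_days(depth_m):
    return RHO * CP * depth_m / LAMBDA / 86400.0

container_days = response_time_days(0.05)     # ~5 cm of water in a container
mixed_layer_days = response_time_days(100.0)  # a 100 m ocean mixed layer
print(f"container: ~{container_days:.1f} days; "
      f"100 m layer: ~{mixed_layer_days / 365:.1f} years")
```

A few centimetres of water settles in hours, while a 100 m mixed layer takes years, which is why the small-container result does not transfer directly to the ocean.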

      • Hi Vaughan. All valid criticisms, and I’m sure when you or Fred shows us the better financed and more rigorously conducted experiment that must, surely, have been done by the climate professionals, we’ll be able to learn much about ways in which we might improve the experimental setup.

        Just waiting for the link.

      • Those crickets are chirping loudly again tonight.

      • when you or Fred shows us the better financed and more rigorously conducted experiment that must, surely, have been done by the climate professionals, we’ll be able to learn much about ways in which we might improve the experimental setup. Just waiting for the link.

        Here you go (h/t Michael Roderick).

        http://www.nature.com/nature/journal/v453/n7198/pdf/nature07080.pdf

        Domingues et al, “Improved estimates of upper-ocean warming and
        multi-decadal sea-level rise”, Nature 453 1090-1094 (19 June 2008).

        “To estimate ocean heat content and associated thermosteric sea-level
        changes from 1950 to 2003 (see Methods), we use temperature
        data from reversing thermometers (whole period), expendable
        bathy-thermographs (XBTs; since the late 1960s), modern and more
        accurate conductivity–temperature–depth (CTD) measurements
        from research ships (since the 1980s) and Argo floats (mostly from
        2001).”

        No Tupperware containers, I imagine their budget didn’t run to one that big.

      • Vaughan, rehashing XBT data and splicing in ARGO data is not an empirical test. Moreover, Domingues, Levitus and all the rest have it badly wrong with the pre-ARGO data.

        Read and absorb this when you can find 20 minutes:
        http://tallbloke.wordpress.com/2010/12/20/working-out-where-the-energy-goes-part-2-peter-berenyi/

      • Vaughan,
        The experiment was not designed to replicate the exact conditions of the oceans or to isolate the 15 micron LWIR band. The purpose was to see if backscattered LWIR could alter the cooling rate of water that is free to evaporatively cool. The experiment, while simple, was able to show that materials that are free to evaporatively cool cannot be handled with the black body radiation equations as climate science tries to do.

        The experiment shows a readily detectable difference in the cooling rate of water that cannot cool evaporatively when subjected to differing amounts of backscattered LWIR.

        The experiment shows no detectable difference in the cooling rate of water that can cool evaporatively when subjected to differing amounts of backscattered LWIR.

        I would suggest that while it is highly likely that backscattered LWIR around the 15 micron band does have some effect on the cooling rate of the oceans, the empirical evidence suggests that the effect is far less than climate scientists assume.

      • The role of evaporation in dissipating the thermal energy absorbed from about 330 W/m^2 back radiation plus about 160 W/m^2 absorbed solar radiation is known to be quite a small fraction of the total – about 80 W/m^2 based on empirical data for global evaporation/precipitation rates and the energy involved. That 80 will come from both the solar and backradiated component, but even if it all came from the latter, it would leave most of the back radiation to be absorbed as a contributor to total ocean heat content. In that sense, the described experiment was unnecessary, because we already have quantitative data on the fractional role of evaporation.
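Fred's fractions, using the W/m^2 figures stated in that paragraph (a rough sketch that ignores sensible heat and emitted IR):

```python
# Surface energy bookkeeping with the W/m^2 figures given in the comment.
back_radiation = 330.0   # absorbed back radiation
absorbed_solar = 160.0   # absorbed solar radiation
latent_heat = 80.0       # energy carried off by evaporation/precipitation

absorbed_total = back_radiation + absorbed_solar
latent_fraction = latent_heat / absorbed_total
print(f"latent heat carries ~{latent_fraction:.0%} of the absorbed input")
```

On these numbers evaporation removes only about a sixth of the absorbed input, which is the sense in which Fred calls it "quite a small fraction of the total".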

        As a test of the ability of backradiated IR to slow cooling, the test is uninformative, because with or without the “reflector”, the water is receiving extensive IR back radiation, and perhaps some “forward” solar contribution from visible light. The reflective material is presumably interfering with some of this while contributing its own IR. The only way to avoid this problem would be to conduct the test in a very cold room with no incident visible light, but then one runs into other problems. The lack of analogy with true ocean conditions of depth, humidity, wind speed, convection, and other variables cited by Vaughan above and others elsewhere also precludes meaningful interpretation.

        Mainly, though, I’m not sure why there is a need for this experiment. The substantial magnitude of backradiated IR absorbed by the oceans is measurable and well established. Its ability to contribute to ocean thermal energy after absorption in the skin layer is simply a function of thermodynamics – surface temperature can be thought of as a valve controlling the rate at which heat from below can escape via the surface. If anyone doubted this, they could probably confirm it with a simple experiment in which water continually heated from below at a defined wattage is covered with a thin film of material whose temperature can be varied. When the film temperature is raised, one can predict that the temperature of the entire water content will rise, but if someone wants to test this, they should proceed.

      • Fred,
        It is easy and it is decisive. The cost is not high. I urge you to try it for yourself. I specifically designed the experiment to be replicable by others. I have run the experiment many times now and I am confident you will get similar results, and that these results are not the result of “experimental variability”. I have found that the mechanism of evaporative cooling for liquid water significantly alters the impact of backscattered LWIR on its rate of cooling.

      • I think Pekka’s observation – that the air above the ocean is a lot more humid than under the test conditions, which allow greater evaporation – has to be taken into account. That will reduce the differential found in the tests. I think that you are likely correct that the radiative flux doesn’t restrict cooling as much as is parameterised in the models, but you need to do more runs with a wetter local atmosphere to see what difference that makes.

      • Yes, I would agree that the air over the oceans is more humid than that used in the test; however, evaporation would need to be very restricted, as in Test B, before backscattered LWIR had much influence. From the continuing water cycle on planet Earth we can see that evaporative cooling is alive and well over the oceans. One of the benefits of small 200 ml test containers is that they cannot fit too many of Pekka’s red herrings :)

        That said, the entire experiment runs on batteries (even the new peltier cooled “Sky”) and I live two streets away from Sydney harbour. Moist air and sea water is available.

      • Herrings. Lol.

        Well then, what a cool location you live in! The deck of a yacht in Sydney harbour would be the perfect place to re-run the experiment. you can simulate some ocean wave turbulence by wandering across the deck with your camera. Don’t spill any of that G&T into the test fluid though.

        We want pics! ;)

      • Don’t spill any of that G&T into the test fluid though.

        The Gerlich and Tscheuschner paper had always struck me as more the sort of thing a troll would write than serious theoretical physicists. Thanks to your chance remark here, tallbloke, it occurs to me that this pseudonymous troll might well have chosen names whose initials were those of the beverage sustaining him (or her) while he penned his magnum opus.

        This would explain a lot.

      • The experiment may be a step in the right direction but not decisive. The 10mm styrofoam with foil face has a much higher R value than the Saran wrap. It proves the value of foil faced insulation though. To get the experiment to indicate IR impact only, the R values of both “ceilings” should be the same or the difference in R value calculated. You will probably not see a measurable difference. A better test would be Saran wrap versus mylar foil, about the same R value.

      • My comment at 10:47 PM – comment 105157 – was written after all the above and was intended as a response to the various comments.

        There is no clearly practical way to duplicate ocean absorption and emission of energy via a simply home-made design, but the described experiment, despite its good intentions, gets the quantitation wrong and is subject to too many confounding variables to tell us very much. If it is an attempt to demonstrate that back radiation doesn’t play a major role in contributing to ocean heat content, it is probably futile, because that role is well established by evidence described in part by me above and by many of us in additional detail elsewhere. If it is simply designed to show that latent heat transport is an energy dissipating mechanism, there is no problem with that goal, but the fractional contribution of latent heat transport is already known, and while not trivial, is relatively small compared with upwelling IR, which is the main mechanism by which the combined energy of solar radiation and backradiated IR is emitted to space.

      • Fred, his experiment won’t prove anything other than that radiation is one of the ways heat flows, which has pretty much been figured out already. It may, if he gets it right, show that convection and conduction are also ways that heat flows. That is not a particularly new concept either. Since his experiment is at roughly standard temperature and pressure, it will probably show that the latent heat component of the convection initiated by conduction is the primary source of cooling of the water, followed by conduction and radiative cooling. No Nobel prize there either. Now if he could prove that there is a clear radiative window from the upper atmosphere to the surface, one that did not require radiative energy from the upper atmosphere to excite GHG molecules (which are statistically more likely to transfer thermal energy to nitrogen and oxygen molecules, which in turn are statistically more likely to pass that energy to other nitrogen and oxygen molecules), then there may be a prize. I think that is the conduction that Dr. Pratt was talking about.

      • The impossibility of a fictional concept contributing to any real world effect has been comprehensively demonstrated by myself and numerous others in numerous posts. The fictional concept arises from making a conceptual distinction between upwelling and downwelling radiation. Radiation in the open atmosphere moves in all directions over relatively short distances (the mean free photon path), which vary from centimetres to kilometres – but on average outward from warmer to cooler.

        The concept of directional measurement is misguided. One can point an instrument down and get a measurement, point it up and get another measurement, point it sideways get another. If, however, there is a vector addition of the paths of IR radiative flux – all of the radiative paths sum to loss of energy from oceans to the atmosphere and thence to space.

        The loss of energy from the ocean in the IR is shown in the temperature observation of the top microns of the ocean. IR radiative flux in and out occurs from the top 10 microns or less. The ‘skin’ temperature is typically cooler than the underlying water showing without any doubt that energy in the IR band is being lost by the oceans to the atmosphere. The oceans cannot be heated by IR radiation from the atmosphere because more IR is being emitted by oceans than is being received.

        The ocean is heated by the Sun and loses heat from the surface to the atmosphere through, for the most part, evaporation and net outward radiation – in about equal parts. For some incomprehensible reason, agreement on even a simple concept of energy dynamics seems impossible. There is little purpose in further discussion as this has already been canvassed extensively from many directions. I will leave it to the reader to look at the logic of the evidence and decide who is right or wrong.

      • Fred and Vaughan’s disdain for actual experiments speaks volumes. They firmly believe that the theoretical basis of their belief in the ability of downwelling radiation to directly heat the oceans trumps any possible physical observation. Fred further makes his judgement of the value of this experiment on the basis of what he sees as the two possible motivations of the experimenter, while discounting the possibility that the experimenter is simply motivated by a desire to uncover the scientific truth about the way nature works.

        Climategate in a nutshell.

        The net flux is smaller than the sum of the convective components by a long way. It’s not 80W/m^2 vs 320W/m^2 downwelling IR. It is 80 evapo-transpiration + 20 thermals =100 vs 66 net radiative flux.

        How much this radiative flux actually slows the cooling of the ocean can’t be determined by a simple experiment such as this, but Konrad’s experiment indicates it doesn’t slow the cooling as much as theory based on Fred’s erroneous calculation says it will.

      • Tall Bloke, I don’t think Vaughan has that big an issue with the experiment, just that the conclusions being drawn are a leap. The small water dishes will show a smaller IR impact than a deeper ocean would because of the radiation window of water. There is a big difference between the spectra of water vapor and liquid water, and both vary with temperature. The ocean temperature varies a lot with depth, so any experiment will have some limitation. Konrad’s experiment has limits in the size of the source and the difference in insulating value of the two ceilings. Vaughan’s boxes have the same issues, though of somewhat smaller magnitude. Using real salt windows, which are very small, increases the issue of scale. Eli’s wrapping a light bulb with foil shows that light bulbs get hotter if you wrap them with foil; foil is a good reflector of light in most wavelengths, so it is not a great experiment for showing the radiative impact of foil wrapped around the earth, because the Earth emits different wavelengths. Somewhere along the way, with any experiment, you have to calculate how big the differences are between the experiment and the real world.

      • Some of the above discussion represents different ways of looking at the same thing, but there are some misconceptions as well. The following, illustrated by the TFK Energy Budget – see, for example Fig. 2b – is one accurate way of looking at the involved processes. I will focus on the oceans, but land data are roughly similar.

        During the day, the ocean is heated (its temperature rises). This comes from absorbed solar radiation (about 170 W/m^2) and a larger quantity of absorbed back radiation from the atmosphere (about 340 W/m^2). This total (about 510 W/m^2) is paralleled by a set of heat loss mechanisms by which the ocean releases heat from its skin layer to the atmosphere and to space. The quantities will vary over the course of a day, but most of the heat loss is via IR radiation (about 400 W/m^2), a smaller fraction by latent heat transfer (about 95 W/m^2), and the rest (about 15 W/m^2) by conduction. It is not unreasonable to subtract the 340 back IR radiation from the 400 upwelling IR radiation to conclude that the net IR flux is upward at about 60 W/m^2, but it is also useful to realize that this is the difference between two large fluxes with different sources and destinations, and that the back radiation is quantitatively the most important heating source during the daytime. It’s also important to realize that each of the heat loss mechanisms releases thermal energy from both the solar and back radiated component and not from either one alone. In other words, the upwelling IR is not a response to the back radiation but to the total absorbed energy, and the same is true for latent heat and for conduction.

        At night, the ocean cools because the solar component is absent, and so the emitted energy exceeds the energy absorbed by back radiation. Here again, it is reasonable to refer to the back radiation as responsible for “reduced cooling”, but this is only true on a statistical level. At the level of individual water molecules, the back radiation is still increasing absorbed thermal energy (each absorbed photon is causing the kinetic energy of water molecules to rise), and the upwelling IR is a separate process by which IR photons are emitted from the surface accompanying a loss of kinetic energy in the water. Thus, the “net” process is informative mathematically, but the separate molecular events paint a clearer picture of what is actually happening.
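The arithmetic of the budget described above can be laid out explicitly. A minimal sketch using only the rounded figures quoted in the comment (approximate TFK-style averages, not authoritative data):

```python
# Approximate ocean-surface energy budget (W/m^2), using the rounded
# TFK-style figures quoted in the comment above (not authoritative data).
absorbed_solar = 170   # 24-hour average absorbed solar radiation
back_radiation = 340   # absorbed downwelling IR from the atmosphere

upwelling_ir = 400     # IR emitted by the ocean skin
latent_heat = 95       # evaporative (latent) heat loss
conduction = 15        # sensible / conductive heat loss

total_in = absorbed_solar + back_radiation           # 510 W/m^2
total_out = upwelling_ir + latent_heat + conduction  # 510 W/m^2
net_ir = upwelling_ir - back_radiation               # net upward IR, 60 W/m^2

print(f"in: {total_in}, out: {total_out}, net upward IR: {net_ir}")
```

With these rounded numbers the in/out totals balance by construction; the published budget leaves a small residual (ocean heat uptake) rather than an exact balance.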

      • Fred, part of the misunderstanding is the use of average energy fluxes. You say during the day there is 170 W/m^2 from solar with 340 W/m^2 back radiation. It is more like 340 W/m^2 solar in at the surface plus 140 W/m^2 solar into the atmosphere. In the band between 30N and 30S about 70% of the solar is absorbed, so the individual fluxes per grid cell vary considerably by latitude and by time of day. Most of the warming impact of the atmosphere would be felt in the higher latitudes, where the thermal energy is transported by weather patterns. So you are using a macro view to communicate a micro effect.

      • Dallas – Yes, there are variations by latitude. Both absorbed solar and back radiated energy are greater at low latitudes than at high latitudes (the higher the ocean temperature, the greater the upward IR and thus the amount of energy that is redirected downward as back radiation). The 170 W/m^2 solar is an average, as is the 340 W/m^2 back radiation. The ratio between the two will vary by latitude but the total energy absorbed from back radiation is about twice that contributed by the solar component.

      • Regarding time of day differences, solar radiation only contributes to ocean heat content during half a day while back radiation contributes for a full 24 hours (although more during the day than at night). There will therefore be many times during daylight hours, particularly at lower latitudes, where the solar contribution exceeds that of back radiation, and exceeds 340 W/m^2, so that the 170 W/m^2 solar contribution is essentially a 24 hour average as well as a latitudinal average. The ocean ultimately responds to these contributions averaged out over time.

      • Fred,
        are you actually claiming that the phase change of water molecules during evaporation is not the dominant cooling process for the oceans?

      • Konrad, define “dominant.”

        If you’re claiming that, averaged over a day, the ocean is losing heat, then shouldn’t it all freeze after enough days?

        If as one would naturally expect the oceans neither gain nor lose significant heat per day when averaged over say the last million years, then cooling and heating should balance out. In that case one can equally well ask about “dominant heating.”

      • Fred and Vaughan’s disdain for actual experiments speaks volumes.

        Said the man who wrote

        Vaughan, rehashing XBT data and splicing in ARGO data is not an empirical test

        How is that not “disdain for actual experiments”? Are you saying that measuring the temperature of a 200 ml Tupperware container of water is an “empirical test” of the number of years it takes for changes in surface temperature to be felt at a depth of 2 km in the ocean, while actual measurements at depth in the ocean are not?

        Tallbloke, what am I missing here? If there’s an Olympic event for the world’s most whacked-out notions of “empirical test” and “actual experiment,” why are we not seeing it on the sports pages?

      • Vaughan,
        I find your questions of Tallbloke puzzling. If you wanted to test if backscattered LWIR can slow the cooling of water that is free to evaporatively cool how would you do it? Fool around with incomplete ocean temperature measurements from disparate sources involving small quantities of LWIR acting over years? Or would you just directly test the impact of a larger amount of LWIR on a small sample of water that is free to evaporatively cool?

        Climate scientists just assumed that backscattered LWIR could actually slow the cooling of Earth’s oceans. I don’t believe they have done the simplest of checks in the lab.

      • Climate scientists just assumed that backscattered LWIR could actually slow the cooling of Earth’s oceans.

        You have the advantage over me, Konrad, of having read papers by climate scientists who have stated such a thing.

        Planets tend to drift into equilibrium until some asteroid or life form comes along and shakes them out of it. Whatever contribution “backscattered LWIR” might have been making, it would tend to have been balanced by an equal and opposite term or terms as long as the planet remained in equilibrium.

        Had you been speaking of an equilibrium situation, the relevance of your experiment might have been clear. However you’re claiming relevance to cooling, and slowing thereof, which are departures from equilibrium.

        I would be very interested in what perturbations in a 200 ml Tupperware container can tell us about perturbations in an ocean many kilometers in depth. As far as I can tell you have not made this at all clear.

      • Vaughan Pratt says:
        Are you saying that measuring the temperature of a 200 ml Tupperware container of water is an “empirical test” of the number of years it takes for changes in surface temperature to be felt at a depth of 2 km in the ocean, while actual measurements at depth in the ocean are not?

        Hi Vaughan, did you read the article on my blog i pointed you to? The Pre-ARGO measurements and/or the splice between XBT and ARGO are out of whack:
        http://tallbloke.wordpress.com/2010/12/20/working-out-where-the-energy-goes-part-2-peter-berenyi/

      • Both the experiment and the real case of the oceans involve several subprocesses. When interpreting the results it’s important to understand the relative strengths of those subprocesses. Evaporation affects all cases where it’s allowed. In some cases it is very strong and dominates over all the others; your experiment without a cover is such a case. As the evaporation is much stronger than any other factor, it’s difficult to observe the others at all.

        Your conclusion that evaporation dominates is correct, when it dominates. It did in your case, but it doesn’t in the case of the oceans. Evaporation is significant there as well, but it doesn’t dominate. Therefore you fail to see the influence of IR, although your other experiment did show that it exists. A setup that allows dominating evaporative cooling is therefore useless for understanding other situations, which are not dominated by evaporation.

        An open surface does lead to strong evaporation when the air is so dry that the skin temperature of the water is much higher than the wet bulb temperature of the air, as it is in your experiment. On the ocean surface the wet bulb temperature of the lowest air is close to that of the ocean skin. Then the evaporation is much weaker.

        Try to repeat the experiment in a closed space, where the relative humidity is kept at 95%. That would be more relevant.
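Pekka’s point can be made quantitative: bulk evaporation is roughly proportional to the vapor-pressure deficit between the water skin and the overlying air. A sketch using the Magnus approximation for saturation vapor pressure (a standard empirical formula; the relative-humidity values below are illustrative choices, not measurements from the experiment):

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def evaporation_driver(t_water, t_air, rel_humidity):
    """Vapor-pressure deficit (hPa) between the water skin and air at the
    given relative humidity (0-1). Bulk evaporative flux is roughly
    proportional to this deficit."""
    e_skin = saturation_vapor_pressure(t_water)
    e_air = rel_humidity * saturation_vapor_pressure(t_air)
    return e_skin - e_air

# Dry lab air (say 40% RH) vs humid marine air (95% RH), everything at 25 C:
dry = evaporation_driver(25.0, 25.0, 0.40)
humid = evaporation_driver(25.0, 25.0, 0.95)
print(f"deficit over dry air:   {dry:.1f} hPa")
print(f"deficit over humid air: {humid:.1f} hPa")
# With equal skin and air temperatures, the deficit (and hence the
# evaporation) is 12x larger in the dry case.
```

This is why the uncovered test over dry indoor air exaggerates evaporative cooling relative to the ocean surface, where the near-surface air is close to saturation.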

      • Pekka,
        The experiment remains relevant for any body of liquid water on the planet where evaporative cooling is possible in some measure. That would include almost all of Earth’s oceans. If backscattered LWIR affected liquid water the way climate models have it, then Test A would have shown a faster cooling rate than Test B (which it does), but it would also have shown the same rate of temperature divergence between the two water containers (which it does not).

        To claim that backscattered LWIR can heat liquid water that is free to evaporatively cool, what is needed is empirical evidence obtained by a controlled lab experiment. I have yet to see such results. At present I have demonstrated that the impact of LWIR on the cooling of water is readily detectable when evaporative cooling is prevented. I have also demonstrated that the effect of LWIR is far less when evaporative cooling is allowed, possibly negligible.
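The contrast Konrad describes can be illustrated with a toy two-term cooling model. All coefficients here are invented for illustration; they are not fitted to the experiment or to any real ocean. The point is only structural: when a strong evaporative term is present, changing the radiative term barely moves the cooling curve, while with evaporation blocked the same change is obvious.

```python
# Toy cooling model: dT/dt = -(k_rad + k_evap) * (T - T_env).
# All coefficients are invented for illustration only; they are not
# fitted to the experiment or to any real ocean.

def cool(t0, t_env, k_rad, k_evap, minutes):
    """Euler-step the cooling ODE; returns the final temperature (C)."""
    T = t0
    dt = 0.1  # minutes per step
    for _ in range(int(minutes / dt)):
        T -= (k_rad + k_evap) * (T - t_env) * dt
    return T

T0, Tenv, mins = 40.0, 20.0, 30
k_evap = 0.05                        # strong evaporative cooling (uncovered)
k_rad_lo, k_rad_hi = 0.005, 0.010    # less vs more radiative loss

# Evaporation allowed: halving the radiative term barely matters.
open_hi = cool(T0, Tenv, k_rad_hi, k_evap, mins)
open_lo = cool(T0, Tenv, k_rad_lo, k_evap, mins)

# Evaporation blocked (covered): the same radiative change is plain to see.
cov_hi = cool(T0, Tenv, k_rad_hi, 0.0, mins)
cov_lo = cool(T0, Tenv, k_rad_lo, 0.0, mins)

print(f"open:    {open_hi:.2f} vs {open_lo:.2f} C")
print(f"covered: {cov_hi:.2f} vs {cov_lo:.2f} C")
```

Whether the real ocean sits in the “open” or the “covered” regime is exactly what the thread is arguing about; the sketch only shows why the two regimes respond so differently to a radiative perturbation.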

      • I have also demonstrated that the effect of LWIR is far less when evaporative cooling is allowed, possibly negligible.

        And here’s one of the reasons why:

        Belmiloud, Djedjiga, Roland Schermaul, Kevin M. Smith, Nikolai F. Zobov, James W. Brault, Richard C. M. Learner, David A. Newnham, and Jonathan Tennyson, 2000. New Studies of the Visible and Near-Infrared Absorption by Water Vapour and Some Problems with the HITRAN Database. Geophysical Research Letters, Vol. 27, No 22, pp. 3703-3706, November 15, 2000
        http://www.tampa.phys.ucl.ac.uk/djedjiga/GL11096W01.pdf

        Abstract. New laboratory measurements and theoretical calculations of integrated line intensities for water vapour bands in the near-infrared and visible (8500-15800 cm−1) are summarised. Band intensities derived from the new measured data show a systematic 6 to 26% increase compared to calculations using the HITRAN-96 database. The recent corrections to the HITRAN database [Giver et al., J. Quant. Spectrosc. Radiat. Transfer, 66, 101-105, 2000] do not remove these discrepancies and the differences change to 6 to 38%. The new data is expected to substantially increase the calculated absorption of solar energy due to water vapour in climate models based on the HITRAN database.

        Conclusions. Table 1 (final column) also sets out values for the comparison of our “best” total intensities of the water polyads with those given in HITRAN-COR. It should be stressed that the line intensities of our observations and the HITRAN database differ from line to line and that the given values are only valid for room temperature. The measurements at 252K yielded slightly different ratios, but are omitted here for clarity. For a major fraction of the lines, the principal part of the change takes the form of a re-scaling of the data on a polyad-by-polyad basis; we recommend the use of the factors set out in Table 1 as an interim solution. Other databases, such as GEISA [Jacquinet-Husson et al., 1999], are based on the same laboratory data and will therefore require the same correction. Detailed line-by-line data including both experiment and theory will be published [Schermaul et al., 2000].

        There is another lesson to be learned. Making sure the database is valid is a necessary foundation for all modelling of atmospheric radiation transfer, especially so when theory and observation fail to agree.

      • That raises a very good point – one which I have alluded to many times in the past – that if back-radiation reduces the net radiative loss from the surface, there simply has to be more joules available for evaporation.

      • Konrad,

        You are plainly wrong. You have not shown what you claim. There are many effects that limit evaporative cooling. Your results apply only when those effects are not important, but they are important for the real oceans. Therefore your results do not apply to real oceans.

        You haven’t even tried to show that your results would apply to anything other than your own setup. Showing that requires careful analysis, but you have no analysis at all.

        Your results agree with well known physics to the accuracy that may be expected. That same well known physics also tells us why they do not apply to the oceans.

    • David Springer

      Wow. A lot of factually incorrect information regarding oceanic heat budget. Google ‘oceanic heat budget’. A great many sources will be found that show latent heat loss (evaporation) is the predominant mode of ocean cooling. Typically latent heat loss is about 70% of the total, radiative heat loss is 25%, and conductive is 5%.

      Someone upthread mentioned that joules received from back radiation become available for evaporation and that is exactly right. Evaporation is an awesomely efficient means of getting rid of heat which is why we sweat water instead of sand.

      Over land, where evaporation is restricted by lack of water and where radiative cooling is similarly constrained by back radiation, the only other path is conductive cooling. Conductive cooling is far slower than evaporative cooling, of course. It requires a temperature difference between the surface and the air in contact with the surface: the greater the difference, the greater the conductive cooling rate. So over land, back radiation blocks the path of least resistance for heat to escape and forces it through the slower conductive avenue. The surface ends up at a higher temperature, which increases the conductive cooling rate, and equilibrium (or something close to it) is re-established.

      Bottom line – Konrad is correct and is supported by theory, observation, and experiment. Deal with it.

  40. I think those who dismiss the potential influence of biological processes in the oceans as too small to matter are underestimating their potential impact on atmospheric CO2.

    “A major portion of this primary production is recycled within the food web above the thermocline. The remaining fraction escapes from the upper ocean to the thermocline and below, where most of it is recycled and only a minor fraction is deposited in the sediments. It is the escaping fraction that effectively regulates the concentration of CO2 in the upper ocean. Since this layer is in contact with the atmosphere this quantity has important consequences for the global carbon cycle and climate change. In the absence of ocean primary production, surface total carbon dioxide CO2 would be 20% higher, and at equilibrium with such a surface ocean, the atmosphere would have a CO2 concentration close to double present levels (Sarmiento et al., 1990). Biological processes, therefore, have a profound impact on the carbon cycle yet this impact is very poorly understood and the subject of significant debate (Broecker 1991, Longhurst 1991, Banse 1991, Sarmiento 1991). ”

    CO2 in the Oceans

    • Great point Steven, and nicely put. Your numbers are impressively large. People who think that the only role the oceans play is simple gas exchange are missing most of the biosphere. Moreover, given that these biological populations should oscillate, as all biological populations do, the idea of a steady state anywhere in the system is wrongheaded.

      • More generally, I sometimes use a little rubber ball in presentations. The ratio of the ball’s mass to my (200 pound) mass is roughly that of the atmosphere to the ocean. I toss the ball up and down and make the metaphorical point that trying to understand the behavior of the atmosphere, without understanding the behavior of the ocean, is like trying to understand the behavior of the ball without noticing me. Other parameters besides mass are equally applicable, if not more so.

      • I tend to agree, if you want to understand the climate you really do have to understand the oceans. Not that this is a claim that I do of course. I just point and say “look at that”.

      • I agree that the ocean plays an important role in some aspects of atmospheric physics, especially global energy flows. For local energy flows, which are also of interest, oceans play less of a role in, say, the central US or central Brazil.

        Steady state however is an ok concept when applied appropriately. Obviously one doesn’t say the weather is in a steady state, but one might say that the climate is in a relatively steady state.

        At the molecular level air is not in a steady state because its constituents are moving randomly at 500 m/s. On a larger scale however the air in a sealed bottle can be said to be in a steady state, the motion of its molecules notwithstanding.

      • What aspect of climate do you feel is at relatively steady state? Everything oscillates, across a wide range of scales.

      • Everything oscillates, across a wide range of scales.

        For any m << n, the average temperature over m days oscillates more than over n days. The heart of climate skepticism is the denial of this elementary consequence of the law of large numbers.

        As an example, compare the variation in temperature from winter to summer with that from the average of 1900-1950 to the average of 1950-2000. If these two variations were the same we would not recognize seasonal variation as such, and we would be much more inclined to say that the weather had remained in a relatively steady state during the year.

        Climate skepticism requires its own logic in order to succeed. The logic used in science works to the detriment of climate skepticism. Only by insisting that the logic of climate skepticism is the correct one, and that all climate scientists are illogical except a few such as Richard Lindzen, Fred Singer, Roger Pielke, and Tim Ball, can climate skepticism make a compelling case for its position.

      • I fail to see the point you’re trying to make.
        It seems that, by your logic, it should be perfectly safe to grab hold of power lines – after all, the average voltage is zero.

      • Peter, that’s a really excellent point. (Why can’t Arfur, Sam NC, etc. come up with good points like that?)

        It would appear that life adapts to the prevailing routines. If the temperature fluctuates daily and annually, life figures out how to deal with it.

        If power lines kept falling around us and we kept grabbing hold of them, most of us would be killed. But depending on the voltage, a few might survive and breed survivors. However if the voltage were high enough no one would survive.

        One could take this as a model of the failure of life as we know it to evolve on Venus and Mars: the voltages on the power lines were too high, so to speak.

      • Peter317,

        Birds are safe grabbing power lines.

      • I don’t see the logic of climate skepticism as being very original at all. It is as basic as having a stock broker that constantly tells you to buy futures and you keep losing money. Buy futures in stratospheric cooling. Buy futures in tropospheric hot spots. Buy futures in 0.2C warming per decade. Buy futures in ocean heat content. Your broker may have perfectly good reasons why he thinks these futures will outperform the market and perhaps they eventually will, but unless he starts to show some reliability in making good predictions you are likely to move your money. The question is, how long does your broker have to be wrong before you switch?

      • Steven, a stock can have great fundamentals but still take ten years before it proves itself. Global temperature is subject to relatively unpredictable events on time scales up to 11 years, in particular solar cycles, El Ninos, etc.

        Beyond that I am unaware of any longer-term climate phenomena that we cannot predict on the nose, provided only that we have good estimates of preindustrial CO2 (currently pegged at 275-285 ppmv) and the time required for a change in CO2 to impact global temperature — 10-30 years is a popular range, with the IPCC liking the 20 year figure in its concept of “transient climate response.”

        As long as voters continue to elect people who think climate is what they experienced in the last decade, the world will continue to be run on the premise that climate is unpredictable and therefore not something it pays to worry about.
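The kind of back-of-envelope projection described here can be sketched in a few lines. This is a toy model only: the logarithmic CO2-temperature relation is the standard simplification, but the sensitivity value and the lag are illustrative picks from the ranges quoted in the comment, and the CO2 levels are approximate.

```python
import math

# Toy climate projection: warming responds logarithmically to CO2,
# with a fixed lag. Sensitivity and lag are illustrative picks from
# the ranges quoted in the comment, not established values.
C0 = 280.0         # preindustrial CO2 (ppmv), within the quoted 275-285
sensitivity = 2.0  # deg C per doubling of CO2 (illustrative)
lag_years = 20     # the "transient climate response" figure quoted above

def warming(co2_ppmv):
    """Warming (deg C) relative to preindustrial for a given CO2 level."""
    return sensitivity * math.log2(co2_ppmv / C0)

# CO2 roughly 20 years before this 2011 thread (~355 ppmv) vs then (~390):
print(f"warming felt now (lagged CO2):  {warming(355):.2f} C")
print(f"committed but not yet felt:     {warming(390) - warming(355):.2f} C")
```

The point of the sketch is structural: with a lag, some warming is already “in the pipeline” regardless of future emissions, which is the commitment argument made in the head post.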

      • “For any m << n, the average temperature over m days oscillates more than over n days” is simply false. Skepticism has reasons, not its own logic.

      • One of the 2 errors of some scientists when it comes to climate.

        Climate is a system with competing feedbacks and little known thresholds, subject to abrupt and non-linear change at all timescales. It is dynamically complex – chaotic and non-linear, temporally and spatially. If you don’t understand that in your bones – and in the scientific literature – e.g. http://www.biology.duke.edu/upe302/pdf%20files/jfr_nonlinear.pdf, http://www.nosams.whoi.edu/PDFs/…/tsonis-grl_newtheoryforclimateshifts.pdf – well, you can talk about a sceptic mindset, but you are closed to the superb and surprising reality.

        The other error is clouds – to keep pretending that cloud cover doesn’t change except as global warming feedback is incredible.

      • The other error is clouds – to keep pretending that cloud cover doesn’t change except as global warming feedback is incredible.

        Very compelling point, CH. As I look up at the sky I see huge changes in cloud cover from day to day.

        But suppose all 7 billion of us look up at the sky and average the cloud cover we see. Do you have some reason for supposing that this average fluctuates at all? And if so, by how much, would you say?

      • All climate influencing factors fluctuate and oscillate. It’s unbelievable that someone can think that they don’t!

      • Fluctuations averaged over n years are only 1/sqrt(n) the magnitude of those averaged over 1 year. So while it’s true that “all factors oscillate,” it is not true that they all oscillate to the same degree.

        Small oscillations are easier to manage than large. Admittedly it is inconvenient to have to wait n^2 times as long to reduce an oscillation by a factor of n, but only if the wait time were exponential in the reduction rather than quadratic would it become completely infeasible.
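The 1/sqrt(n) claim is a direct consequence of averaging independent fluctuations, and is easy to check numerically. A minimal sketch with synthetic daily anomalies (pure noise with no trend; these are random numbers, not climate data):

```python
import random
import statistics

random.seed(42)

# 100 years of synthetic daily "temperature anomalies": independent
# Gaussian noise with standard deviation 5 and no trend. Not real data.
days = [random.gauss(0.0, 5.0) for _ in range(100 * 365)]

def spread_of_averages(window):
    """Standard deviation of non-overlapping window-day averages."""
    means = [statistics.mean(days[i:i + window])
             for i in range(0, len(days) - window + 1, window)]
    return statistics.stdev(means)

for n in (1, 16, 256):
    print(f"{n:4d}-day averages fluctuate with sd ~ {spread_of_averages(n):.2f}")
# The spread falls roughly as 1/sqrt(n): about 5.0, 1.25, 0.31.
```

Real temperature series are autocorrelated, so the reduction is slower than for this independent noise, but the qualitative point (longer averages fluctuate less) survives.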

      • So you use techniques that apply disorder to the system.

      • The papers are about chaos. I really don’t know what you imagine order and disorder is.

        ‘In physics, the terms order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system.
        In condensed matter physics, systems typically are ordered at low temperatures; upon heating, they undergo one or several phase transitions into less ordered states. Examples for such an order-disorder transition are:

        – the melting of ice: solid-liquid transition, loss of crystalline order;
        – the demagnetization of iron by heating above the Curie temperature: ferromagnetic-paramagnetic transition, loss of magnetic order.’

      • Disorder is randomness. A gas is disordered. A gas can be modeled by a probability distribution such as Maxwell-Boltzmann statistics. Put the statements together and you can get the picture.

      • ‘Disorder is randomness. A gas is disordered. A gas can be modeled by a probability distribution such as Maxwell-Boltzmann statistics. Put the statements together and you can get the picture.’

        Too many pointless references and sophistic arguments. Water vapour is more disordered than ice. Maxwell-Boltzmann distributions describe particle speeds in gases. There is no application to climate systems in any of what you say.

        It is a little sad that you are so obsessive – but that is not my problem. Bye.

      • It is a little sad that you are so obsessive – but that is not my problem. Bye.

        You used the definitions of order and disorder from condensed matter physics. Condensed matter is not the only state of matter, which is why I pointed out that gases are also disordered. Why you think that me pointing this out is obsessive, I have no idea.

      • Robert,

        2nd link is bad.

  41. Another point about residence time vs. network flows of the kind found in the carbon cycle, hydrological cycle, etc. just occurred to me.

    In a sufficiently simple cycle the concept of residence time might serve as a proxy for the flows.

    But in a cycle with a number of flows that interact with each other in interesting ways, one may be able to define a notion of residence time in terms of the net effect of those flows, yet be unable to infer those flows from the residence time. If the point of the notion of “residence time” is to give a handle on system behavior, it doesn’t help much for complex systems.

    As a case in point, suppose you have an inflow of 2 gnus/fortnight and two outflows each of 1 gnu/fortnight. This is a system in equilibrium.

    Suppose further that you know that shutting off the inflow automatically shuts off both outflows.

    One can then infer that shutting off the inflow leaves the equilibrium undisturbed.

    But now suppose that shutting off the inflow only shuts off outflow A leaving outflow B running.

    Now the system is out of equilibrium and losing 1 gnu/fortnight.

    The concept of residence time could be defined in terms of the behavior of these flows. However their behavior can’t be inferred from residence time.

    This sort of thing can happen with the carbon cycle because it consists of a number of flows each with its own behavior. CO2 exchange at the ocean-atmosphere interface is governed to a considerable degree by Le Chatelier’s principle, which responds immediately to changes in CO2 level. Vegetation on the other hand consumes CO2 in proportion to its biomass, and while I’m no biologist I would not expect its biomass to respond instantly to a change in CO2.

    Rather than trying to come up with a notion of residence time, it seems to me to be better to understand how the flows interact with each other when trying to predict the result of changing one or more of the flows. One can then define some notion of residence time based on the net outcome if needed.

    What is not possible, except in very simple situations, is to start from residence time and infer the various flows.
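The two cases above can be sketched as a toy stock-and-flow model (the initial stock of 10 gnus is an assumed, illustrative value):

```python
# Toy model of the gnu example: a stock fed by one inflow and drained
# by two outflows. We shut off the inflow and watch the stock under
# the two coupling assumptions described above.

def run(steps, inflow, outflows, initial_stock=10.0):
    """Return the stock (gnus) after `steps` fortnights."""
    stock = initial_stock
    for _ in range(steps):
        stock += inflow - sum(outflows)
    return stock

# Case 1: shutting off the inflow also shuts off both outflows.
case1 = run(5, inflow=0.0, outflows=[0.0, 0.0])  # equilibrium undisturbed

# Case 2: shutting off the inflow shuts off only outflow A;
# outflow B (1 gnu/fortnight) keeps running.
case2 = run(5, inflow=0.0, outflows=[0.0, 1.0])  # losing 1 gnu/fortnight

print(case1, case2)  # 10.0 5.0
```

Before the shutoff the two systems are identical (inflow 2, outflows 1 + 1 gnus/fortnight), so any residence time computed from the equilibrium state is the same for both; the divergent trajectories afterwards illustrate why the flows’ behavior can’t be inferred from residence time alone.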

    • Le Chatelier’s principle describes chemical equilibria.

      There are processes involving both biology and secondary chemical reaction that lead to carbon depositing on the seafloor in both organic and carbonate forms.

      So there is a flow of carbon from the atmosphere to long term sequestration at the bottom of oceans – rather than an atmosphere/ocean equilibrium.

      One more thing – carbon is not generally a limiting nutrient and a biomass response to elevated carbon in the atmosphere may not be huge.

      • Right, Le Chatelier’s principle governs how adding CO2 to the carbonate-laden water dissolves the carbonate to form bicarbonate, a reversible reaction. There are also other homeostatic processes at work, whence my “to a considerable degree.” You’re right about carbon flowing down, but it’s by no means a one-way street.

        One more thing – carbon is not generally a limiting nutrient and a biomass response to elevated carbon in the atmosphere may not be huge.

        In fact elevated CO2 can even reduce net primary production, as I pointed out back in December at

        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-25854

        “Another beneficiary is plants. Warming along with increased precipitation and nitrogen deposition are all beneficial consequences of increased CO2 for plant life. Since plants utilize CO2 in photosynthesis one might also expect plants to benefit directly from the increased CO2. However experiments at Stanford’s Jasper Ridge preserve over the past decade, described in the very interesting paper Grassland Responses to Global Environmental Changes Suppressed by Elevated CO2, indicate that CO2 in conjunction with the above side effects of CO2 suppresses root allocation thereby reducing net primary production. One possible cause is that the minerals in the soil are already maxed out.”

      • Carbon sinks to the ocean floor – some of this returns in ocean upwelling.

        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104798

      • “One more thing – carbon is not generally a limiting nutrient and a biomass response to elevated carbon in the atmosphere may not be huge.”
        Plants on land are limited by lack of CO2. Plants will grow faster if they have 1000 ppm of CO2. Commercial greenhouses add CO2 so it’s around 1000 ppm. And many studies show enriched CO2 in natural settings increases growth.

        I am aware that most of the ocean lacks needed minerals, and therefore more CO2 would only deplete these minerals to a greater extent. So I understand how increased CO2 levels wouldn’t make much difference for most of the ocean, but in coastal areas where there are abundant nutrients, is there ever a significant lack of CO2 that inhibits growth?

      • Your 7:28 pm comment crossed paths with my 7:27 pm one, which bears on your point.

        And many studies show enriched CO2 in natural settings increases growth.

        Indeed. But what those studies don’t do is duplicate the conditions resulting from global warming, which include higher temperatures, humidity, and other factors. These too benefit plant growth; the surprising result, however, which I referred to above, is that increased CO2 in conjunction with these other concomitants of global warming actually reduces net primary production.

        ‘Simulated global changes, including warming, increased precipitation, and nitrogen deposition, alone and in concert, increased net primary production (NPP) in the third year of ecosystem-scale manipulations in a California annual grassland. Elevated carbon dioxide also increased NPP, but only as a single-factor treatment. Across all multifactor manipulations, elevated carbon dioxide suppressed root allocation, decreasing the positive effects of increased temperature, precipitation, and nitrogen deposition on NPP. The NPP responses to interacting global changes differed greatly from simple combinations of single-factor responses. These findings indicate the importance of a multifactor experimental approach to understanding ecosystem responses to global change.’

        I think you will find that NPP doesn’t decrease in any of the treatments – ‘decreasing the positive effects of…’ As I said – I wouldn’t assume a great change in terrestrial vegetative biomass.

      • I think you will find that NPP doesn’t decrease in any of the treatments

        Agreed. The point as I understand it is that when global warming raises those other factors (warmth, moisture, nitrogen), less CO2 can be even more beneficial. Even if you don’t reduce CO2 there is a net benefit from global warming (your point), but reducing CO2 when that stage is reached further increases that benefit.

        Gotta go, I have to answer a one-word email from Dan Quayle: “What?” ;)

      • Drawing too many conclusions from a single experiment.

      • Good point. Do you have an experiment demonstrating the opposite? I can only work with what has been reported to date, however slim the evidence.

      • The need to examine the strength of evidence remains – including when making generalisations from meso scale biological experiments.

      • Terrestrial plants in the wild typically are nutrient and water limited. That’s why we add nutrients and water to our gardens. Plants decrease stomata size and density in response to elevated carbon dioxide – limiting gas exchange but also water loss. Not necessarily a good thing for ecosystem functions.

      • Plants grow better with less water in higher CO2. How can that not be better?

      • I’m not clear that there is CO2 fertilisation – where other factors are limiting. There is an advantage for the plant in conserving water. On the ecosystem level – what does that imply for terrestrial hydrology? How does this affect recruitment – i.e. seed germination with reduced local moisture? How is surface water affected in, for instance, rainforests – and what effect does that have on dependent ecologies?

        Are we contributing to CO2 in the atmosphere? Well, it is at levels not seen for a long time, we are emitting 30 billion metric tonnes a year, and, all things being equal, that is likely to increase 10-fold this century.

        Uncertainty around a range of issues means that we are in unknown territory.

      • “Terrestrial plants in the wild typically are nutrient and water limited.”
        A swamp isn’t :)
        Desert areas are water limited unless near a river or other source of water. And desert areas are a large portion of the earth’s land area.
        If you are in a rainforest area – tropical or temperate – the shortage of water can be seasonal but most of the time isn’t a factor. In such areas the shortage is sunlight, blocked by other plants. Plants are also engaged in various types of chemical warfare with other plants [and animals/pests] which inhibit plant growth.

        “That’s why we add nutrients and water to our gardens.”
        We also weed gardens.
        And we are generally growing a domesticated plant – genetically selected to increase yield, whether the yield is a type of flower, or fruit, or foliage. These domesticated plants generally wouldn’t survive well in the wild.
        These plants are also unlikely to be native to the particular region they are being grown in.

        “Plants decrease stomata size and density in response to elevated carbon dioxide – limiting gas exchange but also water loss. Not necessarily a good thing for ecosystem functions.”

        Deciding what is good for an ecosystem is like interfering with another country, or like any governmental economic stimulus program.
        Increasing CO2 levels will have an effect on the environment, and if any type of change is “not good”, then changing CO2 levels is “not good”.

      • But all change is not bad.

      • It’s quite likely that agricultural plants, many of which are annuals, can be selectively bred to suit higher CO2 levels. That may well have happened unwittingly through the selection of the most successful varieties.

        But what about other plants? The generational cycle of a tree species may be measured in decades or even centuries. Yes, they will evolve to cope with changing CO2 levels, in time, but can they do it fast enough?

        I don’t believe there is any evidence to suggest they can.

      • They respond quickly to CO2 changes by changing stomata size and density – http://www.scientistlive.com/European-Science-News/Biotechnology/How_plants_adapt_to_climate/21329/

        This is well known.

      • It’s not only an issue of whether some plants can adapt, but also that some plants will adapt better than others. That might mean a thriving species will out-compete and eliminate one less able to thrive in the new conditions. Much the same issue exists with CO2 fertilization itself – some plants will fare better than others.

        This isn’t some local change of course, CO2 is being elevated at an unprecedented rate planet wide. When you factor in the ocean acidity changes, the plant fertilization changes and of course the radiative effects of the CO2 rise, then life may be in for quite a serious shakeup over the next 100 years.

        Future generations may very well look back and wonder how on earth we allowed such a major perturbation to the carbon cycle to take place.

      • CO2 fertilisation is a myth outside of controlled actual greenhouse conditions. Plants reduce stomata size and density when they can, restricting gas exchange and also reducing water loss.

        But you are barking up the wrong tree – so to speak. The contribution of CO2 to ‘recent warming’ is minimal, and the world is cooling for a decade or three at least as the cool Pacific mode intensifies. I would worry more about the hydrological changes from smaller stomata, since CO2 fertilisation is a myth, and ocean pH depends much more on biological activity and deep ocean upwelling than on anything we have seen from atmospheric concentrations of CO2.

        We are not sure how much we are contributing to changes in the carbon cycle – and are not inclined to worry about it too much. There are bigger problems in the world.

        What we need is 3%/year growth in energy and food for the rest of the century – about a 15-fold growth. Figure out how we can do that while reducing emissions and we may have some basis for discussion.

        Your so-called science is BS – and then you appeal to the authority of the future? Please.

      • The world is definitely not cooling.

        In fact there’s no slowdown in long-term global warming in recent years if ENSO and the solar cycle are accounted for. UAH after all had 2010 as joint warmest on record despite the El Nino being weaker than in 1998 and despite the recent solar minimum. And the recent La Nina bottomed out very high in UAH; plus, temperature now during ENSO-neutral conditions in UAH seems to be at levels associated with El Ninos in the past!

        The PDO has been dropping for 30 years. Related to this, ENSO indices also show a downward trend since around 1980.

        In my book that’s a cooling impact if anything.

        And yet the world has warmed over that period.
        http://www.woodfortrees.org/plot/jisao-pdo/plot/jisao-pdo/from:1980/trend

      • lolwot

        In fact there’s no slowdown in long-term global warming in recent years if ENSO and the solar cycle are accounted for.

        Nassim Taleb has an expression for this rationalization, which occurs when a prediction has turned out to be totally false (as was the IPCC forecast of 0.2°C warming per decade).

        He refers to it as the “my prediction was correct except for…” rationalization (add in anything convenient after “except for”).

        Max

      • CH,

        “The contribution of CO2 to ‘recent warming’ is minimal…”
        Do you have any credible scientific reference to support this statement?
        And you’re accusing others of BS science?

      • Of course I have peer reviewed scientific references for this. Most ‘recent warming’ happened in 1976/77 and 1997/98. These are ENSO ‘dragon-kings’ as defined by Sornette 2009 – e.g. Tsonis et al 2007, Swanson et al 2009.

        But regardless of the reasons for such large variability at those times – just look at the numbers in the temperature record and then at the Wolter MEI – http://www.esrl.noaa.gov/psd/enso/mei/ – to see what was happening with ENSO.

        Then look at e.g. – Wong et al 2006 to see what clouds were doing.

        Or NASA – http://isccp.giss.nasa.gov/projects/browse_fc.html.

        ‘In the first row, the slow increase of global upwelling LW flux at TOA from the 1980’s to the 1990’s, which is found mostly in lower latitudes, is confirmed by the ERBE-CERES records. ‘ Relative cooling in the IR!!!!!

        ‘The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’ Relative warming in the SW!!!!

        ‘The overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period. The most obvious explanation is the associated changes in cloudiness during this period.’

        But it cooled in the IR!!!!!

        Now if you can’t believe NASA/GISS – who are you going to believe?

        It is a nonsense to think that clouds don’t change – especially in relation to SST changes and especially in the tropics and sub-tropics.

        There is much more on a website accessible by clicking my title. The only thing I would add is something on the longer term variability of ENSO – http://www.nonlin-processes-geophys.net/16/453/2009/npg-16-453-2009.html and http://www.clim-past.net/6/525/2010/cp-6-525-2010.pdf

        As opposed to this we have an assertion that the PDO has been ‘dropping for 30 years’ and that ENSO has been trending down ‘since about 1980’. The PDO changes on about 20 to 40 year periods from a cool mode to a warm mode. It is associated with a change in the frequency and intensity of ENSO.

        ‘The PDO was named by Mantua et al (1997), who demonstrated a connection between salmon abundance and SST in the northern Pacific. SST varied over 20 to 30 year cycles in phase with changes in salmon abundance. SST were cooler than average for 20 to 30 years – a cool mode of the PDO, and then warmer than average over 20 to 30 years, a warm mode. A warm mode PDO is associated with reduced abundance of coho and chinook salmon in the Pacific Northwest, while a cool mode PDO is linked to above average abundance of these fish. The biology responds to cold but nutrient rich sub surface water upwelling in the north eastern Pacific. The abundance of salmon was greatest in the period between the mid 1940’s and mid 1970’s, least in the period 1976 to 1998 and has, in recent years, rebounded to values not seen since the 1970s.’ (JISAO – Climate Impacts Group) – http://cses.washington.edu/cig/pnwc/pnwsalmon.shtml

        Verdon and Franks (2006) used ‘proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the PDO and ENSO. During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’

        Verdon, D. and Franks, S. (2006), Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records, Geophysical Research Letters 33: 10.1029/2005GL025052.

        So we are looking at another decade or 3 of intense and frequent La Niña. If the suggestion that solar UV drift is involved in modulating ENSO variability is correct – then we may be looking at increased La Niña frequency in the longer term – a centennial and millennial variability with well known associated cloud feedbacks.

        One potential cause of Pacific Ocean variability is shown by Lockwood et al (2010). ‘During the descent into the recent exceptionally low solar minimum, observations have revealed a larger change in solar UV emissions than seen at the same phase of previous solar cycles. This is particularly true at wavelengths responsible for stratospheric ozone production and heating. This implies that ‘top-down’ solar modulation could be a larger factor in long-term tropospheric change than previously believed, many climate models allowing only for the ‘bottom-up’ effect of the less-variable visible and infrared solar emissions. We present evidence for long-term drift in solar UV irradiance, which is not found in its commonly used proxies.’

        Judith Lean (2008) commented that ‘ongoing studies are beginning to decipher the empirical Sun-climate connections as a combination of responses to direct solar heating of the surface and lower atmosphere, and indirect heating via solar UV irradiance impacts on the ozone layer and middle atmospheric, with subsequent communication to the surface and climate. The associated physical pathways appear to involve the modulation of existing dynamical and circulation atmosphere-ocean couplings, including the ENSO and the Quasi-Biennial Oscillation. Comparisons of the empirical results with model simulations suggest that models are deficient in accounting for these pathways.’

        Lockwood, M., Bell, C., Woollings, T., Harrison, R., Gray. L. and Haigh, J. (2010), Top-down solar modulation of climate: evidence for centennial-scale change, Environ. Res. Lett. 5 (July-September 2010) 034008 doi:10.1088/1748-9326/5/3/034008

        Lean, J., (2008) How Variable Is the Sun, and What Are the Links Between This Variability and Climate?, Search and Discovery Article #110055

        ‘Stay tuned for the next update (by September 10th, probably earlier) to see where the MEI will be heading next. La Niña conditions have at least briefly expired in the MEI sense, making ENSO-neutral conditions the safest bet for the next few months. However, a relapse into La Niña conditions is not at all off the table, based on the reasoning I gave in September 2010 – big La Niña events have a strong tendency to re-emerge after ‘taking time off’ during northern hemispheric summer, as last seen in 2008. I believe the odds for this are still better than 50/50. If history ends up repeating itself, the return of La Niña should happen within about two to three months.’ Klaus Wolter

      • Look for La Niña, it’s coming; we’ve seen it coming since June.

      • CH, your TOA LW flux looks suspiciously like the surface warming effect seen in the window region. If you look at your Zhang et al. reference, you will see TOA LW is more or less flat. I didn’t check your other figures for you.

      • I guess your link was also to a NASA site, but this one seems to supersede the TOA LW plot, being more recent. They do have contradictory data on this.

        http://isccp.giss.nasa.gov/projects/flux.html

      • Hi Judith,

        Yes – upwelling in the Humboldt Current strongly over the SH winter – and the state of the SOI – seemed to me to suggest a strong possibility.

        La Niña could be seen in the evolving cold tongue in the equatorial Pacific from early August and is now kicking off strongly.

        http://www.osdpd.noaa.gov/data/sst/anomaly/2011/anomnight.8.25.2011.gif

        It started showing up in SST in the central Pacific around the middle of the month.

        http://stateoftheocean.osmc.noaa.gov/all/

        Cheers

      • Chief, both ECMWF and CFS (NOAA) have been predicting this for several months

      • Judith,

        As of the middle of August and for September to October the CFS and the GloSea models indicate neutral to cool conditions while the other models suggested neutral conditions.

        Summary of models:

        http://www.bom.gov.au/climate/ahead/ENSO-summary.shtml

        Unless we have different information – there are links to the sites. I tend to discount ENSO models at any rate.

        Cheers

      • Jim D,

        ‘CH, your TOA LW flux looks suspiciously like the surface warming effect seen in the window region. If you look at your Zhang et al. reference, you will see TOA LW is more or less flat. I didn’t check your other figures for you.’

        I don’t have a Zhang et al reference – I have a Wong et al reference that talks about decadal variability in ERBS TOA flux. More IR is emitted because the planet warmed? But that would only result in equilibrium – not ‘relative’ cooling.

        Jim D | August 27, 2011 at 7:20 pm |
        I guess your link was also to a NASA site, but this one seems to supersede the TOA LW plot, being more recent. They do have contradictory data on this.

        http://isccp.giss.nasa.gov/projects/flux.html

        Oh Jim – what you have is exactly the same ISCCP-FD data expressed as net flux. By convention – showing a positive slope as the planet gaining energy and a negative as the planet losing energy. So a relative warming in SW and a relative cooling in the IR.

      • CH, their net TOA LW has no slope, and certainly nothing like the one in your NASA link that was older. Both refer to Zhang et al.

      • Jim D

        I see Zhang et al was the NASA reference for the ISCCP-FD dataset. Nonetheless – it is the same ISCCP-FD data I referred to. Although net rather than upward flux as I said.

        You have to take into account little things like the scale – try saving the graph and stretching the y axis and you might be able to see a slope. Maybe not.

      • CH, I am looking at the brown net LW TOA line on the second graph in the link you list last above. This looks very flat with both upward and downward excursions in the period. The end value is close to the beginning one (near 5). And we should expect TOA LW to be constant unless it is opposed by a significant solar or albedo change in the net TOA SW.

      • The comparison in the tropics – ignore the AVHRR

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-4-4-1.html

        The values of IR are influenced by cloud of course and in no way should be expected to be constant. Relative warming in the SW and relative cooling in the LW – from 2 platforms in the tropics.

      • CH, You say “Most ‘recent warming’ happened in 1976/77 and 1997/98.”

        No-one else is saying that.

        Maybe you’d like to take look at this graph and explain why you believe this to be the case?

        http://farm6.static.flickr.com/5020/5419352107_01659a31a4.jpg

      • tt,

        Most recent warming happened in 1976/77 and 1997/98 – it is blindingly obvious. How can it get any simpler? Exclude those years and get the residual trend between 1978 and 1996.

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        At least catch up with realclimate – they can give you some better arguments for minimising the problem of non warming.

      • Bob,

        Take another look at the graph. I’m not in disagreement with the guys at RealClimate. My graph looks just the same as theirs. As the article you linked to suggested:

        “anomalous behavior is always in the eye of the beholder.”

        If you look hard enough, and tell yourself often enough that it’s cooling, I suppose it might be possible to fool yourself.

      • Chief,
        You mention that CO2 increases the heat content of the atmosphere.
        This is a summary of heat content of various atmospheric components:
        http://en.wikipedia.org/wiki/Heat_capacity
        I do not see much difference in heat capacity between CO2 and O2 that seems significant.
        What am I missing about what you said?

      • This is a summary of heat content of various atmospheric components:
        http://en.wikipedia.org/wiki/Heat_capacity

        Hunter, heat content is in joules, heat capacity is in joules per degree.

        One can raise the heat content of a substance without raising its heat capacity. Naturally the substance’s temperature increases when you raise its heat content.

        To further confuse matters a distinction is drawn between extensive and intensive heat capacity, the former being the above and the latter being in joules per kilogram per degree. For the atmosphere as a whole one wants the former, for each kilogram of atmosphere one wants the latter.
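A rough numerical sketch of the extensive case (the specific heat of dry air and the total atmospheric mass below are assumed round figures, not taken from the thread):

```python
# Intensive vs extensive heat capacity of the atmosphere (rough figures).
cp_air = 1005.0    # J/(kg*K): intensive (specific) heat capacity of dry air
mass_atm = 5.1e18  # kg: approximate total mass of Earth's atmosphere

# Extensive heat capacity of the whole atmosphere, in J/K.
C_atm = cp_air * mass_atm  # about 5.1e21 J/K

# Adding Q joules of heat content raises the bulk temperature by Q / C_atm.
Q = 1.0e22  # J: an arbitrary heat input for illustration
dT = Q / C_atm  # about 2 K
print(C_atm, dT)
```

The point of the units: C_atm (J/K) answers “how much does the whole atmosphere warm per joule added”, while cp_air (J/(kg·K)) answers the same question per kilogram.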

      • Robert,

        ” The contribution of CO2 to ‘recent warming’ is minimal”. Why, and what makes you think there is minimal CO2 contribution to ‘recent warming’? CO2 helps to cool the N2 and O2 (i.e. air) in the atmosphere and the Earth with very effective IR energy radiated to space, and gains energy from collisions with O2 and N2 and from absorption of the Earth’s surface radiation. It cools the atmosphere and cools the Earth. The more ppm of CO2 in the atmosphere, the cooler the atmosphere and the cooler the Earth.

      • I don’t know why you think IR cools the atmosphere. When absorbed by CO2 or water vapour, the molecules move to a higher energy state – they vibrate, transferring energy to other molecules in the atmosphere. Occasionally they emit a photon and move to a lower energy state.

        The higher energies are related to heat content – more CO2 in the atmosphere means that more heat is retained in the atmosphere.

      • CO2 helps to cool the N2 and O2 (i.e. air) in the atmosphere and the Earth with very effective IR energy radiated to space

        How can this help to cool? By your logic, the radiation is already on the way out the door, so what would it matter if CO2 was there or not?

        and gain energy from collisions with O2 and N2 and absorption from the Earth surface radiations.

        Huh? The gases are locally isothermal — big deal.

        And then you say that CO2 does absorb radiation on the way out, which contradicts the first point.

      • ” The more ppm CO2 in the atmosphere, the cooler is the atmosphere and the cooler the Earth.”

        It strikes me that you’ve actually understood how the Greenhouse Effect works, and you’ve had a great idea!

        If IR radiation in the outer atmosphere radiates heat out into space then surely we can argue that it cools the Earth!

        So, the question is: are you really that obtuse? I’d say not. You’ve just spotted an opportunity to come up with another denialist argument! Well done, maybe you’d like to submit it to http://www.skepticalscience.com !

      • Robert,

        ” When absorbed by CO2 or water vapour – the molecules have a higher energy state – they vibrate transferring energy to other molecules in the atmosphere. Occasionally they emit a photon and move to a lower energy state.”
        Yes, we agree on the above.

        “The higher energies are related to heat content – more CO2 in the atmosphere means that more heat is retained in the atmosphere.” CO2 does not retain heat; it immediately releases energy after gaining collisional energy from the N2 and O2. AGWers have shown that CO2 and other GHGs are more efficient IR radiators than other gases.

        WebHubTelescope,
        You did not digest it. Why would the 1st point contradict the 2nd point, or vice versa?

        tempterrain,
        “So, the question is: are you really that obtuse? I’d say not. You’ve just spotted an opportunity to come up with another denialist argument! Well done, maybe you’d like to submit it to http://www.skepticalscience.com
        No, I don’t waste time on other blogs, especially when you say “are you really that obtuse”. That lowers the tone of the blog there. Dr Curry’s blog is the only site worth my time.

      • still pushing this horsesht then?

      • tempterrain,

        do you deny that GHGs, including CO2, can cool their parcel, which includes N2 and other gases that aren’t GHGs??? That this is how the energy is radiated from the planet once it is convected upwards?? (which is sped up by the GHGs warming the parcel?)

      • There are two opposing effects of GHGs. They increase absorption of photons from the earth, which is a warming effect, and they emit photons to space and towards the earth, which is a cooling effect in the atmosphere. The net effect turns out to be a cooling one in the troposphere, but the emission towards earth is a warming effect on the surface. The tropospheric cooling balances the warming effect of convection that provides heat from the surface more efficiently than radiation.
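The two opposing effects can be seen in the standard single-layer “gray atmosphere” textbook sketch (a toy model, not any commenter’s own calculation; the 240 W/m² absorbed solar flux is an assumed round figure):

```python
# Single-layer gray-atmosphere energy balance: the layer absorbs all
# surface IR and emits sigma*Ta^4 both upward and downward.
sigma = 5.670e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
S = 240.0         # W/m^2, globally averaged absorbed solar flux (assumed)

# No absorbing layer: sigma * Ts^4 = S
Ts_bare = (S / sigma) ** 0.25

# With the layer:
#   layer balance:   2 * sigma * Ta^4 = sigma * Ts^4
#   surface balance: sigma * Ts^4 = S + sigma * Ta^4  =>  sigma * Ts^4 = 2 * S
Ts_layer = (2 * S / sigma) ** 0.25
print(round(Ts_bare), round(Ts_layer))  # 255 303
```

In this toy model the layer’s own temperature satisfies Ta⁴ = Ts⁴/2, so the layer is colder than the surface even while its back-radiation keeps the surface warmer – the same “cooling aloft, warming at the surface” split described above.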

      • Jim D,
        “There are two opposing effects of GHGs. They increase absorption of photons from the earth, which is a warming effect, and they emit photons to space and towards the earth, which is a cooling effect in the atmosphere. The net effect turns out to be a cooling one in the troposphere” Basically, no problem but you can express it better, like Alex did.

        “but the emission towards earth is a warming effect on the surface. ”
        This statement does not comply with the Stefan-Boltzmann radiation law.

        “The tropospheric cooling balances the warming effect of convection that provides heat from the surface more efficiently than radiation.”
        Basically, no problem, but you can express it a lot better without using the phrase “The tropospheric cooling balances”.

      • Sam NC,

        There is no violation of any law when a molecule of GH gas absorbs an IR photon from the ground and then re-radiates it in a random direction.

        If the molecule “knew” which way was up, and which way was down, and therefore “knew” to re-radiate the photon out into space, then you might just have some sort of a point.

        I’m sure the idea of GH gas molecules being aware of what they are supposed to do must violate some law. I’m not sure which one, though. Maybe I’m the first one to state this and it can, from now on, be called “Martin’s Law”!

      • tempterrain,

        “There is no violation of any law when a molecule of GH gas absorbs an IR photon from the ground and then re-radiates it in a random direction.”

        This is correct. The incorrect part is the claim that lower-energy photons from CO2 heat up the higher-energy, higher-temperature ground surface, which violates the radiation law.

      • SamNC, the downward photons no more heat the surface up than a blanket (not an electric one) heats you up at night. It is better to say it keeps the surface warm. The heat comes by other routes (solar). I know I said it has a “warming effect”, but more accurately it has a keeping-it-warm effect on the surface which is trickier to phrase well.

      • Jim D,

        No. Downward photons from CO2 can at best reduce the Earth’s surface cooling, not warm it. This is the serious mistake of the AGWers: thinking that the CO2 photons radiated down to the Earth can heat it up.

      • SamNC, good, you got the point I was making about the blanket effect.

      • JimD and Sam NC,

        We are all in agreement then? Blimey that’s unusual!

        I like the distinction between “keeping-it-warm” and ‘warming”. That’s a good way of putting it.

      • HaHaHa,
        No, you’ve forgotten: the CO2 is now at a lower energy state, as it has released IR energy to space and emitted photons in all other directions. The CO2 at a lower energy state increases the cooling of the Earth’s surface as well as the cooling of the atmosphere. Got it?

      • Sam NC,

        There was I thinking that you might have actually learned something but I do feel you should now be demoted to blogging on Wattsupwiththat or one of those crazy websites like “I love my CO2″!

        It’s quite simple really. CO2 molecules in the atmosphere pick up some IR photons from the ground which would have been heading out into space had they not been intercepted.

        Then they re-emit the photons but only some of them are sent off in the direction of space. Some get re-radiated back towards the ground. The molecules end up in exactly the same condition as they started. But the ground ends up warmer than it would otherwise be which causes it to emit more IR photons than it would do if it were cooler. Does that make sense so far?

        A new balance is therefore set up and the number of IR photons going off into space is the same as before. But with the ground warmer than before. Have I said that before? The ground is warmer than it would be without those friendly GH gas molecules. So some of them are definitely a good thing.

        But I’m sure you know that you can have too much of a good thing. Goldilocks and all good cooks understand that. Not too little. Not too much. Everything has to be just right.
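        The back-and-forth above about “warming” versus “keeping warm” can be made quantitative with the textbook one-layer greenhouse model. This is only a sketch of that idealization; the 240 W/m2 figure and the single fully absorbing layer are simplifying assumptions, not a description of the real atmosphere:

```python
# Textbook one-layer greenhouse sketch (an idealization, not the real
# atmosphere): a single fully absorbing layer re-emits half its energy up
# and half down, so the surface runs hotter while the same flux leaves the top.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED = 240.0        # assumed absorbed sunlight, W m^-2 (round number)

# No absorbing layer: the surface balances sunlight directly.
T_bare = (ABSORBED / SIGMA) ** 0.25

# One layer: emission at the top still equals ABSORBED, but the surface
# now receives sunlight plus the layer's downward emission (2x ABSORBED).
T_surf = (2 * ABSORBED / SIGMA) ** 0.25

print(f"bare surface: {T_bare:.0f} K, with one absorbing layer: {T_surf:.0f} K")
```

        Note that the layer adds no energy of its own; it only changes where the balance settles, which is the “keeping-it-warm” distinction being argued over here.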

      • tempterrain,

        I stopped visiting any climate discussion sites, including Wattsupwiththat or “I love my CO2″, about 9 months ago. I visit and comment only here, based on my experience and knowledge of thermodynamics and radiation. You may like to know that I am not influenced by any climate websites; it is all my original thinking. I am ready to believe either camp, but so far none is persuasive enough. Yes, I learn a lot here, especially as it is not my field of expertise. Chief Hydrologist is a good example among them.

        “It’s quite simple really. CO2 molecules in the atmosphere pick up some IR photons from the ground which would have been heading out into space had they not been intercepted.” OK.

        “Then they re-emit the photons but only some of them are sent off in the direction of space. Some get re-radiated back towards the ground.” OK

        “The molecules end up in exactly the same condition as they started.” This is where I disagree. The molecules have already emitted photon energy and need energy from outside sources (such as collisions with N2 and O2 at a higher energy state, or stronger IR photons from the Earth) to restore the same condition as they started in. Otherwise they become perpetual photon-energy suppliers, which cannot be true in any universe.

        “But the ground ends up warmer than it would otherwise be which causes it to emit more IR photons than it would do if it were cooler. Does that make sense so far?” No. The 1st sentence is confusing especially with those ‘it’s. I am sure Alex would have expressed it a lot better and clearer.

        I think you need to put a lot more effort into the above explanations to be persuasive.

      • Sam NC,

        It’s like teaching a slow-learning child.

        A GH molecule has energy X and an IR photon has energy Y. The photon is absorbed by the molecule, so now the molecule has energy X+Y. So, for a time, the molecule does have some extra energy and therefore the atmosphere will be warmer.

        The photon is emitted from the molecule with energy Y towards the ground, where it is absorbed. The molecule now has energy X, just like it had before, so there is no need for it to take up energy from anywhere else. Now the ground has extra energy Y.

        So this makes the ground warmer than it would be otherwise, until the IR photon is radiated back again.

      • tempterrain,

        ir up, ir down, aren’t you forgetting something?? Conduction from the ground to the air is very slow. Convection wouldn’t start for hours, at least, if that is what we depended on to get it started. GHG’s are really good at absorbing a bunch of IR and transferring it to their parcels by collision, speeding up convection by a whole bunch. Oh wait, that means that energy isn’t available to be radiated anywhere until those bits collide with GHG’s again!! That could be halfway to the Strat or more!!

        So not only does half of the emitted go up, all the transferred goes up. Kinda cuts down on how much can go down now doesn’t it??

      • Someone’s casting aspersions on slow-learning children by such comparisons.

        What child thinks convection moves faster than the speed of light?

        Or that any amount of altitude can convert ‘down’ to ‘up’?

        Though I admit some children might have trouble grasping that convection pretty much stops at the tropopause.

        ‘Stop’ is such a hard word.

      • Sam NC, by your reckoning adding CO2 cools both the air and earth. This would be unfortunate because then the IR radiation out to space would be reduced even further than the CO2 reduced it, and the earth could not radiate more energy to return to balance. How do you propose the earth radiates more energy again if it isn’t warming? Your climate system will accumulate energy somewhere, and you have just lost it somehow.

      • This is really a waste of time. Sam NC, Kuhnkat and some others are obviously determined that there shall be no GH effect. It’s like an 11th commandment. Unless, somehow, in some tortuous way, it can be argued that GH gases cool the Earth. Then that would be OK, of course.

        I might leave off this for now and go on to a creationist blog/website and see if I can persuade those guys that there may be something in Evolutionary theory, despite what they may hear in their Church sermons.

        Can it be any harder than this?

      • temp,

        GHG’s are a MODERATING factor. Without GHG’s the equator would be unliveable during the day!! At night you would want a lot more than a Snuggy to keep warm!!

        Sadly the Climatati decided to make it a one way street that is obviously bunkum.

      • Yes, of course, the whole of the atmosphere has a moderating effect on the Earth’s temperature. The extremes of temperature on the moon are much greater than on Earth. It’s not just GH gases that play a role in this.

        However, on average the Earth is still warmer. The Earth has a GH effect and the Moon doesn’t.

        The average temperature of the moon is -23 deg C, while the average temperature of the Earth is +15 deg C.

      • Temp,

        without GHG’s the atmosphere would be a lot hotter. The surface gas would be warmed conductively and convect away from the surface, warming the atmosphere. The layer that cooled at night would not cool much, as it could only conduct in turn. This cycle would continue until the whole atmosphere was close to the peak temp reached by the surface unprotected from the sun’s direct rays!!

        Bart R,

        “What child thinks convection moves faster than the speed of light?”

        What has a higher bandwidth, a 300 baud modem or a station wagon full of backup tapes travelling at 70mph (old 9 track reel to reel if you think it will help your case)??

        You children are so silly sometimes!!

      • kk

        Your station wagon has 17 km worth of fuel. When it goes 17 km, it finds itself in a place only 300 baud modems work, and tapes are degaussed.

        But that’s okay, because there’s 600 km ahead of nothing but 300 baud modems packed tighter in proportion to Moore’s Law every generation.

        What part of ‘stop’ is so hard?

      • Sam, if you hug someone who’s wearing a sweater, they feel colder to you than if they’re just wearing a thin shirt.

        Yet they tell you that with the sweater on, they feel warmer.

        Now if you add CO2 to the atmosphere, the Earth looks colder when observed by a satellite.

        But down on the ground it feels warmer.

        If you see an essential difference here that makes this a bad analogy, it would be helpful if you would explain how the analogy breaks down. Preferably in simple terms.

      • Well Bart,

        Since you tried to make fun of an old revered analogy that us old time Computer Geeks used to get a laugh out of, I don’t know whether you understood it or not. Did you do the math on parallel versus serial transmission of data and realize that the station wagon going 70mph can actually transfer data faster than a lightspeed 300 baud modem (providing it doesn’t run out of gas, have a flat or get carjacked…)??

        So, let’s do the baby steps for climate. We are talking about transferring energy. Photons, or waves, transfer miniscule amounts of energy at lightspeed. Convection moves large amounts of molecules containing the stored energy from rather large numbers of photons/waves in water vapor, molecular vibration, kinetic energy, and atomic states. In sum, it is a horse race depending on how much evaporation is happening in the environment etc. The photons/waves rarely make it all the way to space on the first step and often end up in the convective station wagon anyway!!

        Kiehl & Trenberth show about 66 W/m2 of radiation making it to the upper trop, while about 102 W/m2 make it by thermals and evapotranspiration. The old tortoise stationwagon wins again and the lightspeed Rabbett loses!!
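        The station-wagon-versus-modem arithmetic mentioned above can be sketched in a few lines. Every number here (tape capacity, reel count, trip length) is an assumed round figure for illustration, not a sourced value:

```python
# "Never underestimate the bandwidth of a station wagon full of tapes":
# compare a 300 baud modem against hauled 9-track reels, with assumed numbers.
MODEM_BPS = 300                 # treat 300 baud as ~300 bit/s for this sketch
TAPE_BYTES = 140e6              # one 2400-ft reel at 6250 bpi, roughly
REELS = 500                     # assumed load for the wagon
TRIP_MILES = 500
SPEED_MPH = 70

trip_s = TRIP_MILES / SPEED_MPH * 3600
wagon_bps = REELS * TAPE_BYTES * 8 / trip_s

print(f"modem: {MODEM_BPS} bit/s")
print(f"wagon: {wagon_bps:.2e} bit/s ({wagon_bps / MODEM_BPS:.0f}x the modem)")
```

        Huge bandwidth, terrible latency: which is the parallel being drawn to bulk transport (convection) versus fast-but-thin transport (radiation).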

      • I think that CO2 added to the atmosphere by people is as hot as it is ever going to get – it is after all the product of that famous oxidising and exothermic reaction. We set fire to it.

        So it can’t be the case that adding CO2 to the atmosphere causes the planet to look colder from space.

        Sam – I think you are the only sensible person here. They are all thinking in terms of nonsense analogies – sweaters for instance. Although we call them jumpers – and have a joke about what you get when you cross a kangaroo with a sheep. A woolly jumper. (LOL)

        I have just understood what you were saying. N2 and O2 impart energy to CO2 and H2O in collisions – which then occasionally emit a photon to space, cooling the world.

        Well – yes – that seems a reasonable part of the puzzle. Although N2 and O2 don’t actually need CO2 to cool down. Anything above absolute zero will emit radiant energy – so a warmer atmosphere will emit more energy to space and by quite a lot more than proportionately by the Stefan-Boltzmann equation. But it seems to me to be the case that the atmosphere in general will be more energetic if there are more molecules that interact with photons in the IR frequencies. Then the planet will be warmer and emit more energy to space with an exponentially increasing radiative flux with temperature. Cooling off again – because it can’t emit more energy than comes in from the sun.

        There is a puzzle here that I need to think on. If anyone wants to continue this – I suggest that we take it to the bottom of the thread.

        Catch you later Sam.

      • Vaughan,

        With what you had written in that simple analogy, my conclusion is you did not read what I, Jim D and tempterrain had written. Go and re-read them, and then come back with a better question or a better statement for me to respond to.

        tempterrain,
        I said CO2 cools itself when it releases IR photons in all directions. Where does the energy come from to restore CO2’s original energy state?

        When you are unable to reason, don’t divert to childish statements which only show you are running away from a good discussion without a good reason. You cannot even reason and expect me to learn from you?

        Bart R,
        As usual, your reply is childish. This blog is serious discussion and there is no point to reply childish comments. Come back again when you can write something like a grown up.

        Jim D,
        When you are unable to think of something better in reply, you divert. Let me ask you one more time: ‘When CO2 radiates its IR energy through releasing photons in all directions, it cools itself. Where does the energy come from to restore CO2 to its original energy state? CO2 will get cooler and cooler by emitting more photons if there is no energy to replenish it and restore its original energy state.’

      • SamNC, I was carrying your idea to its unworkable conclusion. With more CO2, less radiation gets out to space. How should the earth adjust to get more radiation out? There is one obvious answer.
        You should know the answer to your question too. The IR cooling is opposed by convection from the surface which warms and balances it.

      • kk

        Wasn’t trying to make fun of the old revered analogy.

        Was successful at dissing your hopelessly misunderstood use of it.

        But as you’re now getting Perry #226 backwards too, I can leave the old wood panel modem alone and ridicule you on that instead.

        Of all the many versions of the fable, there is one common element: the Hare stops before the race is over.

        Let’s call the point the Hare rested, gave up, or was fooled, the tropopause point.

        From the tropopause (where convection practically stops) to the end of the race, the TOA, is another 600 km.

        Your 17 km is on the scale of a rounding error, and could be described as miniscule.

        Which still doesn’t matter, because heat isn’t transmitted to the vacuum of space by convection, but almost only by radiation (because we mostly ignore the really miniscule magnetic transfer).

        Also, a mere quibble, but on the scale of 102, 66 is hardly miniscule. (It’s over half, in case the math troubles you.)

        Likewise, convection existed before 1750; the net increase in heat carried up by convection (remembering convection also returns air down, and that air returns marginally warmer now than it did before 1750), really would be miniscule. It would also vary with temperature, humidity and other factors, so not exactly a predictable number, other than ‘approaching insignificant’.

        Likewise too, the effects of convection at elevating the solar tide would only amplify the GHE by increasing the net distance to TOA, albeit a minor consideration overall.

        On the other hand, with an increase in concentration from 280 to 390 ppm, since ppm is a volumetric concentration and single photons travel in paths, the mean path length increases by some function of a cube as concentration increases linearly, which it gets to do all the way to the end of the race.

        Unlike convection.

      • BartR,

        “Someone’s casting aspersions on slow-learning children by such comparisons.

        What child thinks convection moves faster than the speed of light?

        Or that any amount of altitude can convert ‘down’ to ‘up’?

        Though I admit some children might have trouble grasping that convection pretty much stops at the tropopause.

        ‘Stop’ is such a hard word.”

        It is amazing how arrogant people get caught over the simple stuff.

        http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/TFK_bams09.pdf

        Take a look at Trenberth’s cartoon. In it you will see 17 W/m2 called THERMALS!! What else would I include this in except convection?? As to your extending this outside the atmosphere, it is silly. Your statement did NOT include any parameters about WHERE radiation is faster than convection. Since convection was in question, it can only reasonably be interpreted as applying where convection actually exists. Where convection actually exists is between the surface and the tropopause. In that region convection transfers more energy than radiation, every day, all day, and nights also.

        Sorry you are such a sore loser over forgetting the basics.

        Now, your last post appears even more scattered than mine often are. I hope you get better!!

  42. Pekka Pirilä, 8/26/11, 10:38 am, CO2 residence time discussion

    PP: You can easily find thousands of references related to the Revelle factor.

    The ones that count are those reported by IPCC in TAR or AR4, or from references cited in those Reports. Nothing else is significant because those reports supply the model for the existence of AGW. If another report agrees with IPCC in some regard, that report must be shown to be an independent source for that regard. Otherwise it is redundant and not determinative. If that other report disagrees, it is merely argumentative, not determinative. The IPCC Reports must be debunked, if at all, on their own merits.

    PP:One example of lecture material explaining it is here.

    The link showed a URL, but unfortunately would not open. I searched for the course number, ES427, along with Revelle at the site eng.warwick.ac.uk, and found course material titled Ocean Chemistry from Dr. G. P. King, rev. 11/1/04. I am familiar with this document, and could find nothing of value in it but overly restrictive assumptions (e.g., surface layer equilibrium again) and an approximation for a thin, diffuse layer that disappeared in his derivations.

    King’s development makes d[CO2]_ml (a differential molecular CO2 concentration in the mixed layer) vanishingly small. That is a bootstrap fallacy, also known as the fallacy of begging the question. If the Revelle factor exists, then the atmosphere becomes a buffer holding excess CO2 (the sea a buffer against it, and not, as assumed by IPCC and King, holding only the ACO2 species), and the differential from equilibrium in the mixed layer would be negligible, AND Henry’s Law coefficient would have a novel sensitivity to the state of the surface layer. Thus King’s derivation assumes what it wants to demonstrate: the existence of the Revelle Factor.

    What appears to be reality is that the Revelle Factor is a fiction, that Henry’s Law holds. That is, CO2 readily dissolves in a surface layer not in equilibrium, and Henry’s coefficient is unknown. In fact IPCC and its authorities rely on CO2 dissolving more readily as wind speed increases, which causes the surface layer to become more turbulent. Because the surface layer is not in equilibrium, it can and must act as a buffer holding excess CO2. IPCC errs by moving that buffer of excess CO2 from the surface layer to the atmosphere.

    PP: You didn’t only criticize the Georgia Tech course based on the irrelevant comment that Beer-Lambert law is valid also for multichromatic radiation, when the absorption coefficients happen to be the same, but you implied clearly that this trivial extension would change essentially the outcome. Furthermore you claimed that changes to the presentation of Beer-Lambert law would be done to justify somehow suspect conclusion on the behavior in the atmosphere. This proves that your point was not only irrelevant sophistry, but you really tried to perpetuate wrong conclusions.

    Your characterization of my critique is not accurate, and the implication you allege puts words in my mouth. You have a professional duty as a scientist to quote me exactly in both regards, and to do so as a professional courtesy. I dispute the correctness of either, and the merit of your subsequent conclusions.

    PP: If you are now willing to admit that the Beer-Lambert law does not at all work for the total IR radiation in atmosphere, that’s fine. Is that your present view?

    Allow me to explain an experiment I performed. It is (1) too complex to present fully here; (2) not yet ready to publish on my blog, in part due to lack of motivation, because (3) (a) radiative transfer is irrelevant to radiative forcing (because RT, as precise as it is, is too sensitive to instantaneous atmospheric conditions, which are unknown, compounded by it being nonlinear) and (b) global warming is predictable from the Sun with a simple transfer function, making the GHE and radiative forcing irrelevant at the outset.

    The experiment comprised computing radiative forcing using David Archer’s online MODTRAN routine across its full dynamic range of CO2 from 1 to one million parts per million. The computation was for the Standard Atmosphere; the result, s(C), is shown logarithmically at this link.

    http://www.rocketscientistsjournal.com/_res/F22|4BEERS_slog.jpg

    The curve s(C)_est(σ) is the best-fit approximation to s(C) using four Beer functions of the form Beer_j(α, Δ) = Δ*(1 − exp(−αC)), where C is the concentration. In the graph shown, σ = 0.53 Wm^-2, computed for 10 integer values of C per decade for a total of 60 points. The parameter Δ is the saturation level for each Beer function. Repeating the experiment for the tropical atmosphere produced similar results.

    To the extent that Archer’s MODTRAN program is representative, and within the accuracy reported, the following conclusions hold. Atmospheric absorption is well-represented by a set of four Beer’s Law equations of approximately equal radiative forcing, with approximately evenly spaced absorptivity coefficients. This is approximately equivalent to dividing spectral absorption strength into four 1.5 decade wide bands and computing an empirical absorption coefficient for each band. It is equivalent to approximating the absorption coefficient as a function of wavenumber by a new set of just four absorption coefficients, one for each of four discrete levels. MODTRAN has no significant region of zero absorption, that is, no observable window. The deepest Beer functions may be conjectures.

    To answer the explicit question posed, the Beer-Lambert Law survives. It holds for the atmosphere as modeled by the radiative transfer algorithm. Radiative forcing as estimated by radiative transfer is approximately the sum for four Beer-Lambert Law equations. The first two Beer-Lambert Law equations are sufficient for estimating the climate sensitivity for as much as double the present day level of CO2 concentration.

    This model for RT holds promise for yielding a computationally efficient estimate of atmospheric forcing, including producing a practical average over various models for lapse rates, CO2 concentration variability, and diurnal and seasonal effects, among others.
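    The four-Beer-function fit described above can be sketched as follows. Archer’s MODTRAN output is not reproduced here, so the script fits synthetic data; the α values and the “true” saturation levels are illustrative assumptions, not the values used in the experiment:

```python
import numpy as np

# Fit a sum of four Beer-Lambert terms, Delta_j * (1 - exp(-alpha_j * C)),
# to a forcing curve s(C). With the alpha_j fixed (roughly evenly spaced in
# log, as described above), the model is linear in the Delta_j, so ordinary
# least squares suffices.
alphas = 10.0 ** np.array([-1.0, -2.5, -4.0, -5.5])   # assumed coefficients

C = np.logspace(0, 6, 61)                    # 1 to 1e6 ppm, 10 points/decade
true_deltas = np.array([8.0, 7.5, 8.2, 7.8])           # synthetic stand-in
s = sum(d * (1 - np.exp(-a * C)) for d, a in zip(true_deltas, alphas))

# Design matrix: one saturating Beer column per alpha_j.
A = np.column_stack([1 - np.exp(-a * C) for a in alphas])
fit_deltas, *_ = np.linalg.lstsq(A, s, rcond=None)

rms = np.sqrt(np.mean((s - A @ fit_deltas) ** 2))
print("fitted saturation levels:", np.round(fit_deltas, 3))
print("rms residual (W/m^2):", rms)
```

    In the actual experiment the α’s would be fitted as well (a nonlinear step); fixing them here keeps the sketch to a single linear solve.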

    PP: The controversy on the Bern model is very similar in its nature. You try to discredit it based on false arguments.

    Scurrilous nonsense. Similar to what, pray tell? Your naked accusation of false arguments has no merit. Next time you bother anyone with your opinion about the validity of the Bern equation, you should first explain how the four reservoirs, a_0 to a_3, in that equation could be feasible in nature.

  43. Another relatively inert gas to look at for residence time behavior is Nitrous Oxide (N2O). This has a residence time estimated anywhere from 5 to 200 years.
    http://www.gly.uga.edu/railsback/Fundamentals/AtmosphereCompV.jpg

    It also has a similar hockey stick increase to atmospheric CO2 rise, but occurring earlier IMO.
    http://www.lenntech.com/images/pastn2o.gif

    This comparison may allow us to resolve some arguments and to reinforce others. For one, I contend that this large uncertainty over the N2O residence times suggests that it also has a fat tail. As I said elsewhere in this thread, fat-tail distributions often don’t have a statistical mean, and so a range of numbers is typically given.

    As a control we can also look at Methane (CH4). This gas also shows a significant relative increase yet it is much more reactive and thus has a much shorter residence time.
    All these follow either the industrial revolution or the ramp up of large-scale cattle farming or other agricultural practices.
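    The claim that a fat-tailed residence-time distribution may lack a mean can be illustrated with a toy density; the p(t) ∝ τ/(t+τ)² form and the τ = 10 yr scale below are illustrative assumptions, not a fitted model:

```python
# Toy fat-tailed residence-time density p(t) = tau / (t + tau)^2. It
# integrates to 1, but the integral of t*p(t) grows without bound, so the
# distribution has no finite mean; only a range of timescales can be quoted.
TAU = 10.0  # assumed time scale, years

def truncated_mean(T, dt=0.01):
    """Midpoint-rule integral of t*p(t) from 0 to T."""
    t, total = dt / 2, 0.0
    while t < T:
        total += t * TAU / (t + TAU) ** 2 * dt
        t += dt
    return total

for T in (1e2, 1e3, 1e4):
    print(f"mean truncated at {T:8.0f} yr: {truncated_mean(T):6.1f} yr")
```

    Each factor-of-ten extension of the cutoff adds roughly the same increment to the “mean”, the signature of a logarithmically divergent first moment.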

    • Webby,

      “Another relatively inert gas to look at for residence time behavior is Nitrous Oxide (N2O).”

      Could you please give us a definition of relatively inert????

      • The gross dividing line is whether the molecule is endothermic or exothermic on transformation. If it’s reactive it won’t stay around for long. The inert gases are all the column 8 elements. N2O and CO2 are relatively inert because they are endothermic. Methane is exothermic so it has a shorter residence time.

      • Webby,

        “The gross dividing line is whether the molecule is endothermic or exothermic on transformation.”

        I see. So since it is relatively inert due to being endothermic, or needing energy to react, it doesn’t react with much. So it wouldn’t react much with an exothermic species, or due to the sun shining and providing the energy, or…??

        You did say relatively less reactive, so, even if it reacts a lot it would still be reacting less than something that reacts even more, relatively speaking.

      • The chemists’ language for this is “completely oxidized”. It makes more sense, because endo/exothermic requires stating relative to which products.

      • True. Since there are many possible product paths you can aggregate these into the endothermic paths and the exothermic paths, and you can estimate which overall path is more probable. For example, methane with oxygen is the obvious, very common exothermic pathway:
        CH4 + 2*O2 -> CO2 + 2*H2O

        The carbon in this case is oxidized making the reverse very unlikely to occur spontaneously.

    • subscribe

  44. Vaughan Pratt

    Further up this thread you stated:

    It is not impossible however that consuming carbon based fuels more efficiently while also cutting over to non carbon based fuels could reduce CO2 emissions by say 10% or 30% or some number. Calling this impossible is defeatist.

    It is then interesting to know in advance by how much the CO2 level will change.

    That’s a relatively easy one, Vaughan, because we are talking about a specifically quantified estimate of the reduction in CO2 added into the system.

    Humans now emit 34 GtCO2/year into the system, so a 30% reduction would mean a reduction of 10.2 Gt CO2/year.

    Let’s say the reduction started in 2020.

    And let’s calculate the net reduction in atmospheric CO2 by year 2100.

    10.2 * (2100 – 2020) = 816 GtCO2
    Half of this “stays in atmosphere” = 408 GtCO2
    (see many previous posts regarding this observation)
    Mass of atmosphere = 5,140,000 Gt
    Net reduction in atmosphere = 408 * 1,000,000 / 5,140,000 = 79.4 ppm (mass)
    Converting to volume: 79.4 * 29 / 44 = 52.3 ppmv

    Business as usual case
    IPCC “Scenario B1”
    Atmospheric CO2 concentration by year 2100 = 580 ppmv

    If CO2 emissions cut back 30% below today’s value in 2020,
    Atmospheric CO2 concentration by year 2100 = 580 – 52.3 = 527.7 ppmv

    Using the logarithmic relation between CO2 and temperature and the 2xCO2 climate sensitivity assumed by the IPCC models (3.2°C mean value), we have:

    Business as Usual (IPCC Scenario B1):
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 580 ppmv = CO2 concentration in 2100 with “business as usual”
    C2/C1 = 1.4872
    ln(C2/C1) = 0.3969
    ln 2 = 0.6931
    dT (2xCO2 per IPCC models) = 3.2°C
    dT (2011-2100, BaU) = 3.2 * (0.3969 / 0.6931) = 1.8°C
    (checks with IPCC AR4 WG1 projection)

    With 30% reduction of emissions:
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 527.7 ppmv = CO2 concentration in 2100 with 30% cutback
    C2/C1 = 1.3530
    ln(C2/C1) = 0.3023
    ln 2 = 0.6931
    dT (2xCO2 per IPCC models) = 3.2°C
    dT (2011-2100, 30% CB) = 3.2 * (0.3023 / 0.6931) = 1.4°C

    Net reduction in global warming by 2100 resulting from 30% emission reduction in 2020 = 1.8 – 1.4 = 0.4°C

    This would be the estimate for the net reduction in warming by 2100 resulting from a 30% cutback in emissions by 2020, using IPCC model-based assumptions.

    But let’s do a quick reality check on these assumptions using the actual past record on global temperature (HadCRUT3) and atmospheric CO2, as used by IPCC.
    The HadCRUT3 record started in 1850
    Since then temperature has risen by ~0.7°C

    IPCC AR4 WG1 report tells us that CO2 level was ~290 ppmv in 1850

    IPCC also tells us a) that all natural forcing (solar, etc.) caused around 7% of the warming since pre-industrial times and b) that all other anthropogenic forcing components other than CO2 (aerosols, other GHGs, etc.) cancelled one another out over this period

    So the increase in CO2 from 1850 to today caused 0.93 * 0.7 = 0.65°C increase in temperature

    Using the same logarithmic calculation, we can use the actually observed CO2 and temperature data from 1850 to today to project the future “business as usual” and “30% emission cutback” cases:

    C0 = 290 ppmv = CO2 concentration in 1850
    C1 = 390 ppmv = CO2 concentration in 2011
    C1/C0 = 1.3448
    ln(C1/C0) = 0.2963
    dT (1850-2011) = 0.65°C (attributed to increased CO2)

    Business as Usual (IPCC Scenario B1):
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 580 ppmv = CO2 concentration in 2100 with “business as usual”
    C2/C1 = 1.4872
    ln(C2/C1) = 0.3969
    dT (2011-2100, BaU) = 0.65 * (0.3969 / 0.2963) = 0.9°C

    With 30% reduction of emissions:
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 527.7 ppmv = CO2 concentration in 2100 with “30% cutback”
    C2/C1 = 1.3530
    ln(C2/C1) = 0.3023
    dT (2011-2100, 30% CB) = 0.65 * (0.3023 / 0.2963) = 0.7°C

    Net reduction in global warming by 2100 resulting from 30% emission reduction in 2020 = 0.9 – 0.7 = 0.2°C

    So we have two estimates to answer your question.

    First, if we accept the IPCC model-based 2xCO2 climate sensitivity estimate of 3.2°C, we have a net reduction in global warming from today to 2100 of 0.4°C from reducing worldwide emissions by 30%.

    If instead we use the actually observed change in CO2 and temperature since 1850 as the basis, we have a net reduction in global warming from today to 2100 of 0.2°C from reducing worldwide emissions by 30%.

    Maybe someone here wants to challenge the calculation here with a better one, but that is the savings in global warming I can see from a 30% reduction in worldwide CO2 emissions.
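    For anyone who wants to check the arithmetic, the two estimates above reduce to a short script. All inputs are the assumptions stated in the comment (34 GtCO2/yr, 50% airborne fraction, Scenario B1’s 580 ppmv, and so on), not independently verified figures:

```python
import math

def warming(c1, c2, dT_ref, ln_ref):
    """Logarithmic CO2-temperature scaling: dT = dT_ref * ln(c2/c1) / ln_ref."""
    return dT_ref * math.log(c2 / c1) / ln_ref

# 30% cut of 34 GtCO2/yr, starting 2020, with half remaining airborne
avoided = 0.30 * 34 * (2100 - 2020) / 2          # GtCO2 kept out of the air
ppm_mass = avoided * 1e6 / 5.14e6                # atmosphere mass 5,140,000 Gt
ppmv = ppm_mass * 29 / 44                        # mass to volume fraction

bau, cut = 580.0, 580.0 - ppmv                   # 2100 concentrations, ppmv

# Estimate 1: IPCC 2xCO2 sensitivity of 3.2 C
d1 = warming(390, bau, 3.2, math.log(2)) - warming(390, cut, 3.2, math.log(2))

# Estimate 2: scaled from the observed 0.65 C over ln(390/290)
obs = math.log(390 / 290)
d2 = warming(390, bau, 0.65, obs) - warming(390, cut, 0.65, obs)

print(f"avoided concentration: {ppmv:.1f} ppmv")
print(f"avoided warming, IPCC sensitivity: {d1:.1f} C")
print(f"avoided warming, observed scaling: {d2:.1f} C")
```

    The script reproduces the roughly 52 ppmv, 0.4°C and 0.2°C figures derived above.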

    Max

    • This is a relative reduction in accumulation assuming a model in which there are no permanent sinks (i.e., infinite residence time). Factoring in residence time makes the calculations far more intricate.

      Unfortunately, I do not see any way at this time to resolve the question of whether residence time is short or long apart from the cum hoc ergo propter hoc arguments of the advocates of a long residence time. I thought I had found one in my dispersion argument advanced previously but, as I stated at August 26, 2011 at 3:03 am, that argument is not so clear cut as I thought it was.

      • I should say, I do not see any way at this time to convincingly resolve the question of whether residence time is short or long.

        On the one side, I have the cum hoc ergo propter hoc arguments of the advocates, but what they take as being a remarkable coincidence otherwise is really not so remarkable. Low frequency time series can either be increasing or decreasing at a given time. If they are both increasing, you can always determine an affine relationship which appears superficially to relate them.

        Some would argue that, yes but we have theories which also predict this behavior. But, these are really only hypotheses, which cannot be proved in a closed loop fashion in the lab. In ages past, eminent knowledgeable people had hypotheses on other matters which also matched observed behavior, from leeches to epicycles to many others, which proved to be wrong. We like to imagine ourselves as somehow better than they, more advanced, more knowledgeable, and better able to discriminate. But, this is mere hubris. Those were brilliant people also, in their time, and they were projecting their hypotheses based on the best information they had up to that time. The only thing we have to differentiate ourselves with them is the accumulated tenets of the scientific method, which abjures leaps of logic and calling it fact before the evidence has clearly established it as truth.

        On the other side, I see unexplained variations in the affine relationship, though these could be due to simple measurement error or small unaccounted effects. I see a very low frequency transfer function between the variables, and I know that such low bandwidth regulatory systems tend not to be very robust – s**t happens to them. And, I see a great many founding assumptions, from the reliability of the proxy record to the assumed quantification of all sources and sinks, which all have to be correct for the prevailing paradigm to hold water (or, CO2 in this case). I am keenly aware of the geometric dilution of likelihood when you have many non-verifiable links in a chain of logic.

        I do not trust the ice core measurements, and they are integral to the entire narrative. There are a great many non-verifiable assumptions involved regarding the properties of entrapment over thousands, or even mere hundreds, of years. They assume particular amplitude characteristics. They assume a particular time lag. They assume frequency independence of both these items. IMO, we have only the data from 1958 to work with, and this is relatively a very short time.

        There is a marked deceleration in CO2 accumulation happening right now. It will be very interesting to see if the evident ~60 year cycle in the global temperature metric continues, and we find ourselves in a cooling spell for at least the next 20 or so years, and if the CO2 measurements start to decrease with a characteristic time lag, as would be expected if the system behaves in what I can only describe inchoately at this time as a natural way.

      • Bart

        This is a relative reduction in accumulation assuming a model in which there are no permanent sinks (i.e., infinite residence time). Factoring in residence time makes the calculations far more intricate.

        This is correct. The calculation is, by definition, a simplification.

        But it tells us that reducing human CO2 emissions by 30% (in 2020) would reduce the IPCC “model-guess-timated” level of 580 ppmv by around 52 ppmv to 527 ppmv, which in turn would theoretically reduce global warming by 2100 by 0.2C to 0.4C, depending on the 2xCO2 climate sensitivity assumed.

        This was in response to a question by Vaughan Pratt asking for an order-of-magnitude impact of cutting back CO2 emissions by 10% or 30% (10% would obviously have less impact).

        “Residence time” is “factored in” in the first method of calculation in the sense that the amount of the human emission “remaining” in the atmosphere is estimated to be around 50%, as has been observed (with the IPCC assumption that only human emissions are changing our planet’s natural carbon cycle equilibrium, itself a very doubtful assumption).

        In the second method I just used the IPCC model-based exponential rate of increase of atmospheric CO2 (scenario B1), which is basically a continuation of the CAGR of ~0.44%/year observed over the past 5 or 50 years, IOW “business as usual”. The two methods give the same answer.

        So the answer basically shows an upper-limit impact on our climate of cutting back CO2 emissions, and confirms that we cannot, in actual fact, change our planet’s climate perceptibly, no matter how much money we throw at it – which is what Vaughan apparently wanted to know.

        Max

        PS I would agree with you that there is no way “to convincingly resolve the question of whether [CO2] residence time is short or long”, and IPCC seems to concur in its statement that the residence time is between 5 and 200 years (even though the IPCC model projections are apparently all based on a residence time of 400 years).

      • “even though the IPCC model projections are apparently all based on a residence time of 400 years”

        Good point. Once again, a reminder of the paucity of the argument that AGW is “science” dictated by “well known physics”. Well known physics give one a model. Numerical predictions from that model rely on parameterizations of it. And, the parameters are not “well known”.

      • “Paucity” is not the word I was looking for there. I was reaching for “flimsiness”.

    • Norm Kalmanovitch

      The only challenge to your calculation is the 5.35ln(2)=3.71 Watts/m^2 used as the forcing parameter in the IPCC climate models for a doubling of CO2.
      This parameter is based on an observed warming of 0.6°C resulting from an observed increase in CO2 of 100 ppmv over the past century.
      The long-term global temperature record, as presented by the IPCC in its first report in 1990, shows that most of this temperature increase is due to the natural recovery from the Little Ice Age of approximately half a degree C per century.
      Apparently the IPCC forgot to subtract this 0.5°C of natural, non-CO2-induced warming from the measured 0.6°C warming, resulting in a forcing parameter that is essentially six times too forceful, since the 100 ppmv increase in CO2 can justifiably be the cause of only 0.1°C of the observed 0.6°C warming.
      This would reduce your calculated values to just one sixth of what you present, and while this is much closer to reality, the inconvenient truth is that most of this 100 ppmv increase in CO2 has been naturally sourced, so these calculations have nothing to do with CO2 emissions from fossil fuels.
      In spite of this challenge I commend you for your common-sense calculations that show the fallacy of the whole climate change issue.

      • Norm

        The only challenge to your calculation is the 5.35ln(2)=3.71 Watts/m^2 used as the forcing parameter in the IPCC climate models for a doubling of CO2.

        Agree.

        I have used the (questionable) IPCC assumptions a) that only 7% of the warming was caused by natural factors (solar) and b) that the remainder was caused by CO2 (with all other anthropogenic factors cancelling one another out).

        If one corrects for this, the impact of reducing CO2 emissions by 30% becomes even more insignificant than I stated in my post to Vaughan Pratt.

        Max

      • Norm Kalmanovitch

        Max

        Your argument is more than good enough, so I withdraw my challenge. There is no use debating how insignificant the effect of increasing CO2 concentration is when the concentration keeps rising at 2 ppmv/year, yet the Earth stopped warming after 1998 and has been cooling since 2002.

        Norm

    • The scenario you’ve used for business as usual, B1, is the “low growth” scenario that reaches 580ppm by 2100. In contrast the moderate growth A1B scenario reaches 700ppm by 2100, and A1T is over 900ppm.

    10.2 * (2100 – 2020) = 816 GtCO2
      Half of this “stays in atmosphere” = 408 GtCO2
      (see many previous posts regarding this observation)

      Max, I was fine with your arithmetic up to this point. Let me explain why this step gives me pause.

      Suppose your money manager has recommended that you adopt a life style such that your expenses are half your income, allowing you to accumulate the other half in your bank balance.

      After several years of figuring out how to do this, you finally succeed.

      Now suppose you lose your job.

      If your expenses continue to be half your income you will starve to death in a week due to spending zero on food.

      Do you see my point? Just because something has been a factor of a half during a relatively steady state does not mean that it will remain a half if you suddenly change things.

  45. Vaughan Pratt

    BTW that calculation comes out the same if one uses the more complicated formula based on the exponential rate of CO2 concentration, as assumed by IPCC:

    BaU = IPCC Scenario B1
    C2 = 580 ppmv (2100)
    C1 = 390 ppmv (2011)

    Exponential (compounded annual) rate of increase
    CAGR (2011-2100) = 0.447%

    Cintermediate = CO2 concentration in 2020:
    = 390 * (1.0447)^(2020-2011) = 406 ppmv

    Increase after 2020 with no cutback = 580 – 406 = 174 ppmv

    Increase after 2020 reduced by 30%: = 0.3 * 174 = 52 ppmv reduction

    C2 (CO2 concentration by 2100 with 30% reduction after 2020):
    = 580 – 52 = 528 ppmv (same as with simpler calculation)

    Max

  46. Vaughan

    Typo correction: That should read

    = 390 * (1.00447)^(2020-2011) = 406 ppmv
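With the corrected growth factor, the whole compound-growth chain can be verified numerically. The sketch below simply re-derives the CAGR from the 390 and 580 ppmv endpoints quoted above, rather than taking the 0.447% figure on faith:

```python
# Numerical check of the compound-growth arithmetic in the comment above,
# using the corrected factor (1.00447, i.e. 0.447%/year).
c_2011, c_2100 = 390.0, 580.0

# CAGR implied by growing from 390 to 580 ppmv over the 89 years 2011-2100
cagr = (c_2100 / c_2011) ** (1 / 89) - 1
print(round(cagr * 100, 3))  # ~0.447 %/year

# Intermediate concentration in 2020, nine years of compound growth
c_2020 = c_2011 * (1 + cagr) ** 9
print(round(c_2020))         # ~406 ppmv

# 30% cut applied to the post-2020 increase
reduction = 0.3 * (c_2100 - c_2020)
print(round(c_2100 - reduction))  # ~528 ppmv, matching the simpler calculation
```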

  47. The so called “fat tail” is a product of unproven models.

    To me one set of measurements is worth 1,000 models.

    Endangerment Finding Proposal
    Lastly: numerous measurements of atmospheric CO2 residence lifetime, using many different methods, show that the atmospheric CO2 lifetime is near 5-6 years, not the 100-year life stated by the Administrator (FN 18, P 18895), which would be required for anthropogenic CO2 to accumulate in the earth’s atmosphere under the IPCC and CCSP models. Hence, the Administrator is scientifically incorrect in relying upon IPCC and CCSP — the measured lifetimes of atmospheric CO2 prove that the rise in atmospheric CO2 cannot be the unambiguous result of human emissions.

    http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html

    • CO2 is largely inert unless it happens across a short pathway to a sequestering site. So if you have a fast rate and a slow rate operating, the mathematical averaging of the two will give a fat tail.

      Oxygen molecules have a long residence time (several thousand years) because, even though there are fast pathways to sequestration, they are obviously crowded out by the large concentration of oxygen already in the atmosphere. So O2 is more thin-tailed than fat-tailed.

      Gases like N2, and very inert ones like helium or argon, have long residence times but are not fat-tailed either.
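The fast/slow averaging point can be illustrated with a toy mixture of two first-order decays. The rates and weights below are invented purely for illustration, not fitted to any carbon-cycle data:

```python
import math

# Hypothetical removal rates (1/year) for a fast and a slow pathway,
# and the fraction of an emitted pulse that sees each one.
k_fast, k_slow = 0.2, 0.005
w_fast, w_slow = 0.5, 0.5

def mixture(t):
    """Fraction of an initial pulse remaining after t years (two pathways)."""
    return w_fast * math.exp(-k_fast * t) + w_slow * math.exp(-k_slow * t)

# A single exponential matched to the mixture's initial decay rate
k_avg = w_fast * k_fast + w_slow * k_slow

for t in (10, 100, 500):
    print(t, round(mixture(t), 3), round(math.exp(-k_avg * t), 3))
# Although both curves start out decaying at the same rate, the mixture
# retains far more of the pulse at long times: that surplus is the fat tail.
```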

      I just don’t understand this need to believe in an unproven model of CO2 having a residence time of a few years.

      BTW, if you want to associate that link, which is my work, with anything that IPCC gets funded to do, you would be sadly mistaken. I work my analysis after-hours with funding supported by the sweat on my brow. I don’t have an Administrator, whatever that person is supposed to do.

    • netdr,

      “To me one set of measurements is worth 1,000 models.”
      Very true. Why are people here wasting time arguing? Typical laboratory experiments should be able to determine the rough residence times for varying CO2 concentrations.

    • To me one set of measurements is worth 1,000 models.

      To me one model is worth 1,000 models. This is as true of climate models as of runway models. You’d be spending all your time breaking up fights between them.

  48. O.K. Here are a couple of my fits. What I did was fit the online New Zealand 14CO2 pre/post-bomb trace to a first-order decay. This gives a rate constant between 0.056 and 0.022 year-1.
    Then I fitted a simple model to the Keeling [CO2] record, converted into GtC.
    The world began in 1964, when the atmospheric CO2 was 680 GtC, and so in 1965, with human emissions of 3.13 GtC, it went up. However, there was an efflux of CO2 into the aquatic phase and into the biota, with a rate constant of (I) 0.056 or (II) 0.022 year-1.
    The fits state that the exchangeable reservoirs are either 43 (I) or 47 (II) times bigger than the atmosphere, with return rates of either 0.00134 (I) or 0.0005 (II) year-1.
    I modeled 680 GtC in 1964, then added human emissions; I then removed either 2.5 to 5% per year and added the next year’s human emissions. I did this for all the years. The carbon I removed from the atmosphere I added to a second reservoir, and fitted the best size/rate for this second reservoir (the sum of all the reservoirs: chemical, aquatic, biotic, etc.).
    So, what to take from the figures? Firstly, the decay of 14CO2 is first-order and has a half-life of between 13 and 26 years.
    Something like 2.5 to 5% of the total atmospheric mass of carbon leaves the atmosphere each year (as traced by 14CO2). It partitions into a second reservoir which is >40 and <50 times the size. You can try to model for an infinite sink, but it is horrid; somewhere between 40 and 50 it is.
    Here are the pictures.

    http://i179.photobucket.com/albums/w318/DocMartyn/NewCO2streadystatewith14CO2.jpg

    http://i179.photobucket.com/albums/w318/DocMartyn/NewCO2streadystatewith14CO2II.jpg
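For readers who want to reproduce the bookkeeping, here is a minimal sketch of the two-reservoir accounting described above, using the commenter's fit (I) parameters (0.056 year-1 out, 0.00134 year-1 back, reservoir 43x the atmosphere). The post-1965 emissions growth rate is an assumed placeholder, not the actual emissions record:

```python
# Two-box sketch: atmosphere exchanging with one lumped reservoir.
k_out, k_back = 0.056, 0.00134  # exchange rate constants, 1/year (fit I)
atm = 680.0                     # GtC in the atmosphere, 1964
res = 43 * 680.0                # GtC in the lumped second reservoir (fit I)
emission = 3.13                 # GtC/year human emissions, 1965 figure

for year in range(1965, 2011):
    # Net flux from atmosphere into the reservoir this year
    flux = k_out * atm - k_back * res
    atm += emission - flux
    res += flux
    emission *= 1.02  # assumed ~2%/year emissions growth (illustrative only)

print(round(atm))  # atmospheric carbon in 2010 under these assumptions, GtC
```

Carbon is conserved by construction (each year the atmosphere-plus-reservoir total grows only by the emission), so the model's airborne fraction falls out of the two rate constants rather than being imposed.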

  49. The residence time of CO2?

    Bwahahaha.

    May as well ask the length of the coasts of countries in America’s sphere of political influence.

    A complex function of variables we ill understand and may not be able to identify with ambiguous parameters and armwaved definitions meant to be used in calculations with almost equally nebulous foundations in reasoning, plunked down in the middle of an overheated, politicized, scandal-prone, diatribe-plagued mayhem.

    It practically demands its own name. I suggest “Salbyist Fallacy”. Putting such nonsense in a textbook, of all things. Tch.

    • Bwahahaha.

      May as well ask the length of the coasts of countries in America’s sphere of political influence.

      I assume you made this analogy because you think it is an irrelevant question to ask.

      Well, disorder exists in the environment and it makes a lot of sense to understand it as much as we can. Take a look at the recent Gulf Oil spill. I suppose people let out a “Bwahahaha” when