by Judith Curry
There was some discussion of this topic in the context of Murry Salby’s talk, but it has been suggested that it deserves its own thread.
This post is motivated by the following email from Hal Doiron:
Hello Dr. Curry,
.
In my review of climate change literature related to atmospheric CO2 sources and sinks, I have run into a wide range of opinions and peer-reviewed research conclusions regarding the following specific question, which I think is central to the CAGW debate.
.
How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes?
.
Sources of confusion in answering this question:
.
1. In response to one of my comments at Climate, Etc., Fred Moolten has claimed the answer to this question is about 100 years. http://judithcurry.com/2011/08/18/should-we-assess-climate-model-predictions-in-light-of-severe-tests/#comment-101642 “To focus on the most relevant element in this situation, CO2, the salient feature is the exceedingly long lifetime of any atmospheric excess we generate from anthropogenic emissions – there is no single decay curve, but the trajectory of decline toward equilibrium concentrations can be expressed as a rough average in the range of about 100 years, with a long tail lasting hundreds of millennia. In other words, the CO2 we emit tomorrow, or refrain from emitting, is not something we can take back if we later decide we shouldn’t have put it up there. It will warm us for centuries.”
.
2. From an ESRL NOAA website http://www.esrl.noaa.gov/gmd/education/faq_cat-1.html#17 , I found:
· What will happen to Earth’s climate if emissions of these greenhouse gases continue to rise?
Because human emissions of CO2 and other greenhouse gases continue to climb, and because they remain in the atmosphere for decades to centuries (depending on the gas), we’re committing ourselves to a warmer climate in the future. The IPCC projects an average global temperature increase of 2-6°F by 2100, and greater warming thereafter. Temperatures in some parts of the globe (e.g., the polar regions) are expected to rise even faster. Even the low end of the IPCC’s projected range represents a rate of climate change unprecedented in the past 10,000 years.
.
3. Listening to Dr. Murry Salby’s audio lecture at Climate, Etc., I believe his research led him to the conclusion that the atmospheric residence time of CO2 from fossil fuel burning emissions is only a few years. This was related to an investigation of trends in the ratio of carbon-12 to carbon-13 isotopes in the atmosphere.
.
4. There is previously published literature, also based on the carbon-12 to carbon-13 isotope ratio that Dr. Salby discussed, which concludes that the atmospheric residence time of CO2 from fossil fuel burning is about 5 years. This literature is reviewed and cited by my former NASA colleague, Apollo 17 astronaut, and former US Senator, Dr. Harrison “Jack” Schmitt, in his essay on CO2 at: http://americasuncommonsense.com/blog/category/science-engineering/climate-change/4-carbon-dioxide/#r4_14
.
As I monitor the debate on CAGW, it seems to me that if this particular recommended thread topic could be settled with high confidence, then much of the CAGW alarm could be moderated and refocused on a broader range of climate change issues. I suggest it should also be a key research topic for further investigation in an attempt to answer the posed question with high confidence.
Sincerely,
Hal Doiron
.
JC comment: I don’t have a good answer to the question Hal raises. Below are some online references that I’ve spotted, from across the spectrum.
- Residence Time of Atmospheric CO2, H. Lam of Princeton
- Wikipedia
- SkepticalScience
- CO2 Science
- Appinsys
- Robert Essenhigh
And finally an exchange between Freeman Dyson and Robert May in the NY Review of Books:
.
I don’t have time to dig into this issue right now, so I’m throwing the topic open for discussion, hoping for some enlightenment (or at least confusion) from the Denizens.
Judith,
The Freeman Dyson/Robert May links are not working.
ok let me check
Only a few years. For the most part it’s even shorter – most of it is removed locally and immediately. Atmospheric CO2 is driven/determined by global climatic factors, whatever that means (SST, sea ice extent…). Think H2O.
Edim – I don’t see the relevance of that article to CO2 residence time. I read the full article, not just the abstract, and found it interesting in its analysis of ion-mediated nucleation rates involving sulfuric acid. It did not address the question of how many nuclei could be induced to grow to a size of climatic significance for low cloud formation, although earlier data have shown this to be probably relatively small, and the results reported here are consistent with that possibility.
Some news:
http://www.nature.com/news/2011/110824/full/news.2011.504.html
http://www.nature.com/nature/journal/v476/n7361/full/nature10343.html
More news (sea level drop):
http://www.jpl.nasa.gov/news/news.cfm?release=2011-262
Not OT. It’s all connected.
My comment above was intended as a response to the “news” item you linked to.
Time to read up on cosmic rays and climate – see the latest edition of Nature. The game is up.
Not latest edition anymore.
http://www.nature.com/news/2011/110824/full/news.2011.504.html
…and there are polarized views on cosmic rays and climate, just like there are polarized views on much to do with climate. At least CERN is doing an experiment in the lab.
Hal Doiron has written:
“As I monitor the debate on CAGW, it seems to me that if this particular recommended thread topic could be settled with high confidence, then much of the CAGW alarm could be moderated and refocused on a broader range of climate change issues.”
I find the abbreviation CAGW to be somewhat objectionable in its misrepresentation of mainstream views, but that is a minor quibble and peripheral to the main topic here. Rather, I would like to ask Hal Doiron a question, because I’m not sure how to interpret his statement.
Hal – At what duration for the residence time of excess CO2 would you perceive that interval to be long enough to be of concern? How many years specifically for an interval representing an average residence time for the excess? For clarity, I’m referring to the time necessary for an excess over an equilibrium concentration to return to baseline, where this can be approximated as a “half-life” although that would not be accurate in the formal sense because there is no single exponential decay curve.
If it takes X years for the excess concentration to decline halfway, what value of X would be worrisome for you?
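Fred’s point that there is no single decay curve is often illustrated with a sum-of-exponentials impulse response. As a rough sketch (the coefficients below are the Bern2.5CC fit as reported in IPCC AR4, quoted here purely for illustration, not as a definitive model):

```python
import math

# Sum-of-exponentials impulse response for an emitted CO2 pulse.
# Coefficients: Bern2.5CC fit as reported in IPCC AR4, for illustration only;
# the constant a0 term is the fraction that persists on these timescales.
TERMS = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def fraction_remaining(t_years):
    """Fraction of a CO2 pulse still airborne after t_years."""
    total = 0.0
    for a, tau in TERMS:
        total += a if tau is None else a * math.exp(-t_years / tau)
    return total

print(round(fraction_remaining(100.0), 2))  # about a third remains after a century
```

On this kind of fit there is no single “X”: the fast terms remove much of the pulse within decades, while the slow terms and the constant leave a long tail, which is why Fred frames the 100-year figure as only a rough average.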
Fred,
I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet. I don’t know what a harmful level of CO2 in the atmosphere would be. I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about health related effects (more than 20 times current levels). I don’t know what the optimum level of CO2 in the atmosphere would be, all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.
If the human-activity-related CO2 atmospheric residence time is closer to 5 years, as some scientists claim, and not the 100 or so years that you and NOAA apparently believe, then the growth rate of CO2 in the atmosphere from human-related causes isn’t so high (compared to natural CO2 sources and sinks that I don’t know how to control) that I should need to take immediate, potentially harmful action with unknown and unintended consequences to restrict human-related CO2 emissions (your medical triage example suggested in a previous thread on decision making with limited data), as many climate scientists have called for. That is, if we can answer the question of the current thread closer to the 5-year residence time mark, then we can all agree we are not in a triage situation regarding control of CO2 emissions and that we have more time to work the problem of climate change, with perhaps different decisions and action plans. After suggesting some “first steps” to take in response to your suggestion for defining such “first steps” in a previous thread http://judithcurry.com/2011/08/18/should-we-assess-climate-model-predictions-in-light-of-severe-tests/#comment-101642 , it occurred to me that if we could answer the question of this present thread, it would change the crisis atmosphere that many climate scientists believe they are working in – the atmosphere that makes them worry so much about inaction in the face of the dire but uncertain predictions of their unvalidated models.
“If the human activity related CO2 atmospheric residence time is closer to 5 years as some scientists claim, and not the 100 or so years that you and NOAA apparently believe”
Again this is the elementary misunderstanding that Judith seems to make no effort to dispel. Both figures are correct. Individual molecules are exchanged on a timescale of five years or so. And the increase in total CO2 takes a century or more to go away.
It’s a bit like worrying about the Government printing money (OK, in pre-electronic days). In fact a huge number of notes are printed and destroyed each year. Any excess printed is small compared to circulation.
But circulation, like exchange of molecules, is just that. What counts is change in the aggregate.
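A toy model may make the distinction concrete. All the numbers below are invented for illustration: individually “tagged” molecules are swapped out on a short exchange timescale, while the total excess decays on a much longer (assumed) adjustment timescale:

```python
# Toy two-timescale model (all numbers invented for illustration):
# individual molecules exchange quickly; the excess mass decays slowly.
tau_exchange = 5.0   # yr: gross exchange ("residence") time of a molecule
tau_adjust = 100.0   # yr: assumed decay time of the excess toward equilibrium
dt = 0.01            # yr: Euler time step

excess = 100.0       # GtC: excess mass from a pulse of emissions
tagged = 100.0       # GtC: the original, individually "tagged" molecules

for _ in range(int(20 / dt)):             # integrate 20 years
    tagged -= tagged / tau_exchange * dt  # tagged molecules swapped for others
    excess -= excess / tau_adjust * dt    # net drawdown of the total excess

# After 20 years nearly all the original molecules are gone (~2 GtC left),
# yet most of the excess is still airborne (~82 GtC) - both figures can be true.
```

The point of the sketch is that the 5-year and 100-year numbers answer different questions, so they are not in conflict.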
I share your surprise Nick as this is a fairly common and relatively easy-to-dispel source of confusion. Aren’t you a climate scientist Judith? If you can’t take the time to answer simple questions like this, then what is the point of your blog?
Like I said in my main post, I suspect that this whole issue is much more complex than you are making it out to be. Your question, while simple, does not have a simple answer, IMO.
While the particulars of estimating equilibrium response times are governed by multiple processes (as noted by Fred and Chris Colose downthread), it’s nevertheless trivial to point out that the atmospheric lifetime of an individual molecule isn’t the relevant issue…something which you bizarrely failed to do. Is your goal with this blog to sow confusion or dispel it?
The goal of the blog is to discuss scientifically relevant issues, which we are doing.
There’s again a mixture of simple things and less simple things.
That we have two different residence times belongs to the simple things. Estimates sufficient to conclude that the increase in atmospheric CO2 over the last 50 years (and also over the last 100 years) is predominantly of human origin also belong to the rather simple things.
More precise estimates of the uptake of carbon from the atmosphere into the other reservoirs are no longer simple, because none of the subprocesses is accurately known. The balance between the surface ocean and the atmosphere is the best understood of all, but even that is strongly influenced by the details of pH buffering in the oceans.
One question on which I have not found data at the level I have been looking for is the role of the deep oceans as a reservoir. The total amount of carbon in the deep oceans is typically given as on the order of 40,000 GtC, which is 50 times the amount in the atmosphere. Its role in the uptake over long periods has not been discussed in the papers I have found. Archer skips the discussion in his papers, stating only the overall conclusion that 20-35% of CO2 remains until removed by sedimentation and weathering, and presents the 1990 model analysis of Maier-Reimer as a reference, which is not convincing. A Revelle factor of 10 would lead to a value of 17% once balance with the ocean is reached, but is the Revelle factor for the deep ocean 10? The value depends on the present pH and on the nature of the buffering. Reaching full balance with the deep oceans also takes a lot of time, but Archer seems to include that among the faster processes.
On the other hand, I don’t see the importance of the level of the very long tail either. I would rather think that what happens after the excess CO2 concentration has dropped to one half of its peak value is not likely to be a problem for the further future of the Earth and the people of those periods. They may very well regard the decreasing trend as a new problem then, and the lower it is, the lesser the problem.
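As a sketch of where Pekka’s 17% comes from: if, at equilibrium, the atmospheric fractional change is R times the oceanic DIC fractional change, then the airborne fraction of an emission is 1/(1 + O/(R·A)). The reservoir sizes below are just the round numbers from the comment above:

```python
# Equilibrium airborne fraction given a Revelle buffer factor R.
# Round-number reservoir sizes from the comment above; illustrative only.
ocean_carbon = 40_000.0  # GtC in the deep ocean (~50x the atmosphere)
atm_carbon = 800.0       # GtC in the atmosphere
R = 10.0                 # Revelle factor: d(pCO2)/pCO2 = R * d(DIC)/DIC

# An emission dE splits into dA (air) and dO (ocean) with dA/A = R * dO/O,
# so the fraction staying airborne is 1/(1 + O/(R*A)).
airborne_fraction = 1.0 / (1.0 + ocean_carbon / (R * atm_carbon))
print(round(airborne_fraction, 3))  # 0.167, i.e. about 17% stays airborne
```

With R = 1 instead (i.e., no buffering penalty), the same arithmetic gives an airborne fraction of about 2%, which shows how sensitive the long-term answer is to the buffer factor.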
Search under the term “biological pump” and “carbon cycle”
It is really interesting stuff.
It is indeed. The partition between the pumps (solubility and biological) is asymmetrically distributed: around 1/3 for the former and 2/3 for the latter.
The effects in perturbation experiments are interesting: if we shut the biological pump off completely, we observe an increase from around 280 ppm preindustrial to 450 ppm over similar timescales, e.g., Sarmiento et al. 2011.
A study of the decay of bomb-created radioisotopes
http://nzic.org.nz/CiNZ/articles/Currie_70_1.pdf
suggests a primary decay half-life of 13 years.
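Two caveats worth noting here: the bomb-14C curve tracks isotope exchange (Dyson’s “residence without replacement” further down this thread), not the drawdown of the total CO2 excess; and a 13-year half-life corresponds to a somewhat longer e-folding time. A quick conversion, assuming simple exponential decay:

```python
import math

half_life = 13.0                      # yr, from the bomb-14C study linked above
tau = half_life / math.log(2)         # e-folding time, about 18.8 yr
remaining_after_26 = 0.5 ** (26.0 / half_life)  # two half-lives -> 25% left
print(round(tau, 1), remaining_after_26)
```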
A. E. Ames, It was only about four months before the background radiation levels moved to a new, sustained level (see: gamma energy range 10)
http://epa.gov/radnet/radnet-data/radnet-billings-bg.html
And does “whose” or “what’s” make any difference when considering “half-life”?
Tom, I don’t know for sure, but I expect that the “non replacement removal time” for all CO2 isotopes should be the same within the accuracy of any experiment to measure it. (Molecular weight can matter in packing effects and substrate interactions.) Thanks for the interesting radnet site.
Interesting answer as Marlowe’s questions were
1. Aren’t you a climate scientist Judith?
Obviously a more complex question than Marlowe thought and
2. If you can’t take the time to answer simple questions like this, then what is the point of your blog?
We HAVE been wondering about that.
Eli, you and Marlowe seem to be confusing a climate scientist with someone who parrots the IPCC consensus.
Eli was just pointing out that you are avoiding the questions. This appears to be a sensitive point.
I get 500 comments here per day. I try to do a post per day. Not to mention my two day jobs. I only answer comments/questions that I can respond to within 60 seconds, and I have to be selective in which ones I answer. If someone raises something really interesting, I might do an entire post on it. People that are trying to play “gotcha” with me and want to tell me how I should be doing my job (here or more generally) typically get ignored by me.
Judith,
The point at issue has nothing to do with any IPCC consensus. It is a matter of elementary physical chemistry. Freeman Dyson, hardly an IPCC parroter, set it out simply and explicitly in your second link:
“He says that the residence time of a molecule of carbon dioxide in the atmosphere is about a century, and I say it is about twelve years.
This discrepancy is easy to resolve. We are talking about different meanings of residence time. I am talking about residence without replacement. My residence time is the time that an average carbon dioxide molecule stays in the atmosphere before being absorbed by a plant. He is talking about residence with replacement. His residence time is the average time that a carbon dioxide molecule and its replacements stay in the atmosphere when, as usually happens, a molecule that is absorbed is replaced by another molecule emitted from another plant. “
If FD finds it easy to resolve (and it is) why is it so hard here?
Nick, atmospheric residence time is NOT a simple issue of physical chemistry! I didn’t see any physical chemistry at all in your argument.
Nick, if you agree with Freeman Dyson’s skepticism and it’s already resolved by him, then why do you keep asking?
Judith,
The elementary physical-chemistry concept involved is dynamic equilibrium: the net result of a forward and a back reaction. The time period quoted from the isotope results is for the one-way process of CO2 molecules being absorbed by plants or whatever. As Dyson points out, there is a back reaction – that CO2 gets back into the atmosphere. If you want to know how total CO2 will diminish, you have to consider the result of both forward and back. That, as Dyson spells out, is the simple basis for the two different figures.
Nick, you are forgetting about the dynamics of the system, that is where the complexity lies
couldn’t the same be said about the greenhouse effect re complexity of the system
Nick, you are thinking of it as a black box system. Some want to consider what is happening inside it, some may know it as a clear box. Why is this a hard concept for you?
Kermit,
I am not talking about any sorts of boxes. The issue is clear. Two different figures have been quoted for CO2 residence time. Hal and others ask whether the IPCC is wrong. The answer, as Dyson explained, is simple. It has nothing to do with dynamics, boxes or anything else. They are talking about different definitions. If someone wants to make an issue of it, they need to explain how the figures are comparable.
Which “dynamics” would those be, exactly?
If you’re talking about the carbon cycle, those would be dynamics about which, in your own words, you do not “have any expertise!”
Judith Curry: “The Earth’s carbon cycle is not a topic on which I have any expertise.”
Or has Murry taught you all about the carbon cycle, within the month of August?
Yes Nick, we understand there are two definitions being used for residence time. One is large and one is small; both are valid. Which one sounds better to the IPCC? The larger, scarier one, or the smaller one? This is exactly why you are going on and on about it. It’s just another shell-game “trick” environmentalists use.
1. You expect people to answer rhetorical questions?
2. If this blog has no point, then why are there so many commenters?
Not that your opinions about the carbon cycle count for anything.
Judith Curry: “Your question, while simple, does not have a simple answer, IMO.”
With all due respect, you have already pleaded ignorance on this question and should “remain silent and be thought a fool” rather than assert things you just don’t know, “and remove all doubt.” Really. Because by your own admission, you’re not qualified to make that judgment, as your opinion is not expert.
Judith Curry: “The Earth’s carbon cycle is not a topic on which I have any expertise.”
(emphasis added due to your repeated refusal, despite numerous requests, to offer any scientific justification of your assertion that “it is sufficiently important that we should start talking about these issues.“)
All that you can truthfully say now about the carbon cycle is that you do not understand it. Or, you can revise your previous statement that you do not “have any expertise.” That of course is up to you. But what you cannot do, at least not consistently (in case you care about that), is claim 100% ignorance while you’re promoting Salby, but then within the time of a month (hardly enough time to have become expert if you weren’t already!), declare with any authority that anybody else’s analysis of same is wrong.
You didn’t have the expertise to articulate any reason then that Salby’s analysis is right or even likely right, therefore you also don’t have the expertise now to say whether anybody else’s is wrong.
Not having “any expertise” on this question, it’s just the analysis you can provide right here, right now, versus theirs. To prove Eli, Marlowe and Nick wrong, you must show that at least one term they neglect, or the sum of the terms they neglect in their analysis, has magnitude equal to (at a minimum) or greater than the terms they include. This is not a matter of opinion. As a scientist (former?), you know that “IMO” doesn’t cut it. Your opinion counts for nothing. Either you can show that other terms invalidate their analysis, or you cannot, and in that case you have no basis to fault it.
All humanity is divided into two classes; those who don’t understand the carbon cycle and those who don’t understand that they don’t understand the carbon cycle.
=============
No. It isn’t complex. Let me list a quick series of proofs:
1) The ice core record clearly shows that the recent increase in CO2 concentrations is, to use an overused word, unprecedented in the Holocene period, and, indeed, in the last 800,000 years, and non-ice core approaches show that the current CO2 may exceed levels seen for the last 20 million years (Tripati, Science, 2009).
But, maybe you choose to throw out paleoclimate records because… well, because they don’t fit your preconceived notions that human emissions are too small to be meaningful. So…
2. The Revelle factor: straightforward bicarbonate buffer chemistry known since 1957-1958 (Revelle & Suess 1957, Bolin and Eriksson 1958) shows that “a 10% increase in the CO2-content of the atmosphere need merely be balanced by an increase of about 1% of the total CO2 content in sea water” (Sabine et al., Science, 2004 estimate that this factor ranges from 8 to 16). This is the key element that means that the ocean CANNOT absorb all the CO2 proportionally the way that Henry’s law might suggest, and that therefore a decent percentage of any CO2 emission will stay in the atmosphere for thousands of years, until sedimentation processes have sufficient time to react (see Archer et al., Annu. Rev. Earth Planet. Sci. 2009, for a review of millennial processes). This is a key concept that many short-residence-timers have never figured out – they just look at the total gigatons of carbon in the ocean and figure that it can easily soak up any increase in the atmosphere, but they’re wrong.
3) Carbon cycle models have done a pretty good job of explaining the difference between “residence time” and “adjustment time”, dating back at least as far as Rodhe and Björkström in 1979 (turns out that this decoupling is a result of this bicarbonate buffering). This (among other things) is what trips up people like Essenhigh and Segalstad.
4) Carbon cycle models also do a decent job of explaining trends in isotope levels in the atmosphere. Eg, Stuiver et al. Earth and Planetary Science Letters 1981, or Stuiver et al. GRL, 1998. See also, “Suess effect”. People like Salby and Spencer do cute little regressions, but they don’t have real physical models that actually show how and why concentrations and isotope levels have changed the way they have: nor do they test their regressions against existing carbon cycle models – if they did, they’d see that the existing models produce the right signatures. Essenhigh has a model, but first he handwaves the increase in concentrations based on a simplistic understanding of the CO2 solubility-temperature relationship (Revelle and Suess realized that the magnitude of this relationship was insufficient to explain observed CO2 changes back in 1957), and Essenhigh derives lifetimes for 12C and 14C that are different by a factor of more than THREE!!! I still can’t believe that any competent reviewer with a basic knowledge of chemistry could have let that pass – isotope separation is HARD (see Uranium separation): kinetic isotope separation for seawater is on the order of a couple tenths of a percent, and for plant matter is maybe a couple percent at best, so where does a factor of 300% come from?! (also, the carbon cycle field realized that the ocean wasn’t perfectly stirred at least 35 years ago: see Oeschger et al, Tellus, 1975 for an early example of switching to a more realistic diffusive model).
5) These things have been debunked before. I recommend O’Neill B, Measuring Time in the Greenhouse, Climatic Change, 1997.
Does this mean that we understand the carbon cycle, or that our models are perfect? Hardly. I recommend Dolman AJ; van der Werf GR; Ganssen G; Erisman JW; Strengers B, “A Carbon Cycle Science Update Since IPCC AR-4,” Ambio: A Journal of the Human Environment 39(5-6):402-412, 2010, as a review of the actually interesting questions in carbon cycle science these days.
So, to review: your suspicions are totally off-base. If this were a biology blog, this kind of post would be the equivalent of wondering whether the “missing link” disproves evolution. If it were an astronomy blog, it would be the equivalent of giving press space to the guys who claim that the double shadow from a flag proves the Moon landing was faked. By sticking to the position that this is a scientifically relevant topic of discussion, you cast your status as a “meta-expert” in doubt, since you are apparently not “able to distinguish a genuine expert from a pretender or a charlatan.” I grant that you have at least figured out the Sky Dragons are charlatans – but, to go back to my astronomy-blog analogy, that’s like figuring out that the guys who claim the Moon is made of green cheese are charlatans. Given your impressive publication record, you should be able to do a lot better than this. And maybe this should make you wonder whether you are a little too hasty to give credence to the “not-IPCC” crowd. And yes, the IPCC is certainly not perfect, but that doesn’t mean that if the IPCC says a clear mid-day sky looks blue, you should go around believing people who claim it is actually more like purple.
-M
(and yes, I do get frustrated at having to debunk stupid myths over and over and over again. There are plenty of real, interesting uncertainties regarding an issue as complex as climate change, and especially with regards to appropriate mitigation or adaptation measures, so it is really frustrating that the debate keeps getting bogged down in questions that were solved 50 years ago)
M,
The Appeal to Evolution is always a curiosity, but it’s irrelevant to questions of climate, even the 1,000,000th time it’s tried.
Andrew
It may be irrelevant to questions of climate, but it is very relevant to questions of self-delusion. I note that you did not address a single one of my technical arguments. I could also keep going:
6) The northern-hemisphere/southern-hemisphere gradient, and the relationship of that gradient to increasing CO2 emissions (see graph in IPCC AR4 Chapter 7).
7) The mass argument that many here have used before.
8) Surface-ocean acidity is increasing faster than acidity at depth (an indication of diffusion, and of which direction the CO2 is going).
I’d also point out Engelbeen’s webpage.
Anyway, have fun with your delusions,
-M
M,
would you like to point out the empirical data that were used to show the Revelle Factor??
OK, how about the experiments that were done??
Well, what have you got other than assertions??
Please copy from the papers copiously as I am very ignorant.
(and yes, I do get tired of having to debunk the same old tired Junk Science over and over again every time someone doesn’t actually read and COMPREHEND the papers they link.)
Good job, kuhnkat – recognizing your own ignorance is the first step to wisdom!
If you were to read Sabine et al., you’d find out that they used data from the World Ocean Circulation Experiment (WOCE) and the Joint Global Ocean Flux Study (JGOFS) to measure inorganic carbon. The Revelle factor can be calculated as proportional to the ratio between DIC and alkalinity. Therefore, Sabine et al. were able to produce a global map of the Revelle factor, ranging from 16 in cold Antarctic waters to 8 in warm tropical basins.
M,
you just made the mistake of using bald assertions again. I told you, copy a lot. I do not expect people to remember stuff I wouldn’t if I were on the other side. I DO expect them to actually support their assertions with more than more arm waving. You simply state they say you can do it.
SHOW ME!!! Show me why THEIR assertions are meaningful!!
Sigh. I’m not a carbon-cycle expert myself, but I have read enough of the literature to be able to give you somewhat of a tutorial.
First: in pure H2O, CO2 is not very soluble, but it would follow Henry’s law. But seawater isn’t pure, there’s a lot of buffer in there, and that buffer allows seawater to dissolve a _lot_ more carbon than pure H2O would be able to.
Second: The total carbon in seawater is equal to the carbon in the dissolved CO2/H2CO3 plus the HCO3- plus the CO3=.
Third: Adding CO2 to the solution increases the acidity, driving the equilibrium towards CO2/H2CO3.
Fourth: Therefore, the ratio of CO2/H2CO3 to (HCO3- plus CO3=) increases with added CO2.
Fifth: The CO2 in the atmosphere is in Henry’s law equilibrium only with the CO2/H2CO3 in the solution. So, if we were to add a strong acid to the solution, increasing the ratio in part 4, that would decrease the total carbon in the ocean. Of course, when adding CO2 we’re adding both an acid and CO2 at the same time, and so the increased acidity merely means that the increase in total oceanic carbon is _less_ than the 1:1 you’d expect by pure Henry’s law rather than leading to a net loss of carbon.
Okay: so that’s the theory. We can measure DIC (that’s Dissolved Inorganic Carbon, see part 2) experimentally. Bolin and Eriksson found that [CO2] = 0.0133 mmol, [HCO3-] = 1.9 mmol, and [CO3=] = 0.235 mmol. We can also measure alkalinity experimentally, which is important because alkalinity = [HCO3-] + 2[CO3=], and Bolin and Eriksson found alkalinity equal to 2.37 mval. You can combine that with the dissociation constants of H2CO3 and the solubility of calcium carbonate – k1/[H+] = 143 and k1k2/[H+]^2 = 18 – and the calcium concentration of seawater, [Ca++] = 10 mmol; with Henry’s law you can then solve for the Revelle factor and get about 12.5.
And now I’m tired of typing stuff in, but I’ve found a non-paywalled reference for you: http://ocean.mit.edu/~mick/Papers/Omta-Goodwin-Follows-GBC-2010.pdf. I will note one caveat to my above data, which is that Egleston (2010), referenced in Omta (2010), shows that inclusion of borate in the alkalinity equation changes the answer somewhat, especially as the pH of the solution approaches 7.5. If you want more experimental data, Egleston uses alkalinity and DIC from the GLODAP project and temperature and salinity from the World Ocean Atlas (because the dissociation constants are functions of temperature and salinity).
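M’s recipe can be checked numerically. The sketch below solves the two-equilibrium carbonate system for [H+] at fixed alkalinity by bisection, then perturbs DIC to estimate the Revelle factor. The constants K1 and K2 are chosen here to reproduce the ratios quoted above (k1/[H+] ≈ 143 and k1k2/[H+]^2 ≈ 18 near [H+] = 8×10^-9), so this is an illustrative toy, not a full seawater model (no borate, fixed temperature and salinity):

```python
# Minimal carbonate-system sketch: (DIC, alkalinity) -> [H+] -> [CO2(aq)],
# then Revelle factor R = (d[CO2]/[CO2]) / (dDIC/DIC) at constant alkalinity.
# K1, K2 are tuned to the ratios in the comment above; not a full seawater model.
K1, K2 = 1.14e-6, 1.01e-9  # assumed dissociation constants of carbonic acid

def co2_aq(dic, alk):
    """Solve alk = dic*(K1*H + 2*K1*K2)/(H^2 + K1*H + K1*K2) for H by
    bisection, then return [CO2(aq)] in the same units as dic."""
    lo, hi = 1e-12, 1e-4  # bracket for [H+] in mol/L
    for _ in range(200):
        h = 0.5 * (lo + hi)
        denom = h * h + K1 * h + K1 * K2
        if dic * (K1 * h + 2.0 * K1 * K2) / denom > alk:
            lo = h  # solution too alkaline -> need more H+
        else:
            hi = h
    h = 0.5 * (lo + hi)
    return dic * h * h / (h * h + K1 * h + K1 * K2)

dic, alk = 2.148, 2.37  # mmol, the Bolin & Eriksson values quoted above
base = co2_aq(dic, alk)
bumped = co2_aq(dic * 1.001, alk)  # add 0.1% DIC at fixed alkalinity
R = ((bumped - base) / base) / 0.001
print(round(R, 1))  # ~12-13, consistent with the ~12.5 quoted above
```

Note that the ~10x amplification emerges from the equilibrium chemistry alone: adding CO2 shifts the speciation toward CO2/H2CO3, so the gas-phase partial pressure responds far more strongly than total dissolved carbon.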
Sadly, I suspect that you are a troll, and that my work here is therefore meaningless, but perhaps others can learn something…
Thank you, M!
It’s good stuff, what we are calling multi-physics these days.
“The quantity O has turned out to be particularly useful for a number of reasons. First of all, it is much more constant than the Revelle buffer factor that has been extensively applied in theories of the ocean‐atmosphere carbon partitioning.”
So, this paper that finds the Revelle buffer factor to be less useful you think I should accept as proving the Revelle buffer factor??
Ah, yes, the subtle* difference between “this approach is better” and “the old approach is not useful”.
Omta et al: “Unfortunately, R is not constant: it varies between approximately 8 and 15 at the ocean surface [Watson and Liss, 1998]. Furthermore, the globally averaged Revelle buffer factor depends strongly on the total amount of carbon in the ocean‐atmosphere system [e.g., Goodwin et al., 2007]. We now derive an alternative index that is more constant than R.”
Translation for the purposes of my original argument: The Revelle factor is large (in this case, greater than 8). That’s all that’s needed for the use of the Revelle factor in support of the fact that CO2 has a long residence time in the atmosphere. You’ll note that in my original post I cited Sabine et al who also discussed this range of 8 to 16. Omta et al. also point out that the Revelle factor will increase with increasing emissions, up to a factor of 19 at 1000 ppm CO2, which just increases the proportion of CO2 which stays in the atmosphere. Nowhere does Omta say that the Revelle factor is wrong, merely that they prefer their new “O” factor because its behavior is less sensitive to changes in emissions and other factors (and that behavior is monotonic, even past 1000 ppm).
So yes, this paper supports my point quite well. It also demonstrates the difference between real science (“let’s figure out how the Revelle factor changes over space and different scenarios, and whether there might be other factors that could be useful descriptions of the system”) and junk science (“the Revelle factor doesn’t exist, and the historic CO2 concentration increase might be due to magic fairies rather than human CO2 emissions, despite the fact that several completely independent methods all demonstrate otherwise”).
-M
*This is meant to be sarcastic, by the way. I realize you might not do well with subtle.
M,
sorry, your sarcasm doesn’t work very well.
Let’s start with the basics. Where is the definition of the Revelle Factor? That is, what are the components, the constant(s) if any, the equations, relationships?
Next, what are the observations or experiments that these are based on?
Sadly, you are typical of the knowledgeable types who deal with Climate Science regularly and never question the basics. They do not exist in your explanations and do not exist in that paper. Whether they actually exist in the papers referred to by you and the paper you linked I do not know.
Here is a discussion about the same issue by Jeff Glassman and Pekka Pirilla:
http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99490
http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99650
Again, there are no basics establishing that the Revelle Factor was EVER fundamentally established as a part of science. Pekka, like you, points to many issues that very well could have been a part of a Revelle Factor IF IT HAD EVER BEEN ESTABLISHED!!!
It wasn’t. It is one of several myths of Climate Science that modern scientists are working around and filling in. Notice that Pekka states that the Revelle Factor could actually range to 1. So we have a mythological factor that can range from 1 to over 16 that is a kind of buffer effect for CO2 and water. Gee whiz. Color me impressed by all the hard science going on!! Any idea what the temperature curves are like? How about what actually causes the buffering and its curve or linear relationship? Yup, I am just overwhelmed by all the data you have inundated me with about the Revelle Factor.
This is sarcasm in case you didn’t notice.
Let’s try a little reading comprehension here, Kuhnkat:
You say: “Notice that Pekka states that the Revelle Factor could actually range to 1.”
Pekka states: “Thus Henry’s law remains valid for fixed pH. The Revell factor is 1.0 in that case.”
Um. Note the conditional in that sentence (you do understand conditionals, right? I don’t want to overtax your small brain, as you have previously admitted that you are “very ignorant”). If we keep pH fixed, then the Revelle factor is 1.0. But, in the real world, pH is not constant, and adding CO2 will make a solution more acidic. If you want to test that, extract some red cabbage juice (which makes a good pH indicator), and you can show that adding dry ice (or even just waving a juice-soaked towel in the air to absorb CO2) will increase the acidity. If you read my post starting at “First:” you’d see that the chemical theory is clear. Yes, the Revelle factor depends on the buffering, dissolved carbon, and temperature of the solution. That doesn’t make it “mythological”, it just makes it dependent on conditions. Is gravity mythological because it is 9.8 m/s^2 here, but 0 m/s^2 in interstellar space far from any massive bodies? I think not. I actually gave you every single constant and equation you needed (assuming you know what a dissociation constant is, but then, I’m not here to teach you high school chemistry, though you could clearly use a refresher). So the Revelle factor can be experimentally measured, and it has been, with the answer being “between 8 and 16 depending on where in the ocean you look”.
Heck, you could dissolve some sodium bicarb in solution and measure Revelle factors yourself.
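[Editor's sketch of the carbonate chemistry discussed above. The constants are approximate surface-seawater values at about 25 °C (my assumptions, not taken from the thread), carbonate alkalinity is held fixed, and borate is neglected, so this is a rough illustration of how a Revelle factor is computed, not a proper ocean-chemistry calculation:]

```python
import math

# Approximate surface-seawater constants (~25 C) -- illustrative assumptions:
KH = 2.8e-2   # Henry's law constant for CO2, mol/(L*atm)
K1 = 1.4e-6   # first dissociation constant of carbonic acid
K2 = 1.1e-9   # second dissociation constant
ALK = 2.3e-3  # carbonate alkalinity, eq/L (borate neglected)

def carbonate_state(pco2):
    """Solve ALK = [HCO3-] + 2[CO3--] for [H+] at a given pCO2 (atm);
    return (pH, DIC in mol/L)."""
    co2aq = KH * pco2  # dissolved CO2 from Henry's law
    a = K1 * co2aq
    # ALK*h^2 - a*h - 2*K2*a = 0, a quadratic in h = [H+]:
    h = (a + math.sqrt(a * a + 8.0 * ALK * K2 * a)) / (2.0 * ALK)
    hco3 = a / h
    co3 = K2 * hco3 / h
    return -math.log10(h), co2aq + hco3 + co3

def revelle(pco2, eps=0.01):
    """Numerical Revelle factor R = d(ln pCO2)/d(ln DIC) at fixed alkalinity."""
    _, d1 = carbonate_state(pco2)
    _, d2 = carbonate_state(pco2 * (1.0 + eps))
    return math.log(1.0 + eps) / math.log(d2 / d1)

pH, dic = carbonate_state(390e-6)
print(f"pH ~ {pH:.2f}, DIC ~ {dic*1e3:.2f} mmol/L, R ~ {revelle(390e-6):.1f}")
```

With these round-number constants the solver lands near pH 8 and a Revelle factor in the low-to-mid teens, consistent with the 8-to-16 range quoted from Sabine et al. in this thread.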
You, my dear troll, are “very ignorant”. And this is why Curry’s attachment to her “e-salon” is of little value. It is saturated with people like you. Which is why the best blogs use moderation to keep the signal to noise ratio at some reasonable level.
Curry’s E-salon is of high value because people here are very willing to learn from experts who drop by and impart knowledge, and occasionally the experts acknowledge something they pick up here which they were not previously aware of.
Of course, the ‘experts’ running blogs which censor anything which threatens their apparent intellectual superiority won’t gain in this way, because experts who behave like arrogant pricks are generally unlikely to create an ambience in which they can teach well, or indeed learn.
M,
how many other laws do climate scientists deal with that say they are only applicable at a fixed temp or at equilibrium, say Stefan-Boltzmann, yet they are used anyway KNOWING that the temp and pressure and flux are continuously changing?? Sorry, don’t want to tax your brain too much. You obviously have much more important things to do than try and educate this ignorant person.
Now, do you think that every point in the ocean is changing quickly enough 24/7 that there is NEVER a time that the pH actually stays constant for a measurable length of time? Even if it doesn’t, wouldn’t the 1 be a LIMIT of the range??
It was a nice try though, distracting from the issue that there is no support for an actual Revelle Buffer Factor presented here or in the papers linked other than many scientists using or referring to it without definition or derivation. I will accept this as your acquiescence that YOU and Pekka do not have this to hand. Maybe you can research it and provide the information?
M
I am humbled.
Thank you.
M wrote “and yes, I do get frustrated at having to debunk stupid myths over and over and over again.”
I have submitted a comment to Energy and Fuels explaining the error in Prof. Essenhigh’s paper; I did so in the hope of limiting the spread of this particular error, which does neither side of the debate any good. Prof. Essenhigh is right that residence time is about five years, but this is entirely uncontroversial; the IPCC put the figure at about four years. However the rise and fall of atmospheric CO2 is not governed by the residence time, but the adjustment time, and hence the conclusion is incorrect. My paper uses a one-box model, essentially identical to that used by Essenhigh, to explain the difference between residence time and adjustment time (amongst other things) and demonstrates that the observations are completely consistent with the generally accepted anthropogenic origin, but not with a natural origin. The paper has been conditionally accepted, I am just working on the corrections at the moment.
This particular aspect of the carbon cycle is not straightforward and it is perhaps not surprising that this confusion of residence time and adjustment time should occur. I wouldn’t say this was a stupid myth; the solution wasn’t immediately obvious to me before I looked into it. Part of the reason why it has persisted is perhaps that it has been deemed too basic to have been discussed in detail in the peer-reviewed literature (until now).
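[Editor's sketch of the residence-time/adjustment-time distinction in a toy one-box atmosphere. This is not the commenter's model or Essenhigh's; the gross exchange flux and the 50-year net-uptake e-folding time are round-figure assumptions chosen only to show how the two timescales diverge:]

```python
# All numbers are round-figure assumptions, in ppm and years.
C_eq = 280.0    # pre-industrial equilibrium concentration
C0 = 390.0      # present concentration
F_gross = 80.0  # gross two-way exchange flux with ocean + biosphere, ppm/yr
tau_adj = 50.0  # assumed e-folding time for net uptake of the excess

# Residence time: mean time an individual CO2 molecule stays in the air
# before being swapped for one from the ocean or biosphere.
residence = C0 / F_gross  # ~5 yr

# Adjustment time governs the decay of the *excess* concentration once
# emissions stop: dC/dt = -(C - C_eq)/tau_adj.
dt, t, conc = 0.1, 0.0, C0
while conc - C_eq > 0.5 * (C0 - C_eq):  # time for the excess to halve
    conc -= (conc - C_eq) / tau_adj * dt
    t += dt

print(f"residence time ~ {residence:.1f} yr; excess halves in ~ {t:.0f} yr")
```

The point of the sketch: a short residence time (molecules swap quickly) is perfectly compatible with a much longer adjustment time (the excess concentration decays slowly), because the gross exchange fluxes nearly cancel.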
Congratulations on your paper!
I am glad that you and M have used “adjustment time” as the proper name to give to the impulse response settling time. As a part-time researcher, I haven’t run across this before, and didn’t realize that Rodhe defined this in 1979.
Do you give a single value to the adjustment time or do you give it a range of values?
This is exactly what I continuously discover. Casual readers see the results being presented and they assume that some unverified software program is spewing nonsense because they don’t get the fundamental explanation. Once you start applying a compartment model (or box model as you refer to it) to the problem space, the answer becomes obvious.
Take a look at the comment in this thread I made a couple of days ago concerning my own attempt at a box model for sequestration:
http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-106310
I bet that it supports your findings, and I did the analysis because I was having a hard time coming up with a fundamental understanding of the fat tail response curve. My only disagreement is that I do think it is straightforward, because it is the same thing I would do to model diffusion of dopants and other low concentration particles in a semiconductor material. Electrical engineers and material scientists consider that a straightforward problem, and this understanding is what enables all computers that we are using today (as we proceed to type away).
Mr. Marlowe Johnson, solve this problem, with the IPCC product? Where: (GI+GO)=0
That’s easily solved with a simple change of variables:
Tom = G[arbage]
settledscience, We all know that AGW science has shown absolutely no problem changing the variable, to suit themselves. Once again, your crowd has no intention to address the question I put forward: How may we ‘solve this problem, with the IPCC “product”:) which is= 0.’
Stupid analogy. It’s not just the difference that is important. Knowing where the money is going and what it is doing provides insight.
Kermit says, “Knowing where the money is going and what it is doing provides insight.” Sad to know that there is nothing left & that’s not right. What difference does it make now, that’s the question, we don’t know? What a problem a day makes…
http://www.kitco.com/ind/willie/aug252011.html
Hope, this will help.
I’ll waste a little bit of time on some basic chemistry here. Hopefully nobody else raised these points.
The “residence time” is a pretty nebulous concept and doesn’t have much to do with the chemistry of the CO2 in and on the earth. The climate concept is that humans have injected (are injecting) a significantly large amount of CO2 into the atmosphere over a relatively short period of time. This excess CO2 does react and does not all stay in the atmosphere. So the concept of residence time would be “how long will it take for all the human-induced CO2 to be removed from the atmosphere and the level go back to the 280 ppm or so equilibrium”.
The first question is “has CO2 EVER been in equilibrium in the atmosphere?” The paleo records all indicate that it has not been. The level has varied widely in different eras, generally lagging behind changes in temperature. Equilibrium chemistry is a very chancy thing. The best way to determine equilibrium is by determining the free energies of the reactions involved, the amounts of reactants, and calculating what the equilibrium values are in a phase diagram. This is totally impossible to do because we do not know the reactions involved or their rates: simple solution/dissolution mass flow in the oceans, uptake of CO2 as carbonate into plants and animals, rate of release from decay, rate of carbon accumulation in sediments, rate at which sediments are being subducted at the plate boundaries, etc.
The second question is “who cares?”. The amount of CO2 in the atmosphere fluctuates quite a bit on many time scales. Any notion of a residence time for CO2 in the atmosphere will depend almost entirely on the assumptions used to calculate it. Given the small numbers involved (CO2 in the atmosphere << CO2 in the oceans << CO2 and C in mantle and surface rock) it is only a small blip in the noise. Part of the “who cares” is that the climate doesn’t distinguish between molecules or atoms, only the concentrations involved, and to a tiny extent the isotopes involved. Once it is there, it goes where it goes and does what it does.
George M
Wouldn’t your 280 ppm equilibrium be more properly 230+/-50 ppm steady state, seasonally and geographically normalized, if we’re talking paleo (to six sigma, and then allowing for uncertainty)?
Though I doubt this much alters your argument.. whatever it is.
Hal – Since it will become clear to you from the various responses below that the actual decline of excess CO2 toward baseline occurs over centuries, rather than 5 years, with a long tail requiring many thousands of years, I would be interested in your response to the original question – how much should that concern us?
can you please not sidetrack the discussion before it starts
Stephen – I don’t think it is a sidetrack to refer back to Hal’s email that served as the reason for this entire post. The final paragraph of his email captured the essence of his point – if the residence time is very short (5 years), there is little reason to be concerned. He never answered whether there was a reason for concern if the residence time is much longer, which it is. That is a legitimate question to ask in my view.
Correct. Your comment is clearly on topic, not a sidetrack.
By “long tail” you are referring to a probability distribution, aren’t you?
Some subsequent comments make reference to that phrase without seeming to understand it. Or maybe you’re using it differently than I expect. Please clarify.
That is nothing but unscientific hand-wavy claptrap.
That does not address the perfectly clear, direct question you were asked, nor in your rambling, off-topic comment, did you ever directly, responsively answer that perfectly clear, direct question you were asked.
Fred Moolten asked you:
“I don’t know” would have been a perfectly respectable, honest answer, Harold.
Faking it is not respectable.
In fact, you did state categorically that you don’t know.
But you should have been honest and just stopped there instead of trying to pretend you know something relevant to the subject, by babbling on about meaningless factoids.
You mean “poisoning.” The level of CO2 that’s considered poisonous is 100% irrelevant to informed discussion of the CO2 greenhouse effect. It’s just a means of faking knowledge, which you don’t have.
No, you don’t, nor do you know anything relevant to making informed estimates of the probabilities of longer or shorter residence times.
Oh, is it really “somewhat likely” Harold? Based on what scientific expertise can you make that assertion? None. Can you quantify “somewhat likely?” No, you cannot. You don’t even know what quantities to compute, so you really have nothing of value to contribute — except for name-dropping, which obviously serves Curry’s interest in increasing her notoriety.
Funny that “scientist” is nowhere on his résumé! Less funny is that neither you nor Curry care that he has no scientific credentials whatsoever.
You just advocated fudging the science (“closer to the 5 year residence time mark”) in favor of a particular policy outcome (“different decisions and action plans”), the exact thing that you climate science deniers are always falsely accusing all the legitimate scientists of doing. I’m very sure that from his political days, your old pal “Jack” can explain to you what “unintentional truth-telling” means. :-)
Amateur.
Harold H Doiron
I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet. I don’t know what a harmful level of CO2 in the atmosphere would be. I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about health related effects (more than 20 times current levels). I don’t know what the optimum level of CO2 in the atmosphere would be, all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.
That’s quite the credo. Been worshipping at the temple of Idsos, have we?
I’ve considered your opinion for a week, as I wanted to give it careful thought.
I see Chief Hydrologist has punctured and lampooned your faith, and he needs no help from me.
However.
For on the scale (we have good cause to believe by the paleo record, extrapolations, and SWAG) of ten million years, the CO2 level of the atmosphere has been ergodically steady at 230 ppm +/- 50 ppm. We’re over 44% above that mean now, which appears to be unprecedented on the span of a geological epoch.
Certainly the ice core record of the past 800,000 years indicates this range to a high degree of certainty, as confirmed by the stomata count of plant fossils and myriad other evidences.
Where we substitute our own judgement of ‘best’ for what has been the dominant mode of a principal component of our complex, dynamical, spatiotemporal world-spanning climate for a span of time an order of magnitude longer than the existence of our species, we exhibit what can only be called arrogance.
Where we do it based on spinmeistering and public relations, we display profound folly.
CO2 at levels above 200 ppm up to about 2500 ppm functions as an analog to plant hormones, not as a ‘nutrient’ or fertilizer.
It’s not so very different to plants from steroids in professional athletes. It alters primary and secondary sexual characteristics, modifies structures, and results in additional mass in some parts of the plants.
Over seasons and generations, plants adapt to higher CO2 levels in some ways, gradually tapering off in the ‘benefits’ realized, but retaining negative effects longer than they keep the benefits.
That’s why plants as a group flourished quite as well at 180 ppm as at 280 ppm and at 380 ppm.
The purported benefits of CO2 elevation have only one medium term experiment in the field that I know of, and although it demonstrates selective benefits in field conditions in terms of favoring some species over others, it doesn’t match the levels seen in hothouse conditions with unlimited nutrient and ideal growing conditions.
In short, there is nothing clear, proven, or — if well-known — especially correct in mad schemes to profit from higher CO2.
It won’t end world hunger, as Lord Lawson suggested in his book based on nothing more than speculation and wishful thinking.
It’s a silly opinion. You’re welcome to hold it, but please don’t claim it’s clear or proven.
Please factor the Uncertainty of your beliefs into your thinking.
Because to my thinking, a Perturbation on unprecedented scales in a Chaotic system will tend to disturb ergodicity in unpredictable ways, which increases the cost of climate-related Risks to me.
Those costs are real, and translate into money taken from me.
And I don’t recall consenting to your CO2-worship picking my pocket.
And let us not overlook the role of freshwater bodies as carbon sinks.
This is apparently much larger than previously recognized by the climate science community.
I would suggest that before we start diverting the topic, however nicely, into predictions about “X”, we should define the behavior of CO2 in the atmosphere more clearly.
“I would suggest that before we start diverting the topic, however nicely, into predictions about “X”, we should define the behavior of CO2 in the atmosphere more clearly.”
hunter, Why?… Of course, you already accounted for this? Well?…
http://www.dailymail.co.uk/sciencetech/article-2030337/Scientists-underground-river-beneath-Amazon.html
Scientists confirm that this is to be their ‘last’ surprise.
Tim Ball: “Pre-industrial levels were 50 ppm higher than those used in the IPCC computer models. Models also incorrectly assume uniform atmospheric distribution and virtually no variability from year to year. Beck found, “Since 1812, the CO2 concentration in northern hemispheric air has fluctuated exhibiting three high level maxima around 1825, 1857 and 1942 the latter showing more than 400 ppm.” Here is a plot from Beck comparing 19th century readings with ice core and Mauna Loa data…
“Elimination of data occurs with the Mauna Loa readings, which can vary up to 600 ppm in the course of a day. Beck explains how Charles Keeling established the Mauna Loa readings by using the lowest readings of the afternoon. He ignored natural sources, a practice that continues. Beck presumes Keeling decided to avoid these low level natural sources by establishing the station at 4000 meters up the volcano. As Beck notes “Mauna Loa does not represent the typical atmospheric CO2 on different global locations but is typical only for this volcano at a maritime location in about 4000 m altitude at that latitude.” (Beck, 2008, “50 Years of Continuous Measurement of CO2 on Mauna Loa” Energy and Environment, Vol 19, No.7.) Keeling’s son continues to operate the Mauna Loa facility and as Beck notes, “owns the global monopoly of calibration of all CO2 measurements.” Since Keeling is a co-author of the IPCC reports they accept Mauna Loa without question.”
(Time to Revisit Falsified Science of CO2, December 28, 2009)
Wagathon, the Beck paper is a joke. What they measure at Mauna Loa is a consistent record, same place year on year, clear of urban influences. Beck’s results are from multiple records, often close to industrial CO2 sources, using multiple analytical methods. Notice how the variability suddenly falls when more precise methods are adopted.
Consistent?
From May 15th to 21st CO2 went up 1.84ppm
From July 17th to 23rd CO2 went down 1.34ppm
ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_weekly_mlo.txt
There was a even a 1.93ppm jump in 7 days recently.
No consistency there.
wow that’s an entirely new class of retarded argument
Your lack of curiosity never surprises me.
If CO2 goes up 1ppm in a year it is a sign of man-made catastrophic climate change.
If it goes up almost 2 ppm in 7 days it is a sign of the natural in and out breathing of the earth …
“The sawtooth pattern represents the natural carbon cycle. Every summer in the northern hemisphere, grass grows, leaves sprout, and plants flower. These natural processes draw CO2 out of the air. During the northern winters, plants wither and rot, releasing their CO2 back into the air. This sawtooth pattern shows the planet breathing.”
http://www.terrapass.com/blog/posts/science-corner
The above explanation is a joke when you look at the weekly data.
Bruce,
You miss the point entirely. While there may be short term fluctuations of the same order as the yearly increase, those fluctuations do not scale over time. So while you may indeed get an average reading on one day which is equal to the lowest reading from a year or so before, you are not going to get one which is equal to the lowest reading from a decade earlier, still less from several decades earlier, and it is this which reveals the trend, clearly and unmistakeably.
Your point is to ignore the explanation that explains the sawtooth component of the Mauna Loa graph because it is a joke.
Why are there short-term fluctuations? Science would explain them. Propaganda explains them away with joke “causes”.
If the “natural carbon cycle” can be 2ppm over 7 days, why not 2ppm over 365 days?
CO2 follows temperature historically.
wow that’s not an entirely new class of numbnut argument
Bruce,
Take those variations, and divide them by the average atmospheric concentration over the relevant period, and tell me what sort of variation you get in percentage terms. It is going to be well under 1% in all cases.
This would seem more than sufficiently consistent for the purposes it is being used for.
2ppm over 1 day?
http://cdiac.ornl.gov/ftp/trends/co2/Jubany_2009_Daily.txt
OK, here is the graphical representation of that data.
http://img197.imageshack.us/img197/9555/co2.gif
Read again what jimbo said above: “… tell me what sort of variation you get in percentage terms ..”
“tell me what sort of variation you get in percentage terms.”
Ok.
AGW = .5% change in ppm per 365 days
Jubany = .5% change in CO2 per 1 day
But more importantly, this “sawtooth” pattern explanation seems awfully bogus since daily changes can be 2ppm.
“The sawtooth pattern represents the natural carbon cycle. Every summer in the northern hemisphere, grass grows, leaves sprout, and plants flower. These natural processes draw CO2 out of the air. During the northern winters, plants wither and rot, releasing their CO2 back into the air. This sawtooth pattern shows the planet breathing.”
I mean … really! If you only looked at the yearly graph you might be naive to believe such an explanation.
Bruce, Why can you not just look at the data impartially? If that curve charted my pay rate in $/hour based on working commissions, I would not be complaining about the fact that there was a ripple and some noise in the overall upward slope.
You may be having a problem with understanding signal processing mathematics, and in particular its digital counterpart. What oscillations you see and their amplitude are really a matter of the impulse response caused by a forcing function. Depending on the frequency response, daily fluctuations can be filtered out, yearly fluctuations filtered out less, and the longest periods filtered out the least. The fact that samples are taken at the same time each day points out the stark reality of the Nyquist criterion.
The Nyquist criterion states that digital sampling at the same rate as a naturally occurring frequency will fold that value over to look like a constant value. You actually have to sample at twice the natural frequency, i.e. at the Nyquist rate, or measure the numbers twice per day, to see the daily fluctuations. We electrical engineers had this burned into our skulls during school so really see nothing weird about the data. I am not sure what your background is.
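[Editor's sketch of the aliasing point above. A hypothetical 2 ppm diurnal CO2 cycle (an assumed amplitude, purely for illustration) vanishes when sampled exactly once per day, and reappears at two samples per day with the phase chosen to catch the extremes:]

```python
import math

def co2(t_days):
    # Hypothetical signal: 390 ppm baseline + 2 ppm cycle at 1 cycle/day.
    return 390.0 + 2.0 * math.sin(2.0 * math.pi * t_days)

once_per_day = [co2(t) for t in range(30)]                # fs = 1 sample/day
twice_per_day = [co2(0.25 + k / 2.0) for k in range(60)]  # fs = 2 samples/day

# Sampling at the signal frequency folds the cycle down to a constant;
# sampling at twice that frequency (here phased to hit the extremes)
# recovers the full 4 ppm peak-to-peak swing.
print(max(once_per_day) - min(once_per_day))    # ~0: the cycle is invisible
print(max(twice_per_day) - min(twice_per_day))  # ~4 ppm: the cycle shows
```

Note that two samples per day is the boundary case: at the wrong phase even that rate can miss the cycle, which is why real sampling schemes go comfortably above the Nyquist rate.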
If your pay changed daily, and Human Resources explained it was because of yearly variations in the business cycle you would be very, very suspicious.
Bruce, Do you have a problem with reading comprehension too? Note that I crafted my analogy to state that I worked based on commissions. Do I have to explain to you that commissions are susceptible to random fluctuations?
Paul B – perhaps the Beck paper is a joke – I haven’t read it. But Wagathon has touched on a real difficulty with the Mauna Loa data. The site is not ideal for assessing baseline CO2 in the atmosphere as it lies on the flank of a volcano which itself emits large and uncontrolled amounts of CO2. The only way I can think of to allow for such sporadic local contamination is to use the minimum values recorded within each time period, presumably daily, and dump the rest. Perhaps the resulting data are a reliable and meaningful guide to global CO2 levels. Indeed I’m fairly happy to accept that they are, although I would prefer to see a demonstration of this.
This oft-repeated canard about Mauna Loa being an unsuitable site for CO2 measurements is dealt with here:
http://www.skepticalscience.com/mauna-loa-volcanoco2-measurements.htm
Could you check your link FiveString. I could not get it to work. Thanks.
The link is missing a dash between volcano and co2. Perhaps WordPress software removed it. It should be this
Wish this site had an ‘edit’ function… Thanks for the correction Pekka.
“It is true that volcanoes blow out CO2 from time to time and that this can interfere with the readings. Most of the time, though, the prevailing winds blow the volcanic gasses away from the observatory. But when the winds do sometimes blow from active vents towards the observatory, the influence from the volcano is obvious on the normally consistent records and any dubious readings can be easily spotted and edited out”
They edit the data based on supposition? Not good.
When you said the “canard” “is dealt with” I thought you might be talking scientifically not craptastically.
Now that your whining that by your speculation they might use some seemingly spurious measurements has been debunked, you’ve started whining that they should use known spurious measurements.
You do know that 1984 was not intended as an instruction manual, don’t you?
CO2 from underwater volcanoes
http://nwrota2009.blogspot.com/2009/04/getting-gas.html
Estimated number of underwater volcanoes? 3 million+
Where?
If this was anything but Plimer’s pipe dream we would see interesting profiles of various stuff like HCO3- in the oceans. We don’t.
Hal,
Lets have a closer look at the 100-year lag idea…
.
The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750:
.
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-human-and.html
.
[“Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750.”]
.
So, by 1850 (100 years later) we should have been feeling the ‘full effect’ of the initial input. Thereafter, every year would see an increase in the ‘full effect’ of CO2. This would, logically, lead to an acceleration of the effect of CO2 and, according to the cAGW theory, an acceleration in global warming. So now we can look at the temperature data since 1850:
.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1850/to:2011
.
The question is: Do you think that shows an acceleration? If you apply enough smoothing you may see one, but then you are adjusting how you interpret the data. The raw data graph shows no warming at all in the last 13 years (if anything a cooling), which itself would cast serious doubt on any interpretation of ‘acceleration’. In addition, any reason given for the cooling – such as ‘natural variation’ – has to consider that any ‘natural variation’ has to be powerful enough to combat not only the initial CO2 effect, but also the acceleration in the initial effect. In that case, natural factors easily outweigh any CO2 effect.
.
Personally, I think a more important question is: What is the contribution made by CO2 to the Greenhouse Effect? Until we can answer that question correctly, any theory/hypothesis/assertion regarding the supposed warming effect of CO2 is flawed as being based on an assumption.
.
I agree with hunter that the behaviour of CO2 needs to be ascertained more clearly.
.
Thanks for an interesting post.
Arfur – There is no reason necessarily to expect an acceleration simply because CO2 is increasing. That is because any warming reduces the radiative imbalance caused by the increased CO2 and thereby reduces the tendency for further warming. Depending on the rate of CO2 rise, we could see a steady warming, an acceleration, or warming at a reduced rate, although as long as CO2 is rising, the temperature trend averaged over multiple decades can be expected to be upward. Over shorter intervals, the effects of other, short-term climate drivers will modify the long term trend, as can be seen by examining the behavior of global temperature anomalies over the past 100 years, with their fluctuating ups and downs overlying an upward trend.
The relevance of the residence time function is that it tells us how long a given CO2 concentration will continue to exert warming effects if the planet is out of balance. In general, that will be centuries.
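[Editor's sketch of Fred's point that warming shrinks the radiative imbalance, using a zero-dimensional energy-balance model. The heat capacity, feedback parameter, and forcing growth rate are illustrative assumptions, not IPCC values:]

```python
# Zero-dimensional energy balance: C dT/dt = F(t) - lam*T.
# Parameter values below are illustrative assumptions.
C_heat = 8.0  # effective heat capacity, W yr m^-2 K^-1
lam = 1.25    # climate feedback parameter, W m^-2 K^-1
dF = 0.02     # forcing growth rate, W m^-2 yr^-1

T, dt, temps = 0.0, 0.1, []
for n in range(int(200 / dt)):
    F = dF * n * dt                   # linearly rising forcing
    T += (F - lam * T) / C_heat * dt  # warming shrinks the imbalance F - lam*T
    temps.append(T)

# After a lag of order C_heat/lam (~6 yr here), dT/dt settles to dF/lam
# (~0.016 K/yr with these numbers): steady warming, not acceleration.
late_rate = (temps[-1] - temps[-101]) / (100 * dt)
print(f"late warming rate ~ {late_rate:.3f} K/yr")
```

Because the imbalance F − λT closes as T rises, a linearly rising forcing produces warming at a roughly constant rate dF/λ after the initial lag; acceleration requires the forcing itself to grow faster than linearly.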
Fred,
Not according to the IPCC…
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter3.pdf
See FAQ 3.1 Fig 1 and note:
[“Note that for shorter recent periods, the slope is greater, indicating accelerated warming.”]
.
So it’s not just the CO2 increase, but the slight acceleration in the increase that supports my argument. If the increased CO2 leads to a radiation imbalance which leads to warming which leads to a reduction in the radiation imbalance which leads to a reduction in warming – as you state – then what is the problem? The trend you speak of is 0.06 C per decade since 1850 and is not increasing from 1850! That size of trend is not even mildly threatening, let alone ‘catastrophic’ (IPCC term, not mine)!
.
CO2 may ‘exert warming effects’ but these effects are not significant, as demonstrated by observed data. I repeat, until you know what the CO2 effect is, you are basing your argument on an assumption.
.
Off to bed now, will have to leave any further reply until tomorrow…
Regards,
The recent temperature record is consistent with continued warming of about 0.15C/decade. Which in turn is consistent with AGW.
That’s all there is to it. And where precisely has the IPCC used the specific phrase “catastrophic”?
So if the temperature falls 4 C to the depths of the LIA, that’s natural, but if it recovers by 0.8 C, it is man’s fault?
And using any start point other than the IPCC’s chosen start date of ‘accurate data’, ie, 1850, is misleading. What is your definition of ‘recent’? How about this one…?
http://www.woodfortrees.org/plot/hadcrut3gl/from:1998/to:2011/plot/hadcrut3gl/from:1998/to:2100/trend
.
That’s ‘recent’, so where is your 0.15 C per decade trend there? If you move the start date to the right of 1850, you can get all sorts of trends, but not one of them is the ‘overall trend’. So I just stick to the overall trend. That way, if the temperature starts to increase in an accelerative fashion, the overall trend WILL increase. In fact, the trend today is much lower than it was in 1880. Give me a call back when you can find the trend increasing above that one!
.
Oh, and the IPCC used the term ‘catastrophic’ here:
http://www.ipcc.ch/publications_and_data/ar4/wg3/en/ch2s2-2-4.html
The 0.15C/decade trend is still there even in HadCRUT past 1998, it’s just masked by variation (solar cycle and ENSO). See: http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/
trend is still there
it’s just masked
Lol. Wot?
how-fast-is-earth-warming?
It isn’t. And hasn’t been since 2003.
There have been cooling impacts since about 2003 (2003 was solar max! there’s been a low solar minimum recently!). Plus 2003-2007 was pretty much El Niño after El Niño whereas since 2007 there have been 2 quite strong La Niñas.
The earth is still warming, it’s just that since 2003 the above-mentioned cooling impacts have suppressed it.
lolwot,
Faith is a truly powerful debating tool. If you say…
[“There have been cooling impacts since about 2003 (2003 was solar max! there’s been a low solar minimum recently!). Plus 2003-2007 was pretty much El Niño after El Niño whereas since 2007 there have been 2 quite strong La Niñas.”]
…Does that mean that there were no La Nina events between 1910 and 1940 or 1970 and 1998? What caused those warmings? You want to use CO2 to explain a warming but that it is overcome by natural forcings during a cooling period. That denies the likelihood that natural forcings can work both ways.
Arfur, any period of time which starts with El Niños and ends with La Niñas (e.g. 2005 to 2011) will have a cooling bias due to ENSO that has nothing to do with the longterm warming trend.
I mean, what you are doing is little more sophisticated than saying temperature dropped from 1998 to 2000 and claiming this contradicts global warming. It does not. 1998 to 2000 was El Niño to La Niña. That’s short term noise able to overwhelm the longterm trend.
2005 to 2011 has about a 0.1C cooling impact from falling ENSO. It also has a cooling impact from the solar cycle.
In short that’s why the period 2005-present is kind of flat. It’s not because the longterm warming has stopped, it’s because ENSO and the solar cycle happen to line up over that period to cancel it out.
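lolwot’s adjustment argument (and the Tamino post linked further down) can be illustrated with a toy regression. This is a hedged sketch on synthetic data – the trend, ENSO and solar coefficients, periods and noise level are all invented for illustration, not taken from any real temperature record or index:

```python
import numpy as np

# Synthetic monthly "temperature": linear trend + scaled ENSO and solar
# proxies + noise. A simultaneous regression then removes the ENSO/solar
# part, in the spirit of the Thompson et al. (2008) method cited below.
rng = np.random.default_rng(0)
n = 240                                   # 20 years of monthly data
t = np.arange(n) / 12.0                   # time in years
enso = np.sin(2 * np.pi * t / 3.7)        # stand-in ENSO index
solar = np.sin(2 * np.pi * t / 11.0)      # stand-in solar cycle
temp = 0.015 * t + 0.10 * enso + 0.05 * solar + rng.normal(0, 0.05, n)

# Regress temperature on constant + trend + ENSO + solar at once
X = np.column_stack([np.ones(n), t, enso, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# "Adjusted" series: subtract the fitted ENSO and solar contributions
adjusted = temp - coef[2] * enso - coef[3] * solar
trend_raw = np.polyfit(t, temp, 1)[0]
trend_adj = np.polyfit(t, adjusted, 1)[0]
print(f"raw trend {trend_raw:.3f} C/yr, adjusted trend {trend_adj:.3f} C/yr")
```

With real data one would use an observed ENSO index and measured solar irradiance, possibly lagged, rather than these stand-in sinusoids.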
“any period of time which starts with El Ninos and ends with La Ninas (eg 2005 to 2011) will have a cooling bias due to ENSO that has nothing to do with the longterm warming trend.”
Excellent point, so how much did the 30 year run of positive PDO starting with more la ninas and ending with more el ninos 1975-1998 contribute to the longterm warming trend?
About 0.1C warming
So, lolwot, what caused the other 0.5 C of warming in that period? You are condensing your ENSO argument into a short period and appear unwilling to consider that the same natural causes you claim work against the cAGW theory in the short term can also exist during the warming periods that you appear to attribute to CO2. If, as you state above, ENSO contributed only 0.1 C in that period, are you suggesting that CO2 is powerful enough to contribute another 0.5 C?
.
If that is your argument, what caused the warming from 1910 to 1945, and what caused the subsequent cooling?
“If, as you state above, ENSO only contribute 0.1C in that period, are you suggesting that CO2 is powerful enough to contribute another 0.5 deg C?”
It may be even more powerful than that. Without aerosol emissions the amount of warming would have been greater than 0.5 C.
.
“If that is your argument, what caused the warming from 1910 to 1945, and what caused the subseuquent cooling?”
Solar activity increased in the early 20th century. I think that plays a significant part in it. Between that and the reliability of the records back then, I am not sure there is a problem there. Global temperature went just about flat after the 1940s until it started rising again during the 70s.
lolwot,
If the records were so unreliable back then, how do you know the increases since 1970 are unusual? The HadCRUT dataset goes back to 1850 and clearly shows three distinct warming periods. You want to discount the first two and concentrate on the latter one. Then you want to discount the levelling/cooling after the latter one. How many goalposts do the warmists want to move?
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
“El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008) (Fig. 2.8a). The trend in the ENSO-related component for 1999–2008 is +0.08±0.07°C decade–1, fully accounting for the overall observed trend. The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00°±0.05°C decade–1, implying much greater disagreement with anticipated global temperature rise.”
Arfur – The IPCC site you linked to reinforces the points I made above, but if you have a question about a specific item, I’ll try to explain the reasoning behind it.
Fred,
You could start with the bit that explains why the slight acceleration in CO2 can be construed as both leading to a reduction in warming and an acceleration in warming depending on how you feel at the time.
.
I repeat, the overall trend of 0.06 C per decade is NOT increasing. What is the problem?
How does a line “increase”? If you stick a line of best fit through the data from 1850 onwards then of course the line is going to be straight. That is what a line is.
the overall trend of 0.06 C per decade is NOT increasing
That’s t-r-e-n-d Lolwot, not l-i-n-e
Of course it’s not increasing. It’s a straight line you’ve fit through all the data. What do you expect when you apply a line of best fit? That the end part will curve upwards?
The fact is that recent warming is greater than the “overall trend of 0.06C” anyway.
What recent warming?
Oh, you mean that warming, last millennium.
:)
If you correct for ENSO and the solar cycle the warming has continued with no pause. You should correct for these things as they are noise not signal.
Ok lolwot, I’ll explain.
Draw a hockey-stick curve (one that shows an acceleration in warming). Draw a straight line from the origin of the curve to a point a short way along the x-axis. Then draw successive lines, each starting from the origin, to successive points further along the x-axis. Each successive ‘line’ will be showing an increasing ‘trend’ because each successive line will be steeper than the last. Got it?
.
In terms of global temperature, IF the radiative forcing theory was correct, then the overall trend (drawn from the 1850 origin) would show a relatively consistent INCREASE.
.
It doesn’t. It’s really that simple. The observed data does NOT support the theory. The overall trend from 1850 to 2011 is 0.06 C per decade. The overall trend in 1998 was 0.067 C per decade. The overall trend in 1944 (another peak) was 0.06 C per decade and the overall trend in 1888 (the first peak) was 0.17 C per decade!
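The “successive trends from a fixed origin” test described above can be made concrete with a toy calculation. The sketch below uses synthetic series, not HadCRUT data; the only point it demonstrates is that origin-anchored trends grow monotonically for an accelerating series and stay flat for a purely linear one:

```python
import numpy as np

# Two made-up series over 1850-2011: one steady at 0.06 C/decade, one
# accelerating (quadratic). Coefficients are illustrative, not fitted.
years = np.arange(1850, 2012)
linear = 0.006 * (years - 1850)           # steady 0.06 C/decade
accel = 0.00004 * (years - 1850) ** 2     # accelerating series

def origin_trend(series, end_index):
    """Least-squares slope (C/yr) from the first point through end_index."""
    x = years[: end_index + 1]
    y = series[: end_index + 1]
    return np.polyfit(x, y, 1)[0]

for idx in (50, 100, 161):                # endpoints 1900, 1950, 2011
    print(years[idx],
          f"linear {origin_trend(linear, idx) * 10:.3f} C/decade",
          f"accel {origin_trend(accel, idx) * 10:.3f} C/decade")
```

For the linear series every origin-anchored trend is 0.06 C/decade; for the quadratic one each later endpoint gives a steeper trend, which is the acceleration signature being argued about.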
Where you are wrong is that climate models don’t show your acceleration. Look at 20th century hindcasts. They simply do not show the acceleration you claim and yet I am 100% sure they show AGW.
Fine lolwot, you believe in your models and I’ll believe in observed data.
Humans have an intuitive sense of the way a model works. Take the place of an outfielder who is trying to catch a fly-ball. He will instantaneously model the trajectory based on minimal information. If the fielder followed your advice, he would wait for the baseball to land on the grass and then run to it. Lot of good that will do.
And you can laugh at this analogy but that’s the way we work. The only difference is that the practical human will innately use a model based on heuristics and acquired knowledge, whereas the scientist will use math and the corpus of scientific evidence.
You’re kidding me, right? How many fly-balls would the catcher have caught if he’d used the IPCC models instead of his own models based on his own historical evidence (ie. practice)?
Well they wouldn’t catch any if they used Postma’s or the skydragon’s models.
Precisely!
Which is why I don’t believe either set :)
They would be looking at where the average baseball lands and then they would try to catch the average every time. They would also overestimate the hitter’s ability due to the amount of CO2 in the air.
I know why they call them deniers. At every chance they try to deny another person the opportunity to think actively and advance an argument. The majority (with a couple of exceptions from the truly skeptical POV) will never actually help you and offer constructive criticism. Instead they just stomp around.
WHT,
.
[” The majority (with a couple of exceptions from the truly skeptical POV) will never actually help you and offer constructive criticism. Instead they just stomp around.”]
.
If that is directed at me, then I beg to disagree. I have engaged with several warmists on this thread alone and I think I have done so in a respectful and reasonable manner. I will be happy to discuss CO2 (the thread subject) with you if you so wish.
.
How do you think it is possible for a trace gas at less than 0.04% concentration to significantly affect global temperature?
.
If you don’t wish to discuss this basic premise of the cAGW debate, just let me know. I will assure you of my respectfulness as long as you do the same.
.
Regards,
Arfur
Arfur, so if the 2000s were 0.15 C warmer than the 1990s and the previous average was only 0.06 C, you don’t count that as an acceleration? Could you clarify your definition of acceleration?
Jim D,
You are averaging out in decades? Yes, the 2000s were averagely warmer. But cAGW was sold to Joe Public as an impending catastrophe on the basis of not-averaged data. If the cAGW theory is correct, the MBH98 curve would still be increasing today. The FACT that the temperature has not increased above 1998 by itself effectively disproves the theory.
.
You can hide all sorts of significant data if you smooth out enough. Look here:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1990/to:2010
.
Do you see an acceleration between the 90s and the 00s? Imagine you were a climber who walked from the start of the graph to the finish. You would have had to climb to the highest peak in 1998 but you would have spent a longer time at an averagely greater height in the 00s. By your argument of using decades, the climber would say he was ‘higher’ in the 00s, whereas history will show that he climbed the ‘highest’ in 1998.
We have just had the warmest decade by 0.15 degrees. Why would that not be a sign of warming faster than your average 0.06 degrees per decade? It is an acceleration by any definition, and the projections have it accelerating further to 0.3 or more degrees per decade if the CO2 increase continues to accelerate.
Jim D,
.
Please read what I wrote.
.
Or try this… If the temperatures stay flat for the next two hundred decades (or more), you will still be able to say that each decade ‘is the (equal) highest decade’! Unfortunately, there will have been no temperature increase and no acceleration. The climber will be walking across a very large, flat and high plateau…
.
Of course, if the temperature increases, we may reach a point where the overall trend increases above what it is today. That will show an increased trend but not necessarily an acceleration. For an acceleration, each successive overall trend has to be greater than the previous.
.
Using short-term, intermediate trend lines (as the IPCC did), is irrelevant, as each short-term trend can change greatly. Only the overall trend counts.
I would probably try to account for volcanoes and ENSO.
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_current.gif
Arfur, my advice is to not look at anything less than a decade average when talking about climate. You can easily get confused by the up and down fluctuations of internal and short-term solar variability that are meaningless as they cancel out on the climate scale.
And my advice to you is not to believe in fairies, little green men from Mars and the genuinely ridiculous notion that a bunch of trace gases existing at a combined total of less than 0.04% has the potential to significantly affect the global temperature.
.
It’s a shame that no-one on the MBH98 team mentioned that the data should have had a ten-year smoothing when they sold the ‘rapid and accelerating’ idea to the politicians. Maybe a caveat that the next thirteen years would be likely to show no further warming would have given them a little more credibility?
I don’t understand the obsession with MBH98 here. That was in the AR3 report ten years ago now, and since then we have had an AR4 report in 2007 that overrides their time series, but anti-AGW people seem to cling to AR3 as being easier to criticize than AR4. Can you update your criticisms to AR4 at least, or don’t you have any?
“the genuinely ridiculous notion that a bunch of trace gases existing at a combined total of less than 0.04% has the potential to significantly affect the global temperature”
A 3.7 W/m2 forcing per doubling is not insignificant. And CO2 levels being so low just makes it a hell of a lot easier to double it.
When so many of your number so often make these ridiculously garbage arguments no wonder climate “skeptics” have no credibility outside their own community.
Jim D, it is one thing when people can’t agree on the facts. It is quite another when they can’t agree on the logic by which they reason about the facts. It is obvious that Arfur uses different rules of reasoning from you. As long as that remains the case I predict no resolution of your differences.
lolwot and Vaughan,
The reference to MBH98 was solely for the purpose of arguing against your use of ten-year smoothing. There was no suggestion of ten-year smoothing when the graph was used to ‘sell’ cAGW to the public. There was no suggestion of ENSO cycles, there was just hype. To further your point of smoothing, it is interesting to note that warmists are now using the ‘smoothing argument’ to avoid the lack of warming since 1998. I wonder, if there is not further warming for the next eight years, whether you will insist on the use of a twenty-year smoothing?
.
As to my logic and the effectiveness of the radiative forcing theory, let’s have a closer look, shall we?
.
[“A 3.7 W/m2 forcing per doubling is not insignificant. And CO2 levels being so low just makes it a hell of a lot easier to double it.”]
.
Ok, so first of all, is W/m2 a unit of heat? No. Is radiation the same as heat? No. Is the subject we are discussing called cAGW or cAGR? Well, it’s cAGW. Ok, so unless you can prove that the 3.7 W/m2 figure translates into some units of heat, and quantify it, your point is a pathetic attempt to whitewash the real problem. So, go on, tell me how much HEAT is attributed to CO2. I’ll ask again the question that NO warmist wants to discuss – what is the contribution of CO2 to the Greenhouse Effect?
.
Instead of Vaughan chipping in with his usual and puerile use of snark, why don’t you ask yourselves why you don’t want to answer this simple question? What is it you’re afraid of? While you’re at it, maybe you could answer this question, since you seem to like the use of radiation so much – what was the W/m2 figure in 1850, before the CO2 addition of 3.7?
.
Vaughan, if you can’t play nice, don’t play at all.
That last post should have started with ‘Jim D, lolwot and Vaughan…’
This notion of acceleration in AR4 was a blatant bit of manipulation by the IPCC. You simply can’t compare trends over different periods. And that begs the question of whether they should be drawing linear trends through nonstationary data at all.
George M,
Agreed.
How many humans?
How ‘industrialized’ was society, prior to 1950? 1900?
Come on science deniar, think. Just this once. Criminy!
settledscience
So, complain to the IPCC, not me!
And I think you’ll find its ‘denier’, not ‘deniar’…
Have you forgotten the context of your own comment?
http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104326
Wrong. Obviously, there has been more anthropogenic warming since 1850 because more anthropogenic greenhouse gases have been emitted since 1850 than from 1750 to 1850.
Well, it’s nice of you to try to sneak a post in after this thread has gone quiet, but you are the one who is wrong.
The point IS within the context. It doesn’t matter if the contribution of humans is greater later in the century; the fact is that – using the 100-year lag theory – by 1850 we should have been seeing all the lag which started in 1750. Every year after that, and every year the anthropogenic contribution increases, we should be seeing an increasing warming effect (ie accelerating) because of that lag (if it exists).
Unfortunately for you, and the other warmists, the lack of acceleration in the temperature datasets is a strong indication that the 100-year lag theory is wrong! If the lack of increased warming is due to ‘natural variation’, then this natural variation is not only capable of overwhelming the ‘radiative forcing’ of CO2 but also the hypothesised increase in the ‘CO2 effect’ caused by the lag!
That was my point and it IS in context. I politely suggest it is YOU that needs to think…
And it’s still ‘denier…’.
“Stomata data on the right show higher readings and variability than excessively smoothed ice core record on the left. The stomata record aligns with the 19th century measurements as Jaworowski and Beck assert. A Danish stomata record shows levels of 333 ppm 9400 years ago and 348 ppm 9600 years ago.
EPA declared CO2 a toxic substance and a pollutant. Governments prepare carbon taxes and draconian restrictions crippling economies for a completely non-existent problem. Failed predictions, discredited assumptions, incorrect data did not stop insane policies. Climategate revealed the extent of corruption so more people understand malfeasance and falsities only experts knew or suspected. More important, they are not rejected as conspiracy theorists. Credibility should have collapsed, but political control and insanity persists – at least for a little while longer.” ~Dr. Tim Ball
Tim Ball’s credibility collapsed long ago
AGWers do like smearing people.
“Climategate revealed the extent of corruption so more people understand malfeasance and falsities only experts knew or suspected.”
Right on, man!
“Tim Ball’s credibility collapsed long ago”
AGWers do like smearing people.
ClimateGate did show corruption. When people get caught hiding declines, or making one tree in Yamal a global treemometer or keeping papers that are inconvenient out of the journals, that is corruption.
And we get to point it out. And you get to try and cover it up as per your usual modus operandi.
What has Tim Ball done to you Nick?
Right, because we don’t hear about “The Team” or “Mike Mann” on every occasion?
The difference is Tim Ball actually deserves to be smeared
Chris,
You keep missing the chance to treat your betters with civility.
Ha you’ve got to be kidding. I for one am sick of the likes of Ball talking utter crp and getting a free pass from you guys either because you are all so damn ignorant you don’t even see the blatant errors of your “betters” or because you guys are applying one of the most brazen double standards.
Mike,
Can you delete any emails you may have had with Keith re AR4? Keith will do likewise… Can you also email Gene [Wahl] and get him to do the same? I don’t have his new email address. We will be getting Caspar [Ammann] to do likewise.
Cheers, Phil
I actually don’t give a crp about emails.
The thing is skeptics are fine at understanding errors that are mundane like Al Gore getting the Sun the wrong temperature. And they are fine at understanding complex errors in details of paleoclimate studies.
But amazingly, and it is very hard to believe this isn’t deliberate, they utterly fail to understand the errors in stuff midway between the two, e.g. the errors throughout the Gish gallops of people like Tim Ball.
lolwot,
The great thing about true believers is how they can pretend that calls to hide data and act corruptly are OK.
Chris,
Why does anyone deserve smearing? Smearing is when someone is damaged by people lying about them.
I guess smearing makes sense for believers.
the thing is I understand why they did those things. And it wasn’t because they were corrupt. It was because they were dealing with malicious fools and they didn’t want to give them an inch.
Yes lolwot, they probably were thinking along the lines you so vividly describe.
How scientific.
Dealing with “skeptics” is not about science. The skeptics dabble in rumour and spin. It’s more like politics than science. Skeptics couldn’t even interpret the CERN cloud paper correctly without spinning it for example.
The scientists who realize how anti-science “skeptics” are simply cared not to give them any time and that’s all there is to it.
Of course as a “skeptic” you are blinded about that.
As evidenced by the Climategate emails, they also ‘cared’ enough about their cause to support it with dodgy statistical methods, manipulated data and a determined effort to control the journals and the peer review process. Followed by a determined effort to delete the evidence of their malfeasance and slander the people exposing their nefarious activities.
The biggest skeptic lie about climategate is the idea that it was the scientists abusing the journals, when in actual fact it was the “SKEPTICS” who abused the journals and subverted peer review.
I think in psychology they call it “projection”. Or at least it is a matter of blaming scientists for the crimes of the “skeptics”
I mean seriously, some of the crud that skeptics push through the peer review system and promote (G&T, the Beck paper) would be embarrassing if skeptics’ double standards on the matter weren’t so reckless to human civilization.
Simple yes/no question for you, bloke.
Did Stephen McIntyre have any right to the data for which he enjoined all the readers of his ‘blog to inundate CRU with FOI requests? If “yes,” then on what basis did Stephen McIntyre have any right to those data?
settledscience | August 28, 2011 at 12:38 pm |
Simple yes/no question for you, bloke.
Did Stephen McIntyre have any right to the data for which he enjoined all the readers of his ‘blog to inundate CRU with FOI requests?
Yes
If “yes,” then on what basis did Stephen McIntyre have any right to those data?
On the same basis that the information commissioner used to force the CRU to release the data to another party who requested it: Freedom of information.
I know there are lots of reasons why *some scientists* resist the principle; most of them run contrary to the scientific method. However, the law is on the side of those who wish to examine their claims.
tallbloke,
you left out the part where the research was paid for with public funds extracted from us through taxes. Remember that minor bit where Jones received money from the US too?? Then there are the EU Regulations that make ALL climate data public.
Wrong, tallbloke. McIntyre has no right to data that is protected by a legally binding confidentiality agreement. He is also not a scientist and has no right to be treated as one.
http://climateaudit.org/2009/07/24/cru-refuses-data-once-again/
This all has to do with intermediate computations which McIntyre pitched such a fit about not having given to him, because he is not competent to perform the same computations on the raw data. He was allowed access to the raw data all along, but he just isn’t smart enough to figure out what to do with it.
Yes, Wagathon, we can all agree that Tim Ball talks nonsense a lot. I assume that was your point?
Speak for yourself.
Dyson makes a lot of sense, though I may have picked some other name than carbon eating plants. Land use changes have an impact that may be underestimated. After reading the reference links I can confidently say the residence time is between a few years and a few hundred years. Based on the IPCC science by consensus methodology, their estimate is “likely” on the high side :)
Judith Curry 8/24/11, CO2 residence time
The question of CO2 residence time reaches into a raft of IPCC errors, already well covered in Climate Etc.
The residence time of CO2 in the atmosphere is taught in 11th year public school physics as the leaky bucket problem. IPCC’s formula in its TAR and AR4 glossaries is correct, but, alas, used nowhere in those IPCC Reports. It’s 1.5 years if you include IPCC’s leaf water, but if you ignore leaf water as IPCC does, it’s 3.5 years. Jeff Glassman response to Vaughan Pratt, “Slaying the Greenhouse Dragon. Part IV” thread, 8/15/11, 3:56 pm.
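The “leaky bucket” arithmetic referred to above is simple enough to sketch: residence time is reservoir size divided by total removal flux. The reservoir and flux figures below are round illustrative numbers of roughly the magnitudes in the AR4 carbon-cycle diagram, not an endorsement of either side’s bookkeeping:

```python
import math

# Leaky-bucket residence time: tau = reservoir / total outflow.
# Both numbers below are round, illustrative values (GtC and GtC/yr).
atmosphere_gtc = 760.0        # atmospheric carbon reservoir, approx.
uptake_gtc_per_yr = 218.0     # assumed gross uptake flux

tau = atmosphere_gtc / uptake_gtc_per_yr   # residence time in years

def remaining_fraction(t_years, tau_years):
    """Fraction of a pulse left after t years under simple e^(-t/tau) decay."""
    return math.exp(-t_years / tau_years)

print(f"residence time ~ {tau:.1f} yr; "
      f"pulse left after 10 yr: {remaining_fraction(10, tau):.1%}")
```

Note this single-exponential picture is exactly what the IPCC’s multi-timescale “adjustment time” framework disputes, which is the crux of the disagreement in this thread.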
See my response to Joel Shore, who teamed with Eschenbach on WUWT, to mistakenly propose that the residence time of a slug or pulse of CO2 was somehow different from their freshly minted lifetime of a molecule of CO2. Id., 6:24
For IPCC’s other formula, the Bern formula, see my response to Bruce, “Time-varying trend in global mean surface temperature” thread, 7/16/11, 12:39 pm.
Or my response to Fred Moolten, “Energy imbalance” thread, 4/20/11, 12:33 pm re the physically unrealizable model for the uptake of CO2.
Or my response to Pekka Pirilä, “Radiative transfer discussion” thread, 1/9/11, 11:20 am, and the ensuing, embedded discussion.
IPCC needs a slow uptake of CO2 to make it well-mixed in the atmosphere, thereby justifying its shift of the MLO CO2 record from regional to global, and thus to attribute the 50 year CO2 bulge to man’s emissions. A fallout beneficial to IPCC’s plan to scare the public was that CO2 would acidify the surface layer according to the chemical equations with equilibrium stoichiometric coefficients. IPCC reported the chemical equations (AR4, Eqs. 7.1, 7.2, p. 529), and used the equilibrium coefficients for its approximate ratio CO2:HCO3^-:CO3^– of 1:100:10, id., a solution given in the literature by the Bjerrum Plot, but along with the coefficients, never mentioned by IPCC.
The Bern formula outlined in these references (AR4, Table 2.14, p. 213, fn. a.) is an attempt to justify a slow uptake of CO2 in the ocean, quantifying four fates for ACO2: (1) some to the Solubility Pump, (2) some to the Organic Carbon Pump, (3) some to the CaCO3 Counter Pump, and (4) the remainder remaining in the atmosphere. AR4, ¶7.3.1, pp. 511 ff., Figure 7.10, p. 530. The formula implies the existence of partitions or channels to feed the three different processes and the null process. Partitions or channels don’t exist in the real world. Instead, all atmospheric CO2 is accessible to the surface layer via the solubility pump, governed by Henry’s Law (omitted by IPCC). Contrary to the belief of IPCC and its supporters posting here, the surface layer is never in equilibrium. Also, the two biological pumps can’t work from CO2_g but need ionized CO2_aq. The Revelle Buffer didn’t work when Revelle and Suess first tried it in 1957, and its resurrection by IPCC is a scientific failure, but may prove a political success. In attempting to measure the Revelle Buffer, IPCC accidentally measured Henry’s Law, which the lead author in review concealed so as not to confuse the readers.
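For reference, the Bern formula being criticized here (AR4, Table 2.14, fn. a) expresses the airborne fraction of an emitted CO2 pulse as a constant plus three decaying exponentials. A minimal implementation with AR4’s published fit coefficients, shown neutrally so readers can see what is actually being argued over:

```python
import math

# Bern impulse-response function (AR4 Table 2.14, footnote a):
# fraction of a CO2 pulse still airborne after t years is a constant
# term plus three decaying exponentials, using AR4's fit coefficients.
A0 = 0.217
TERMS = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]  # (a_i, tau_i yr)

def airborne_fraction(t_years):
    """Bern-formula fraction of an emitted pulse remaining after t years."""
    return A0 + sum(a * math.exp(-t_years / tau) for a, tau in TERMS)

for t in (0, 10, 100, 1000):
    print(f"t = {t:4d} yr: airborne fraction {airborne_fraction(t):.3f}")
```

The coefficients sum to 1 at t = 0, and the constant term A0 is what produces the “long tail lasting hundreds of millennia” language quoted at the top of the post, in contrast to the single short residence time argued for above.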
And looking into IPCC’s model with a little more depth, one finds that IPCC has a flux of ± 90 GtC/yr or so between the atmosphere and the ocean of natural CO2, a net of zero including the terrestrial flux, while only ACO2 is subject to IPCC’s residence time bottleneck. This is another physical impossibility. The two species of gases differ only in their isotopic mix, 12CO2:13CO2:14CO2, and no assignment of absorption coefficients for these three forms of molecules can begin to satisfy IPCC’s uptake model.
To get an intuitive feel for how long it takes water to absorb CO2, see Marshall Brain’s video, at blogs.howstuffworks.com/2010/09/17/diy-how-to-carbonate-your-own-water-and-save-big-bucks-on-club-soda/.
Henry’s Law is instantaneous on even weather scales, much less climate scales. Henry’s Coefficients depend on temperature and pressure first, and salinity a distant third. IPCC’s notion that the coefficients depend on the carbonate state or pH of the surface layer is novel physics.
I have described the relationship between Henry’s law and Revelle factor in this comment
http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99650
The claims made in the message of Jeff Glassman are not correct. The Revelle factor is a result of well known chemistry and not in contradiction with Henry’s law, which applies to undissociated CO2 in seawater, not to the total solubility including bicarbonate ions, whose share is more than 99% of the total solubility. The value of pH of the oceans is really essential and salinity has its effect through its influence on pH. The Revelle factor is not present if pH remains constant, but additional CO2 leads unavoidably to some decrease in pH and the Revelle factor is a manifestation of this change.
His criticism of “the Bern formula” is also erroneous and appears to tell of not understanding the basics of the approach.
It’s unfortunate that I don’t know good comprehensive descriptions of all essential arguments. Many sources contain valid partial descriptions, but not the whole argument. One reason for that may be that the basics have been known so long that repeating the arguments has appeared unnecessary and scientifically uninteresting. Another reason is that the knowledge of the subprocesses remains highly inaccurate. Recent publications tell clearly that very little is known with high accuracy.
On the annual level the uncertainties are tens of percent of the average increase in atmospheric CO2, but over longer periods the relative uncertainties are reduced, as constraints based on knowledge of the carbon reservoirs gradually tighten. Absolute proofs may remain unachievable, but the reasons to believe that the overall picture is well understood are strong. That means in particular that the impulse response to a sudden pulse of CO2 to the atmosphere has been estimated with fairly good accuracy for periods up to 100-200 years and for levels of concentration within a factor of two of the present.
What has been published on the long-term response over hundreds of years is less convincing, as there are fewer constraints on it from the empirical data and as it remains dependent on the poorly known dynamics of the oceans. Similarly, arguments on the reduction in ocean uptake with increasing CO2 amounts are based on less well known phenomena.
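Pekka’s description of the Revelle factor can be put into numbers. A hedged sketch, taking R ≈ 10 as a typical modern surface-ocean value and using illustrative pCO2 and DIC figures (not measurements):

```python
# Revelle factor R relates a fractional change in surface-water pCO2 to
# the fractional change in total dissolved inorganic carbon (DIC):
#     R = (dpCO2 / pCO2) / (dDIC / DIC)
# R ~ 10 is a typical modern surface-ocean value; figures are illustrative.
R = 10.0
pco2_ppm = 390.0      # assumed surface-water pCO2
dic_umol_kg = 2000.0  # assumed total DIC

# A 10% rise in pCO2 therefore implies only ~1% more DIC stored:
d_pco2 = 0.10 * pco2_ppm
d_dic = (d_pco2 / pco2_ppm) / R * dic_umol_kg
print(f"pCO2 +{d_pco2:.0f} ppm -> DIC +{d_dic:.0f} umol/kg "
      f"({d_dic / dic_umol_kg:.1%})")
```

This is the buffering Pekka describes: because most dissolved carbon sits as bicarbonate, the ocean takes up proportionally much less extra carbon than the atmospheric pCO2 rise would naively suggest.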
Pekka Pirilä 8/25/11, 4:52 am CO2 residence time
Revelle and Suess (1957) postulated that a peculiar buffer mechanism existed in sea water that created a bottleneck against dissolution of CO2. It was not based on chemistry but was a mere assumption. R&S applied it to the annual amount of industrial CO2 added to the atmosphere, that is, the brunt of anthropogenic CO2. It was the conjecture needed to justify the Callendar effect, now AGW, by having ACO2 accumulate in the atmosphere. This was desirable in 1957 to support the Keeling Curve, just getting underway by Charles Keeling, Revelle’s protégé.
R&S found,
It seems therefore quite improbable that an increase in the atmospheric CO2 concentration of as much as 10% could have been caused by industrial fuel combustion during the past century, as Callendar’s statistical analyses indicate. It is possible, however, that such an increase could have taken place as the result of a combination with various other causes. The most obvious ones are the following:
1) Increase of the average ocean temperature of 1ºC increases PCO2 by about 7%. …
That increase due to temperature is a result of Henry’s Law, which was mentioned neither in R&S nor by IPCC in TAR or AR4.
The paper concluded,
Present data on the total amount of CO2 in the atmosphere, on the rates and mechanisms of CO2 exchange between the sea and the air and between the air and the soils, and on possible fluctuations in marine organic carbon, are insufficient to give an accurate base line for measurement of future changes in atmospheric CO2. An opportunity exists during the International Geophysical Year to obtain much of the necessary information.
R&S(1957) was actually a pitch for IGY funding.
R&S were unable to set the parameters for their buffer conjecture to satisfy their boundary conditions. When IPCC reported on attempts to measure the Revelle Buffer factor, it produced a graph showing the Revelle Buffer varied with temperature. AR4, Second Order Draft, Figure 7.3.10 (a). That graph was a linear transformation of Henry’s Law. Nicolas Gruber, a reviewer, said that the Revelle Buffer has almost no temperature sensitivity. Gruber’s distinction was the one made by R&S: the increase anticipated for CO2 could have been due to temperature because of the other cause, the one associated with Henry’s Law, item 1) above. The Revelle Buffer is no more and no less than a conjecture for the modification of Henry’s Law for the absorption of CO2. When IPCC attempted to resurrect the Revelle Buffer, it rediscovered Henry’s Law. IPCC’s editor wrote,
The buffer factor has a considerable T dependency (see Zeebe and Wolf-Gladrow, 2001). However, it is right that in the real ocean, this T dependency is overridden often by other processes such as pCO2 changes, TAlk changes and others. The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader. The text was changed. Bold added, 6/15/06.
For a complete discussion, see On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, … or CO2: “WHY ME” by following the link at my name in the header.
In other words, the Revelle Buffer is in dispute as to its temperature dependence, and when measured it looks like Henry’s Law rescaled. It can’t be pinned down because it is a phantom. Regardless, IPCC concealed these discrepancies in its final version of AR4.
While reasonable doubt exists as to the validity of the Revelle Buffer conjecture, a more important issue is that R&S and IPCC applied this conjecture to ACO2 and not to natural CO2. This failure to apply lacks any support in physics, and must be deemed an error. Moreover, the magnitude of that error is scaled by the ratio of natural CO2 outgassing from the ocean, about 90 GtC/yr, to ACO2 emissions, about 6 GtC/yr.
PP: The Revelle factor is a result of well known chemistry and is not in contradiction with Henry’s law, which applies to undissociated CO2 in seawater, not to the total solubility including bicarbonate ions, whose share is more than 99% of the total solubility. The pH of the oceans is really essential, and salinity has its effect through its influence on pH. The Revelle factor is not present if pH remains constant, but additional CO2 leads unavoidably to some decrease in pH, and the Revelle factor is a manifestation of this change.
(1) Notice that Pekka provides no citations. He recites from memory.
(2) On the solubility of gases in water, and including CO2, the Handbook of Chemistry and Physics, 72d Ed., says,
Solubilities of those gases which react with water, … carbon dioxide, … are recorded as bulk solubilities, i.e., all chemical species of the gas and its reaction products with water are included. P. 6-3.
This formulation of gas solubility, unaffected by the postmodern physics of AGW, contradicts Pekka Pirilä’s undissociated CO2 assertion.
(3) Pekka Pirilä could have found support for his undissociated model in Zeebe & Wolf-Gladrow’s Encyclopedia of Paleoclimatology and Ancient Environments, 2008, (available on line). However, Z&W-G define Henry’s Law only for thermodynamic equilibrium, and then in proportion to the sum of the concentrations of the two molecules of CO2_aq and H2CO3.
(3.1) In thermodynamic equilibrium, the ratios are known from the solution of the carbonate equations along with the stoichiometric equilibrium constants. The solutions are given by the Bjerrum plot. Wolf-Gladrow, D., CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 6/24/06 (available on line), taken in part from Zeebe and Wolf-Gladrow, 2001, IPCC’s source for carbonate chemistry (AR4, ¶7.3.4.1 Overview of the Ocean Carbon Cycle, p. 528, excluding the Bjerrum plot). Since for thermodynamic equilibrium all the reaction products are in a known proportion, changing from one species, e.g., undissociated CO2, to any mix of species is just a matter of a known scale factor.
(3.2) The surface layer of the ocean is quite turbulent, contains entrained air, and undergoes thermal exchanges with the atmosphere and the deep ocean. It is never in equilibrium, which Pekka Pirilä, IPCC, and others ignore. The undissociated form of Henry’s Law is not applicable to the real world.
(4) CO2 is highly soluble in any water, and dissolution always occurs. It proceeds instantaneously on weather-to-climate scales, accelerated further by wind. It does not wait, that is, it is not buffered, for the state of equilibrium to adjust. Solubility does not, in the first, second, or third order, depend on the chemical state of the water, meaning expressly either its pH, its alkalinity, or its DIC ratio, even though Henry’s coefficient might be estimated differently. That ratio is the partial pressure of CO2_g to the concentration of CO2, whether bulk, molecular, or some other mix, in the water. Pekka Pirilä’s phrase “Henry’s law, which applies to undissociated CO2 in seawater” would be correct if he meant that in some formulations, Henry’s law coefficient refers to the concentration of undissociated CO2 in seawater. As written it implies that dissolution is regulated by the concentration of undissociated CO2, which is false. The ratio of dissolution has no effect on the flux between CO2_g and CO2_aq.
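The Bjerrum speciation at issue in (3.1) can be checked with a toy calculation. This is an illustrative sketch only: the seawater dissociation constants pK1 ≈ 5.86 and pK2 ≈ 8.92 are assumed round values (they vary with temperature and salinity), not figures from the thread:

```python
import math

def carbonate_fractions(pH, pK1=5.86, pK2=8.92):
    """Equilibrium fractions of CO2(aq), HCO3-, and CO3-- in DIC.

    A single slice of a Bjerrum plot.  pK1 and pK2 are assumed
    seawater values; they vary with temperature and salinity.
    """
    h = 10.0 ** (-pH)          # hydrogen ion concentration
    K1 = 10.0 ** (-pK1)
    K2 = 10.0 ** (-pK2)
    denom = 1.0 + K1 / h + K1 * K2 / h**2
    f_co2 = 1.0 / denom                # undissociated CO2 (incl. H2CO3)
    f_hco3 = (K1 / h) / denom          # bicarbonate
    f_co3 = (K1 * K2 / h**2) / denom   # carbonate
    return f_co2, f_hco3, f_co3

f_co2, f_hco3, f_co3 = carbonate_fractions(8.1)
```

At surface-ocean pH near 8.1 this gives bicarbonate plus carbonate well over 99% of DIC, consistent with the share Pekka cites, with undissociated CO2 under 1%.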
PP: His criticism of “the Bern formula” is also erroneous and appears to tell of not understanding the basics of the approach. [¶] It’s unfortunate that I don’t know good comprehensive descriptions of all essential arguments.
Pekka Pirilä is undoubtedly skilled at relating physical processes to their algebraic representations. He is just not applying that skill, and as a result is reasoning incorrectly. His missing good comprehensive description is this. The Bern formula has four coefficients, a_i, which total to 1, and which together represent the mass of the pulse of CO2 put into the atmosphere. The Bern formula assigns values to those coefficients (21.7%, 25.9%, 33.8%, and 18.6%). That assignment is the algebraic equivalent of creating four reservoirs in the atmosphere to hold CO2 for the four processes. Those reservoirs do not exist. The Bern formula is incompetent.
PP: That means in particular that the impulse response to a sudden pulse of CO2 to the atmosphere has been estimated with fairly good accuracy for periods up to 100-200 years and for levels of concentration within a factor of two from the present.
Not by the Bern formula is it known! The Solubility Pump time constant is 1.186 years in the Bern formula, and that represents Henry’s Law uptake of CO2 from the atmosphere by dissolution. Two centuries is 168 time constants. That pulse will be reduced to 5.8E-74 in 200 years, known with fairly good accuracy.
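The two decay claims can be put side by side numerically. A sketch using the four coefficients quoted above and the 1.186-year time constant; the pairing of coefficients with time constants, and the two longer constants (~173 and ~18.5 years), follow the commonly cited Bern fit and are assumptions here, not values from the thread:

```python
import math

# Coefficients quoted in the thread (they sum to 1).  The pairing with
# time constants, and the two longer constants (~173 yr, ~18.5 yr),
# follow the commonly cited Bern fit and are assumed here.
A = [0.217, 0.259, 0.338, 0.186]
TAU = [float("inf"), 172.9, 18.51, 1.186]

def bern_fraction(t):
    """Fraction of an initial CO2 pulse remaining airborne after t years."""
    return sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

# Single-exponential view: only the fast 1.186-yr term survives,
# giving the ~5.8e-74 figure stated in the thread.
single = math.exp(-200.0 / 1.186)

# The full Bern sum retains the constant term and the slow terms.
multi = bern_fraction(200.0)
```

The disagreement is thus arithmetic in origin: the single fast term vanishes within decades, while the multi-term sum levels off near the a_i assigned to the slow processes.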
Puff, the Magic Dragon
Lived by the Sea.
===========
I have not studied how well Revelle and Suess succeeded in their analysis, as that’s of historical, not scientific, interest.
The phenomenon that is described by the Revelle factor is true. It’s not new speculation or a new hypothesis, but an inescapable consequence of chemistry, more specifically of the equations of chemical balance.
Pekka Pirilä 8/25/11, 12:57 pm CO2 residence time
What is significant is that the Revelle Factor or Revelle Buffer was (a) not based as you claimed on chemistry and (b) never worked, not originally in 1957 and not when IPCC tried to resurrect it.
Do you have any references substantiating your claims (a) that the Revelle Buffer is true and (b) that it is based on chemistry?
It’s solid science. Your claims are totally without merit, as is your insistence on single exponentials both for the removal of CO2 and the transmission of radiation.
I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.
Pekka Pirilä 8/25/11, 2:49 pm CO2 residence time
PP: It’s solid science. Your claims are totally without merit, as is your insistence on single exponentials both for the removal of CO2 and the transmission of radiation.
I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.
When you make claims like these, sans references and inaccurate, you are flying by the seat of your pants. This is not science, and you are not participating in a serious scientific dialog. Moreover, you manage to draw conclusions (e.g., nonsense) from this muck of errors.
JAC: The goal of the blog is to discuss scientifically relevant issues, which we are doing. curryja 4/25/11, 9:53 am, CO2 residence time
I doubt she meant the goal is to discuss scientifically relevant issues subjectively, off the top of the head, shooting from the hip, though that’s too often the result.
1. I said that the solution to IPCC’s residence time formula in the AR4 Glossary was one exponential. 8/24/11, 9:48 pm. If you have a mathematics reference that says it is otherwise, please provide it. It must be postmodern math.
2. I never wrote anything so silly as radiation transmission could be represented by a single exponential. I did write that radiation absorption was represented by a single exponential, and showed you how.
3. Nearly a year ago, I gave you a derivation of the Beer half of the Beer Lambert law resulting in a single exponential. Radiative transfer discussion, 12/23/10, 3:40 pm. You responded,
PP: Your derivation is mathematically the same one that everyone gives [not one citation], but it applies only to monochromatic radiation [no citation]. It does not help that you state that you do not make that assumption, as it is hidden in your derivation as well [no citation] – unless you claim that all wavelengths are absorbed equally strongly. Bold added, id., 4:48 pm.
In the ensuing dialog, I asked you to explain how that assumption was hidden in my derivation. Id., 5:35 pm. You failed to do so.
You did agree that a single exponential was appropriate for radiation absorption. But as shown in this quotation, you continued to say incorrectly that it was valid only for a single wavelength.
4. Previously, I had written to you
The Beer-Lambert Law applies an empirical coefficient. Precisely speaking, it applies to a complex spectrum of light for which all the spectral components have the same empirical coefficient, and not restricted just to the same frequency. What we agree(?) on thread, 5/27/11, 10:35 am.
Later, I criticized the Georgia Tech course EAS 8803 for representing the Beer Lambert law incorrectly as monochrome, explaining that it was unnecessarily restrictive. Planetary energy balance, 8/20/11, 2:47 pm. Your response was this reversal of your position:
PP: Unfortunately for your case the Beer-Lambert law is valid only, if the absorption coefficient is the same for all wavelengths present, and that requirement is not satisfied in any general case. How serious the error is depends on the case, but it may be very large, as it indeed is in the Earth atmosphere. Id., 3:00 pm.
Not only did you, once again, supply no citations, but you failed to acknowledge my intervening teachings asserting that same position to you. You write as if you were correcting me when, as your writings make plain, it is you who has been corrected.
You are learning, but it’s hard to dig out from your citation-free, zigzag posts. This seat-of-the-pants style from you and others here is a major cause of the failure of the topics on this blog to converge as Dr. Curry seems to wish.
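The mathematical point in dispute between the two parties is, in any case, directly testable. A toy sketch with two assumed absorption coefficients: each monochromatic band individually obeys Beer-Lambert, and the code checks whether their sum also decays as a single exponential:

```python
import math

def transmitted(z, bands):
    """Total transmitted intensity at path length z for a mix of bands.

    bands: list of (initial_intensity, absorption_coefficient) pairs.
    Each band individually obeys Beer-Lambert: I = I0 * exp(-alpha * z).
    """
    return sum(i0 * math.exp(-a * z) for i0, a in bands)

# Two equally strong bands with assumed coefficients 1 and 5
# (arbitrary units, chosen only for illustration).
bands = [(0.5, 1.0), (0.5, 5.0)]

# For a single exponential, the decay ratio over equal steps is constant.
r1 = transmitted(1.0, bands) / transmitted(0.0, bands)
r2 = transmitted(2.0, bands) / transmitted(1.0, bands)
```

With equal coefficients the ratios coincide and the total is one exponential; with unequal coefficients r1 and r2 differ, so the mixture is not a single exponential, which is the crux of the exchange above.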
My comments are based on the fact that “you showed” something that’s mathematically absolutely wrong.
For the removal of CO2 you presented a very similar error in most basic mathematics.
In this case you don’t accept that the existence of the Revelle factor is based on very basic chemistry. You stated also that pH doesn’t matter, although it directly determines the most important factor that influences the total solubility of CO2 in seawater, the value of Dissolved Inorganic Carbon (DIC). DIC is the value of interest for carbon storage, as its quantity is more than 100 times larger than the solubility of CO2 as gas.
I have written myself that the Revelle factor is not easy to determine accurately, because it depends on the buffering of the seawater. It’s, however, known to be of the order of 10 and thus very important.
Your continuing dismissal of simple mathematics as well as most basic and most reliably known physics and chemistry is still incomprehensible to me. Scientific references are not needed, when the conclusions are based directly on knowledge available in every basic textbook of physics or chemistry, or on elementary properties of exponential functions and when these arguments are given in full.
I have presented my arguments in sufficient detail and many of them several times also in my answers to your comments. If you are in denial mode, the amount of detail doesn’t make any difference. The validity of your claim that I don’t participate in scientific discussion can be judged from my various comments on this site. Sometimes the discussion has proceeded to the point, where my comments become more blunt.
Pekka Pirilä, 5/27/11, 9:27 am, CO2 discussion
Your reply is as empty as your citation list. What error in mathematics are you talking about?
PP: I have written myself that the Revelle factor is not easy to determine accurately, because it depends on the buffering of the seawater. It’s, however, known to be of the order of 10 and thus very important.
You claim! Where is your reference? Who got to read it? We need to see it because what you describe here doesn’t make sense. The Revelle Factor doesn’t depend on the buffering of seawater – it IS the buffering of seawater. Revelle and Suess (1957) suggested that it might be of the order of 10, under equilibrium conditions at constant alkalinity. P. 22. IPCC measured it between 8 and 13 (AR4, ¶7.3.4.2, p. 531), another large range, and in the open ocean (neither constant alkalinity nor equilibrium). IPCC measured it with a disputed, temperature-dependent co-parameter, which it concealed so as not to confuse the reader.
You claim the Revelle factor is … very important, but give no reference to where it was ever used, or to what the result of its application might have been. The RF was discussed in IPCC Reports, but never used, and its effects contradicted, as I cited previously for you.
You can easily find thousands of references related to the Revelle factor.
One example of lecture material explaining it is here.
You did not merely criticize the Georgia Tech course with the irrelevant comment that the Beer-Lambert law is also valid for multichromatic radiation when the absorption coefficients happen to be the same; you clearly implied that this trivial extension would essentially change the outcome. Furthermore, you claimed that changes to the presentation of the Beer-Lambert law were made to justify a somehow suspect conclusion on the behavior of the atmosphere. This proves that your point was not only irrelevant sophistry, but that you really tried to perpetuate wrong conclusions.
If you are now willing to admit that the Beer-Lambert law does not at all work for the total IR radiation in atmosphere, that’s fine. Is that your present view?
The controversy on the Bern model is very similar in its nature. You try to discredit it based on false arguments.
Mr. Pekka Pirilä, you seem to have all the answers to the infinitely complex questions dealing with the workings of planet Earth. You were even quick to model a relativity experiment using ‘small balls’. Earth’s rotational speed was put forward by you as a difficult mathematical problem that would need much thought…
I suggested Peter’s model out of hand, using the Bible as a possible proxy for the Earth’s rotational speed for your experiment, at 4.167 rps…
—————————————————————————————
“David Wojick, Perhaps the fisherman Peter, was a scientist too.
Yesterday…
Pekka Pirilä | August 20, 2011 at 10:56 am | Reply
Using scale models is a well known and useful tool in engineering. The models never agree fully with the real system, but in many cases it’s possible to construct models that behave in a very similar way. That always requires that several different properties are matched. In fluid dynamics the Reynolds number is usually the most important parameter that must have the same value in the model as in the real system. The Prandtl number is another common important parameter.
In the example described by DocMartyn some critical numbers must also be matched to get results of any significance. Without going into the details, it appears totally clear that the small balls must be made to turn much faster than the Earth turns. I cannot tell what would be the most appropriate length of day, but it might well be closer to 1 min than to 24 hours.
Such an experiment tells practically nothing unless it’s supported by a careful theoretical analysis that tells the right combination of parameters.
Tom | August 20, 2011 at 12:04 pm | Reply
What if?…
II Peter 3:8 But you must not forget this one thing, dear friends: A day is
like a thousand years to the Lord, and a thousand years is like a day.
1000 years, times 360 days= 360,000 night-day cycles (NDC)
Divided by 24 heavenly hours= 15,000 NDC per HH
Divided by 60 heavenly minutes=250 NDC per HM
Divided by 60 heavenly seconds=4.167 NDC per HS, as revolutions per second for your model?
David, what do you think of the model Peter gives us, above? Is my math sound? Would this rotation speed work on the ‘small balls’ representing the Earth? Even if we don’t have all the answers, it is helpful to know where to look for the answer.”
——————————————————————————–
Pekka Pirilä, what do you say? If this number works for you, what would that mean for our relative ‘size’ to the observer? Fun, to think outside the box.
I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.
Of these two categories, I would put Jeff’s point that “The residence time of CO2 in the atmosphere is taught in 11th year public school physics as the leaky bucket problem” in the latter. Although my high school physics didn’t get into atmospheric physics, I consider the leaky bucket analogy a helpful one for understanding what would happen if total CO2 emissions were to drop by more than 4 GtC/yr. I replied to Jeff’s dragon-part-iv post just now at
http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-106478
I’ll reserve judgement on the rest of Jeff’s recent posts, there’s only so many arguments one can usefully engage with at one time.
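The leaky-bucket analogy mentioned above can be sketched as a one-box model: a constant inflow of emissions and an outflow proportional to the excess above a baseline. All numbers below (adjustment time, initial excess, emission rates) are assumed round values for illustration, not figures from the thread:

```python
# One-box "leaky bucket" sketch: excess atmospheric carbon X (GtC) with
# constant emissions E (GtC/yr) and a linear sink X / tau.  E, tau, and
# X0 are assumed round numbers for illustration only.
def simulate(E, tau, X0, years, dt=0.01):
    """Euler-integrate dX/dt = E - X/tau and return the final excess."""
    X = X0
    for _ in range(round(years / dt)):
        X += (E - X / tau) * dt
    return X

tau = 50.0    # assumed adjustment time, years
X0 = 240.0    # assumed current excess above the baseline, GtC

# Hold emissions at an assumed 10 GtC/yr, versus cutting them by 4 GtC/yr
# (the drop discussed in the comment above).
X_hold = simulate(E=10.0, tau=tau, X0=X0, years=100)
X_cut = simulate(E=6.0, tau=tau, X0=X0, years=100)
```

In this toy model the excess relaxes toward the equilibrium level E * tau, so a sustained 4 GtC/yr cut lowers the eventual excess by 4 * tau = 200 GtC, the bucket analog of the residence-time question.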
Pekka Pirilä 8/25/11, 12:57 pm CO2 residence time
A PS to my last:
I have just re-read AR4 ¶7.3.4.2 on the Revelle Factor. It concludes nothing relevant. The section has a beautiful diagram of the distribution of the measured RF, Figure 7.11, but it’s half a measurement: the old part (a), showing its disputed temperature dependence, is gone. The conclusion to Section 7.3.4.2 Ocean Carbon Cycle Processes and Feedbacks to Climate includes this:
Future changes in ocean circulation and density stratification are still highly uncertain. Both the physical uptake of CO2 by the ocean and changes in the biological cycling of carbon depend on these factors.
Thus the physical uptake of CO2 by the ocean is still highly uncertain, analysis of the Revelle Factor notwithstanding. The RF got IPCC nowhere. It’s past time for it to learn about Henry’s Law, the concealed chart, and to apply it instantaneously to the surface ocean.
And of course you have read and thoroughly comprehended all of the references on that page as well. Right?
Some of the required reading from just that sub-section of a section of Chapter 7 of AR4 is:
That’s just from the first two paragraphs of 7.3.4.1.
You whine like a child that Pekka won’t get into scientific details with you, but I don’t believe you have read any real climate science beyond the IPCC’s summary. For the rest of your “information” you’re just regurgitating what you’ve passively absorbed from climate science denial blogs like wuwt, climateoddity and, in at least 50% of its content, climateetc. But if I’m wrong about that, if you have any original thoughts on the matter, then you can pick the one paper cited in section 7.3.4 of AR4 most relevant to your claims about ocean chemistry, and you can tell us
(1) exactly what it gets wrong
&
(2) what’s the right answer.
That is, if you’re at all serious. Otherwise, just continue the hand-wavy nonsense, and keep pretending that it’s Dr. Pirilä’s burden to explain to you the chemistry of buffers to your satisfaction, as if your failure to understand and then apply first-year college chemistry makes all the legitimate scientists wrong.
settledscience, 8/28/11, 1:12 pm, CO2 discussion
∅: And of course you have read and thoroughly comprehended all of the references on that page as well. Right?
Who are you who names himself “settledscience”, the empty set, ∅, and why do you ask unimportant questions? Are you taking a poll, writing a term paper for summer school?
If you ever get into science, you’ll find that when researching the accuracy of a paper, reading every citation is not required.
For your term paper, the answer in general to your question is “no”, because
(1) most of IPCC’s references are behind a paywall,
(2) many have proved irrelevant to IPCC’s claim for them, and
(3) citing references without quotations violates a basic principle for writing scientific papers, one that provides that references should be for the purposes of validating claims and not a divergence for the reader to validate the paper through independent, collateral research.
By contrast, note I made several references to ¶7.3.4.2 for propositions stated. If you want to participate in a scientific dialog instead of just adding noise, check that reference for yourself and raise any errors you might perceive.
∅: Some of the required reading from just that sub-section of a section of Chapter 7 of AR4 is: [quotations] That’s just from the first two paragraphs of 7.3.4.1.
How might you know what is or is not required on a subject for which you have expressed and can express no opinion?
You are not on the same page with yourself.
Those two paragraphs you cite contain the following nine references. A little analysis provides a nice object lesson in how IPCC abuses referencing.
1. Degens, E.T., S. Kempe, and A. Spitzy, 1984: Carbon dioxide: A biogeochemical portrait. In: The Handbook of Environmental Chemistry [Hutzinger, O. (ed.)]. Vol. 1, Part C, Springer-Verlag, Berlin, Heidelberg, pp. 127–215. [$249.50; 10 pdf chapters at 24.95 per chapter.]
2. Eglinton, T.I., and D.J. Repeta, 2004, Organic matter in the contemporary ocean. In: Treatise on Geochemistry [Holland, H.D., and K.K. Turekian (eds.)]. Volume 6, The Oceans and Marine Geochemistry, Elsevier Pergamon, Amsterdam, pp. 145–180. [$107]
3. Falkowski, P., et al., 2000: The global carbon cycle: A test of our knowledge of Earth as a system. Science, 290(5490), 291–296. [$15]
4. Hansell, D.A., and C.A. Carlson, 1998: Deep-ocean gradients in the concentration of dissolved organic carbon. Nature, 395, 263–266. [$32]
5. Nightingale, P.D., et al., 2000: In situ evaluation of air-sea gas exchange parameterisations using novel conservative and volatile tracers. Global Biogeochem. Cycles, 14(1), 373–387. [Downloaded, superseded and not read.]
6. Royal Society, 2005: Ocean Acidification Due to Increasing Atmospheric Carbon Dioxide. Policy document 12/05, June 2005, The Royal Society, London, 60 pp., http://www.royalsoc.ac.uk/document.asp?tip=0&id=3249. [Downloaded; acidification wrongly considered real to rely on Bjerrum solution.]
7. Sarmiento, J.L., and N. Gruber, 2006: Ocean Biogeochemical Dynamics. Princeton University Press, Princeton, NJ, 503 pp. [$75]
8. Wanninkhof, R., and W.R. McGillis, 1999: A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13), 1889–1892. [Downloaded & reviewed. Relies on pCO2 from Takahashi, which produces incorrect result.]
9. Zeebe, R.E., and D. Wolf-Gladrow, 2001: CO2 in Seawater: Equilibrium, Kinetics, Isotopes. Elsevier Oceanography Series 65, Elsevier, Amsterdam, 346 pp. [$101].
Are you pretending to have read these?
Six of your own citations are behind a paywall: purchase price for all six is $587.50, which would have to be paid in the blind in the hope of answering an impudent, incompetent, and irrelevant inquiry. Major bits of reference 9 are available in a separate publication by Dieter Wolf-Gladrow, which I have read and on occasion cited. IPCC should be required under the US and UK Freedom of Information Acts to provide accessible, text readable copies of all its citations.
The other three I have downloaded. Two are incompetent, one relying on the Takahashi method, which produces an incorrect result (see Takahashi method, rocketscientistsjournal.com, On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc., ¶5), and the other relying on the Bjerrum solution for a surface layer in equilibrium, which does not exist (see id). The third I have not read because IPCC deemed its results contradicted by ref. 9.
∅: I don’t believe you have read any real climate science beyond the IPCC’s summary. … But if I’m wrong about that, if you have any original thoughts on the matter, then you can pick the one paper cited in section 7.3.4 of AR4 most relevant to your claims about ocean chemistry, and you can tell us (1) exactly what it gets wrong & (2) what’s the right answer.
1.) Why do you imagine anyone cares about your opinion?
2.) Yes, you are wrong about that.
3.) If any serious reader, or as improbable as it might be, yourself, is interested, I have, above, provided sufficient reasons with links to complete answers. I have no reason or desire to restrict myself to “one paper” as you ask. Short answers include
(a) the surface layer is not in equilibrium, so the Bjerrum solution to the carbonate chemical equations for equilibrium is not applicable,
(b) the Takahashi diagram provides a small fraction of the CO2 flux reported elsewhere by IPCC, so gets the carbon cycle wrong,
(c) IPCC wrongly applies the imaginary CO2 buffer to ACO2 and not to natural CO2, a physical impossibility, and
(d) the right answer is that the surface layer stores excess CO2_aq instead of the atmosphere storing excess CO2_g. And of course none of it is relevant to the basic climate question because Earth’s surface temperature follows the Sun with a simple transfer function.
∅: Otherwise, just continue the hand-wavy nonsense, and keep pretending that it’s Dr. Pirilä’s burden to explain to you the chemistry of buffers to your satisfaction, as if your failure to understand and then apply first-year college chemistry makes all the legitimate scientists wrong.
I was waving bye-bye to ∅.
I pretend nothing. Dr. Pirilä is quite wrong about the Revelle Factor, and most recently about climate science invalidating the Beer-Lambert Law. He is stuck supporting AGW, a belief system, thereby inheriting all IPCC’s many errors, and leaving himself to argue sans references.
Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.
And of course, now that I know that Jeff denies the Settled Science that is the Greenhouse Effect, I need not bother speaking to it ever again.
Jeff it appears does not believe that excess CO2 actually exists in the atmosphere. I say this because the title of the article he cites is called:
“On Why CO2 Is Known Not To Have Accumulated in the Atmosphere”.
Do I have that interpretation right?
settledscience bloviates,
“Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.”
Here is your chance to be the big man on campus. Simply link the appropriate studies and/or papers from the literature to prove your point.
I would suggest you reread Mr. Glassman’s post so that you actually understand what he is saying also.
settledscience 9/5/11, 7:47 pm, CO2 residence time
∅ [empty set]: Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.
I suppose you don’t care about self-respect judging by your anonymity. But if you have any, and don’t want any longer to be seen as bloviat[ing] [thanks to kuhnkat, 9.8.11, 3:42 pm], here’s what you need to do in trying to participate in an intelligent discussion, to say nothing of science.
(1) Make a point, e.g., describe something you think is not in accord with original research.
(2) State your point completely, quoting from any sources as necessary. Don’t make the reader do your research for you.
(3) Provide references, never behind a pay wall, so the reader can check your interpretation of the source.
Otherwise, you come off as noise, and as a sycophant — someone who is trying to gain respect by association, in this case, the AGW movement.
WebHubTelescope 9/5/11, 10:45 pm, CO2 residence time
WHT: Jeff it appears does not believe that excess CO2 actually exists in the atmosphere. I say this because the title of the article he cites is called:
“On Why CO2 Is Known Not To Have Accumulated in the Atmosphere”.
Do I have that interpretation right?
No.
Johnny Carson had a routine called the Great Carsoni in which he would conjure the contents of a sealed envelope by pressing it to his turbaned head. It isn’t working for you. Why don’t you go beyond the title and actually read the article?
The article answers the question of why the CO2 that IS in the atmosphere is not an accumulated backlog. It begins,
The Acquittal [of Carbon Dioxide] shows that carbon dioxide did not accumulate in the atmosphere during the paleo era of the Vostok ice cores. If it had, the fit of the complement of the solubility curve might have been improved by the addition of a constant. It was not. And because the CO2 presumably still follows the complement of the solubility curve, it should be increasing during the modern era of global warming in recovery from Earth’s various ice epochs. These conclusions find support in a number of points in the IPCC reports.
The remainder of the paper contains 18 enumerated reasons, in gory detail, including tangential considerations of the fact that CO2 does not accumulate. Read on, see if you disagree with any, and then follow my advice to skepticalscience on 9/8/11, 6:31 pm.
That’s all I do is original research. If it’s not original it’s not challenging and therefore not much fun.
WebHubTelescope 9/9/11, 3:16 am, CO2 Residence Time
WHT: That’s all I do is original research.
You answer criticism of your empty post with another empty post. Besides, you contradict yourself. It’s not all you do – you also submit empty posts.
Even if what you claim were true, that you do original research, how is that relevant to anything here? History is littered with respected scientists, much less anonymous posters, whose pottery is thoroughly cracked (crazed) outside their narrow confines. Climate Etc. is surfeited with contributors, some who use a real name with doctor attached, who take the existence of AGW as a matter of belief and defend it tooth and nail, of course, like WHT, liberally spiced with ad hominem and off-point remarks, sans references.
thanks for your support.
Jeff, sorry for the late contribution. Wasn’t aware of this discussion during my absence, but here is a nice empiric proof of the Revelle factor. Total carbon (DIC) is measured in seawater at several places, the longest series are in Hawaii and Bermuda. The latter can be found here:
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
The period displayed is 20 years, 1983-2003.
The atmospheric CO2 levels in that period increased about 10%, the pCO2 (that is, free dissolved CO2) of the oceans increased about 8% (which obeys Henry’s Law, with some delay), but DIC increased by only 0.8%, about a factor of 10 less than the free CO2 in the oceans, even more compared to atmospheric CO2.
As the total amount of carbon in the atmosphere is about 800 GtC and in the ocean’s surface about 1000 GtC, it is obvious that a 10% increase of CO2 in the atmosphere only gives less than a 1% increase in total carbon in the ocean’s surface.
The CO2 exchange between the atmosphere and the upper part of the oceans is very fast (e-folding time about 1.5 years) but is limited to not more than 10% of any change in the atmosphere. The rest of the ~50% of the human emissions that disappears (as mass, not as individual molecules) is absorbed by much slower sinks in other reservoirs (deep oceans, more permanent storage in the biosphere).
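The buffer-factor arithmetic implied by these percentages can be sketched in a few lines; this is illustrative arithmetic on the round numbers quoted above (10%, 8%, 0.8%), not a computation on the actual BATS data:

```python
# Buffer-factor arithmetic from the approximate 1983-2003 changes quoted
# above: fractional change in seawater pCO2 divided by the fractional
# change in DIC.  Illustrative round numbers, not the actual BATS data.
d_pco2_aq = 0.08    # ~8% rise in pCO2(aq), the free dissolved CO2
d_dic = 0.008       # ~0.8% rise in dissolved inorganic carbon (DIC)

revelle = d_pco2_aq / d_dic
print(f"buffer factor ~ {revelle:.0f}")   # ~10, inside the 8-16 range
```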
Ferdinand Engelbeen 9/9/11, 4:57 pm, CO2 Residence Time
Your calculation about a factor of 10 for the Revelle Factor is a single calculation for a parameter that IPCC says ranges from 8 to 16 over the ocean. AR4, Figure 7.11(b).
http://www.rocketscientistsjournal.com/2007/06/_res/F7-11.jpg
Your link is to an anonymous article from the Bermuda Institute of Ocean Sciences (BIOS) that features a chart prepared expressly for AR4. IPCC used that chart in part, but rejected it in relevant part. See AR4, Figure 5.9, p. 404. IPCC kept data from BATS and ESTOC, added data from HOT, but rejected data from ALOHA and Hydrostation S. More importantly, IPCC rejected the BIOS curve of normalized DIC, the parameter on which you rely for your calculation.
You have neither proof nor evidence of the Revelle factor. BIOS does not mention the Revelle factor. Where IPCC discussed the Revelle factor (AR4, ¶7.3.4, pp. 528 et seq.), it did not mention the BIOS data.
The most complete picture of the Revelle factor appeared in AR4 Second Order draft as Figure 7.3.10.
http://www.rocketscientistsjournal.com/2007/06/_res/F7-3-10.jpg
The global map of the Revelle factor IPCC attributes to Sabine et al. (2004a), co-authored by Nicolas Gruber. The Revelle factor variation with temperature (Figure 7.3.10a), attributed to Zeebe and Wolf-Gladrow (2001) ($101), is a simple linear transformation of the solubility curve. Gruber objected, and his objection resulted in concealment of the temperature sensitivity curve:
Comment: “buffer factor decreases with rising seawater temperature…” This is a common misconception. The buffer factor itself has almost no temperature sensitivity (in an isochemical situation). In contrast, the buffer factor strongly depends on the DIC to Alk ratio. The reason why there is an apparent temperature sensitivity is because of the temperature dependent solubility of total DIC (note that (a) is not isochemical, it is done with a constant pCO2, i.e. DIC will decrease with increasing temperature). In the ocean, surface ocean DIC and Alk are controlled by a myriad of processes, including temperature, so it is wrong to suggest that the spatial distribution of the buffer factor shown in Figure 7.3.10c is driven by temperature . [Nicolas Gruber (Reviewer’s comment ID #: 307-70)]
[Editor’s] Notes: Taken into account. The buffer factor has a considerable T dependency (see Zeebe and Wolf-Gladrow, 2001). However, it is right that in the real ocean, this T dependency is overridden often by other processes such as pCO2 changes, TAlk changes and others. The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader. The text was changed. Bold added, AR4, Second-Order Draft.
The Revelle Buffer was a conjecture, a mere relationship between anthropogenic CO2 parameters in the ocean and in the atmosphere. The authors, Revelle & Suess (1957), wanted to validate the Callendar Effect, a conjecture that manmade CO2 would build up in the atmosphere and cause global warming. To do so, they expressed the ratio of CO2 from fossil fuels going into the atmosphere, r, to that going into the ocean, s, as a constant factor, λ, times the ratio of total carbon in the atmosphere, A_0, to that in the ocean at equilibrium, S_0: r/s = λ*A_0/S_0. The factor λ is the Revelle buffer factor, and they guessed it might be about 10. However, they were unable to find a realistic set of parameters to produce a factor of 10, and left the problem as one for IGY funding.
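As a sketch of what the formula yields, one can solve r/s = λ*A_0/S_0 for λ using round figures that appear elsewhere in this thread (about half of emissions remaining airborne, so r/s ≈ 1; A_0 ≈ 800 GtC; S_0 ≈ 1000 GtC for the ocean surface layer). These inputs are illustrative assumptions only, and the result swings widely depending on whether S_0 is taken as the surface layer or the whole ocean:

```python
def revelle_lambda(r_over_s, a0_gtc, s0_gtc):
    """Solve r/s = lambda * A0/S0 for lambda (Revelle & Suess, 1957)."""
    return r_over_s / (a0_gtc / s0_gtc)

# Illustrative inputs: r/s ~ 1 (about half of fossil CO2 staying in the
# atmosphere), A0 ~ 800 GtC (atmosphere), S0 ~ 1000 GtC (ocean surface
# layer), per the round figures quoted earlier in this thread.
lam = revelle_lambda(1.0, 800.0, 1000.0)
print(f"lambda ~ {lam:.2f}")
```

With the surface-layer figure, λ comes out near 1.25, far from the guessed 10, consistent with the observation above that no realistic parameter set produced a factor of 10.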
IPCC tried to rehabilitate the buffer for the same reason. If the ratio proved to be even approximately a constant over the ocean, meaning that r and s were correlated, then a physicist might discover a cause and effect relationship. The ratio is not even approximately constant, but varies by ±50%. Furthermore, one set of experts, Zeebe & Wolf-Gladrow, say the buffer is no different than solubility, while another, Gruber, denies it.
The Revelle factor is useless. All sorts of ratios can be postulated, and some might someday be meaningful. The Revelle buffer factor is not one of them.
The analyses from Revelle & Suess; Sabine, et al; Gruber; IPCC, Zeebe & Wolf-Gladrow, and BIOS all rely on thermodynamic equilibrium in the surface layer. Surface layer equilibrium is ludicrous on its face. The relationship between the carbonate components when not in equilibrium is unknown.
Lastly, although this is not the end of the nonsense, the idea that the ocean buffers against dissolution of anthropogenic CO2 at 6 GtC/yr but not at all against natural CO2 at 90 GtC/yr is equally ludicrous.
The ocean absorbs CO2 according to Henry’s Law. The absorption depends on the partial pressure of CO2_g and SST, and somewhat on salinity, according to an unquantified solubility curve. It does not depend on surface layer pH or ionization, and it does not accumulate in the atmosphere.
Jeff, that is a long exposition. It seems that there still is a lot of discussion about the exact height of the Revelle factor. But that is not the point in discussion (it doesn’t matter if the factor is 8 or 16, it matters that there is a factor).
I haven’t looked at the AR4 references yet, but regardless of the reasons why some results were rejected, nDIC or DIC without normalizing is measured in many places and all show a much slower increase than pCO2(aq) or pCO2(atm). According to you that should be impossible, as DIC should increase at the same rate as pCO2 in air and water…
But the main points of discussion are in your last paragraphs:
Lastly, although this is not the end of the nonsense, the idea that the ocean buffers against dissolution of anthropogenic CO2 at 6 GtC/yr but not at all against natural CO2 at 90 GtC/yr is equally ludicrous.
You are misinterpreting the facts: The 90 GtC/yr is the effect of temperature on CO2 solubility: partly causing continuous CO2 releases from the tropic oceans (mainly the Pacific deep ocean upwelling) and continuous CO2 sinks near the poles (mainly the THC sink in the NE Atlantic); partly the warming and cooling of the mid-latitude oceans over the seasons. The net effect of this is zero CO2 change over a full seasonal cycle at zero average temperature change over a year (16 microatm in the water or ~16 ppmv in the atmosphere for 1°C temperature change).
In contrast, any increase in the atmosphere pushes more CO2, on average, into the ocean surface at the same temperature. That is where and when the Revelle factor is working.
The ocean absorbs CO2 according to Henry’s Law. The absorption depends on the partial pressure of CO2_g and SST, and somewhat on salinity, according to an unquantified solubility curve.
That is right and wrong: the ocean absorbs CO2 according to Henry’s Law. But Henry’s Law is only about free CO2 in the waters. For a 10% increase of CO2 in the atmosphere at the same temperature, the amount of free CO2 in the ocean surface waters will increase by 10%. If there were no following reactions, that increase of free CO2 would increase total carbon in solution by only 0.1%, as free CO2 is less than 1% of total carbon in solution.
But there are following reactions. Free CO2 is converted into carbonic acid that dissociates into bicarbonate and carbonate ions and hydrogen ions. Thus if you add more CO2 to the ocean waters, the pH lowers. But a lower pH means that the dissociation reaction runs the other way, back to bicarbonate and free CO2. In other words, the amount of total CO2 in solution increases beyond the 0.1% from free CO2 alone, because of the further dissociation reactions, but not to the full extent of the increase in the atmosphere, because of the counteraction caused by the lowering of the pH. See further:
http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
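This buffering argument can be made quantitative with a toy carbonate-system calculation: hold total alkalinity fixed, impose a Henry’s-law free-CO2 concentration, solve for [H+], and sum the species to get DIC. The dissociation constants, alkalinity, and free-CO2 value below are rough illustrative numbers for 25°C seawater, assumed for the sketch and not taken from any of the sources cited here:

```python
import math

# Toy seawater carbonate system at fixed total alkalinity (TA).
# Constants are rough 25 degC seawater values, illustration only
# (pK1* ~ 5.86, pK2* ~ 8.92, TA ~ 2300 umol/kg -- all assumptions).
K1 = 10.0 ** -5.86    # first dissociation constant of carbonic acid
K2 = 10.0 ** -8.92    # second dissociation constant
TA = 2300e-6          # carbonate alkalinity, mol/kg

def dic_from_co2(co2):
    """DIC in equilibrium with a given free-CO2 concentration [CO2(aq)].

    TA ~ [HCO3-] + 2[CO3--] = K1*co2/h + 2*K1*K2*co2/h**2 is quadratic
    in x = 1/h; take the positive root, then sum the three species.
    """
    a, b, c = 2.0 * K1 * K2 * co2, K1 * co2, -TA
    x = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    h = 1.0 / x
    return co2 * (1.0 + K1 / h + K1 * K2 / h**2)

co2_0 = 12e-6           # ~12 umol/kg free CO2 (Henry's law, ~400 uatm)
co2_1 = co2_0 * 1.10    # a 10% rise in atmospheric (hence free) CO2

dic_0, dic_1 = dic_from_co2(co2_0), dic_from_co2(co2_1)
rise = (dic_1 - dic_0) / dic_0
print(f"DIC rises {rise:.1%} for a 10% CO2 rise; "
      f"buffer factor ~ {0.10 / rise:.0f}")
```

With these toy constants a 10% rise in free CO2 raises DIC by only about 0.8%, a buffer factor of roughly 13, inside the 8 to 16 range discussed elsewhere in this thread.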
It does not depend on surface layer pH or ionization
Simple proof that you are wrong: make a solution of soda or bicarbonate and add some acetic acid. Lots of CO2 bubbles up, because the amount of free CO2 in the solution gets far beyond Henry’s Law “normal” concentration.
it does not accumulate in the atmosphere
The increase in the atmosphere is measured, the human emissions are double that. But if you assume that some natural cause follows the human emissions at such an incredible fixed ratio, but the human emissions disappear somewhere without leaving a trace in the atmosphere, that is your opinion, not based on any observed facts.
Ferdinand Engelbeen, 9/10/11, 7:06 pm, CO2 Residence Time
Did I say welcome back to the discussions? Well, welcome back.
FE: It seems that there still is a lot of discussion about the exact height of the Revelle factor. But that is not the point in discussion (it doesn’t matter if the factor is 8 or 16, it matters that there is a factor)
One could make a ratio out of any two random variables, but the fact that dividing one by another produces a number is meaningless. Sometimes RVs will appear correlated suggesting to an investigator that measuring their correlation might be fruitful. That is not the case with the fossil fuel CO2 emissions because they cannot be measured separately from the natural CO2. Of course, if anthropogenic CO2 were directly measurable, no one would need the Revelle factor.
FE: nDIC or DIC without normalizing is measured in many places and all show a much slower increase than pCO2(aq) or pCO2(atm).
and previously,
FE: The atmospheric CO2 levels in that period increased about 10%, the pCO2 (that is, free dissolved CO2) of the oceans increased about 8% (which obeys Henry’s Law, with some delay), but DIC increased by only 0.8%, about a factor of 10 less than the free CO2 in the oceans, even more compared to atmospheric CO2. 9/9/11, 4:57 pm.
The parameter pCO2(aq), which I assume is the same as your pCO2 … of the oceans, doesn’t exist. CO2 has no partial pressure when dissolved. Instead, pCO2(g), g for the gas in thermodynamic equilibrium with the water, is deemed to be pCO2(aq). I have not reviewed the measurements or methods by which investigators have estimated pCO2(aq). I do know, however, that they report a difference between pCO2(atm) and pCO2(aq), and that that difference, coupled with wind speed, has been taken as the cause of the air-sea flux of CO2. Takahashi used these data to produce his map of CO2 flux across the globe. Unfortunately, the sum of his positive and negative fluxes is about an order of magnitude less than the positive and negative fluxes estimated by other means and reported by IPCC and others. For a full discussion, see ¶5, and especially Figures 1 and 1A, and Equation (1), On Why CO2 Is Known Not To Have Accumulated in the Atmosphere …,
http://rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html
Takahashi’s results from application of pCO2(aq) deflate any confidence one might assign to pCO2(aq). His map depends on the differential pressure of pCO2(g) and pCO2(aq), where the latter does not exist except as it might be created in the laboratory procedures.
FE: You are misinterpreting the facts: The 90 GtC/yr is the effect of temperature on CO2 solubility: partly causing continuous CO2 releases from the tropic oceans (mainly the Pacific deep ocean upwelling) and continuous CO2 sinks near the poles (mainly the THC sink in the NE Atlantic); partly the warming and cooling of the mid-latitude oceans over the seasons.
I agree with your geography, and in fact I think the role of the THC in CO2 was first postulated in The Acquittal of Carbon Dioxide in October, 2006. If you have an earlier reference, please let me know.
However, I don’t put any stock in the seasonal effects, nor in the warming and cooling, with the attendant breathing of CO2, in the ocean gyres. These would have a net effect of zero in the +92 GtC/yr and -90 GtC/yr annual air/sea fluxes. The upwelling caused primarily by Ekman pumping appears to be the exit of the THC, where deep, cold, CO2-saturated waters are dumped on the surface to warm and outgas.
FE: The net effect of this is zero CO2 change over a full seasonal cycle at zero average temperature change over a year (16 microatm in the water or ~16 ppmv in the atmosphere for 1°C temperature change).
For air-sea CO2 flux, I model surface oceanography as the superposition of three components: seasonal effects, gyres, and the year-long transport of surface waters from the exit of the THC to its headwaters at the poles. As I said, the first two should contribute a net of zero to the annual air-sea CO2 fluxes of about 90 GtC/yr. The mechanisms of outgassing and recharging are quite different, though both depend on Henry’s Law. That outgassing occurs continuously in bulk at the Eastern Equatorial Pacific and at a couple of other hot spots. The recharging occurs distributed over the entire ocean surface due to the cooling in the transport current, the return path of water for the THC. This action causes a high volume river of CO2 to spiral around the globe. This river cannot be represented successfully by the net uptake of 2.2 GtC/yr, nor by parceling that net over the globe.
FE: In contrast, any increase in the atmosphere pushes more CO2, on average, into the ocean surface at the same temperature. That is where and when the Revelle factor is working.
First, any increase in atmospheric CO2 would cause more CO2 to be dissolved in the surface layer according to Henry’s Law. However, ACO2 is only about 6 GtC/yr, and if it weren’t for the flap adopted by IPCC that AGW exists, the anthropogenic contribution would be negligible. It only contributes about 3% to the total CO2 flux from land and ocean into the atmosphere, and about half that if you include the leaf water that IPCC introduced and dropped – all this in a model that has yet to be successful in estimating global warming within the first order of magnitude, and that is turning out to be invalid with respect to climate sensitivity based on satellite data.
The AGW model raises the art of relying on small difference between large numbers to a new height by neglecting effects and putting aside any noise masking the signals, both through fantastic assumptions. The danger in relying on the small difference between large numbers used to be taught in grade school; now they don’t even teach grammar in the US. (Schooling here is in its fourth generation of teaching the Delicate Blue Planet Model, environmentalism, and self-esteem trumping everything else.)
Examples of the incredible assumptions include these: (1) Cloud cover can be successfully parameterized as a statistical constant. (2) Shortwave and longwave radiations tend to balance. (3) Radiative transfer will produce sufficiently accurate longwave global results if only the atmosphere could be correctly modeled. (4) The surface layer is in thermodynamic equilibrium. (5) Amplification of solar variations does not exist. And with regard to your claim that the Revelle factor is working, (6) the ocean buffers against dissolution of anthropogenic CO2 but impedes natural CO2 not at all.
FE: Henry’s Law is only about free CO2 in the waters.
You are correct enough (because the surface layer is never in equilibrium), and that contradicts the Revelle factor. Formally, Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-). On the other hand, Henry’s Coefficients are only tabulated for thermodynamic equilibrium, in which case the ratio would be known by the Bjerrum solution to the carbonate equations. However, CO2 is always highly soluble in water, equilibrium or not. In the dynamic situation, the ratio could be anything, (neither IPCC’s 1:100:10 nor Dr. King’s 1:175:21), with the formation of HCO3(-) being almost instantaneous but CO3(2-) lagging.
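The competing speciation ratios mentioned here (IPCC’s 1:100:10 versus King’s 1:175:21) both follow from the equilibrium Bjerrum relations once a pH and a pair of dissociation constants are chosen; a small sketch with assumed, rough 25°C seawater constants shows how sensitive the ratio is to those choices:

```python
# Bjerrum speciation: at equilibrium, [HCO3-]/[CO2] = K1/[H+] and
# [CO3--]/[CO2] = K1*K2/[H+]**2.  Rough 25 degC seawater constants,
# assumed for illustration; the ratio shifts strongly with the pH chosen.
K1 = 10.0 ** -5.86    # first dissociation constant (assumed)
K2 = 10.0 ** -8.92    # second dissociation constant (assumed)

def speciation(ph):
    """Return the [CO2] : [HCO3-] : [CO3--] ratio at a given pH."""
    h = 10.0 ** -ph
    return 1.0, K1 / h, K1 * K2 / h**2

co2, hco3, co3 = speciation(8.1)
print(f"1 : {hco3:.0f} : {co3:.0f}")   # about 1 : 174 : 26 at pH 8.1
```

Shifting the assumed pH or constants moves the result toward either quoted figure; the ratio holds only at an assumed equilibrium.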
FE: For a 10% increase of CO2 in the atmosphere at the same temperature, the amount of free CO2 in the ocean surface waters will increase by 10%. If there were no following reactions, that increase of free CO2 would increase total carbon in solution by only 0.1%, as free CO2 is less than 1% of total carbon in solution. But there are following reactions. Free CO2 is converted into carbonic acid that dissociates into bicarbonate and carbonate ions and hydrogen ions. Thus if you add more CO2 to the ocean waters, the pH lowers. But a lower pH means that the dissociation reaction runs the other way, back to bicarbonate and free CO2. In other words, the amount of total CO2 in solution increases beyond the 0.1% from free CO2 alone, because of the further dissociation reactions, but not to the full extent of the increase in the atmosphere, because of the counteraction caused by the lowering of the pH. Bold in original.
Your rationale reads like IPCC’s. AR4, ¶7.3.4.1, p. 528, and Box 7.3, p. 529. IPCC relies on equations and on the Revelle factor (AR4, ¶7.3.4.1, p. 531), both attributed to Zeebe & Wolf-Gladrow (2001) ($101). Revelle’s original factor was a conjecture in the form of a ratio of two fractions: the numerator was the ratio of new fossil fuel emissions going into the atmosphere to that going into the ocean, and the denominator was the ratio of CO2 in the atmosphere to that in the ocean, i.e., as I wrote earlier, λ = (r/s)/(A_0/S_0).
ZWG formulated the Revelle buffer differently. The numerator was the ratio of the change in [CO2] to the total [CO2], where [CO2] is the concentration of unionized CO2 in sea water, and the denominator was the ratio of the change in DIC to total DIC, i.e., (Δ[CO2]/[CO2])/(ΔDIC/DIC). One of the differences in the ZWG and IPCC formulation was that it removed the restriction to anthropogenic CO2 in R&S’s formula, making the Revelle factor applicable to natural and anthropogenic CO2 (ACO2). Then IPCC, and by implication ZWG, applied the Revelle factor only to ACO2 and NOT to natural CO2. Silly assumption (6), above.
Moreover, ZWG’s formulation required Total Alkalinity to be constant, a condition IPCC ignored. IPCC also confused terms by changing DIC to [DIC] (the concentration of a concentration) and by calling Δ[CO2]/[CO2] the fractional change in seawater pCO2. As ZWG say, “Doubling of pCO2 –> doubling of [CO2],” NOT that doubling of pCO2 is doubling of [CO2]. Wolf-Gladrow, D., CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 6/24/06, p. 56. ZWG are careful not to confuse pCO2 and [CO2(aq)].
Most importantly, in all formulations, whether from Revelle & Suess, ZWG, or IPCC, the equations apply only in equilibrium. See silly assumption (4), above. IPCC specifies after re-equilibration. AR4, ¶7.3.4.2, p. 531. It continues, saying,
Due to the slow CaCO3 buffering mechanism (and the slow silicate weathering), atmospheric pCO2 will approach a new equilibrium asymptotically only after several tens of thousands of years (Archer, 2005; Figure 7.12). Id.
So according to IPCC, the Revelle factor applies only after several tens of thousands of years. This is all quite inane. As I have repeated and you have answered below, pCO2 (which is only atmospheric) and Henry’s Law are unaware of the sequestration of CO2 by the two biological pumps. Those two pumps are NOT connected to the atmosphere as IPCC shows here:
http://www.rocketscientistsjournal.com/2007/06/_res/10AR4F7-10.jpg
Those biological pumps feed on ions from the surface layer, not on airborne gaseous CO2. By never being in thermodynamic equilibrium, the surface layer acts to isolate Henry’s Law from ocean chemistry. The surface layer need not be in equilibrium, which IPCC assumes (a) in order to create a bottleneck, (b) creating the bulge in MLO CO2, which still (c) is inadequate to cause the observed global warming, so IPCC (d) makes CO2 initiate warming, releasing the potent greenhouse gas, water vapor, which (e) fortunately for AGW, has no effect on cloud cover. Fiction has a snowballing effect.
And the ultimate inanity perhaps, the surface layer was never in equilibrium and so cannot and will not re-equilibrate.
Ferd, your calculations require you to establish first that the surface layer is in equilibrium, so that you can rely on the implications of the stoichiometric equilibrium constants (ZWG), and after that you need to show the sensitivity of your calculations to whatever assumptions or calculations you desire for Total Alkalinity.
As ZWG says,
[CO2], [HCO3(-)], [CO3(2-)], and pH can be calculated from DIC and TA. Quoted in Wolf-Gladrow, 6/24/06, p. 50.
In other words, you cannot calculate [CO2] from DIC without TA.
FE: See further: http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
Your link at Warwick University is to 2004 class notes by Dr. G. P. King. He says,
In contrast to nitrogen and oxygen, most CO2 of the combined atmosphere–ocean system is dissolved in water (98%). This is because CO2 reacts with water and forms bicarbonate (HCO3(-)) and ca[r]bonate (CO3(2-)) ions. King, p. 2.
The dissolution of CO2 in water compared to N2 and O2 has to do with their respective solubility coefficients and not because of the chemical equations of carbonates.
King derives an expression for the Revelle Factor as RF = [DIC]/[CO3(2-)], an uninteresting ratio on the right hand side. See King, eqs. (25) and (26), p. 6. Because that ratio is uninteresting, King has found no inherent property or use for the RF. He just made RF appear algebraically after a number of assumptions.
In his derivation, he says Total Alkalinity is approximately constant (Eq. (19), p. 5), which is close enough, and substitutes [DIC] for DIC (Eq. (12)) after assuming [CO2] is negligible (Eq. (20)). He differentiates the result (Eq. (21)), effectively assuming that because [CO2]_ml (ml for mixed layer) is small, its differential must be small! That is not sound mathematics.
After assuming [CO2] in the ocean (his [CO2]_ml) is small, King says:
Returning our attention to (18) and taking differentials and dividing by [CO2]_ml it is easily shown that … [Eq. (24)]. Bold added, King, p. 6.
In this step, the author divides by something he assumed to be negligibly small! This is an incredibly delicate step, and without justification is unsound mathematics.
The uptake factor expresses the increase in the concentration of total CO2 (i.e., DIC) in seawater corresponding to an increase in the concentration of CO2 (or partial pressure of CO2). See Fig 7.14 in the attached pages. King, p. 7.
King seems to know pCO2 refers to the gas state. However, the attached pages are missing.
It is known as the Revelle factor after Roger Revelle, who was among the first to point out the importance of this sensitivity for the oceanic uptake of anthropogenic CO2. King, p. 7.
Just preceding this sentence, King derives the Revelle factor without regard to the species of CO2. King, ¶2.2.3-¶2.2.4, Eqs. (12) to (26), pp. 5-6. Now he arbitrarily attributes his derivation to ACO2. King is silent about why the Revelle factor might apply to 6 GtC/yr of ACO2 but not to 90 or 210 GtC/yr of natural CO2.
Dr. King is an American physicist lecturing in climate to undergraduates in a school of engineering in the UK.
FE: >>It does not depend on surface layer pH or ionization
Simple proof that you are wrong: make a solution of soda or bicarbonate and add some acetic acid. Lots of CO2 bubbles up, because the amount of free CO2 in the solution gets far beyond Henry’s Law “normal” concentration.
What you have created is the chemistry of a water bottle bomb, also known as a bubble bomb, among other names. The acid working on the baking soda turns into salt water and carbon dioxide that wants to outgas. This can explode a closed household plastic bottle because the pCO2 generated will far exceed one atmosphere per Henry’s Law. Your experiment does NOT show that Henry’s coefficient depends on pH or ionization as I denied. Henry’s coefficient was the same before and after your experiment, a tabulated, known value if only the bottle were in thermodynamic equilibrium.
As your own authority, King, says
α is the solubility of CO2, which is a function of temperature and salinity. King, p. 7.
That is, Henry’s Coefficient is not known to be a function of pH or ionization. If it is dependent on one of these parameters, it is a third order effect, and a fourth order effect in Henry’s Law after pressure.
FE: >>it does not accumulate in the atmosphere
The increase in the atmosphere is measured, the human emissions are double that. But if you assume that some natural cause follows the human emissions at such an incredible fixed ratio, but the human emissions disappear somewhere without leaving a trace in the atmosphere, that is your opinion, not based on any observed facts.
During the time that the atmospheric CO2 increased 15%, my dog’s breath got 30% more foul. That coincidence does not mean that half the stink in my dog’s breath had anything to do with the CO2 increase.
I make no such assumption as you suggest. Your incredible fixed ratio does not exist anywhere in the real world. If it did, IPCC would have had evidence it was desperate to find for the MLO bulge to be manmade.
Instead, IPCC manufactured other evidence to show that the increase in CO2 was caused by human emissions, using two methods other than yours. First, it tried to show that the increase in CO2 paralleled the decline in O2, which was supposed to indicate that fossil fuel combustion reduced O2 by the same amount as it increased atmospheric CO2. Second, IPCC tried to show that the isotopic lightening measured in the increase of CO2 paralleled the rate of emissions from fossil fuel, which was supposed to emit an extra-light isotopic ratio because ancient vegetation preferred 12C over 13C. See Fingerprints on SGW, by clicking on my name. IPCC’s claims are in this chart:
http://www.rocketscientistsjournal.com/2010/03/_res/AR4_F2_3_CO2p138.jpg
which is the product of chartjunk. If the right hand ordinates had been honestly drawn, the curves would have looked like these:
http://www.rocketscientistsjournal.com/2010/03/sgw.html#III_
and
http://www.rocketscientistsjournal.com/2010/03/_res/CO2vO2.jpg
IPCC’s visual correlation implied by parallel traces is incompetent on several levels. Rule 1: never rely on visual correlation. Rule 2: quantify results, here with a mass balance analysis, which IPCC never produced. Rule 3: even if the results tracked one another according to the mass balance analysis, that is merely correlation, which does not establish cause and effect any more than my dog’s breath did.
Ferd, you need to do a mass balance analysis before you announce the discovery of an incredible fixed ratio, and then establish cause and effect by (1) showing that the MLO data are global, and (2) eliminating all possible natural causes. Ultimately, (3) you will need to do the same thing once again, because even if ACO2 were the cause of a global pCO2 increase, that doesn’t show that CO2 causes the observed global warming. For that, you will also have to establish (4) that the Sun was not the cause. Lots of luck.
IPCC’s results are either incompetent or out-and-out fraud, and I’m sorry to say that I don’t think the authors were all that incompetent.
Jeff Glassman | September 12, 2011 at 7:34 pm |
Jeff, thanks for the welcome. And sorry for the delay in replying. Again you have a long exposition of arguments. I will try to respond only on those points where I think there are problems…
About pCO2(aq):
First, an explanation of how pCO2(aq) is observed: seawater is continuously sprayed into a small amount of air, so that the two rapidly come into equilibrium with each other. CO2 levels are measured (semi-)continuously in that air. That is deemed pCO2(aq). This method was and is used on sea ships (currently lots of commercial ones) and at fixed stations worldwide.
If pCO2 in the atmosphere is higher than pCO2(aq), then the net CO2 flux is from air to water, or the reverse if pCO2(atm) is lower. One can discuss the local and total fluxes (which depend on wind/mixing speed), but the direction anyway is fixed by the pCO2(aq-atm) difference.
E.g., for the longer Bermuda series, pCO2(aq) is higher than pCO2(atm) in high summer; the rest of the year the Atlantic Ocean around Bermuda (and most of the Atlantic Gyre) is a net sink for CO2.
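The sign convention being described reduces to the sign of pCO2(atm) minus pCO2(aq). The sketch below uses a hypothetical quadratic wind-speed scaling for the transfer velocity (Wanninkhof-style parameterisations have this general form, but the coefficients here are placeholders, not a real parameterisation):

```python
def air_sea_flux(pco2_atm, pco2_aq, wind_mps, k0=0.03):
    """Toy net CO2 flux, positive into the ocean (arbitrary units).

    Direction is fixed by the pCO2 difference; magnitude scales with a
    wind-speed-dependent transfer velocity.  The quadratic wind
    dependence and the solubility k0 are illustrative assumptions only.
    """
    transfer_velocity = 0.31 * wind_mps ** 2     # hypothetical scaling
    return transfer_velocity * k0 * (pco2_atm - pco2_aq)

# Bermuda-style example: in high summer pCO2(aq) exceeds pCO2(atm),
# so the water outgasses; the rest of the year it is a net sink.
summer = air_sea_flux(pco2_atm=390, pco2_aq=420, wind_mps=6)
winter = air_sea_flux(pco2_atm=390, pco2_aq=350, wind_mps=8)
print(summer < 0, winter > 0)   # True True
```

Only the direction is meaningful here; the magnitudes are arbitrary.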
About seasonal effects:
If you look again to the BATS graph at
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
(BTW that is composed by Bates, from BATS) you will see that there is a huge seasonal variability in all variables. Although difficult to see in the scale of the graphs, the seasonal trend is:
spring to fall: decrease in pH, increase in pCO2, decrease in DIC
fall to spring: increase in pH, decrease in pCO2, increase in DIC
The decrease in DIC during the summer months is caused by increased biolife, which uses bicarbonate for shells and CO2 for organics. But at the same time CO2 is set free by the bicarbonate-to-carbonate shells reaction, which lowers the pH and increases pCO2.
Here you see that pCO2 and DIC are decoupled by biolife, even to the effect that pCO2 increases and DIC decreases at the same time.
Further, temperature increases pCO2(aq) by about 16 microatm/°C. If that leads to a sea-to-air flux, it will decrease DIC too, because the CO2 must come from somewhere. pCO2(aq) remains high, as it is temperature dependent; bicarbonates and carbonates are far less so (as long as carbonates are not saturated), but they will supply CO2 to maintain free CO2 levels at the elevated temperature.
Anyway, you shouldn’t underestimate the fluxes back and forth caused by the seasons from the mid-latitude oceans. Based on the d13C decline, which is about 1/3rd of what can be expected from fossil fuel use, the continuous CO2 exchange via the THC down/upwelling is about 40 GtC/year, thus leaving about 50 GtC/year back and forth from seasonal changes in ocean temperature.
About the contribution of humans:
It only contributes about 3% to the total CO2 flux from land and ocean into the atmosphere, and about half that if you include the leaf-water flux that the IPCC introduced and then dropped.
You make the same logic error as many before you: the flux is a back-and-forth flux, where inputs and outputs are nearly equal. That doesn’t contribute to any change in atmospheric CO2 levels, as long as total influx and total outflux are equal. It doesn’t matter if the total influx is 100 or 1,000 or 10,000 GtC/yr; only the difference between the influx and the outflux is important. And that is easily calculated:
increase in the atmosphere = natural inputs + human input – natural outputs
or
increase in the atmosphere – human input = natural inputs – natural outputs
or
4 GtC/yr – 8 GtC/yr = natural inputs – natural outputs = – 4 GtC/yr
Thus at this moment, and over the past 50 years, the natural outputs are/were 1-7 GtC/yr larger than the natural inputs, including natural variability, no matter what the real magnitudes of the natural inputs and outputs were; see:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg
Thus we know the difference between total CO2 inputs and outputs quite exactly, including its noise, and both are smaller than the human emissions.
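The mass-balance arithmetic above is short enough to check directly; the round numbers (8 GtC/yr emitted, 4 GtC/yr increase) come from the comment, with the ±2 GtC/yr noise set aside:

```python
# Mass balance for atmospheric CO2, in GtC/yr (round numbers from
# the comment; the +/- 2 GtC/yr noise is set aside).
human_input = 8.0  # fossil fuel emissions
increase = 4.0     # observed rise in the atmosphere

# increase = natural_inputs + human_input - natural_outputs, hence:
natural_net = increase - human_input  # natural inputs minus outputs
print(natural_net)  # -4.0: nature as a whole removes ~4 GtC/yr
```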
About free CO2 in water:
Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-).
Jeff, here you are completely wrong. Henry’s Law is only about the same species in air and water; in this case that is CO2 in air and CO2 in solution, not bicarbonate and carbonate. Those are different species, which have no intention to escape from the solution at all. Only CO2 in solution exchanges with CO2 in the atmosphere, which leads to a ratio between the two at steady state, depending on the temperature. That is what Henry’s Law is about, and in fact it is what the pCO2(aq) observations show.
Thus pCO2(aq) is a direct measure of [CO2] in the surface water, if the temperature is known, but it doesn’t tell us anything about DIC.
That means your discussion of the Revelle factor is based on this erroneous assumption, among others.
The ratio between the observed changes, d[CO2]/[CO2] vs. dDIC/DIC, is indeed the Revelle factor. For BATS it is indeed a factor of about 10 in the rate of change.
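A minimal sketch of that ratio as a calculation; the input numbers below are purely illustrative, not measured BATS values:

```python
def revelle_factor(d_co2, co2, d_dic, dic):
    """Revelle (buffer) factor: the fractional change in dissolved CO2
    (or pCO2) divided by the fractional change in DIC."""
    return (d_co2 / co2) / (d_dic / dic)

# Illustrative numbers only, not measured BATS values: a 10% rise in
# dissolved CO2 alongside a 1% rise in DIC gives a factor of about 10.
print(round(revelle_factor(1.0, 10.0, 20.5, 2050.0), 2))  # 10.0
```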
That will be further elaborated in the household (bi)carbonate experiment…
The Revelle factor is applicable to all CO2, anthro or not, but the IPCC and most other people involved (myself included) assume that near all the increase in the atmosphere and thus surface waters is caused by the human emissions…
So according to IPCC, the Revelle factor applies only after several tens of thousands of years.
Sorry, but that is a misinterpretation. The dissociation reaction from CO2 to HCO3(-) and CO3(2-) is very fast (fractions of a second to seconds). It has only an indirect connection with the precipitation out of solution via the formation of CaCO3/MgCO3 by coccoliths, which is a slow process (as an overall net process: much is formed, but much is dissolved again). That is what the IPCC is referring to; it has nothing to do with the Revelle factor, except that this process influences the ratio between the different carbon species and the pH.
About the carbonate / acid experiment
Your experiment does NOT show that Henry’s coefficient depends on pH or ionization, as I denied. Henry’s coefficient was the same before and after your experiment: a tabulated, known value, if only the bottle were in thermodynamic equilibrium.
and
That is, Henry’s Coefficient is not known to be a function of pH or ionization.
or as expressed by King:
α is the solubility of CO2, which is a function of temperature and salinity.
Indeed the solubility of CO2 (which, again, is about free CO2 gas in the liquid) is a function of temperature and salinity, not of pH. But DIC is strongly influenced by pH. That is what the experiment shows. After you have added an acid, 95 to 99% of all carbonate and/or bicarbonate disappeared to the atmosphere as CO2; thus DIC was reduced by some 94 to 98% (depending on the strength of the acid), while the solubility of CO2 is still the same (after temperature re-equilibration), according to Henry’s Law.
That simply shows that Henry’s Law is about the solubility of CO2 as gas in water and not about the rest of the carbon (as bicarbonate and carbonate) in solution.
About the mass balance
As said before, humans nowadays emit 8 GtC/year. The increase in the atmosphere is 4 +/- 2 GtC/year. Because of the law of conservation of matter, and because no carbon species escapes to space, nature as a whole is a net sink for CO2. It is that simple. The real, net addition from nature to the atmosphere is zero, nada, nothing.
No matter how large the contributions from oceans, vegetation, volcanoes,… within a year are, all natural sinks combined were larger over the past 50 years than all natural sources combined. Thus even without knowledge of any individual natural flow, we know that humans are the cause of the increase of CO2 over at least the past 50 years.
All other observations add to this knowledge and all alternative explanations fail one or more observations.
About the O2 balance and d13C balance
Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline
Not completely right: that is only right for pure carbon (C + O2 -> CO2), not for oil (roughly CnH2n: CnH2n + 3n/2 O2 -> n CO2 + n H2O) and not for natural gas (CH4 + 2 O2 -> CO2 + 2 H2O).
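The point about fuel stoichiometry can be sketched in a few lines; the O2:CO2 ratio follows from balancing complete combustion, CcHh + (c + h/4) O2 -> c CO2 + (h/2) H2O:

```python
def o2_per_co2(c, h):
    """Moles of O2 consumed per mole of CO2 produced when burning
    a fuel CcHh completely: CcHh + (c + h/4) O2 -> c CO2 + h/2 H2O."""
    return (c + h / 4.0) / c

print(o2_per_co2(1, 0))    # 1.0  pure carbon (coal)
print(o2_per_co2(1, 4))    # 2.0  natural gas (CH4)
print(o2_per_co2(10, 20))  # 1.5  oil, roughly CnH2n
```

So only for pure carbon does one molecule of CO2 correspond to exactly one molecule of O2; hydrogen-bearing fuels consume extra O2 for the water they form.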
The d13C level is decreasing through the addition of 13C-depleted fossil fuels, but the decrease is diluted by the deep-ocean exchange with the atmosphere: what goes down into the deep is the current 13C/12C mix, but what is upwelling is the deep-ocean mix, which is higher in d13C than the current atmosphere. Based on that difference, one can calculate the deep-ocean/atmosphere exchanges:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg
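A toy two-reservoir sketch of this dilution argument, assuming illustrative numbers (600 GtC pre-industrial atmosphere at -6.4 permil, fossil input at -28 permil, a constant deep-ocean d13C) and the linear delta-mixing approximation; none of these values is the actual basis of the 40 GtC/yr estimate above:

```python
def simulate_d13c(years=50, exchange=40.0):
    """Toy d13C budget of the atmosphere, using the linear delta-mixing
    approximation. All numbers are illustrative assumptions, not the
    values behind the 40 GtC/yr estimate in the comment above."""
    mass, d13c = 600.0, -6.4        # GtC and permil, pre-industrial-ish
    d_fossil, d_deep = -28.0, -6.4  # fossil fuel vs. upwelling deep water
    emit = 6.0                      # GtC/yr of fossil input
    for _ in range(years):
        sink = emit / 2.0           # net natural uptake, at the current mix
        total = (mass * d13c + emit * d_fossil + exchange * d_deep
                 - (exchange + sink) * d13c)
        mass += emit - sink
        d13c = total / mass
    return d13c

# Deep-ocean exchange mutes the fossil d13C signal in the atmosphere:
print(round(simulate_d13c(exchange=0.0), 2))   # steeper decline
print(round(simulate_d13c(exchange=40.0), 2))  # shallower decline
```

With the exchange flux switched on, the upwelling heavier mix pulls the atmospheric d13C back toward its starting value, so the observed decline is smaller than fossil emissions alone would produce.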
Some more calculations with O2 and d13C, which show how much CO2 is sequestered by vegetation and how much by the oceans:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
Finally…
Ferd, you need to do a mass balance analysis before you announce the discovery of an incredible fixed ratio, and then establish a cause and effect by showing that (1) the MLO data are global, and (2) eliminating all possible natural causes. Ultimately, (3) you will need to do the same thing once again, because even if ACO2 were the cause of a global pCO2 increase, that doesn’t show that CO2 causes the observed global warming. For that, you will also have to establish (4) that the Sun was not the cause. Lots of luck.
(1) The MLO data are representative of 95% of the atmosphere, which is global enough, I suppose. See the data from lots of stations, aircraft, ships, buoys and nowadays satellites, like here:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/
Only in the first few hundred meters over land are CO2 levels far too variable to be of interest (except for people interested in individual fluxes).
(2) I have calculated the mass balance, which shows that nature is a net sink for CO2. That effectively eliminates nature as a source.
(3) That humans are the cause of the CO2 increase doesn’t say anything about its effect on temperature. I suppose that CO2 has some effect, but less than the minimum range of what the current climate models “project”.
(4) I am pretty sure that current climate models underestimate the role of the sun.
Points (1) and (2) are proven beyond doubt, but there can be a lot of discussion between “warmers” and “skeptics” about (3) and (4).
Ferdinand Engelbeen, 9/17/11, 11:44 am, CO2 Residence Time
FE: About pCO2(aq) … .
The method you describe is not unfamiliar to me, and it employs what is called an equilibrator. http://bloomcruise.blogspot.com/2011/07/sampling-co2-in-air-and-sea.html
JG: Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-).
to which you responded
FE: Jeff, here you are completely wrong. Henry’s Law is only about the same species in air and water; in this case that is CO2 in air and CO2 in solution, not bicarbonate and carbonate. Those are different species, which have no intention to escape from the solution at all. Only CO2 in solution exchanges with CO2 in the atmosphere, which leads to a ratio between the two at steady state, depending on the temperature. That is what Henry’s Law is about, and in fact it is what the pCO2(aq) observations show. [¶] Thus pCO2(aq) is a direct measure of [CO2] in the surface water, if the temperature is known, but it doesn’t tell us anything about DIC.
1. So if I am not wrong about some things, but instead completely wrong, I must be wrong about the constitution of DIC.
IPCC: The marine carbonate buffer system allows the ocean to take up CO2 far in excess of its potential uptake capacity based on solubility alone, and in doing so controls the pH of the ocean. This control is achieved by a series of reactions that transform carbon added as CO2 into HCO3– and CO32–. These three dissolved forms (collectively known as DIC) are found in the approximate ratio CO2:HCO3–:CO32– of 1:100:10 (Equation (7.1)). AR4 Box 7.3 Marine Carbon Chemistry and Ocean Acidification, p. 529.
IPCC: Equation (7.3), relating the fractional change in seawater pCO2 to the fractional change in total DIC after re-equilibration: Revelle factor (or buffer factor) = (Δ[CO2]/[CO2]) / (Δ[DIC]/[DIC]) (7.3). Citations deleted, AR4, ¶7.3.4.2, p. 531.
So what I did was substitute CO2(aq) for CO2 on p. 529, or for [CO2] on p. 531, and I think this agrees fully with your last sentence. This is reinforced by IPCC’s text, in which the left-hand side (necessarily) of Eq. (7.3) is identified as the fractional change in seawater pCO2. Thus IPCC refers to [CO2] as the pCO2 in seawater, which is no different from CO2(aq) as I was using the term. Also, David Archer, a Contributing Author to AR4, Chapter 7, confirms,
DA: For total CO2, the exchanging species is CO2(aq), which constitutes about 0.5% of the total dissolved CO2 concentration. Archer, D., Daily, seasonal, and interannual variability of sea surface carbon and nutrient concentration in the Equatorial Pacific ocean, p. 10 of 17.
So my terminology is not wrong. However the Archer citation about CO2(aq) concentration needs to be read in context with his much earlier statement:
DA: The pCO2 of sea water is determined by its alkalinity, CO2, temperature, and salinity through carbonate equilibrium chemistry. Archer, id., p. 7 of 17.
that is, pCO2 is not measured. And IPCC’s reference to [DIC] is erroneous, a reference to the concentration of a concentration.
2. Your view about Henry’s Law, and the claim that carbonate and bicarbonate ions have no intention to escape from the solution at all, shows a lack of understanding of the physics involved in the distinctly different processes of dissolution and chemical kinetics, with their widely separated time scales. And of course, no one claims the ions ever escape into the atmosphere.
Those ions, driven from equilibrium within a fraction of a microsecond by a pulse of CO2(aq) and then left undisturbed, return to equilibrium in a matter of a few milliseconds. Mitchell, M.J., et al., A model of carbon dioxide dissolution and mineral carbonation kinetics, 12/11/09, p. 1274, Figure 1. The process should be much faster with agitation of the solution, but then the states of dynamic equilibrium would be unrecognizable, difficult to estimate, and perhaps impractical to generalize.
Dissolution on the other hand proceeds according to the gas transfer velocity, which does account for agitation by wind and surface layer turbulence (not equilibrium). Velocity measurements lie in the region of about 4 to 29 cm/hr, which is equivalent to a cubic meter of gas dissolving in about 3 to 25 hours. See Wanninkhof, id., p. 7374.
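A hedged sketch of how such transfer velocities translate into a flux, assuming Wanninkhof's short-term wind relation k = 0.31·u10^2 (cm/hr) and a CO2 solubility of about 0.032 mol/(L·atm) near 20 °C; both numbers are assumptions supplied here, since the text above only quotes the 4-29 cm/hr range:

```python
def gas_transfer_velocity(u10):
    """Wanninkhof (1992) short-term relation k = 0.31 * u10**2,
    with k in cm/hr and wind speed u10 in m/s (an assumed relation;
    the text above only quotes the 4-29 cm/hr range)."""
    return 0.31 * u10 ** 2

def co2_flux(u10, delta_pco2_uatm, solubility=0.032):
    """Air-sea CO2 flux in mol m^-2 yr^-1, positive into the water,
    i.e. F = k * alpha * (pCO2_atm - pCO2_aq). The solubility of
    ~0.032 mol/(L atm) near 20 C is assumed."""
    k_m_per_yr = gas_transfer_velocity(u10) * 0.01 * 24 * 365  # cm/hr -> m/yr
    alpha = solubility * 1000.0  # mol/(L atm) -> mol/(m^3 atm)
    return k_m_per_yr * alpha * delta_pco2_uatm * 1e-6

# A 7.5 m/s wind gives k inside the quoted 4-29 cm/hr range:
print(round(gas_transfer_velocity(7.5), 1))  # 17.4 cm/hr
print(round(co2_flux(7.5, 50.0), 2))         # mol CO2 per m^2 per year
```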
Within the time scale of Henry’s Law, the surface layer can for all practical purposes instantaneously buffer CO2(aq), not to its equilibrium value, but to the limits of DIC, whether for uptake or outgassing. That is so even constraining the layer to the sluggish, hypothetical state of equilibrium. Similarly, Henry’s Law proceeds effectively instantaneously with respect to diurnal or longer processes involved in weather, climate change, or climate.
3. I’ll ignore the fact that your method doesn’t measure alkalinity, along with your concerns about biolife and seasonal effects. What no one can ignore is that the input seawater is not in thermodynamic equilibrium. So its ratio of CO2:HCO3(-):CO3(2-) is unknown. The measuring method changes that inlet ratio by equilibrating the sample. You don’t know if the open sea CO2(aq) is 0.5% as Archer says, or 0.1%, or 10%, for that matter. You only know CO2(aq) in the open sea is different than the estimated CO2(aq), otherwise the equilibrator would be a waste of time and money.
Of course, my critique would be wrong should the surface layer actually be in thermodynamic equilibrium. Thermodynamic equilibrium is a ludicrous assumption, even if explained to a science illiterate, and one of the fatal errors in IPCC’s AGW model. As it stands, arguing as you and IPCC do that pCO2(aq) exists as an alternate theory to Henry’s Law is a corollary to the same false assumption.
4. On the matter of a partial pressure difference, you are in agreement with others when you say,
FE: If pCO2 in the atmosphere is higher than pCO2(aq), then the CO2 flux is net from air to water or reverse if pCO2(atm) is lower. One can discuss the local and total fluxes (which depend of wind/mixing speed), but the direction anyway is fixed by the pCO2(aq-atm) difference.
You give no references, but your notion about CO2 flux is consistent with these sources:
IPCC: So long as atmospheric CO2 concentration is increasing there is net uptake of carbon by the ocean, driven by the atmosphere-ocean difference in partial pressure of CO2. Bold added, TAR, Ch. 3, Executive Summary, p. 185.
IPCC: Estimates (4º x 5º) of sea-to-air flux of CO2, computed using 940,000 measurements of surface water pCO2 collected since 1956 and averaged monthly, together with NCEP/NCAR 41-year mean monthly wind speeds and a (10-m wind speed) dependence on the gas transfer rate (Wanninkhof, 1992). Footnote deleted, bold added, AR4, Figure 7.8 caption, p. 523.
RW: Gas transfer of CO2 is sometimes expressed as a gas transfer coefficient K. The flux equals the gas transfer coefficient multiplied by the partial pressure difference between air and water:
F = K(pCO2_w – pCO2_a) (A2)
where K = kL and L is the solubility expressed in units of (concentration/pressure). Citations deleted, bold added, Wanninkhof, R., Relationship Between Wind Speed and Gas Exchange Over the Ocean, Appendix, p. 7379.
Here pCO2_w is pCO2(aq) and pCO2_a is pCO2(atm).
But as you correctly state, what is measured
FE: is deemed pCO2(aq). Bold added.
It has to be deemed to be a partial pressure because it doesn’t exist. It can’t be measured. It can’t exert a pressure to resist pCO2(atm). As logical as it might seem to attribute a flux to a difference in pressure, the analogy is lost when one term is only deemed to exist.
Besides, this theory that the flux depends on a partial pressure difference is not in accord with Henry’s Law. That Law explicitly depends on pCO2(atm), and says nothing about an alleged pCO2(aq). Henry’s Law does not make the mistake of giving reality to a parameter deemed to exist, especially by bootstrapping a parameter derived from the same law.
5. Your reference to the isotopic ratio is imaginary, compounding IPCC’s fraud on this so-called fingerprint.
FE: Based on the d13C decline, which is about 1/3rd of what can be expected from fossil fuel use … .
I think when pinned down, you just make up data out of whole cloth. Prove me wrong with a reference. When IPCC tried to rely on δ13C, it had to manufacture fraudulent data. See SGW, Part III, Fingerprints, and especially Figure 27 (AR4, Figure 2.3, p. 138).
6. You make unsupported claims about the physics of solubility and of the thermohaline circulation.
FE: … the THC down/upwelling is about 40 GtC/year, thus leaving about 50 GtC/year back and forth from seasonal changes in ocean temperature.
I claim the THC accounts for the 90 GtC/yr given by IPCC and others. I may have been the first to have the THC transport any amount of CO2. I think you just made up the 50/40 split. Prove me wrong by providing a reference.
7. You disassemble huge carbon mass flows, scattering them into many small differences between large numbers. This is a conjecture essential to IPCC’s model in which natural forces are in balance, a model in which the alleged Revelle Factor magically buffers only manmade CO2 emissions. This conjecture makes man’s CO2 emissions appear far more significant than their insignificant 1.5% to 3% contribution. Your argument is
FE: Anyway, you shouldn’t underestimate the fluxes back and forth caused by the seasons from the mid-latitude oceans. … You make the same logic error as many before you: the flux is a back and forth flux, where inputs and outputs are near equal.
You, as well as others, have repeatedly tried to characterize the annual fluxes in the carbon cycle as distributed into a large number of small, nearly zero fluxes. It is, for example, the essence of Takahashi’s beautiful map of faux air-sea fluxes. I am willing to accept that model with respect to land-air fluxes, which are in fact distributed. But you have no rational physical model for the air-sea fluxes of about ± 91 GtC/yr. Your model fails physics.
Prof. Robert Stewart, Texas A&M University, maintains a commendable, open source, online textbook, Oceanography in the 21st Century. He shows the surface ocean to atmosphere flux as +90/-92 GtC/yr. Part II, ch. 5, p. 2 of 4. He explains:
Carbon dioxide dissolves into the ocean at high latitudes. CO2 is carried to the deep ocean by sinking currents, where it stays for hundreds of years. Eventually mixing brings the water back to the surface. The ocean emits carbon dioxide into the tropical atmosphere. This system of deep ocean currents is the marine physical pump for carbon. It helps pump carbon from the atmosphere into the sea for storage. Stewart, id., p. 3 of 4.
His air-sea fluxes agree with IPCC, and they are not the sum of small increments. This CO2 flux is a massive river, with an input and an output, contradicting the distributed model. Stewart’s model supports mine in which the flux is controlled by the thermohaline circulation, also known as the MOC. However, because of the underlying poleward cooling currents, physics demands that the CO2 dissolving into the ocean occur over the entire surface, not just at high latitudes as Stewart states. It is at high latitudes that DIC is carried to the deep ocean, as Stewart does say.
What you refer to as mid-latitude … back and forth flux and as seasonal changes in ocean temperature exist, but add to zero, and are quite negligible. They are also due in major part to the ocean gyres. These flux variations are second order effects that contribute nothing to the first order effect of the +90/-92 GtC/yr pair of air-sea fluxes. You still need to account for them.
On another point, Stewart’s model is not yet mine because he says,
I define the deep circulation as the circulation of mass. Of course, the mass circulation also carries heat, salt, oxygen, and other properties. Stewart, id., Ch. 13, Deep Circulation in the Ocean, p. 1 of 7.
Stewart has omitted the major feature of the transport of CO2, and a major component of the THC/MOC mass. The total MOC flow is 31 Sv. AR4, Box 5.1, p. 397. That’s a potential to outgas over 530 PgC/yr if it were 100% efficient and warmed from 0ºC to 35ºC.
8. You misunderstand what I wrote about the Revelle factor.
JG: So according to IPCC, the Revelle factor applies only after several tens of thousands of years.
FE: Sorry, but that is a misinterpretation
If you wanted to be accurate, you might have said that the Revelle statement was wrong. However, the error is IPCC’s, not mine. I summarized IPCC’s words literally and correctly.
9. You have failed to grasp the error in your model for acid changing solubility.
FE: But DIC is strongly influenced by pH. That is what the experiment shows. After you have added an acid, 95 to 99% of all carbonate and/or bicarbonate disappeared to the atmosphere as CO2; thus DIC was reduced by some 94 to 98% (depending on the strength of the acid), while the solubility of CO2 is still the same (after temperature re-equilibration), according to Henry’s Law.
You ignore the error I pointed out in your experiment to talk around it. You added baking soda and acid, and that is what produced the CO2. It was not released by a change in Henry’s coefficients as you claimed originally, and now try to rehabilitate.
10. You next quote me accurately but out of context to change IPCC’s alleged combustion fingerprint for your own purposes. Here is the statement from me with your omissions in bold:
JG: IPCC’s argument is that the decline in O2 matches the rise in CO2 and therefore the latter is from fossil fuel burning. Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline, so the traces should be drawn identically scaled in parts per million (1 ppm = 4.773 per meg (Scripps O2 Program)). SGW, III, Fingerprints.
My complete statement is about IPCC’s argument. That’s why the sentence you saved says should, not “does”. Having distorted what I said, you say.
FE: Not completely right: …
What’s not right is your dishonest quotation. What’s not completely right is IPCC’s argument.
11. You believe that MLO CO2 concentrations are global because they agree with other stations, not recognizing that investigators intentionally calibrate the other stations to agree with MLO.
FE: The MLO data are representative of 95% of the atmosphere, which is global enough, I suppose. See the data from lots of stations, aircraft, ships, buoys and nowadays satellites, like here: http://www.esrl.noaa.gov/gmd/ccgg/iadv/ .
Your link to the CO2 measuring station network is a nice resource. It’s an update to the version blessed by IPCC. TAR, Figure 3.7, p. 212. MLO should indeed be representative, not because it IS representative, but because IPCC has seen fit to calibrate the network into agreement:
The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically <1 ppm. Resolution of such a small signal (against a background of seasonal variations up to 15 ppm in the Northern Hemisphere) requires high quality atmospheric measurements, measurement protocols and calibration procedures within and between monitoring networks (Keeling et al., 1989; Conway et al., 1994). Bold added, TAR, ¶3.5.3, p. 211.
And
To aid in interpreting the interannual patterns seen in Figure 4, and in derived CO2 fluxes shown later, we identify quasi-periodic variability in atmospheric CO2 defined by time-intervals during which the seasonally adjusted CO2 concentration at Mauna Loa Observatory, Hawaii rose more rapidly than a long-term trend line proportional to industrial CO2 emissions. The Mauna Loa data and the trend line are shown in Figure 5 with vertical gray bars demarking the intervals of rapid rising CO2. Data from Mauna Loa Observatory were chosen for this identification because the measurements there are continuous and thus provide a more precisely determined rate of change than any other station in our observing program. Also, the rate observed there agrees closely with the global average rate estimated from the nearly pole to pole data of Figure 4 (plot not shown). Bold added, Keeling, et al., Exchanges of Atmospheric CO2 and 13CO2 with the Terrestrial Biosphere and Oceans from 1978 to 2000. I. Global Aspects, SIO Ref. 01-06, June, 2001.
Your ESRL reference says under Measurement Details, Carbon Dioxide,
Because detector response is non-linear in the range of atmospheric levels, ambient samples are bracketed during analysis by a set of reference standards used to calibrate detector response.
That should be sufficient. To bracket: to place within. Thefreedictionary.com.
MLO is representative of global CO2 measurements by necessity for the AGW conjecture, then by assumption, followed by so-called calibration procedures. You might note that IPCC has not made the calibration data available.
12. Conclusion: Because your belief system, AGW, is a fiction, it must be supported by a seemingly unending string of fictions, and a concealment of reality.
Ferd, you have been bamboozled, not once but many times just in this one post – • by the phantom of pCO2(aq) only deemed to exist, • by the unwarranted assumption, both overt and hidden, of thermodynamic equilibrium, • by the broken link between open ocean CO2 measurements and the ocean surface, • by IPCC’s fraudulent reliance on δ13C, • by an imaginary partial pressure difference substituting for ordinary solubility, • by reports of the existence of the failed Revelle factor, • by your own hypothetical experiment to add CO2 and confuse it with Henry’s Law outgassing, • by a network of CO2 stations calibrated into agreement.
When you put aside the powerful evidence that both Earth’s climate and its climate change are determined by the Sun, and that surface temperatures in both Earth’s cold and warm states are regulated by albedo, to adopt the belief that man is the cause of climate change, you put yourself in the box of having to defend a raft of exceptions to physics and fudged data.
Judith,
You should have a good answer to Hal’s question. It’s a point of elementary physical chemistry, well set out by your Skeptical Science link:
“Individual carbon dioxide molecules have a short life time of around 5 years in the atmosphere. However, when they leave the atmosphere, they’re simply swapping places with carbon dioxide in the ocean. The final amount of extra CO2 that remains in the atmosphere stays there on a time scale of centuries.”
They could also have included exchange with the biosphere, which is significant on the 5 year scale. Isotope exchanges measure that individual molecule scale. What counts is the time taken for the CO2 excess to go.
98% of the atoms in a human body are exchanged every year. Residence time is a matter of months. That’s what isotopes would measure. We stay around a lot longer.
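The two timescales in the quote above can be separated with round numbers; the 800 GtC atmosphere, 160 GtC/yr gross exchange and 230 GtC accumulated excess are illustrative assumptions, while the 4 GtC/yr net uptake echoes the mass-balance figures earlier in the thread:

```python
# Two different timescales, using round numbers.
# The 800 GtC atmosphere, 160 GtC/yr gross exchange and 230 GtC excess
# are illustrative assumptions, not measured values.

atmosphere = 800.0      # GtC currently in the atmosphere (~390 ppm)
gross_exchange = 160.0  # GtC/yr swapped with ocean and biosphere (assumed)
residence_time = atmosphere / gross_exchange
print(residence_time)   # 5.0 yr: how long an individual molecule stays

excess = 230.0          # GtC accumulated above pre-industrial (assumed)
net_uptake = 4.0        # GtC/yr that nature currently removes on net
crude_adjustment = excess / net_uptake
print(crude_adjustment) # 57.5 yr, and a lower bound at that: the net
                        # uptake shrinks as the excess shrinks, so the
                        # real decay of the excess stretches out further.
```

The first number is set by the gross back-and-forth exchange (what isotopes measure); the second by the small net imbalance, which is why the excess outlives the individual molecules.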
CO2 is a trace gas in a large atmosphere.
Does Skeptical Science or anyone else mention much CO2 is in the
atmosphere?
Wiki:
“Carbon dioxide in earth’s atmosphere is considered a trace gas currently occurring at an average concentration of about 390 parts per million by volume or 591 parts per million by mass. The total mass of atmospheric carbon dioxide is 3.16×10^15 kg (about 3,000 gigatonnes).”
So various natural processes are emitting hundreds of gigatonnes; plants, even though they consume hundreds of gigatonnes, also emit about 100 gigatonnes per year. The largest absorber of CO2 is the weathering of rocks by rainfall, such as CO2 dissolving limestone. The dissolved minerals end up in the ocean, where they are used in the biological processes of all life, including making shells, which can be deposited on the ocean floor and after millions of years can form sedimentary rock [such as limestone], which can end up back on the land again and be dissolved by CO2. Rinse and repeat.
The ocean also absorbs and emits CO2.
The total amount of CO2 processed per year by biology, by weathering, and by warmer ocean water emitting CO2 while cooler ocean water absorbs it, is unknown, though there are rough estimates. In total it’s on the order of 1000 gigatonnes per year. And tens of gigatonnes are emitted by human activity per year. So you have 1000 gigatonnes added to a 3,000-gigatonne atmosphere and 1000 gigatonnes removed each year. So, roughly, a 1/4 of the CO2 in the atmosphere is recycled each year. The total global CO2 increase per year is about 1/2 the amount of human emission per year [roughly].
So you have two different answers. First, about 1/2 of human emissions “look” like they are being absorbed each year. The second answer comes from looking at the huge pot of CO2 to which human emissions are added and asking how much of the total CO2 is recycled each year: roughly 1/4 of all the CO2 in the atmosphere is turned over yearly.
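Putting the round numbers from this comment into code: with these exact figures the yearly turnover actually works out nearer a third than a quarter, and the human share of the gross flux is a few percent:

```python
# Round numbers from the comment above (tonnes of CO2, not of carbon).
atmosphere_co2 = 3000.0  # Gt CO2 in the atmosphere
gross_flux = 1000.0      # Gt CO2 exchanged in and out per year (rough)
human = 30.0             # Gt CO2/yr from human activity ("tens of Gt")

turnover = gross_flux / atmosphere_co2
print(round(turnover, 2))            # 0.33: nearer a third than a quarter
print(round(human / gross_flux, 2))  # 0.03: human share of the gross flux
```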
Stop eating for a year, and there wouldn’t be much of you left. You have to take in new mass, and without that new mass to exchange the old mass with in the first place, there would be no turnover, only a slow leaching into decay.
The same goes for CO2. If a molecule leaves the atmosphere, its mass has to be replaced, and where does the replacement come from? The ocean? Then how do you get a rise or decline of bulk CO2 in the first place, if the exchange is always 1 to 1? That’s the problem. If the residence time of a CO2 molecule is only 5 years (that is the kinetic rate of the sinks), then the response rate of the bulk CO2 amount is also 5 years, as any imbalance between input to and output from the atmosphere will change the bulk no more slowly than the slowest kinetic rate of any source or sink.
Realize too that this rate can be affected by concentration: the more CO2 there is in the atmosphere, the shorter the residence time could become, or put another way, the faster the kinetic rate of uptake by the sinks. This depends on the rate law that governs the CO2 equilibria, be it zero order, first order, second order, and so forth. A zero-order rate is independent of concentration and fixed, while first- and second-order rates depend on concentration and are variable. Which is CO2?
So, to show that elevated CO2 levels stay elevated, you have to show the kinetic rates and equilibrium constants for all the sources/sinks, and why additional CO2 would suddenly change those rates, as would be necessary to say that elevated CO2 can last centuries when the residence time is 5 years.
I fear there is a huge lack of understanding of kinetics in these debates, and of just what equilibrium actually means.
The size of the mixed-layer (upper 50-100 m) ocean CO2 reservoir is approximately equal to that of the entire atmosphere, so we can expect the rate of increase in atmospheric CO2 to be half that of emissions. And incidentally, that’s what happens: approximately half of 14 Gt of CO2 is going somewhere. If it goes into the mixed-layer ocean, that means the time for the mixed layer to equilibrate with the atmosphere is on the order of weeks to months. How can they then claim that the individual-molecule residence time is 5 years? Are they nuts?
“The size of the mixed-layer (upper 50-100 m) ocean CO2 reservoir is approximately equal to that of the entire atmosphere, so we can expect the rate of increase in atmospheric CO2 to be half that of emissions.”
STOP DAMN IT STOP.
The 14CO2 from the atmospheric H-bomb tests had a t1/2 of 15 years and a t1/4 of 30 years. We know this to be true. It is a fact.
Now, if the ocean CO2 reservoir were the same size as the atmospheric CO2 reservoir, then the 14CO2 level could NEVER drop below half of its maximum. If the ocean CO2 reservoir were 10 times bigger than the atmospheric CO2 reservoir, then the 14CO2 end point would be 10% higher than before the bomb tests.
The end point for 14CO2 is within the noise, 1-2%, of the starting point.
We therefore KNOW that the ocean CO2 reservoir which is in exchange with the atmosphere is >30 times greater than the atmospheric CO2 reservoir.
This is true. This is not speculation. This is not pulling figures out of my ass. This is reality. This is basic dilution. The 14CO2 spike from the 50’s until 1965 increased the 14C background by a factor of 10. 45 years later (three t1/2’s), the 14CO2 is back to the background level it was at before. So all the 14CO2 generated by the bomb tests has been diluted.
The dilution due to the increase in atmospheric CO2, from 320 ppm to 390 ppm, explains only 18% of the disappearance; ocean buffering and the biota have taken the rest.
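The 18% figure is just the dilution ratio of the two concentrations:

```python
# How much of the bomb-14C decline can plain dilution explain?
# The added (essentially 14C-free) CO2 raised the atmosphere from
# 320 ppm to 390 ppm, spreading the bomb spike through more CO2.
dilution = 1.0 - 320.0 / 390.0
print(round(dilution, 2))  # 0.18: the ~18% quoted above; ocean
                           # buffering and the biosphere took the rest.
```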
She does have a good answer to Hal’s question, and she knows it, and it is exactly the source you noted. But she doesn’t like the right answer, so she presents it with a lot of absolutely wrong, uninformed and willfully misinformed gobbledygook, to dredge up page hits by pretending that oil- and coal-financed doubt merchants are legitimate scientists or have anything to add to scientific knowledge. In fact, the likes of Hal Doiron and Jack Schmitt do nothing but subtract from the sum of human knowledge, and by lending the credibility of her academic stature to their ilk, Curry aids and abets their fraud.
The Earth, mainly the oceans, has been absorbing about 50% of human emissions in recent decades, so doesn’t it follow from this that if human emissions were to halve, and the 50% rate were maintained, CO2 concentrations would stabilise?
And in the very unlikely event that they ceased completely, doesn’t it also follow that CO2 concentrations would fall at approximately the same rate that they have risen?
“so doesn’t it follow from this that if human emissions were to halve, and the 50% rate were maintained, CO2 concentrations would stabilise?”
No, if the airborne fraction remains at 50%, CO2 concentrations would rise at 50% of the former rate.
“doesn’t it also follow that CO2 concentrations would fall at approximately the same rate that they have risen?”
No, on that logic they would then stabilise. The rate at which they rose was determined by the rate at which we mined and burnt the carbon. That won’t be reflected in any natural redistribution process. Excess CO2 would be absorbed by the sea, but on a different and much longer timescale.
Why does CO2 at Mauna Loa fluctuate by as much as 2.3 ppm in 7 days, and quite regularly by 1 ppm or more?
Why would that be a sign of man-made Co2?
How about 2ppm over 1 day?
http://cdiac.ornl.gov/ftp/trends/co2/Jubany_2009_Daily.txt
Isn’t this supposed to be a well mixed gas?
http://geology.com/nasa/carbon-dioxide-map/carbon-dioxide-map.jpg
Nick and tempterrain
The answer to tempterrain’s question is: we don’t know.
IF (the BIG word) the CO2 half-life in our climate system is really 120 years (upper end of range suggested by Zeke Hausfather at a recent Yale climate forum), this means that the annual decay rate (at the beginning of the decay curve) is 0.58% of the atmospheric concentration, or around 2.2 ppmv today.
This represents a bit less than half of the CO2 emitted by humans today.
This tells me that IF (there’s that BIG word again) the increase in atmospheric CO2 is caused primarily by human emissions, and IF we were to reduce these emissions to around half what they are today, the net in and out would be in balance, and the CO2 concentration would stabilize.
But, hey guys, there’s a lot of IF there.
IF Professor Salby is right, it all has very little to do with human emissions.
More IF.
Max
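Max’s decay-rate figures can be reproduced with two lines of arithmetic. Assuming (as his “IF” does) first-order exponential decay with a 120-year half-life, and applying the rate to the full concentration as he does:

```python
import math

# Reproducing Max's back-of-envelope figures (a sketch; the 120-year
# half-life is the upper end of the range he cites, and applying the
# decay rate to the full concentration rather than only the excess
# above equilibrium is part of his stated "IF").
half_life = 120.0                       # years
k = math.log(2) / half_life             # first-order decay constant, 1/yr
print(f"annual decay rate: {k:.4%}")    # ~0.58% per year

c_now = 390.0                           # ppmv, approximate level at the time
print(f"initial decay: {k * c_now:.2f} ppmv/yr")   # ~2.25 ppmv/yr
```

Both numbers match the 0.58%/yr and ~2.2 ppmv figures in the comment.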
Nick,
No. If human CO2 emissions were reduced by 50% the Earth wouldn’t actually “know” that’s what had happened. Unless you believe in Gaia that is! It’s just coincidental that, at present atmospheric concentrations of CO2, the natural absorption rate is approximately half of emissions. Therefore, if emissions are halved the concentration of CO2 should stay constant.
By the same logic, neither would the Earth be “aware” that CO2 emissions had stopped completely. In the unlikely event of that happening, CO2 concentrations would start to fall at approximately the same rate as they would have otherwise risen.
No it’s simpler than that – essentially Henry’s Law. The Earth “knows” we’re emitting because pCO2 goes up. The top layer of the sea in response takes in CO2 to reach an equilibrium concentration. Then CO2 is transported downward, more absorbed at the surface, and so on. All driven by our addition of CO2. If we add half as much, the driving pCO2 is less, and so are the fluxes.
Henry’s Law is for equilibrium, and overall this isn’t equilibrium. But the partitioning has worked out to be fairly stable at 50% of added CO2, and that would be the starting point for estimating the effect of an emission reduction.
Nick,
Any argument that relies on the Earth “knowing” anything is a bit suss, IMO.
If you aren’t convinced by my few lines of argument , maybe you should take a look at:
http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-ts.pdf
I’m not saying anything different to what the IPCC have already said. They make the point that to reduce CO2 concentrations, human CO2 emissions have to be reduced by more than half.
Does the residence time even make any difference to anything? Is a CO2 molecule from a fossil fuel substantially different in behavior from a CO2 molecule from a breathing human or animal, or CO2 from a non-fossil fuel?
Surely what matters is the quantity of CO2 produced by all means vs. the quantity of CO2 consumed by all means and the quantity of CO2 sequestered in the oceans. There are multiple factors, and no single “forcing” will necessarily dominate unless the mechanisms to consume and sequester CO2 are overwhelmed.
The question that needs to be asked is how much of the annual increase of 2ppmv/year is due to humans. There is no published literature that demonstrates conclusively that this is any more than 5% with at least the other 95% being naturally sourced.
http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
The seasonal variation in atmospheric CO2 concentration is in the order of 6ppmv over the course of a year and since this is entirely natural and three times the year to year increase of 2ppmv it seems reasonable that using 95% for the amount of increase in CO2 being naturally sourced is likely valid.
If this is the case then the annual contribution from humans would be no more than 5% of 2ppmv/year or just 0.1ppmv/year.
Over the past ten years there has been no detectable increase in global temperature in any of the five global temperature datasets (NCDC, HadCRUT3, GISS, RSS MSU, and UAH MSU), in spite of a 20 ppmv increase in atmospheric CO2, so it is highly unlikely that 200 years of human emissions at 0.1 ppmv/year, producing the same 20 ppmv increase in atmospheric CO2, would have any detectable temperature effect.
One must remember that the global temperature is an absolute temperature value and not the temperature anomaly value that is used in climate discussions. The IPCC 2001 Third Assessment Report stated that the global temperature increase was estimated at 0.6°C +/- 0.2°C per century. This is only 0.006°C/year.
The absolute global temperature from NCDC plotted by Junk Science
http://junksciencearchive.com/MSU_Temps/NCDCabs.html
shows that the annual seasonal variation in global temperature is in the order of 3.9°C, or 650 times greater than the year to year increase of just 0.006°C/year.
Since this seasonal variation is entirely natural and due to the seasonal effect from the significantly larger Northern Hemisphere Landmass, it would only take a change of 1/650 in the completely natural seasonal variation to account for the entire annual change of 0.006°C without invoking any change to the greenhouse effect from the annual 0.1ppmv increase in CO2 from fossil fuel emissions.
To carry the argument one step further: the increase in atmospheric CO2 concentration from 337 ppmv in 1979 to the 390 ppmv concentration today should, according to the CO2 forcing parameter of the IPCC climate models, cause a reduction in OLR of precisely 0.782 W/m^2, or just 0.025 W/m^2/year.
The measurement of OLR (www.climate4you.com, under the heading “global temperatures”, titled “Outgoing longwave radiation Global”) shows that the annual seasonal variation in OLR is in the order of 10 W/m^2, which is 400 times what the IPCC states would be the effect from the 2 ppmv increase per year, and 8000 times greater than the portion attributed to the 0.1 ppmv/year human contribution.
What makes the whole thing totally ridiculous is that, even in spite of the mostly natural 2 ppmv/year increase in CO2 and the 57.1% increase in CO2 emissions over the past 31 years, there is no detectable decrease in OLR; in fact the OLR has increased over these 31 years, proving conclusively that there has been absolutely zero enhanced greenhouse effect from CO2 increases, human sourced or otherwise.
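The 0.782 W/m^2 figure can be checked. The comment does not name its formula, so the standard simplified CO2 forcing expression dF = 5.35·ln(C/C0) is an assumption on my part; it reproduces the quoted number to rounding:

```python
import math

# Checking the 0.782 W/m^2 figure with the standard simplified CO2
# forcing expression dF = 5.35 * ln(C/C0).  Using this formula here is
# my assumption; the comment only says "the CO2 forcing parameter of
# the IPCC climate models".
c0, c = 337.0, 390.0                 # ppmv in 1979 and ~2011
dF = 5.35 * math.log(c / c0)
print(f"{dF:.3f} W/m^2")             # 0.781, i.e. the quoted 0.782 to rounding
print(f"{dF / 31:.3f} W/m^2 per year over the 31 years cited")   # ~0.025
```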
Man is responsible for 4 ppm/yr based on Gt of fossil fuel burning, so it looks quite easy to conclude who is responsible for the 2 ppm/yr.
Jim D
A bit too “easy”, I’d say. [For an alternate “conclusion”, check the suggestion by Murry Salby.]
Max
Max,
You’re making the same mistake as Nick Stokes in thinking the Earth “knows” that humanity is responsible for a 4 ppmv/yr contribution, and has somehow decided to help us out by absorbing 2 ppmv/yr!
The absorption rate is determined by pCO2, as Nick rightly says. That’s just another way of saying that the natural absorption rate is proportional to atmospheric CO2 concentration and not to the rate of human emissions.
Concentrations would not change that much, in the course of one year, if human CO2 emissions were halved. The natural absorption rate would stay approximately constant, and so would CO2 concentrations. As Jim D says, ”it looks quite easy to conclude who is responsible for the 2 ppm/yr.”
There are really no prizes for getting the answer right to that one!
tempterrain,
If humans add 4 ppmv/year and one measures an increase of 2 ppmv/year, then nature is a net sink of 2 ppmv/year. Thus humans are fully responsible for the increase. If humans were to reduce their emissions to 2 ppmv/year, then we would see stable levels in the atmosphere, but until now humans have emitted twice the amount measured as increase over the past 150 years, with a slightly exponential increase over the years. That is the reason that there is no leveling off of the increase in the atmosphere and that the sinks also follow the emissions at a near constant rate.
The question that arises is why the sink efficiency increased by a factor of 3, e.g. Sarmiento 2010:
Except for interannual variability, the net land carbon sink appears to have been relatively constant at a mean value of −0.27 PgC yr−1 between 1960 and 1988, at which time it increased abruptly by −0.88 (−0.77 to −1.04) PgC yr−1 to a new relatively constant mean of −1.15 PgC yr−1 between 1989 and 2003/7 (the sign convention is negative out of the atmosphere). This result is detectable at the 99% level using a t-test.
Mr. Englebeen,
how long do you think we will be able to continue putting 4 ppm into the atmosphere, even without moronic politicians interfering in drilling and usage?
Would you estimate even 1000 ppm total?? I personally would be very happy if we could get the CO2 level on earth to 1000 ppm. Unfortunately the capacity of the earth to use CO2 is not going to allow it, unless you have suggestions for how we can process CaCO3 or other compounds economically to release the CO2?
kuhnkat, with the newest drilling techniques (horizontal shale fracking) gas reserves increased by many decades’ worth, and oil is going in the same direction. Thus it looks like there is still cheap energy available for the foreseeable future, including for expanding countries like China, India, Brazil,…
Thus as long as the emissions are slightly exponentially increasing, I expect that the airborne fraction in the atmosphere remains a fixed percentage of the emissions, thus also slightly exponentially increasing. If the year by year emissions don’t increase, we may expect a fixed level in the atmosphere somewhere in the future (when sinks equal emissions). If we halve our emissions (not very likely) at the current sink rate of 4 GtC/year, then the increase in the atmosphere would be zero, etc…
The alternative natural gas deposits show large initial production rates but deplete rapidly. Oil has the same problem; Bakken deposits have a short production life. And where we do find large deposits, such as the tar sands, these require a significant amount of natural gas to achieve an energy return on investment. Hard oil shale is the worst.
The cheap and plentiful fossil fuel energy is still coal, and it is getting dirtier with time as we continue to access lower grades of coal.
A good example of lower grade of coal is lignite, which is classified somewhere between ancient peat moss and carbonaceous mud.
When the future is exploiting lignite, we know prospects are bleak:
http://arkansasnews.com/2011/05/16/lawmaker-sees-promise-in-lignite-as-alternative-fuel/
Mr. Englebeen,
glad to hear you are not a peak oil fan!! 8>)
Webby,
“When the future is exploiting lignite, we know prospects are bleak:”
I hear ya. The most successful renewable energy user in Europe counts burning wood chips as renewable and produces more useable energy from them than their enormous wind capacity!!!
If you check the CO2 emissions data you will see that because of the skyrocketing oil price CO2 emissions decreased from 1979 to 1980, and again from 1980 to 1981, and again from 1981 to 1982.
If you check the CO2 data from Mauna Loa Observatory, this had no effect on the year to year increase in atmospheric CO2 concentration.
If a decrease in CO2 emissions from fossil fuels three years running does not produce even a deflection in the atmospheric CO2 concentration curve, it must be concluded that the observed increase is definitely not primarily from CO2 emissions from fossil fuels.
Oceans are a very large repository for CO2, containing far more CO2 than the atmosphere.
The Argo Buoys deployed in 2003 show a slight overall cooling in the sea surface but an overall increase in the heat content of the oceans.
Oceans are saturated in CO2, and this saturation is controlled by temperature and pressure, with the deep ocean containing virtually all of the CO2 because of the high pressures. The overall increase in heat content of the oceans leads to increased outgassing of CO2 as the saturation point gets lowered by the increased heat. This is the primary source for the increase in atmospheric CO2 concentration.
A close look at the infamous Al Gore demonstration of the 650,000 years of ice core data showing warming and cooling cycles and changes in CO2 will show that temperature leads CO2 by about 800 years. This is because it takes time to heat the oceans which then outgas CO2 creating this 800 year lag.
There is plenty of physical evidence to demonstrate this; do you have any actual physical evidence to demonstrate your conjecture that “Man is responsible for 4 ppm/yr based on Gt of fossil fuel burning”?
25 Gt CO2 annually emitted from fossil fuels is nearly 1% of what is in the atmosphere, hence 4 ppm/yr. 6 Gt CO2 from the US alone. These are known numbers you can find anywhere for yourself.
The atmosphere contains approximately 2800 Gt of CO2, and each year approximately 750 Gt is added and approximately 750 Gt of CO2 is removed. The annual difference between what is added and what is removed results in an increase or decrease in the year to year concentration. The question is whether changes in the 33.158 Gt (2010) human portion of this 750 Gt are primarily responsible for the observed near linear average increase in CO2 concentration of just over 2 ppmv/year for the past dozen years. If your contention is correct, then year to year changes in CO2 emissions should be seen as year to year changes in CO2 concentration. The devil is in the details, so I have listed the details that cover the data for the past five years.
Current levels of CO2 emissions from fossil fuels for the past five years are: 33.158 Gt (2010), 31.338 Gt (2009), 31.915 Gt (2008), 31.641 Gt (2007), 30.667 Gt (2006) and 29.826 Gt (2005). (BP Statistical Review 2011)
http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
Shows that the annual average concentration for these years has been:
389.78 ppmv (2010), 387.36 ppmv (2009), 385.57 ppmv (2008), 383.72 ppmv (2007), 381.86 ppmv (2006) and 379.78 ppmv (2005).
If your conjecture that fossil fuel emissions are the dominant source for the increase in atmospheric CO2 concentration is correct, then year to year changes in CO2 emissions should match precisely the year to year changes in CO2 concentration.
From 2009 to 2010 emissions increased by 1.820 Gt and CO2 increased by 2.42ppmv.
From 2008 to 2009 emissions decreased by 0.577 Gt but CO2 increased by 1.79ppmv.
From 2007 to 2008 emissions increased by 0.274 Gt and CO2 increased by 1.85ppmv.
From 2006 to 2007 emissions increased by 0.974 Gt and CO2 increased by 1.86ppmv.
From 2005 to 2006 emissions increased by 0.841 Gt and CO2 increased by 2.08ppmv.
The last two values show that a smaller increase in emissions of 0.841 Gt produced a larger increase of 2.08ppmv than the greater increase in emissions of 0.974 Gt which only produced a 1.86ppmv increase in atmospheric CO2 concentration. This would not happen if CO2 emissions from fossil fuels were the dominant source for observed increases in atmospheric CO2 concentration.
Far more obvious is the demonstration that the 1.86 ppmv increase in concentration from the 0.974 Gt increase in CO2 emissions from fossil fuels is very similar to the 1.79 ppmv increase in CO2 concentration from 2008 to 2009, but during that period there was a year to year reduction in CO2 emissions from fossil fuels of 0.577 Gt!!
If CO2 emissions from fossil fuel were in fact the prime source of the increase in atmospheric CO2 concentration, a decrease in emissions would cause a proportionate decrease in the rate of CO2 concentration increase; and since this is not the case, it is an absolute certainty that some other source within the 750 Gt annual addition of CO2 to the atmosphere is the primary source for the observed average 2.046 ppmv/year increase in atmospheric CO2 concentration of the past six years.
Don’t make statements that you can’t back up with hard physical evidence!
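The year-to-year differences Norm lists can be re-derived mechanically from the figures he quotes (BP Statistical Review 2011 emissions in Gt CO2, NOAA ESRL annual mean concentrations in ppmv); this sketch just tabulates them:

```python
# Re-deriving the year-to-year differences from the figures quoted in
# the comment above.  The data values are the commenter's, not mine.
emissions = {2005: 29.826, 2006: 30.667, 2007: 31.641,
             2008: 31.915, 2009: 31.338, 2010: 33.158}   # Gt CO2 (BP)
co2_ppmv = {2005: 379.78, 2006: 381.86, 2007: 383.72,
            2008: 385.57, 2009: 387.36, 2010: 389.78}    # ppmv (ESRL)

for year in range(2006, 2011):
    d_em = emissions[year] - emissions[year - 1]
    d_co2 = co2_ppmv[year] - co2_ppmv[year - 1]
    print(f"{year - 1}->{year}: emissions {d_em:+.3f} Gt, "
          f"concentration {d_co2:+.2f} ppmv")
```

This reproduces the deltas in the comment (e.g. −0.577 Gt against +1.79 ppmv for 2008→2009); whether those deltas should be visible in the concentration at all is exactly what the replies below dispute.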
I don’t think you understand how response functions work. To first-order you have to convolve the emissions amount with the CO2 impulse response function. This will smooth out the atmospheric concentration much like a signal processing filter works.
That is not necessarily the way that research works. A theorist can spend lots of time coming up with a great explanation for some behavior. Since he may be talented at that aspect but perhaps not at running experiments, the researchers who are better at experimental work take on the challenge of proving or disproving the theorist’s assertions.
More importantly, I would add that one shouldn’t make claims without having a basic understanding of the physics.
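The convolution point above can be sketched in a few lines. The impulse response used here (a single 100-year exponential) and the toy emissions series are my own placeholders, not a fitted carbon-cycle model; the point is only that convolving emissions with a slow response smooths out year-to-year wiggles:

```python
import numpy as np

# Minimal sketch of convolving an emissions series with a CO2 impulse
# response.  Both the ~2%/yr toy emissions growth and the 100-year
# exponential response are illustrative assumptions.
years = np.arange(1960, 2011)
emissions = 3.0 * 1.02 ** (years - 1960)       # toy emissions, GtC/yr

t = np.arange(0, 200)
impulse_response = np.exp(-t / 100.0)          # fraction still airborne after t yr

# Truncating the full convolution to len(years) gives the airborne
# burden (GtC) implied by the toy response at each year.
airborne = np.convolve(emissions, impulse_response)[:len(years)]
print(airborne[-1])
```

Dividing the burden by ~2.124 GtC/ppmv would convert it to a concentration anomaly; the resulting series is far smoother than the raw emissions, which is the filtering behavior described.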
There is a seasonal variation in atmospheric CO2 concentration of approximately 6ppmv over the course of a year. This is easily seen on the detail measurements from MLO http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
If you look closely at this graph or the actual monthly data you will see that the variation is not a smooth sinusoid demonstrating that MLO can pick up changes over time periods as short as a month so year to year changes are therefore perfectly represented.
Simply put this means that if CO2 emissions from fossil fuels change from year to year this should present at least some detectable change in the year to year CO2 concentration and since this is not the case changes in something much larger than the CO2 emissions from fossil fuels is the primary source for the observed increase.
If you average the data and smooth out the variations, all you do is remove the evidence that CO2 emissions are not the prime contributor.
By the way there is a three year period from 1979 to 1982 during which there was consecutive year to year decreases in CO2 emissions from fossil fuels with no visible change to the rate of increase in atmospheric CO2 concentration proving once again that CO2 emissions are not the prime source of the observed increase.
If you want some more proof the rate of increase in CO2 emissions changed slope around the year 2000 with the previous 10 years having a significantly lower slope than the later 10 years. If you take the slope of the first ten years and compare it to the increase in CO2 concentration you should be able to calculate a rate of increase in concentration per Gt of emissions increase. If CO2 emissions from fossil fuels are the prime driver this same relationship should hold for the ten years after 2000; if it doesn’t then emissions from fossil fuels are definitely not the primary source for the observed increase. I will leave it up to you to do the exercise and try and prove yourself correct.
If you do not want to go to that trouble, just look at the long-term record.
The increase in CO2 emissions from fossil fuels was essentially zero prior to 1850, but the CO2 concentration was already increasing at a slowly accelerating rate. How do you explain a hundred years of CO2 concentration on the same accelerating curve, reaching the current asymptotic trend of 2 ppmv/year, a curve that was in place before there were increasing CO2 emissions from fossil fuels?
I can demonstrate this half a dozen different ways, but if you are a true scientist you only need one to abandon this error ridden concept and research the actual source of the CO2 concentration increase. If you are not a true scientist but merely a researcher with a preconceived notion, then you will simply dismiss physical evidence and attempt to justify what is clearly false.
The basic physics is that the CO2 molecule is linear and symmetrical and therefore doesn’t have the necessary permanent dipole moment to allow interaction with all wavelengths radiated by the Earth, and is limited to just a single resonant wavelength band centred on 14.77 microns. Because clouds and water vapour account for well over 90% of the Earth’s 33°C greenhouse effect, the remaining 3.3°C greenhouse effect possibly attributable to CO2 represents at least 80% of the energy within this 14.77 micron band already being accessed, leaving only 20% of 3.3°C further possible effect from CO2 regardless of how large the concentration becomes. This is basic physics, and basic physics trumps fabricated computer models and unfounded estimates of the source of observed CO2 increases.
“Simply put this means that if CO2 emissions from fossil fuels change from year to year this should present at least some detectable change in the year to year CO2 concentration”
There would be change in the year to year CO2 concentration even if CO2 emissions from fossil fuels did not change from year to year. Which may very well make it impossible to detect what you are trying to detect.
Norm, you are kind of all over the map here. Especially at the end you bring up CO2 absorption characteristics which has nothing to do with its residence time.
The small sinusoidal ripple is very easy to fit as it is modeled as a steady state response to a long-term natural variation. In other words, according to signal processing theory, a response function applied to a sinusoidal function in the steady state will result in a scaled version of the original sinusoidal signal. This is one of those mechanisms that is so common that an engineer or scientist should not even bat an eye in doing the analysis.
From the Fourier transform of the impulse response, one can accurately predict the scale of the periodic excursions of the natural CO2 emissions prior to the filter being applied. Read the amplitude response at the natural frequency and that is the filtering scale factor.
As far as the noise is concerned, knock yourself out trying to figure out what causes it. It could be measurement noise after the response occurs which would make it problematic to distinguish from random natural events.
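The steady-state scaling described above can be illustrated with the textbook first-order result: a linear response only scales (and phase-shifts) a steady sinusoid, with gain 1/sqrt(1 + (ωτ)²). The 5-year time constant here is purely illustrative, not a claim about the actual carbon-cycle response:

```python
import numpy as np

# Sketch of the signal-processing point: a first-order linear response
# with time constant tau scales a steady sinusoid at angular frequency
# w by the gain 1/sqrt(1 + (w*tau)^2).  tau = 5 yr is a placeholder.
tau = 5.0                        # years, illustrative response time
w = 2 * np.pi / 1.0              # annual cycle, rad/yr
gain = 1 / np.sqrt(1 + (w * tau) ** 2)
print(f"annual-cycle amplitude is scaled by {gain:.3f}")
```

Reading the gain off at the natural frequency, as the comment says, recovers the scale factor between the pre-filter and post-filter seasonal amplitude.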
Simply put this means that if CO2 emissions from fossil fuels change from year to year this should present at least some detectable change in the year to year CO2 concentration and since this is not the case changes in something much larger than the CO2 emissions from fossil fuels is the primary source for the observed increase.
Norm, I’m on your side on this one: CO2 emissions from fossil fuels do indeed change from year to year, and we should indeed see the effects of these changes within a year, or even six months.
But let’s sit down and do the math here. The atmosphere weighs 5140 teratons (aka exagrams), and a mole of air weighs 28.97 g, so the atmosphere amounts to 5140/28.97 = 177 examoles, agreed? A millionth of that is 177 teramoles. So a fluctuation of 6 ppmv is 6*177 teramoles, or about one petamole of CO2.
Since the carbon in a mole of CO2 weighs 12 g, a petamole represents 12 petagrams, aka gigatons, of carbon.
Now the annual carbon emissions from fossil fuel in 2009 amounted to around 9 gigatons (this year it should hit 10 GtC). So in order to compete with the annual 6 ppmv CO2 fluctuation, mankind would have to suspend all fossil fuel emissions for the year, and even then that would only get you a 4.5 ppmv reduction.
I don’t know what fluctuations you had in mind, but if you look at the record of global fossil-fuel CO2 emissions maintained by the US Department of Energy’s Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory, you can easily see that any departures from a smoothly growing curve are less than 100 megatons or 0.1 gigaton.
I fully agree with you that the CO2 level at Mauna Loa should fluctuate with fluctuating fossil fuel emissions. This fluctuation will however be less than 1% of the annual 6 ppmv fluctuation, or at most .06 ppmv.
Since the month-to-month fluctuations at Mauna Loa are much greater than this, what you’re looking for will be completely masked by random noise.
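Vaughan’s unit conversions are easy to verify directly. One small discrepancy worth flagging: with these numbers, suspending 9 GtC of emissions buys about 4.2 ppmv rather than the 4.5 quoted; I have left his figures as he stated them.

```python
# Checking the mole/mass conversions in Vaughan Pratt's comment.
atmosphere_mass = 5140.0     # teratonnes of air
molar_mass_air = 28.97       # g/mol
moles_air = atmosphere_mass / molar_mass_air   # in examoles (Emol)
print(f"{moles_air:.0f} Emol of air")          # ~177

# 1 ppmv of CO2 is a millionth of that (177 Tmol); carbon is 12 g/mol:
gtc_per_ppmv = moles_air * 12 / 1000           # GtC per ppmv
print(f"{gtc_per_ppmv:.3f} GtC per ppmv")      # ~2.13

# Comparing the 6 ppmv seasonal swing with ~9 GtC annual emissions:
print(f"6 ppmv = {6 * gtc_per_ppmv:.1f} GtC")
print(f"9 GtC  = {9 / gtc_per_ppmv:.2f} ppmv")
```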
Vaughan,
Is this a strawman? The change in the year to year CO2 concentration is the annual growth rate. It looks like this:
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf
Does it look like the growth is caused by anthropogenic emissions? Do the math.
Edim – are you a strawman? Let me suggest you open an Excel worksheet. Column B will be the years 1960 to 2010. Column C should start at 2.8, and increase by 0.025 a year. Column D should be =2*RAND()-3. E1 should be =C1+D1, E2 should be =E1+C2+D2.
Here, C is meant to represent human emissions, D to represent ocean and ecosystem uptake, and E to represent concentrations. If you then make a column F which is E2-E1, this represents the annual mean growth rate. Plot F versus B. You’ll get something that looks a lot like the annual mean growth rate plot that you link to.
Now, does it look like the growth in CO2 is due to column C? Do the math.
-M
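M’s Excel recipe transcribes directly to a few lines of Python (the seed is my addition, for reproducibility): smooth, slowly growing “emissions” plus uniform random “uptake” noise produce a concentration series whose year-to-year differences look as jagged as the observed growth-rate plot.

```python
import random

# M's spreadsheet columns, transcribed: B = years, C = "emissions",
# D = uniform noise in [-3, -1) standing in for ocean/ecosystem uptake,
# E = cumulative concentration, F = annual growth rate (E's differences).
random.seed(0)                                              # reproducibility
years = list(range(1960, 2011))                             # column B
emissions = [2.8 + 0.025 * i for i in range(len(years))]    # column C
uptake = [2 * random.random() - 3 for _ in years]           # column D
concentration = []                                          # column E
for c, d in zip(emissions, uptake):
    prev = concentration[-1] if concentration else 0.0
    concentration.append(prev + c + d)

growth_rate = [b - a for a, b in zip(concentration, concentration[1:])]  # col F
print(growth_rate[:5])
```

Plotting `growth_rate` against `years` gives a noisy series around a gentle upward trend, which is the resemblance to the observed annual-mean growth-rate plot that M is pointing at.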
Does it look like the growth is caused by anthropogenic emissions? Do the math.
Yes, let’s resolve this by doing the math. The year-to-year fluctuations in that graph are on the order of 0.5 ppmv, or approximately 1 GtC. Annual fossil fuel emissions are 9 GtC. Nature’s contribution is around 210 GtC, so the total natural and human CO2 emissions come to around 220 GtC.
So even before looking at anthropogenic emissions, these half-ppmv fluctuations in that graph represent 1/210 < 0.5% of the total natural emissions.
Do I understand you to be telling nature she’s not allowed to fluctuate by 0.5% of her total annual emissions?
I don’t know what you’re seeing in that graph that proves your point, but what I’m seeing there is a 0.5% noise level in natural CO2 emissions. Though small in the big picture, this is nevertheless large enough to dwarf the fluctuations Norm is looking for. The signal is simply too noisy to allow him to tell whether those fluctuations are present.
Norm, you have echoed the Salby argument about correlations. The data on CO2 rise fit the anthropogenic emissions extremely well when you take into account that the ocean/biosphere sink becomes less efficient when it is warmer. In warm years CO2 rises faster even if emission stays fairly constant because a warmer ocean can’t take up so much.
Jim D,
You need to prove CO2 has reached or near saturation in the ocean in order to make such a statement ‘a warmer ocean can’t take up so much’. The ocean is far from CO2 saturation. I thought AGWers say warmer induce more water evaporation and more precipitation. More precipitation will absorb more CO2 in the air.
The ocean doesn’t need to reach “saturation” to lower its rate of CO2 uptake. You’re applying only chemical stoichiometry, but failing to consider chemical kinetics (rates of chemical reactions).
settledscience,
I hope you digest what I said and what Jim D said. Did I mention it did not change rates? Do you know the ocean CO2 absorption rates for a 1°C SST difference? If you knew, you would not make such a statement.
Sam NC, it is not saturation, it is the equilibrium ratio, that depends on temperature. Warmer temperatures favor keeping more CO2 in the atmosphere versus in the ocean (much like water vapor in this way).
Digest this, Sam.
The ocean doesn’t need to reach “saturation” to lower its rate of CO2 uptake. You’re applying only chemical stoichiometry, but failing to consider chemical kinetics (rates of chemical reactions).
Jim D told you this too.
“Sam NC, it is not saturation, it is the equilibrium ratio, that depends on temperature.”
The equilibrium ratio changes as the rate of emission exceeds the rate of absorption.
Finally, your claim that due to warming, higher levels of “precipitation will absorb more CO2 in the air” is irrelevant because it is not quantified.
Do you believe that absorption is higher than the amount emitted? Based on what study?
Nothing. You just pulled it out of your arse.
Jim D,
Ah, so now you realize that CO2 in the atmosphere is not entirely due to man made.
Sam NC, maybe you just realized that. Everyone else knew that 280 ppm was the natural level, and this cycles between the atmosphere and ocean.
Norm,
Nice analysis. AGWers are having a difficult time responding adequately.
Quite the opposite: the calculations above show that the fluctuations at Mauna Loa due to the fossil fuel emission fluctuations Norm asked about, which are on the order of 100-150 megatons of carbon (= 350-550 megatons of CO2), can be responsible for at most 1% of the annual oscillation in the Keeling curve. The noise in the Keeling curve is substantially more than 1%, making a 150-megaton variation in annual carbon emissions essentially invisible.
Incidentally one thing I forgot to take into account in my calculations is that only half the emissions remain in the atmosphere. So my 1% should have been 0.5%. That only makes fuel fluctuations even less visible.
Norm is talking about something too small to be observable in the Mauna Loa data.
Strawman. Norm is talking about the annual growth rate of atmospheric CO2 and the rise of anthropogenic CO2 emissions (~3 Gt in 1960 and 10 Gt now, according to your link).
Yes, Norm is indeed talking about those, as am I. How does that make my calculations a strawman? Are you saying I made a calculation error somewhere?
It seems that you are talking about anthropogenic emissions fluctuations (100-150 MtC). He’s talking about the anthropogenic CO2 increase (~4 GtC in 1960 and ~9 GtC now, according to your link). He compares it with the annual atmospheric CO2 growth rate. Does it look like human emissions are causing the atmospheric CO2 growth?
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf
Edim
It looks like CO2 annual mean growth varies with temperature.
Who’d a thunk it?
Cheers
Does it look like human emissions are causing the atmospheric CO2 growth?
Yes it does, on the assumption that nature is drawing down 45% of fossil fuel emissions (not all of our emissions remain in the atmosphere). Let’s look at the emissions for 2005 since that’s the middle of the rightmost decade in this graph. That was 7.97 GtC. The 55% of that left in the atmosphere is 4.38 GtC. Now 1 ppmv of atmospheric CO2 equals 5.140/28.97*12 = 2.124 GtC. So we should have seen 4.38/2.124 = 2.06 ppmv for 2005.
Looks like that to me. Are you sure we’re looking at the same graph? Or have I misunderstood what you meant?
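Vaughan’s 2005 check is a one-liner to reproduce (his 2.124 GtC/ppmv differs from the raw 5.140/28.97×12 in the third decimal, so this uses the full expression):

```python
# Reproducing Vaughan Pratt's 2005 figure: emissions times airborne
# fraction, converted to ppmv.  All numbers are his.
emissions_2005 = 7.97                 # GtC
airborne_fraction = 0.55              # 55% of emissions stays airborne
gtc_per_ppmv = 5.140 / 28.97 * 12     # GtC per ppmv (he rounds to 2.124)
expected_rise = emissions_2005 * airborne_fraction / gtc_per_ppmv
print(f"{expected_rise:.2f} ppmv")    # 2.06, matching his result
```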
It looks like CO2 annual mean growth varies with temperature.
I think CH wins this one. I lined up the NH temperature graph from WoodForTrees for 1960-2011 with Edim’s CO2 graph and put the result up at http://thue.stanford.edu/TempCO2corr.JPG .
While the CO2 isn’t perfectly tracking temperature (after all there are other natural contributors to CO2 such as volcanoes that would throw off any such correlation), it’s still a pretty impressive correlation.
One very noticeable place where the temperature rises while CO2 goes down is 1990-1991. The cataclysmic explosion of Pinatubo on June 15, 1991 in the Philippines, 5000 miles west of Mauna Loa, might be relevant. In 1992 the NH temperature declines, consistent with a massive ash cloud lasting a couple of years. The CO2 then rises in 1993. Hard to tell what’s going on there. The rest of the half-century doesn’t seem to have as big a reverse correlation as 1990-1991.
Little busy, but have a look at this paper; especially Figure 3.
http://www.whoi.edu/cms/files/Zhang_et_al_12Feb08_final_DSRII_34850.pdf
Marine photosynthetic microorganisms fix more than 10x their body mass each year. They move up and down the ‘mixing layer’ during a full seasonal cycle. Moreover, the feces of the animals which eat them fall as ‘snow’ and are oxidized all the way down to the bottom.
Treating the oceans as a chemical onion misses the whole oceanic biosystem.
If you could follow an isolated CO2 molecule through the atmosphere, ocean, biosphere, etc you would find that it would exchange reservoirs rapidly (of order years or less). However, exchange does not necessarily imply net CO2 drawdown.
The relevant piece of information for climate change is instead the perturbation timescale of the excess CO2 (i.e., if we throw an extra slug of 100 ppm of CO2 into the atmosphere, how long before it decays back to pre-perturbation levels?). A single number is not a very useful answer to this question, because removal acts on multiple timescales, ranging from decades to hundreds of thousands of years, with governing processes ranging from ocean chemistry to silicate weathering. It would take many millennia to completely draw down all the excess CO2, as indicated by carbon cycle models and past analogs (e.g. the PETM, which took some 150,000 years to recover). In fact, a recent NAS report was dedicated to precisely this issue, looking at the long-term impacts of excess CO2. This page summarizes the relevant timescales and removal processes.
http://www.nap.edu/openbook.php?record_id=12877&page=75 (and following pages)
Another review of this topic is in
David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil fuel carbon dioxide, Annual Review of Earth and Planetary Sciences 37:117-134, doi 10.1146/annurev.earth.031208.100206, 2009.
The timescale commonly cited (~100 years) by Fred and others is in fact a poor representation of the carbon cycle, based largely on an application of linear kinetics that tells only part of the story. Equilibration with the ocean takes a couple of centuries (depending also on the size of the perturbation), while something around 25% of the excess CO2 is fated to be removed by slower chemical reactions with CaCO3 and igneous rocks. This is especially relevant for slow feedbacks like ice sheet responses.
On the extreme short side, the “5 year” timescale for CO2 lifetime is often taken to imply that if we stopped burning CO2 today, we’d return to 280 ppm in a few years. None of this is in line with anything we know about carbon cycle physics, nor can it make sense of the observed record of CO2 levels. It’s just as nonsensical as the “anti-greenhouse effect” stuff played up by Claes Johnson or Postma. In fact, a substantial fraction of anthropogenic CO2 will persist in the atmosphere for much longer than a century.
This, and the underlying long-term temperature response, was also the subject of a very good paper by Matthews and Caldeira a couple of years ago on why it takes near-zero emissions to stabilize atmospheric CO2:
https://www.see.ed.ac.uk/~shs/Climate%20change/Data%20sources/Matthews_Caldeira_%20Instant%20zero%20C%20GRL2008.pdf
Chris – I haven’t cited a time scale of 100 years, but rather emphasized the multiple trajectories with their different timescales, including the very long tail of the distribution. However, for readers interested in a single number, we have to come up with something, and a number in the order of about 100 years is not unreasonable to convey a sense of the long residence time, even though it has no formal mathematical meaning. No single number can capture the full decay, but 100 years may not be too far off from the time it would take about half of the excess to be absorbed, even though that is not a “half life” in the exponential decay sense, and it understates the slowness with which the remaining half would disappear. I don’t think there is much disagreement about the actual nature of the reduction in CO2, as your comments, plus those of some others of us below, make clear.
However, for readers interested in a single number, we have to come up with something, and a number in the order of about 100 years is not unreasonable to convey a sense of the long residence time.
I cringe when I read things like this. It gives science a bad name to overstate its confidence like that. If science doesn’t have a number it shouldn’t make one up.
The problem with a made up number is that people may start repeating it and pretty soon you’ve got everyone agreeing that that must be the correct number, for no better reason than that everyone says it is.
It is not a matter of overstating confidence. It is a matter of what is the best value to characterize a highly non-exponential process. I.e., the main problem is not a lack of knowledge but a lack of a way to express that knowledge by a single number.
Agree with Fred and Joel. To sate my curiosity, I created a coupled set of differential equations described with a diagram below in this thread.
Fat tails are not well described by conventional statistics, since the usual statistical moments (mean, variance, etc.) may not even converge. The best I have been able to come up with is presenting the shape of the curve along with a characteristic time that is usually related to a median value. Again this is less than ideal, because you might find a characteristic time of 20 to 30 years while the fat tail still dominates. So we need a canonical representation of the impulse response.
I would suggest a hyperbolic function as an impulse response curve. Previously I was able to take the fossil fuel emission curve and convolve with this impulse response and found that it kept track of the atmospheric CO2 over the industrial age.
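To make the convolution idea concrete, here is a sketch with a made-up exponentially growing emissions series standing in for the fossil fuel record, and an illustrative k (neither is a fitted value):

```python
import numpy as np

# Convolve a synthetic emissions history with a hyperbolic impulse
# response 1/(1 + k*sqrt(t)). Both the growth curve and k are
# illustrative stand-ins, not fitted to any real record.
years = np.arange(0, 160)                     # e.g. a 160-year industrial era
emissions = 0.1 * np.exp(0.02 * years)        # GtC/yr, made-up growth curve
k = 0.5
response = 1.0 / (1.0 + k * np.sqrt(years))   # fraction remaining after t years

# Discrete convolution: atmospheric excess = sum over past emissions,
# each weighted by how much of that year's pulse still remains.
excess = np.convolve(emissions, response)[:len(years)]
print(f"Excess carbon in year {years[-1]}: {excess[-1]:.1f} GtC")
```

The point of the exercise is that the fat-tailed response, convolved with growing emissions, produces a smooth accelerating rise of the kind seen in the atmospheric record.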
From oil discovery to oil production to emission and to sequestering, everything is a compartment model, and it is all connected. It’s kind of magical, though not surprising, that the math works on a statistical scale.
Vaughan,
do you ever have nightmares of people in white smocks surrounding your home with burning torches chanting??
I know the kind you’re thinking of, kuhnkat, but I seriously doubt enough of them could afford the bus fare to California to be much of a nuisance.
Anyway we’re used to that sort of thing here around Christmas. After a couple of chants we hand out cookies and they move on to the next house.
However if by some remote chance Ottawa’s ferocious Greenfyre were to come upon the two of us conspiring to confuse the public, I wouldn’t have to be able to run faster than him, only faster than you.
Not a problem. I am protected by the Second Amendment. Have fun leading them on a Long Beach or San Francisco Marathon.
Dr. Pratt, I feel your pain. I cringe when people use control theory and global averages with simplified radiative physics, neglecting the temporally and spatially varying responses of conduction and convection impacted by pseudo-cyclic perturbations. :)
You must be on an appointments and promotions committee, Dallas, you sound like you’ve read one tenure case folder too many.
Athletes get tendonitis, A&P committee members get tenuritis. Who can afford to pay attention to the rule that the candidate’s case be presented in language that the Board of Trustees can understand? Most institutions would have a hard time maintaining a reasonable senior-junior faculty ratio if they took that rule seriously.
I have never been on a blog comment thread for this long where the insight keeps coming. And it really doesn’t matter if it is good insight or misdirection: like entropy and disorder getting in the way, one has to be able to reason around the perturbations.
Sorry, I was attempting a pithy summary. :)
I am agreeing with you. You have some good ideas, and so keep them coming.
In fact, it would take many millennia to completely draw down all the excess CO2, as indicated in carbon cycle models and past analogs (e.g. PETM, which took some 150,000 years to recover).
Chris, how do the carbon cycle models account for the 4 GtC/yr increase in natural removal of CO2 from the atmosphere, offsetting our 9 GtC/yr by nearly 50%? And how confident are they of their accounting?
Also is PETM a good analogy to the present? The current rate of increase is over 2 ppmv/yr, and the drawdown is tracking that increase at around 1 ppmv/yr. Unless PETM witnessed something remotely resembling that rate of increase it may also not have experienced the rapid drawdown we’re currently in the middle of.
We may be entering a time when nothing in the past is a reliable analogy. For all we know a speedy rise may be followed by a speedy decline.
Or not, but absent suitable precedents it’s hard to say one way or the other. This is a relatively speculative corner of climate science, unlike better understood things like the rate of onset of CO2 and temperature.
Vaughan: around 20 to 30% of the CO2 we emit will stay in the atmosphere until slow processes like sedimentation, weathering, and carbonate formation can act – that’s tens of thousands of years.
The other 70-80% will eventually end up in the oceans and ecosystems, but even that takes time, and is controlled by diffusion rates in the ocean and convective currents – so even to get down to that 20-30% takes decades to centuries.
I did a rough back-of-the-envelope calculation once, and I think I decided that we’ve already emitted enough CO2 so that if we stopped emitting today, we’d slowly relax back to a lower limit of 310-350 ppm CO2 (low end assumes that land-use change emissions are reversible and only 20% stays in the atmosphere, upper end includes land-use change emissions and a 30% persistence). The initial drop would be at the current rate of natural uptake, but would slow as it asymptotically approaches that lower limit over centuries… and then after tens of thousands of years it would start dropping below that lower limit as it presumably would eventually return to the 280 ppm preindustrial level.
-M
M, the fallacy in any reasoning that talks about “20 to 30% of the CO2 we emit” is that, as soon as it’s emitted, it is indistinguishable from the 210 GtC nature emits in parallel with our 9 GtC/yr contribution.
For that reason it only makes sense to talk about 1-1.5% of atmospheric CO2, not about the 20 to 30% of what we no longer own. It doesn’t exist as “ours” any more.
Nature is currently removing 214 GtC/yr, up 2% from what it was a century ago, and if that were to increase by another 0.5% for some reason, right there we’d see the rate of increase decline by 1 GtC/yr, or around 20% of the 5 GtC/yr that atmospheric carbon is currently increasing by.
Columbia’s Klaus Lackner, as well as Exxon, have been working on directly extracting CO2 from the atmosphere. Leveraging Nature’s 214 GtC/yr removal program strikes me as a potentially powerful alternative to Lackner’s very labor-intensive approach.
… if we stopped emitting today … the initial drop would be at current rate of natural uptake,
Apologies for continuing to contradict you, I’m starting to feel like a wet blanket. However if you have a human pouring 300 tons/sec into a leaky bucket that has a natural leak of 100 tons/sec, and the human turns off the hose, the rate of increase does not drop by 100 tons/sec. It drops by 300 tons/sec because the rate of increase went from +200 to -100.
If the hose output is reduced suddenly, the filling rate drops initially by whatever the hose rate dropped by, up to and including all of it. Nature has no say in that.
we’d slowly relax back to a lower limit of 310-350 ppm CO2
Assuming exponential decay, that’s two parameters: the asymptotic limit (which you gave) and the rate it is approached (which you didn’t). It would be very interesting to see how you arrived not only at the number you gave but also the one you didn’t. (Error bars would be even better but I’ll settle for just the two numbers for starters.)
For all I know you may have a compelling analysis. I don’t like trying to second guess these things.
” if we stopped emitting today … the initial drop would be at current rate of natural uptake,”
I think you misread me, and we actually agree on this point. What I meant was that if we see 5 GtC natural uptake per year today (along with our 10 GtC emissions), that if we eliminate human emissions, atmospheric loading of CO2 will drop by 5 GtC per year, because the rate of uptake is controlled by the difference between the atmospheric concentration and the concentrations in the various ecosystem, soil, and ocean reservoirs, so, to a first order, the uptake rate in any given year is independent of human emissions in that year (though it does depend on how much was emitted in the previous decades, which controls how far out of equilibrium the atmosphere is).
“M, the fallacy in any reasoning that talks about “20 to 30% of the CO2 we emit” is that, as soon as it’s emitted, it is indistinguishable ”
I’ll disagree with you here. Dollars are indistinguishable, but if my bank account is approximately in a steady state (income in equals rent plus food plus entertainment money out), and then my grandmother gifts me $100, I can perfectly well note that 20% of that $100 will end up permanently increasing my retirement account, but the other 80% will go towards more entertainment and food (rent being fixed). There’s a fixed quantity of labile carbon in the ocean/atmosphere/ecosystem/soil system. If I dig up coal and burn it, I have increased that total amount of carbon. I can calculate how that extra carbon will distribute itself between the reservoirs at equilibrium. Therefore, I can talk about how 20% of that extra carbon will remain in the atmosphere after the system reaches its new equilibrium.
“Nature is currently removing 214 GtC”:
I also disagree with the way this is formulated: I’d argue that Nature is currently removing only a few GtC. The rest of it is in balance – plants breathe in and breathe out, leaves grow, fall, and decay, carbon enters the ocean and leaves the ocean. An increase in atmospheric concentration disturbs the balance a bit – a little more enters the ocean than leaves, plants grow a little bigger than they would have otherwise – but only a bit. Yes, you can calculate a Gross Primary Productivity of the ecosystem of 120 GtC per year, but I think the net is more informative than the gross for most purposes. Especially when you are talking about the 70 GtC going into (and out of) the ocean. It might be possible to change this, but only at great expense: iron fertilization the oceans probably won’t actually work (and would be ecologically disruptive), and we are already doing a bunch of terrestrial ecosystem harvesting and storage (eg, making wooden buildings) but the sheer volumes of matter we’d have to store/bury would be… well, in the gigatons per year to make a difference. That’s a lot of trees buried. I’m also unconvinced that Lackner’s work will ever be practical due to thermodynamic energetic constraints as well as the volumes of material the process would create. (I like Caldeira’s thinking: any carbon-capture system that can compensate for our emissions will have to be at least as large as our current fossil infrastructure, if not 3 times as large if you have to deal with the CO2 and not just the carbon)
“It would be very interesting to see how you arrived not only at the number you gave but also the one you didn’t.”
My “20-30%” comes from Archer et al. 2009 (http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf). The fossil & land-use emissions I used come from CDAIC (http://cdiac.ornl.gov/). I assumed the unperturbed CO2 concentration was 280 ppm. So, 280 ppm + (347 GtC)*0.2/(2.12GtC/ppm) equals about 310 ppm (lower bound) (upper bound uses 0.3 and included land-use change emissions). I didn’t give an asymptotic approach rate: Archer says “2 to 20 centuries”, but probably looking at the actual model results in his paper will give you a better feel for those rates.
-M
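M’s bound above can be reproduced in a few lines (the 347 GtC fossil total and the 2.12 GtC/ppm conversion are as quoted; the 160 GtC cumulative land-use figure is my own illustrative stand-in, since the exact number wasn’t given):

```python
# Long-term CO2 floor from M's figures: baseline + persistent fraction
# of cumulative emissions, converted from GtC to ppm.
BASELINE = 280.0       # ppm, preindustrial
GTC_PER_PPM = 2.12     # conversion quoted above

fossil = 347.0         # GtC, cumulative fossil emissions (as quoted)
land_use = 160.0       # GtC, ILLUSTRATIVE cumulative land-use emissions

low = BASELINE + fossil * 0.2 / GTC_PER_PPM                 # 20% persists
high = BASELINE + (fossil + land_use) * 0.3 / GTC_PER_PPM   # 30% persists
print(f"Long-term floor: {low:.0f}-{high:.0f} ppm")
```

which lands close to the 310-350 ppm range quoted above, given the assumed land-use total.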
I think you misread me, and we actually agree on this point. What I meant was that if we see 5 GtC natural uptake per year today (along with our 10 GtC emissions), that if we eliminate human emissions, atmospheric loading of CO2 will drop by 5 GtC per year, because the rate of uptake is controlled by the difference between the atmospheric concentration and the concentrations in the various ecosystem, soil, and ocean reservoirs, so, to a first order, the uptake rate in any given year is independent of human emissions in that year (though it does depend on how much was emitted in the previous decades, which controls how far out of equilibrium the atmosphere is).
So far I’m not convinced we agree, but perhaps you can persuade me otherwise. Continuing with my 300 tons/sec bucket example, I believe you’re saying that turning off the 300 tons/sec when there’s a leakage of 100 tons/sec will take 200 tons/sec off the loading. That is, we were adding 200 tons/sec and now we’re not.
Where we disagree is on the importance of the 100 tons/sec continuing to leak. Whereas you don’t want to count that, I do.
Let me stop for a second and see whether you still think there’s no difference between our respective viewpoints.
Okay. We have a bucket with 100 stones in it. Every day, I add 10 stones, and nature takes away 5 stones. So, under current conditions, the bucket increases by 5 stones per day (eg, it would be at 105 stones tomorrow). If I stop adding stones, nature is still taking away 5 stones per day, so the “loading” of the bucket is decreasing by 5 stones per day (eg, it would be at 95 stones tomorrow).
Eventually, nature’s uptake will drop below 5 stones per day, because it turns out that the natural uptake is controlled by the difference between bucket 1 (with 100 stones) and bucket 2 (with 50 stones), and once the two buckets are equalized, nature will stop taking stones out of the first bucket. But, to first order, nature’s stone removal tomorrow is not dependent on whether or not I’ve added another 10 stones today: nature will take 5 stones out either way.
I do think we’re still in agreement: to use your 300 ton/sec minus 100 ton/sec, we’re going from +200 ton/sec to net -100 ton/sec: the 100 ton uptake is constant (in the short term), regardless of whether we’re adding 300 tons or 0 tons. Right?
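The stone-bucket argument fits in a one-line update rule (rates invented for illustration):

```python
# Bucket 1 holds 100 stones; natural uptake is proportional to its
# excess over bucket 2 (50 stones), giving 5 stones/day either way.
def step(stones, added_per_day, uptake_rate=0.1, bucket2=50):
    return stones + added_per_day - uptake_rate * (stones - bucket2)

with_us = step(100, 10)     # keep adding 10/day
without = step(100, 0)      # stop adding
print(with_us, without)     # 105.0 and 95.0: uptake (5) is the same either way
```

The uptake term depends only on the current levels, not on today’s additions, which is exactly the first-order independence being argued here.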
Good information, thanks.
When I first looked at the IPCC Bern profiles, I thought the CO2 impulse response appeared as a diffusive fat-tailed curve. From what I have learned from you and Bart and Manacker, I decided to bite the bullet and create a mesh of first-order rate equations to model the diffusion to the deeper sequestering sites.
http://img534.imageshack.us/img534/9016/co250stages.gif
This model goes on for about 50 stages, with the steady state showing an equal amount of carbon at each stage. The interesting feature is the shape of the atmospheric CO2 curve; this indeed shows a 1/(1+k*sqrt(t)) dependence which is very close to the IPCC Bern model.
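A stripped-down version of such a slab model, in case anyone wants to play with it (layer count and rates here are illustrative choices, not the actual parameters of the model above):

```python
import numpy as np

# An atmosphere box coupled to a chain of N diffusive layers by
# first-order exchange: a fast rate at the atmosphere interface,
# slower rates between the slab layers.
N = 50
k_top, k_slab = 1.0, 0.1     # illustrative exchange rates
c = np.zeros(N + 1)
c[0] = 1.0                   # unit impulse of CO2 into the atmosphere
k = np.full(N, k_slab)
k[0] = k_top

dt = 0.05
for _ in range(int(200 / dt)):             # integrate 200 "years"
    flux = k * (c[:-1] - c[1:])            # exchange between adjacent boxes
    c[:-1] -= flux * dt                    # carbon leaving each upper box...
    c[1:] += flux * dt                     # ...arrives in the box below
print(f"Atmospheric fraction remaining: {c[0]:.2f}")
```

The atmospheric box drops quickly at first and then develops the slow, fat-tailed decline, consistent with the 1/(1+k*sqrt(t)) shape described above.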
I also now agree with Bart and Vaughan and a few others who think that the definitional residence time of CO2 in the atmosphere is a bit of a red herring. The residence time in the atmosphere may be 10 years, but when one factors in the interchange of CO2 across the boundaries of the system, the actual impulse response does get these fat tails.
I think it explains much of what is happening.
The interesting feature is the shape of the atmospheric CO2 curve; this indeed shows a 1/(1+k*sqrt(t)) dependence which is very close to the IPCC Bern model.
This would be very reassuring if it included estimates of (i) any recent increase in vegetation biomass, and (ii) how quickly that vegetation will throw in the towel upon commencement of a program of starving it of CO2.
If the answer to (i) is “negligible” I would be more comfortable with the estimates people are coming up with based on exponential decays. However in my experience life is not into exponential decay, quite the opposite in fact according to Malthus. (The Black-Scholes-Merton partial differential equation for the price of a financial asset is similarly opposite to what happens in physics.) And not just for human populations but all populations of living creatures including vegetation.
This is why I’m suspicious of these arguments about residence time: they start from the premise that there is no life on Earth at all, let alone intelligent life. This may be the impression created by Internet blogs, but for my money some vegetables are pretty intelligent. Those Venus flytraps can outsmart flies for example, and generally speaking flies seem smarter than many bloggers, present company excluded of course.
Black-Scholes is derived from the Fokker-Planck equation, which is the so-called “master equation” used quite often in physics. Financial returns randomly walk about some central value. The problem is that Black-Scholes, like many formulas, doesn’t work when game-theoretic strategies and human psychology are involved.
But that’s beside the point. The set of equations I solved was precisely the Fokker-Planck formulation with drift removed. It is a purely diffusional process set up as a staged slab model. It is only odd in that I have the top layer with different rates than the diffusional slab layers.
And I agree with you about the residence time. What I devised was a set of somewhat stiff equations; the stiffness comes from the rate at the atmosphere interface being faster than the rate between the slab layers.
This is a fascinating problem to model and if we don’t do it, the engineers will. If any engineering is done with respect to sequestration they will likely do it, because it is all about solving problems that their bosses lay in front of them. In other words, engineers use models out of necessity and wanting to keep getting paid. I think we are here because we see it as a challenge.
WHT, your diagrams are a tad cryptic but I get some of the idea.
Have you looked into reconciling your modeling with results such as those in the Domingues et al paper I cited earlier?
The connection I intended between Malthus and Black-Scholes is that both are simply time-reversed versions of what we can observe in simple physical situations, respectively exponential decay and diffusion (but you knew that).
Thanks for the ref. The paper does talk about doing detailed balance, which is what I am trying to do. Master equations at the most fundamental level are about doing mass balance and accounting for conservation of material.
Domingues also includes sequences of volcanic events as disturbances, which means that they are doing more than a single impulse response, and they are looking at more than just CO2 concentration change. Also, if they are looking at the heat content of the oceans, then that is a couple of steps further down the chain.
Somebody else mentioned pH with ocean depth, which would tell us about CO2 diffusion through the slabs. That would seem a better comparison, as CO2 would random walk down from the atmospheric disturbances.
“I would be more comfortable with the estimates people are coming up with based on exponential decays”
It is important to remember that the real carbon cycle models are NOT based on exponential decays. The four-term exponential Bern approximation is NOT the Bern cycle model: it is a four-term exponential FIT TO the real Bern cycle model, for a case where the system is in equilibrium and a pulse of carbon is being added. Real carbon cycle models have plant types which respond to increased CO2 concentrations by changing the modeled stomatal conductance (basically, they can expend less energy, and use less water, to fix the same amount of carbon). Sophisticated models also include nitrogen in the modeling, so that the fertilization effect from increased CO2 drops when nitrogen becomes a limiting factor; even more sophisticated models include the fact that as temperature increases, the rate of decay in leaf litter increases, which makes more nitrogen available. The oceans are modeled at a similar level of detail, with mixed layers, diffusion rates, thermohaline circulation currents into the deep oceans, and biological pumps in which organisms take up carbon in the upper mixed layer and then sink (upon death) into deeper waters, where they are turned back into dissolved inorganic carbon as they decay.
This is like claiming that IPCC models are wrong because they don’t account for the diurnal cycle, when all the Sky Dragon people have looked at are the 1-D approximation equations. No, the real models are actually quite sophisticated – not perfect, but reasonably physically realistic – and it is only the teaching tools that are stripped down to their bare minimums…
That is a good point. Since most of the papers are vague about the origins of the Bern model, I probably got the impression that the fat-tails were caused by a superposition of a few exponentials. But after looking at it from the perspective of a diffusional slab model going into the earth interacting with a faster steady-state balanced flow between the surface and atmosphere, I can see why they made up that heuristic. It is all a matter of matching some simpler expression to the detailed model.
Consider that my slab model has 50 layers below the surface and so includes 100 rate flows between the layers, all balanced out in a master equation. Of course this would create an ugly analytical expression for the CO2 impulse response. So instead of selecting a few exponentials to match the response profile, it makes a lot of sense to fit the curve to the natural controlling factor. From the model, this factor is diffusion into the more permanent layers, so we can try something from the 1/sqrt(t) family. This heuristic certainly works well to explain both the Bern impulse response curve and the impulse response from the multiple layer model.
In my opinion, the exponentials are still there but they are buried in a mesh of layers. The mesh is regular but that doesn’t make the approximating heuristic any easier to derive.
Just a few quick points:
(1) The concept of a single decay time for CO2 is not very good because we use such decay times to characterize exponential decays. The decay of a slug of CO2 is highly non-exponential: almost half of it partitions into the other easily-accessible reservoirs (ocean mixed layer, biosphere) in a matter of months or a few years at most, but the rest decays very slowly…and very non-exponentially, so that even after, say, 1000 years there is still expected to be something like 25% of the original amount (I think “the original” here meaning the amount left after the initial rapid partitioning). You need to read David Archer’s book or papers to understand this.
(2) Jeff Glassman, up to his usual tricks, tries to complain that something that even hardened skeptics like Willis Eschenbach and Hans Erren agree with is somehow unfathomable: Why does a CO2 pulse decay less rapidly than CO2 exchanges between the atmosphere and the hydrosphere / biosphere? It is really not that complicated. The point is that the ocean mixed layer, the biosphere, and the atmosphere form a subsystem where CO2 rapidly exchanges back and forth between these reservoirs. This means that when you add a new pulse of CO2 to the atmosphere, it doesn’t take long until it has partitioned between these three reservoirs. However, from then on, the decay rate out of this subsystem is governed by slower processes like exchange between the ocean mixed layer and the deep ocean or absorption by the lithosphere.
(3) An analogy is useful: Imagine you have three connected containers containing water (and let’s have some sort of pump that mixes the water around between the containers just to make even the exchange of water molecules between containers reasonably rapid). If you add water to one of the three containers, what happens is that the level in all 3 containers goes up. Now, what Jeff would want you to believe is that if you measure the residence time of the molecules in the container that you added water to to be, say, 3 minutes, then the level of the water will decay back down with a characteristic decay time of that value. However, in reality, the level of the water in this example won’t decay at all…except at the much slower rate determined by evaporation of the water out of the containers!
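The three-container analogy is easy to simulate (all rates invented for illustration): fast mixing redistributes the added water almost immediately, but the levels only decline through the slow evaporation term.

```python
# Three containers start at level 100; a pulse of 30 is added to A.
# Fast pumping mixes them; slow evaporation is the only true loss.
def simulate(pulse=30.0, fast=1.0, slow=0.001, dt=0.01, t_end=50.0):
    a, b, c = 100.0 + pulse, 100.0, 100.0
    for _ in range(int(t_end / dt)):
        mean = (a + b + c) / 3.0           # fast mixing toward common level
        a += fast * (mean - a) * dt
        b += fast * (mean - b) * dt
        c += fast * (mean - c) * dt
        a -= slow * a * dt                 # slow "evaporation" from each
        b -= slow * b * dt
        c -= slow * c * dt
    return a, b, c

a, b, c = simulate()
print(f"After 50 time units: {a:.1f}, {b:.1f}, {c:.1f}")
```

The levels equalize within a few time units (short molecular residence time), yet the elevated total persists almost entirely, decaying only on the slow evaporation timescale: the distinction between exchange time and adjustment time in a nutshell.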
Joel- The multiple trajectories by which excess CO2 declines toward an equilibrium baseline have been analyzed by David Archer, as you mention. The different rates have been estimated by a variety of models, as the linked article indicates. However, it is easy to assign a minimum lifetime for excess CO2 from a simpler set of observations (Archer mentions this). The current CO2 concentration slightly exceeding 390 ppm is about 110 ppm above a baseline for climates of previous centuries. From observational data we know that current emissions, if not absorbed by sinks, would add about 4 ppm per year, but the observed rise has only been about 2 ppm, with the remaining 2 going into sinks. Since the absorption into sinks is a response to the 390 ppm (the sinks don’t care whether they are absorbing old or new CO2), a linear return to baseline would require 110/2 = 55 years. The true rate is of course considerably slower due to the asymptotic nature of the approach to baseline values and to various climate feedbacks. It therefore appears that an “average” value of about 100 – 200 years is a reasonable estimate, as long as it is realized that the true rate involves a long “tail” that declines over many thousands of years.
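The arithmetic, with the first-order variant alongside for comparison (figures as quoted above; the real decay is non-exponential and slower than either estimate, as noted):

```python
import math

# 110 ppm excess over the ~280 ppm baseline, ~2 ppm/yr net sink uptake.
excess = 110.0
uptake = 2.0

# If uptake stayed fixed at 2 ppm/yr, the excess would vanish in:
print(excess / uptake, "years (linear)")                    # 55.0

# If uptake is instead proportional to the excess (first-order
# kinetics), the same figures give a 55-yr e-folding time, i.e.
# a half-life of:
print(round((excess / uptake) * math.log(2), 1), "years")   # 38.1
```

Either way, the observed uptake rate sets a floor of several decades even before the slow tail is considered.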
Interestingly, of the six links in Dr. Curry’s original post, plus the link to the Dyson/May dialog she added, there is really no substantial disagreement about this long lifetime. Rather, some of the links focus instead on the exchange rate of individual CO2 molecules, which involves a much shorter lifetime, some on the decline of an excess concentration as discussed above, and in the case of both Essenhigh and Dyson, both phenomena are acknowledged and distinguished from each other.
“a linear return to baseline would require 110/2 = 55 years”
That is just awful reasoning. The effective gain is independent of the time constant.
What are you referring to? 110/2 = 55 years is simply the arithmetic that would describe the time for all the excess to disappear if its disappearance rate were 2 ppm/year, which is about the current rate. No-one argues that to be the case, but it’s easy to argue that the disappearance rate won’t be faster.
The quandary: to explain and have you not understand, or blow it off and allow you to imagine you are anywhere close to being right?
It’s too late to worry about tonight…
Yep. Given realistic emissions scenarios, it looks like we’re heading for 800 ppm in a century or so. Dyson has a proposal to mitigate that which does not depend on stifling industrial society, but rather on a worldwide planting program. His proposal relies on the short residence time of a single CO2 molecule in order to achieve its results, and for his purposes, that’s the correct residence time to use.
The long-term residence of “net” CO2 is only the “relevant” number if your only policy for mitigation is shutting down fossil-fuel burning. The insistence that this longer duration is correct and the shorter one is incorrect is a sign of ingrained policy bias. If the world environmental community had launched a crusade for planting lots more carbon-sequestering plants, and tried to get an international treaty to set targets for that, etc., then the short-term number would be the “orthodox” answer given to the public.
Optional Study Question: Since Dyson’s proposal acts fairly quickly, is reversible, and doesn’t require the abandonment of high-density energy sources (and the concomitant impoverishment of the world), why isn’t there more emphasis on figuring out how to do it practically?
Where is he gonna go stick it?
I didn’t realize that CO2 follows the rules of political science more closely than it does physics and chemistry.
Your sarcasm is misplaced. Read the argument–Dyson’s response to his critic–and then reread what I wrote.
No one disputes that a given CO2 molecule circulates out of the atmosphere in about five years. The dispute is whether that residence time is relevant, and it is indeed not the relevant time if we want to know how quickly a cutback in CO2 emissions will take effect on the atmosphere.
But exactly which laws of physics are relevant depends on the proposed mitigation policy. Dyson’s proposal for carbon-eating plants makes the five-year molecular residence time the relevant one. If that were the “baseline” policy proposal on the table, then that would be the “standard” answer to the CO2 residence question. Your inability to understand this point on the first pass bespeaks your intellectual entrenchment in a single policy position.
Joel Shore 8/24/11, 9:48 pm, CO2 residence time
JS: (1) The concept of a single decay time for CO2 is not very good because we use such decay times to characterize exponential decays. The decay of a slug of CO2 is highly non-exponential: almost half of it partitions into the other easily-accessible reservoirs (ocean mixed layer, biosphere) in a matter of months or a few years at most but the rest decays very slowly…and very non-exponentially, so that even after, say, 1000 years there is still expected to be something like 25% of the original amount (I think “the original” after the initial rapid partitioning). You need to read David Archer’s book or papers to understand this.
You suggest some arbitrary representation, when the result is from first year physics:
The formula for the residence time from any reservoir, M, at a rate S, is T = M/S. AR4, Glossary, Lifetime, p. 8. Since S = -dM/dT, the formula provides a differential equation for M: dM/M = -dt/T. The solution is M = M_0*exp(-t/T).
The decay of a slug is exponential and it has one decay time.
This formula conflicts with the Bern formula also used by IPCC.
I have read Archer’s work. He appears to have been responsible for the material on carbonate chemistry in AR4, Chapter 7, where he was a Contributing Author. He erred in relying on the equilibrium in the surface layer, but the error facilitated AGW: it made the surface layer buffer against dissolution, making the MLO bulge anthropogenic, and making the atmospheric CO2 concentration with its weak greenhouse effect sufficient to cause a calamity in just the right time frame.
Archer’s Chapter 7 only said that 20% may remain in the atmosphere for many thousands of years, but the Technical Summary said as much as 35,000 years was needed for atmospheric CO2 to reach equilibrium(!). But Archer’s paper addressed the time to completely[!] neutralize and sequester anthropogenic CO2. … The mean lifetime of fossil fuel CO2 is about 30-35 kyr. Archer, D., The fate of fossil fuel CO2 in geological time, 1/7/05, p. 11. Earth’s climate is divorced from carbon sequestration, but IPCC incorrectly converted the ocean sequester time into atmospheric residence time.
JS: (2) Jeff Glassman, up to his usual tricks, tries to complain that something that even a hardened skeptic like Willis Eschenbach (and Hans Erren) agree with is somehow unfathomable: Why does a CO2 pulse decay less rapidly than CO2 exchanges between the atmosphere and the hydrosphere / biosphere? It is really not that complicated. The point is that the ocean mixed layer, the biosphere, and the atmosphere form a subsystem where the CO2 rapidly exchanges back and forth between these reservoirs. This means that when you add a new pulse of CO2 to the atmosphere, it doesn’t take long until it has partitioned between these three reservoirs. However, from then on, the decay rate out of this subsystem is governed by slower processes like exchange between the ocean mixed layer and the deep ocean or absorption by the lithosphere.
What was unfathomable was you and Eschenbach both insisting over at WUWT that the lifetime of a molecule of CO2 was somehow different than the lifetime of a slug of CO2 molecules. The former is only deduced from the latter.
Now you’ve confused yourself. You are correct about the pulse of CO2 rapidly distributing into the three reservoirs. But that’s the end of the problem — you should have stopped there. The question in the introduction is
How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes? Bold added.
You’re trying to answer a different question: how long does the slug of CO2 take before it is sequestered, before it reaches its final fate. The answer, as I have given previously, is 1.5 years with leaf water, and 3.5 years without. Obviously, those times include uptake to the land. However, the IPCC model discusses four different time constants for the lifetime of a pulse of CO2 only in the context of the air-sea flux:
Consistent with the response function to a CO2 pulse from the Bern Carbon Cycle Model (see footnote (a) of Table 2.14), about 50% of an increase in atmospheric CO2 will be removed within 30 years, a further 30% will be removed within a few centuries and the remaining 20% may remain in the atmosphere for many thousands of years (Prentice et al., 2001; Archer, 2005; see also Sections [¶7.3.4.2 Ocean Carbon Cycle Processes and Feedbacks to Climate] 7.3.4.2 and 10.4) Bold added, 4AR, §7.3.1.2, p. 514.
I didn’t say anything as foolish as you suggest with your unrecognizable analogy. Kindly quote what you want to critique.
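The 50%/30%/20% split quoted above from AR4 corresponds to a sum-of-exponentials response function. Here is a minimal sketch of such a function, using coefficients of the kind given in AR4 Table 2.14 footnote (a); the exact numbers below are reproduced from memory of that fit, so treat them as illustrative rather than authoritative:

```python
import math

# Bern-style impulse response: a weighted sum of exponentials.
# Weights sum to 1; the first term (infinite tau) never decays,
# representing the fraction that persists for many thousands of years.
A = [0.217, 0.259, 0.338, 0.186]           # weights (illustrative)
TAU = [float("inf"), 172.9, 18.51, 1.186]  # time constants in years

def airborne_fraction(t):
    """Fraction of a CO2 pulse still airborne t years after emission."""
    return sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

for t in (0, 30, 100, 1000):
    print(t, round(airborne_fraction(t), 2))
# Roughly half of the pulse is gone after ~30 years, but ~20%
# is still present after 1000 years -- the "fat tail".
```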
Jeff,
As usual, you have added nothing useful to the discussion…just confusion and obfuscation. I think the posts of Chris, myself, Fred, and the others who are here to enlighten rather than to obfuscate stand on their own.
It is sad to see people so wedded to their ideology that they are willing to sacrifice science on the altar.
Joel Shore 8/25/11, 9:00 pm CO2 residence time
JS: As usual, you have added nothing useful to the discussion…just confusion and obfuscation. I think the posts of Chris, myself, Fred, and the others who are here to enlighten rather than to obfuscate stand on their own.
It is sad to see people so wedded to their ideology that they are willing to sacrifice science on the altar.
Unable or unwilling to defend his ideology as science, Dr. Shore, physicist, takes refuge in an unsupported, vitriolic attack. This response once again reveals how he, along with a few others, some of whom he is willing to out, post here to enlighten, to Spread the Word, according to the Gospel of IPCC and the tracts in advocacy journals, and to bring Salvation to the Unwashed. They are here neither to defend their new religion, nor to engage in any substantial debate over it.
Uncomfortable in his compromised position, Dr. Shore tries to strengthen it by analogy, comparing criticism to the attacks on evolution by fundamentalism. Just in a single thread, he wrote:
JS: You really haven’t a clue what you are talking about. You are just throwing around phrases … that seem to be as ignorant as when a Young Earth creationist says that evolution violates the Second Law. Slaying the Greenhouse Dragon. Part IV, Joel Shore, 8/14/11, 10:31 pm.
JS: While there are some legitimate scientific issues … , most of this is not about science at all. It is simply an attack on science that is inconvenient for some people to accept, just as is the case for evolution… . Id., 8/15/11, 10:22 pm.
JS: [I]mportant discussions occur in the scientific literature. It is understandable that when bad science, be it challenging evolution or challenging AGW, fails in those venues (or is so bad it could never even get into those venues), the proponents try to take their case directly to the public. It is a way to replace public policy based on science with public policy based on ideologically-inspired nonsense. Id. 8/16/11, 4:32 pm
The parallel is quite the reverse of what Dr. Shore imagines. Fundamentalism (belief) is to Evolution (science) as skepticism (science) is to AGW (belief). Like the other AGWers, he learned his physics without learning science.
Joel Shore 8/25/11, 9:00 pm CO2 residence time
Errata: The last paragraph should read:
The parallel is quite the reverse of what Dr. Shore imagines. Evolution (science) is to Fundamentalism (belief) as skepticism (science) is to AGW (belief). Like the other AGWers, he learned his physics without learning science.
Yeah…Right. With evolution, you have the National Academy of Sciences and all the other academies and the various scientific professional societies on one side … and for AGW, the same thing is true. Alas, it is not in the direction that you claim it to be. So, the question is: Whose interpretation of the science are you going to believe, these societies or a few ideologues like Jeff who want you to believe that their obvious Right-wing views have nothing to do with their conclusions and that all of the scientific societies have somehow been corrupted and only he and his fellow travelers can see the light!?!
Oh, and it doesn’t help your case that one of the few “skeptical” scientists who is not talking complete nonsense is on record as saying “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism.” ( http://www.ideasinactiontv.com/tcs_daily/2005/08/faith-based-evolution.html )
Joel Shore 8/27/11, 5:39 pm, CO2 residence time
JS: Yeah…Right. With evolution, you have the National Academy of Sciences and all the other academies and the various scientific professional societies on one side … and for AGW, the same thing is true. Alas, it is not in the direction that you claim it to be. So, the question is: Whose interpretation of the science are you going to believe, these societies or a few ideologues like Jeff who want you to believe that their obvious Right-wing views have nothing to do with their conclusions and that all of the scientific societies have somehow been corrupted and only he and his fellow travelers can see the light!?!
Oh, and it doesn’t help your case that one of the few “skeptical” scientists who is not talking complete nonsense is on record as saying “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism.” ( http://www.ideasinactiontv.com/tcs_daily/2005/08/faith-based-evolution.html )
We already figured out that Dr. Shore believes in consensus science, and to the exclusion of science. Slaying the Greenhouse Dragon. Part IV. 8/16/11, 10:15 pm. Because it’s important to him, he should keep on repeating the mantra.
For other readers, science has nothing to do with consensus forming, voting, or endorsements — nor anything to do with personal foibles of supporters or detractors of one model or another. It has nothing to do with political orientations, left or right, toward models or modelers. Science is about models of the Real World that (1) violate no facts in their domain, and which (2) make predictions for fresh facts. The AGW model fails on both counts.
Advancements in science always evolve from one person with one idea and no support.
All the societies and professional journals in the world, every school of science or otherwise, and every media outlet and commentator could be unanimous to a man about a model, and fall, defeated by one person pointing out one error. In AGW the task is complicated only by which fault to choose among a dozen or so, first magnitude, fatal errors. Click on my name in the header, and follow the links to IPCC Fatal Errors and SGW.
But AGW supporters, committed to a scientific-like model as a matter of belief, find themselves heirs to a multitude of errors from the model owner, IPCC. Constitutionally and professionally unable to admit their mistakes, they respond to scientific challenges not with technical point and counterpoint, but with digressions, irrelevancies, and insults.
Jeff: Your dichotomy between “consensus science” and “science” is a false one. You are right that one person can overturn a consensus. However, in order to do so, that person must convince his or her fellow scientists of their point of view. And, until such time, it is imperative that public policy be decided by what the scientists judge to be the best science. The only other alternative is to have science politicized as each political group believes their own “pet” scientists who support their ideologically-driven point-of-view.
Of course, the fact is that the overwhelming majority of AGW “skeptics” are not really trying to convince scientists, probably because they know that their arguments are too weak to be convincing to scientists…In some case, like the Slayers, Postma, and arguments that man is not responsible for the current CO2 increase, they are ridiculously so! So, instead, they (you) try to take their case to the public where the techniques of sophistry compete much better against science!
Let’s face it, your arguments are losing badly in the scientific community (for good reason!), which is why you are trying to go the way of all pseudoscience and say that the public should accept your view of the science and discard the view of the scientific community. And, they should ignore the fact that your ideology is much, much stronger than your science.
Joel Shore 8/25/11, 9:00 pm CO2 residence time
JS: “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism” followed by a link.
Dr. Shore doesn’t say that his citation is from an article posted on the blog Ideas in Action with Jim Glassman (no relation), dated 8/8/05, which acquired no comments. The author was Roy Spencer.
Check this:
“Those who cavalierly reject the theory of evolution,” writes Spencer, “as not adequately supported by facts seem quite to forget that their own theory is supported by no facts at all.[“] Scopes Trial Transcript, 1925, p. 262, quoting Herbert Spencer, 1852.
If Roy is descended from Herbert, we have evidence that evolution has no preferred direction.
And that’s relevant, or changes how the statement should be interpreted, how exactly?
Joel Shore 8/28/11, 2:29 pm CO2 residence time
JS: You are right that one person can overturn a consensus.
I didn’t say that, and wouldn’t. Who cares what fiction the people believe? I addressed the failure of the model believed by the consensus.
JS: However, in order to do so, that person must convince his or her fellow scientists of their point of view.
Where did you get such a notion? Do you know of any example, and take the most famous of all, where the overturning scientist had to go around convincing his fellow scientists? Poppycock.
JS: And, until such time, it is imperative that public policy be decided by what the scientists judge to be the best science. The only other alternative is to have science politicized as each political group believes their own “pet” scientists who support their ideologically-driven point-of-view.
You not only believe in consensus science, but in technocracy! What a horrible, and thoroughly discredited, idea! Publicly funded academics in charge of public funds. A little healthy skepticism, augmented by a little science literacy, is enough to move the swing Policymakers, those in the middle, to reject AGW.
JS: Of course, the fact is that the overwhelming majority of AGW “skeptics” are not really trying to convince scientists, probably because they know that their arguments are too weak to be convincing to scientists…In some case, like the Slayers, Postma, and arguments that man is not responsible for the current CO2 increase, they are ridiculously so! So, instead, they (you) try to take their case to the public where the techniques of sophistry compete much better against science!
You make the case for why we don’t vote in science. You also show no skill with distinguishing signal from noise.
JS: Let’s face it, your arguments are losing badly in the scientific community (for good reason!), which is why you are trying to go the way of all pseudoscience and say that the public should accept your view of the science and discard the view of the scientific community. And, they should ignore the fact that your ideology is much, much stronger than your science.
There you go again, keeping imaginary score. Because I reason against your simplistic, left wing financial catastrophe, foisted on “Policymakers” to prevent a fantasized calamity, you assume I am following some right wing agenda. I follow no agenda, nor do I deny my reasoning to any, left, right, up, or down. Let science and objectivity prevail.
Chris and Joel give very good explanations for the CO2 residence time.
The long residence time is a fat-tail effect, very close in temporal behavior to what you would find in radioactive waste decay. The collection of different rates of decay leads some people to believe that it is a fast removal, due to the initial high-slope decay. However, the fat-tail response is where the lengthy decay takes precedence. (That’s why Fukushima radiation went down fast but will be hanging around for hundreds of years; not the same physics, but a similar temporal response.)
The key math is when you do a convolution of a CO2 emission forcing function with a fat-tail impulse response. The result will show this peculiar lag that will generate a continual increase in CO2 concentration, long after the forcing function is removed. That is what has everyone spooked — even if we can immediately remove CO2 emissions, the CO2 will continue to increase for years.
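The convolution described above can be sketched in a toy experiment (illustrative, uncalibrated units; the kernels and 100-year cutoff are chosen only to show the qualitative contrast): constant emissions for a century, then an abrupt stop, convolved with a thin-tail kernel (exponential, 5-year constant) and a fat-tail kernel (~1/sqrt(t), a typical diffusion-limited form).

```python
import math

YEARS = 400
# Forcing function: constant emissions for 100 years, then zero.
emissions = [1.0 if yr < 100 else 0.0 for yr in range(YEARS)]

def kernel_thin(t):
    """Thin-tail impulse response: single exponential, 5-yr constant."""
    return math.exp(-t / 5.0)

def kernel_fat(t):
    """Fat-tail impulse response: ~1/sqrt(t) diffusive decline."""
    return 1.0 / math.sqrt(t + 1.0)

def convolve(kernel):
    """Discrete convolution of the emissions with an impulse response."""
    return [sum(emissions[k] * kernel(n - k) for k in range(n + 1))
            for n in range(YEARS)]

thin = convolve(kernel_thin)
fat = convolve(kernel_fat)

# A century after emissions stop, the exponential response has vanished
# while the fat-tail response is still a sizeable fraction of its peak.
print(round(thin[200] / max(thin), 3), round(fat[200] / max(fat), 3))
```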
WHT – It is hard to see a reason for CO2 to increase much or for long if emissions cease – see for example Climate Change Commitment . I can envision perhaps a brief increase if the warming from the residual forcing (before it disappears) causes some efflux from sinks, but even that is likely to be minimal. Warming itself should not last long before transitioning to a very gradual cooling that maintains an elevated temperature for millennia.
On the other hand, if all anthropogenic emissions cease, we do have reason to expect a persistent increase in temperature for a while due to the reduction in cooling aerosols – see Climate Change Commitment 2.
Fred,
I think that I am fairly consistent in my saying that if we can do little else we should do something about the risk of an aerosol overhang.
If we face the bizarre prospect of having to use renewable energy to pump sulphates up chimneys then that needs fixing.
Alex
Fred,
You are looking at the temperature response with that link. That is not the same as the CO2 response. The CO2 response is very latent and has an inertia against change; just check Archer’s papers. The link you provided also mentions Wigley, and his CO2 curves are shown here:
http://www.globalwarmingart.com/wiki/File:Carbon_Stabilization_Scenarios_png
You can definitely see the CO2 concentration continues to increase even if the fossil fuel emissions are cut back. If the temperature is not as sensitive to CO2 then that will not show as strong a latency.
I have done the calculations myself, and this is what I get if I keep emissions constant.
http://img39.imageshack.us/img39/9001/co2dispersiongrowth.gif
The chart also shows first-order kinetics for CO2 sequestering, demonstrating the difference between classical exponential decay and a fat-tail response. Wigley shows pretty much the same thing but he does not have a constant emission scenario.
It’s important to distinguish between zero emissions and stabilized emissions.
Webby,
“The result will show this peculiar lag that will generate a continual increase in CO2 concentration, long after the forcing function is removed. That is what has everyone spooked — even if we can immediately remove CO2 emissions, the CO2 will continue to increase for years.”
Ghosts always spook superstitious people. Where is the empirical evidence to suggest that CO2 would have a fat-tail impulse response? Pekka didn’t even pull citations to support the Revelle Buffer Myth!!
Where is your evidence that it is thin-tailed?
Another gas that is showing a hockey stick rise, Methane, is thin-tailed because it is exothermic and will decompose in a few years, 2 to 10 years is the quoted number I see.
In comparison, CO2 is relatively inert and is endothermic in breaking down, so it needs special pathways to sequester out of the system. The skeptics quote CO2 residence times also at 2 to 10 years, same as Methane.
My mind sees an inconsistency here. I would expect CO2 to have a much higher residence time than Methane. Perhaps that CO2 might be closer to the residence time of a relatively inert gas like Nitrous Oxide (N2O) which is quoted anywhere from 5 to 200 years. That will make it a fat-tail because of the large uncertainty in the mean.
I would guess that the empirical evidence comes from historical forcing functions, such as volcanic events, that generated large impulses of CO2 into the atmosphere. Sample records would show a slow decline over time. I don’t know of any citations off-hand though. So shoot me.
Webby,
I make no claim. YOU claim it is fat tailed. Let’s have something other than arm waving.
kuhncat,
This fellow is nothing but arm waving. He assumes a function – applies it in his imagination – and lo and behold there it is exactly as predicted with a fat head – I mean fat tail.
He guesses that we can find out from volcanic emissions. I think he is a candidate for being thrown in the volcano.
Cheers
CHIEF HYDROLOGIST said this:
When the death threats start, I know I am touching some nerves.
Not really – it is an in-joke about not throwing virgins into the volcano to placate the climate gods. Useless gits however qualify.
You flatter yourself that I would give a rat’s arse about any of your pointless and distracting comments. My only concern is whether I am getting too bored to continue with this nonsense. Well I certainly am – but it is a question of whether the field should be left to a few noxious individuals intent on playing a spoiling role.
Just now the Numbnut gang is dominating the threads with almost 100% of the recent comments between them. Most of it is simply nonsense and insults. I don’t know what the solution is – but it is getting uncomfortable scrolling through the large number of rude and abusive posts.
There are two options, thin-tailed and fat-tailed. You asked me to prove that something is fat-tailed, while the thin-tailed advocates haven’t demonstrated any agreement with data. It is all indirect hand-waving evidence by the thin-tail advocates.
No – the natural variability is big enough to swallow anthropogenic emissions tail and all.
Ah, so now you believe in natural variability.
So, say you have a single CO2 molecule in the atmosphere. The thin-tail theory is that this molecule only has a 2% chance of still being in the atmosphere after 20 years (given a 5 year residence time). In other words, it is 98% likely to be sequestered out.
The fat-tail version is that it will have a 50% chance that it will be removed after 20 years. But in the first few years the fat-tail distribution could have a faster apparent sequestering rate.
It is an interesting premise that the CO2 could bounce around the atmosphere, occasionally reaching the surface resulting in a wide dispersion and natural variability in residence times.
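The two survival figures in the comment above can be checked directly. The thin-tail number is ordinary exponential survival; the fat-tail form below is a hypothetical Pareto-type survival curve, chosen only to reproduce the 50%-at-20-years figure quoted in the comment:

```python
import math

residence = 5.0   # years, the short residence time quoted by skeptics
t = 20.0          # elapsed time in years

# Thin tail: exponential survival -- about 2% still airborne at 20 yr.
p_thin = math.exp(-t / residence)

# Fat tail: hypothetical Pareto-type survival (residence/t)^0.5, tuned
# only to match the 50%-at-20-years figure in the comment above.
p_fat = (residence / t) ** 0.5

print(round(p_thin, 3), round(p_fat, 3))  # ~0.018 vs 0.5
```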
‘And at some point I didn’t believe in natural variability?’ You are the one who assumes a steady state – it is not true even remotely in biological or other complex systems. You follow up with made-up numbers.
http://www.nature.com/embor/journal/v9/n1/full/7401147.html
‘Feldman remembers watching the first dramatic example of SeaWiFS ability to capture this unfold. The satellite reached orbit and starting collecting data during the middle of the 1997-98 El Niño. An El Niño typically suppresses nutrients in the surface waters, critical for phytoplankton growth and keeps the ocean surface in the equatorial Pacific relatively barren.
Then in the spring of 1998, as the El Niño began to fade and trade winds picked up, the equatorial Pacific Ocean bloomed with life, changing “from a desert to a rain forest,” in Feldman’s words, in a matter of weeks. “Thanks to SeaWiFS, we got to watch it happen,” he said. “It was absolutely amazing — a plankton bloom that literally spanned half the globe.”‘ http://www.sciencedaily.com/releases/2011/04/110404131127.htm
You need to understand some science – and not just make it up as you go along.
I am learning more as I go along, that’s for sure.
In my comment I was just talking about the unlikelihood of a specific CO2 molecule being conclusively removed from the atmosphere after some lengthy time. I have an alternative derivation that only incorporates a statistical mechanics POV (which I believe is real science); read this comment in a thread below.
Webby,
again you make an unsupported statement. That it has to be either fat or thin. No support, just assertion. Bye.
Interesting CO2 residence chart at this site – http://www.c3headlines.com/2011/06/the-ipccs-missing-co2-remains-a-major-embarassment-of-its-consensus-science-its-still-awol-maybe.html
Fanney, please read mine, Joel’s, Fred’s, and Nick Stokes’ comments above. This isn’t even close to getting a cigar.
There he goes on about that tobacco industry again. :)
The answer is that they used 100 years or more to confuse and mislead people. Now they are trying to back track.
Yup, “they” are doing a great job…
http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf
Chris,
I think part of the problem with the skeptics is that they only use the tools in their toolbox. So they immediately think response times have to be exponentially damped, and so look at the initial decay, plot that on semi-log paper and come up with residence times on the order of a few to 10 years. The reality as you and others indicate is that the fat-tails completely screw up the classic extrapolation. It’s that guy Segalstad who originally plotted these short residence times (here http://www.co2web.info/ESEF3VO2.htm) and really the embarrassment is on him and the scientists that he listed.
The concept of fat-tail statistics is not something that you immediately pick up on in school and from shrink-wrap tools, but it comes with experience, and from watching how disorder and randomness plays out in the real world.
It appears this C3 link is also confused about the so-called missing carbon emissions and where they have gone. I did my own convolution-based modeling while carefully accounting for the actual fossil-fuel carbon emissions and discovered that very little CO2 has gone missing since the industrial age started. To get a really good fit, I did have to raise the CO2 baseline to 294, but that may actually be the correct value.
The writeup originally appeared on my blog:
http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html
Could it be that the missing carbon is caused by the assumption of an incorrect residence time on their part? With a long residence time, the impulse response will store much of the CO2 in the atmosphere, which is only slowly sequestered over time.
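The semi-log extrapolation trap described above can be sketched with a two-term stand-in for a multi-timescale response (the 70/30 split and the 4-year and 200-year constants below are made up for illustration): fitting only the first few years of decay on semi-log paper yields a short apparent time constant, while a large fraction of the pulse in fact remains a century later.

```python
import math

def response(t):
    """Stand-in pulse response: fast 4-yr uptake of 70% of a pulse
    plus a slow 200-yr tail for the remaining 30% (illustrative)."""
    return 0.7 * math.exp(-t / 4.0) + 0.3 * math.exp(-t / 200.0)

# Apparent time constant inferred from the first 5 years alone
# (i.e. the initial slope on semi-log paper):
tau_apparent = -5.0 / math.log(response(5) / response(0))

# Fraction of the pulse actually still present after 100 years:
remaining = response(100)

print(round(tau_apparent, 1), round(remaining, 2))
# The early slope suggests a ~7-year lifetime, yet ~18% of the
# pulse is still present a century later.
```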
Essenhigh’s abstract is from: Potential Dependence of Global Warming on the Residence Time (RT) in the Atmosphere of Anthropogenically Sourced Carbon Dioxide Robert H. Essenhigh, Energy Fuels, 2009, 23 (5), pp 2773–2784, DOI: 10.1021/ef800581r, April 1, 2009
Such quantitative combustion/chemical engineering approaches including ALL the sources and sinks are essential to get a grasp of the overall fluctuations and consequent net differences.
Essenhigh references Tom Segalstad http://www.co2web.info/
Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the “Greenhouse Effect Global Warming” dogma.
See also Tom V. Segalstad: Correct Timing is Everything – Also for CO2 in the Air
Referencing Essenhigh see:
Ryunosuke Kikuchi, External Forces Acting on the Earth’s Climate: An Approach to Understanding the Complexity of Climate Change, Energy & Environment, Volume 21, Number 8 / December 2010
Alan Carlin, A Multidisciplinary, Science-Based Approach to the Economics of Climate Change Int. J. Environ. Res. Public Health 2011, 8, 985-1031; doi:10.3390/ijerph8040985
Carlin summarizes Essenhigh & Segalstad etc.
SOURCES AND SINKS OF CARBON DIOXIDE by Tom Quirk, Icecap.us
Fred Haynie posts detailed models of CO2 driven by natural causes, especially polar fluctuations, not anthropogenic. His analysis of the different shapes between Arctic, tropics and Antarctic is thought provoking as the primary CO2 drivers. http://www.kidswincom.net/climate.pdf
Isn’t CO2 residence time a property of the system rather than an inherent property of CO2 molecules? If so, then it will vary and can be made to vary.
It is a simple stocks and flows problem – that could in principle be modelled with commercial software such as STELLA.
The sources are:
‘80.4 GtC by soil respiration and fermentation (Raich et al., 2002)
38 GtC and rising by 0.5 GtC per annum by cumulative photosynthesis deficit (Casey, 2008)
by post-clearance deflation (See Eswaran, 1993)
7.8 GtC (IPCC, 2007 – Needs peer reviewed reference)
2.3 GtC by process of deforestation (IPCC, 2007; Melillo et al., 1996; Haughton & Hackler, 2002)
0.03 GtC? by Volcanoes
by Tectonic rifts
by multi-cellular Animal Respiration
by multi-celluar Plant Respiration
The sinks are:
120 GtC by Photosynthesis (Bowes, 1991)
By Ocean Carbonate Buffer
Source: Wikipedia – it seems a reasonable list
I added multi-cellular to distinguish between complex and simple organisms. The animal/plant distinction applies to both complex and simple – as it is based on the distinction between trophic (food) sources: food from photosynthesis, or food obtained by eating other organisms – autotrophs and heterotrophs.
You would have to make some fairly heroic assumptions about how these things change with time, temperature and silicate weathering. One thing is for sure – this is not a problem of average decay rates or long tailed statistics. It is – as I said – a problem of stocks and flows.
Consider the negative feedback of carbonic acid in rainwater – increases silicate weathering and therefore carbonate buffering in the oceans – decreasing atmospheric CO2 concentration and deep sequestration at the same time – reducing carbonic acid in rainwater etc. Hydrogeologically speaking the time scale could be significant.
There seem to be quite a few back-of-the-envelopes blowing through this thread – but I doubt that the quality of the data supports a number, low or high, to the level of certitude shown for the insubstantial fluff seen in this post.
As far as I am concerned this is more angels on pinheads discourse by the usual suspects. Perhaps that should be pinheads on angels – as I obviously have yet to renounce my general distemper at these proceedings.
Quantum Gravity Treatment of the Angel Density Problem
http://improbable.com/airchives/paperair/volume7/v7i3/angels-7-3.htm
‘The only reason angels can fly is that they take themselves so lightly’ – so I would dispute the mass-of-angels assumption. But then I guess I’m being generally disputative.
STELLA can’t solve this problem because it doesn’t allow fat-tailed impulse responses and has no way to add dispersion.
That is essentially what is known as compartment modeling. The two usual approximations in compartment modeling are either to assume a constant flow or to assume a flow proportional to the amount remaining (the stock). If CO2 is modeled as being fairly inert to sequestering, it ends up showing fat tails, which are neither constant nor proportional flow. If users of STELLA could dial in one of these lengthy impulse responses and then run a simulation, they would likely see a non-converging solution. Those same users would probably scream at the results and then toss STELLA in the garbage, not realizing that it is the correct solution.
The point is that you can’t blindly use these commercial tools unless you have a good understanding of the underlying physics model. They will blithely sweep all the interesting behavior under the rug.
No – you would need to specify changes in flux with temperature and silicate weathering, as I said. To do that you would have to know the rate of change of these variables over time – so we are not talking physics at all but biology, hydrogeology, chemistry, geology etc. Those processes are determined outside of a simple stock and flow model, and only a simple rate function, look-up table, spreadsheet, whatever, is plugged into a stock and flow model such as STELLA.
The actual software is irrelevant – and I mention STELLA only as an example of the type of model I am thinking of – one that has stocks (of CO2, for instance) and flows between them on some time increment.
What is specified is the flux based on the physical, chemical and biological constraints. The answer is the changes in the stores – atmosphere, terrestrial vegetation, ocean – over time. These can’t diverge from anything because they are based on physical and scientific realities and not just numbers pulled out of your arse. If the results were unrealistic – you would check assumptions and data.
The point is that you are thinking of this as a radiation decay type problem – and it clearly is not. It is a stock and flow problem – in which there are compartments and flows between them. You need to think simply about the problem – what are the sources and sinks and what influences the flow between them. This shows what data is required (and the deficit of data) to make good estimates of rates of flow and therefore changes in stocks.
All of the various processes could be plugged into a more complex model of course – but modelling will not help if the data is incomplete or wrong. We would still need to plug in a rate constant, look up table, whatever – representing actual real world processes and not just the product of physicists pulling numbers out of their fundament.
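The stock-and-flow framing described above can be sketched in a few lines of Python. This is a minimal sketch only: the box sizes are round numbers from commonly quoted carbon-cycle budgets, and the proportional rate coefficients are invented placeholders, not measured fluxes.

```python
# Minimal stock-and-flow sketch of the carbon cycle.
# Stocks in GtC; flows in GtC/yr, modelled as crude proportional rates.
# All rate coefficients are hypothetical placeholders for illustration.
stocks = {"atmosphere": 750.0, "vegetation": 610.0, "ocean_surface": 1020.0}

def step(stocks, emissions=7.8, dt=1.0):
    photosynthesis = 0.16 * stocks["atmosphere"]     # ~120 GtC/yr at the start
    respiration    = 0.18 * stocks["vegetation"]
    ocean_uptake   = 0.12 * stocks["atmosphere"]
    outgassing     = 0.09 * stocks["ocean_surface"]
    stocks["atmosphere"]    += dt * (emissions + respiration + outgassing
                                     - photosynthesis - ocean_uptake)
    stocks["vegetation"]    += dt * (photosynthesis - respiration)
    stocks["ocean_surface"] += dt * (ocean_uptake - outgassing)

for _ in range(100):
    step(stocks)

# Total carbon grows only by the fossil source: mass is conserved.
print(stocks)
```

The point of the sketch is structural: whatever rate functions or look-up tables are plugged in, the bookkeeping of stores and fluxes is what the model is, and total mass changes only by the external source.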
Dear Chief Hydrologist,
From your title, you must know what breakthrough curves are. You also must have run across the idea of dispersive transport in porous media. The point is that material transport is often very disordered in its natural state and that you really can use some clever stochastic math to understand how long it takes material to get from point A to point B along a tortuous path. Do a literature search on this topic and you will find that the data shows that it’s a fat-tail effect, and the tail is fatter the more disordered the media (underground) or pathway (i.e. runoff) is. Bear in mind that you will find a lot of the civil engineers and geologists studying this will look like they are scratching their butts trying to figure out what’s happening. But then again you have to remember that they are civil engineers and geologists after all :)=
My title derives from Cecil (he spent four years in clown school – I’ll thank you not to refer to Princeton like that) Terwilliger, Springfield’s Chief Hydrological and Hydraulical Engineer.
But I am trained both in hydrological engineering and environmental science and have decades of experience running computer models of various types and in analysis of complex real world environmental problems. People find it funny – but I started programming with punch cards and computers that filled a room.
‘Breakthrough Curve: A plot of column effluent concentration over time. In the field, monitoring a well produces a breakthrough curve for a column from a source to the well screen.’ It generally has a Gaussian normal distribution. They can be used for instance in tracing pollution sources in groundwater. Groundwater movement is commonly modelled using partial differential equations that conserve mass across finite grids.
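For concreteness, a standard analytical form for a one-dimensional breakthrough curve is the Ogata–Banks solution to the advection-dispersion equation. The column distance, velocity and dispersion coefficient below are made-up illustrative numbers, not data from any real column:

```python
from math import erfc, exp, sqrt

def breakthrough(x, t, v, D):
    """Ogata-Banks solution: relative concentration C/C0 at distance x and
    time t, for pore velocity v and dispersion coefficient D, given a
    continuous source at x = 0."""
    a = 2.0 * sqrt(D * t)
    return 0.5 * (erfc((x - v * t) / a) + exp(v * x / D) * erfc((x + v * t) / a))

# Hypothetical column: monitoring point 10 m downstream, v = 1 m/day, D = 1 m^2/day.
curve = [breakthrough(10.0, t, v=1.0, D=1.0) for t in range(1, 41)]
# Effluent concentration rises from ~0 toward 1 as the solute front arrives.
```

The shape of the rising limb, and how symmetric its time derivative is, depends on the ratio of advection to dispersion – which is precisely the point under dispute in this exchange.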
This has no application to the problem at all – there is not even a fat tail in any of that, which is commonly understood to involve a preponderance of rare and extreme events. Rainfall, for instance, has a skewed distribution – commonly fitted with a log-Pearson.
Essentially you are just using names for things that you seem hardly to have any understanding of and suggesting that some stochastic maths (mathematicians pulling numbers out of their arses) can solve the problem of CO2 residence times without any reference to scientific data on how these physical Earth systems actually work.
I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.
I assume it has worked elsewhere for you – but I assure you that you are playing with the big boys now.
Wrong. They rarely have a Gaussian profile and almost always have a significantly asymmetric tail that drops off slowly with time. Instead of going to the Yahoo Answers response, you might want to dig a bit deeper.
I think you are confusing me with Claes or Oliver or some other crackpot that comments on this blog.
Yes, identifying a crackpot is in the eye of the beholder. Lots of us are in this together and we get insight from each other and especially from those who have a slightly different perspective. The trick is to figure out who the crackpots are and who have genuinely unique insight.
The picture is confusing and will continue to be confusing until it starts to clear up. Of course tribalism will work its course as we gravitate toward supporting other commenters who are providing insight. That is the way that citizen-based science works.
Yes, indeed there are quite a few commenters who try to invoke flowery and ornate dialog like this is some sort of Shakespearean literary outpost, but that is just fluff and in the end we are just trying to hammer home some logic and pragmatism. I am a fan of Jaynes, who has justifiably said that “probability theory is the logic of science”.
Chief fancies himself quite a poet.
As proven by comments on a blog.
Both when he is and when he isn’t insulting people and discussing his TV viewing habits, things he does with his laptop, and how he cleans it up afterwards (no, really).
When I’m not being a comedian. Here’s the whole post – in which I am making fun of myself.
http://judithcurry.com/2011/08/19/week-in-review-81911/#comment-103189
Good bye Joshua – it has been a distinctly creepy experience engaging with you.
Chief – don’t forget to take your ball with you.
If you’re going to dish it out, Chief, you should be able to take it.
Anytime you want to engage in mutually respectful posts, I’m more than game, but I’m afraid that would require you to leave your “pissant leftist”-type insults out of the discussion. I won’t take offense if you tee off on “numbnut,” and you don’t take offense when I laugh at Bruce.
Anytime you’re ready. As much as I enjoy trading snark, I enjoy trading respectful dialogue more.
Now let me think – what is the best response to the Numbnut gang of pissant progressives? The best they can organise is an electrical engineer who cannot put an idea together on physical oceanic and atmospheric systems without numbskull ideas about stochasticity, breakthrough curves, groundwater diffusion and ‘maximum entropy’. The rest of a poor lot cannot string any idea of any substance together at all. Incoherency is us – but feel free to drop in with a distracting and pointless troll any time at all.
‘Anytime you want to engage in mutually respectful posts, I’m more than game, but I’m afraid that would require you to leave your “pissant leftist”-type insults out of the discussion. I won’t take offense if you tee off on “numbnut,” and you don’t take offense when I laugh at Bruce.’
Feel free to take offense whenever you like – but there is no licence to laugh at anyone. Perhaps my first instinct was right – it is wrong to leave natural born bullies to prevail.
Well – that didn’t last long did it, Chief.
Like putty in my hands.
And your knee-jerk response to everything is chaotic systems? For making progress, that is as fine a strategy as punting on first down.
WebHubTelescope, do you ever wonder if the progressives will even get around to building a ‘Strategic Hamlet’?
It is a bell curve – an exponential rise of a solute in soil water followed by an exponential decrease – and still irrelevant to carbon dioxide.
The mathematics of dispersion still applies. That approach to math modeling is not taught in schools.
What – precisely – do you mean by dispersion? Plume dispersion is commonly modelled by the Reynolds-averaged Navier-Stokes non-linear partial differential equations. That is taught in engineering school – and is the standard method for floods, storm surge, tsunami, pollution plumes and groundwater movement. They are used as well in atmospheric and oceanic simulations.
Non-linearities might emerge – but in that case you adjust the time step for the numerical solution because an unstable solution for these things in the real world is useless.
The Earth system as a whole has multiple negative and positive feedbacks with little-understood thresholds and is in fact non-linear at a number of time and spatial scales. The correct model here is forcing with non-linear responses at critical thresholds – just like earthquakes.
Now we can define a power function for just about anything – which I assume is what you are talking about – but we do need real world data to make it meaningful and not just some assumed function.
I can for instance fit a log-Pearson type 3 power function to 100 years of hydrological data and use it to calculate a 1 in 10,000 year storm. How accurate that is is a matter of conjecture – but it gives at least an agreed starting point which is all that matters in engineering public safety. It is a number that seems adequate from the experience and judgement of myself and my peers.
I have my doubts as extreme events – ‘dragon-kings’ – at times of chaotic bifurcation are unlikely to fit the power function distribution. I very much doubt – 99% – that there is enough data to adequately define a power function for CO2 residence in the atmosphere. Basic science is required – rather than as I put in my crude Australian way – pulling a number out of your arse.
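The log-Pearson type 3 procedure mentioned above can be sketched with synthetic data. The rainfall series here is fabricated, and the fit is a simple method-of-moments one (mean, standard deviation and skew of the log-transformed annual maxima), in the spirit of the usual flood-frequency recipe – a sketch, not an endorsement of the extrapolation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Fabricated stand-in for 100 years of annual-maximum rainfall (mm).
annual_max = rng.lognormal(mean=4.5, sigma=0.35, size=100)

logq = np.log10(annual_max)
m, s = logq.mean(), logq.std(ddof=1)
g = stats.skew(logq, bias=False)

# scipy's pearson3 is parameterised so that loc is the mean and scale the
# standard deviation, which makes a method-of-moments fit a one-liner.
aep = 1e-4                                    # 1-in-10,000-year event
q_10000 = 10 ** stats.pearson3.ppf(1 - aep, g, loc=m, scale=s)
print(round(q_10000, 1))
```

Note how far beyond the observed record the 0.9999 quantile sits – which is exactly the conjectural extrapolation being acknowledged in the comment above.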
Take the ordinary Navier-Stokes equation and vary the values of the diffusion coefficients and convection/drift terms. That produces the degree of dispersion that I am talking about, and it doesn’t have to show turbulent behavior.
The ordinary Navier-Stokes equation is also known as the Fokker-Planck equation. Solving these for disordered systems is a hobby of mine and it does show some very interesting agreements with experimental evidence. As far as I know, no one is working at it from the same angle I am.
Why do you continue to think that I am not applying a rigorous scientific analysis? All you have to do is take a gander at the link in my comment handle. Again, I thought this blog is partly about coming up with some potentially concise and neat ways of thinking about environmental phenomena.
Archer et al keeps being raised – but this is a crude model study substituting crude values for parameters for which data is not merely inadequate but entirely lacking.
I don’t run hydrological models without calibration against real world data. There is a diversity of results amongst the Archer models, as one would expect – but is this reflective of the actual range of variation in natural systems? How do we know what natural variability is without good long term data – or indeed for the most part without any data at all? How could you calibrate these models at all except through crude ensemble methods that might owe more to groupthink than anything real? How would you know what the results of sensitivity analysis might be? It is all hopelessly stupid.
Without a deep discussion of the data and the limitations thereof – and of the assumptions made and the limits of error – and of the limitations of the models – there is every reason for scepticism. Models are not science and you seem to assume that science can proceed by modelling with broad assumptions and without data at all. It is not true – and is profoundly unscientific.
Perhaps this intersects with the decision making under ignorance post, but assuming disorder is always a conservative approach. What is completely unscientific is those analysts who empirically guess that the residence time is 2 to 10 years by assuming that it follows an exponential decline.
Look, I laid out a rigorous propagation of uncertainty analysis downthread that can explain significant dispersion in the CO2 residence times. Do those guys that generate the 2 to 10 year values do this? No freaking way!
I should really say that models are not analytical science at all – but are examples of synthesis, and their proper use can only be understood in that framework.
Where there is a lack of an analytical underpinning – they are almost guaranteed to be wrong.
‘Assuming disorder is always a conservative approach.’
A ‘rigorous propagation of uncertainty analysis downthread that can explain significant dispersion in the CO2 residence times.’
You discussed a simple function of some sort – I can’t imagine what you mean by disorder. I don’t know what you could possibly mean by a ‘rigorous propagation’ of uncertainty. I can only imagine from some of your other posts that it is not science at all that is under discussion – but some fervid incarnation of the climate wars. Oh – I forgot – I should try to be less literate.
‘Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.’
http://eprints.lse.ac.uk/24569/
er… not analytical science at all…
Entropy is generally considered as a measure of the amount of disorder in a system. A well-ordered subsystem such as a crystal lattice will have low entropy because of the predictable arrangement of the atoms. It becomes disordered if that lattice melts and the information entropy increases. There are accepted ways of characterizing this disorder — I don’t understand why you have difficulty with accepting this concept. If you mix a dye in a bucket of water, the disorder increases until it completely disperses through the water. This is modeled by what is called a uniform probability density function. Do you have difficulty even accepting this borderline pedantic example? Disorder can occur in spatial or in temporal terms, and even if you don’t accept it mathematically, there are certainly intuitive notions that you can apply.
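The dye-in-a-bucket picture can be made quantitative with a toy simulation: start all the “dye” particles at one point, apply independent Brownian kicks, and track the Shannon entropy of the binned concentration profile. All units are arbitrary; this is a sketch of the intuition, not a model of any real fluid:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = np.zeros(100_000)                # all dye starts at the centre
edges = np.linspace(-60.0, 60.0, 121)  # 120 unit-width bins

def shannon_entropy(positions):
    """Entropy (nats) of the binned particle distribution."""
    counts, _ = np.histogram(np.clip(positions, -60, 60), bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

entropies = []
for _ in range(200):
    pos += rng.normal(0.0, 1.0, pos.size)   # one Brownian kick per particle
    entropies.append(shannon_entropy(pos))

# Entropy climbs as the dye disperses toward the uniform, maximum-entropy state.
```

The binned entropy rises monotonically (up to sampling noise) and is bounded above by the log of the number of bins, the uniform limit.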
Yes I know what entropy is – but I fail to see any particular correspondence between a bucket of water and Earth systems. The dye is mixed by simple Brownian motion – the simplest of the stochastic processes. But asserting an application of this to complex systems – and one significantly mediated by biology and therefore negative entropy – is crazy BS.
You are either deliberately misleading or obsessively unbalanced – either way I don’t give a rat’s arse.
There is no such thing as negative entropy. Entropy is defined by the negative log of a probability. Since probabilities are strictly between 0 and 1, entropy can never go negative.
Entropy is defined by energy flow in the 2nd law of thermodynamics. Maximum entropy is the state where every point has the same potential energy and thus there is no more energy flux. The probability of this state occurring is unity. Alternatively there is a 100% chance of entropy happening on average outside of a very specific circumstance.
The very special circumstance is life – life is self organising. The act of getting and eating my breakfast – tea and crumpets – for instance is an example of negative entropy. It is an example of energy flowing into a more highly organised system.
I expend energy to gain more energy (negative entropy – or negentropy) and avoid dying (maximum entropy).
OK, the negative entropy is just a relative or differential entropy where a negative sign is slapped in front of the actual positive entropy.
you wrote: People find it funny – but I started programming with punch cards and computers that filled a room.
I don’t find that funny. There are a lot of us out here that did that.
HEH,
you were the smart ones. I just wired the sorters and collators to sort and rearrange your cards and clean up the mess when you dropped a box or two!! 8>)
Chief,
Please take a look at limnology, and some recent papers on both the number of freshwater bodies and their part in the carbon cycle.
Hi Hunter,
Limnology was a favourite topic many years ago. I love water. I love being on it, in it and under it. Biogeochemical cycling is my speciality, and the carbon cycle is complex because of its importance in trophic networks and in diverse chemical reactions.
I didn’t distinguish between fresh and salt above – and I am quite out of date. I will have a look.
Cheers
Chief,
You may well find this of great interest then.
Dr. John Downing was interviewed recently and I was able to hear it.
He thinks freshwater systems have been neglected in important ways.
Here is his homepage:
http://www.public.iastate.edu/~downing/
He made some very interesting points about recent research results.
Here is a transcript of the interview:
http://www.loe.org/shows/segments.html?programID=11-P13-00033&segmentID=2
I am not able to find the specific paper he is referring to. I think it might have some important implications, however.
Chief Hydrologist & Webhubtelescope
Please drop swords and explore how to model/distinguish causes.
Chief – please review Webhub telescope’s models, especially peak oil. I think I know enough math that those models look significant.
Webhubtelescope.
Please explain further on fat tails and what might drive them vis-à-vis CO2, with dominant natural sinks/sources >> anthropogenic causes.
What if we have nonlinear bio feedback with increasing CO2.
e.g. some trees and plants show very rapid increases in growth rates with increased CO2. How is that modeled?
What if Anthropogenic CO2 is rapidly absorbed in plants?
What if most CO2 comes from temperature changes?
e.g. see arctic pulsing.
If clouds dominate temperature, and cosmic rays & solar modulation control clouds, then natural causes could dominate temperature changes and thus CO2 from the ocean.
How would you statistically model, test and distinguish such causes?
cf Tom Quirk and Fred Haynie and Roy Spencer.
I see difference in pulsing shapes for Arctic, tropics Antarctic.
Different bio response terrestrial No hemisphere vs So. Hemisphere.
Diff bio responses ocean biomass No. hemisphere vs so.
Diff fossil emissions no hemisphere vs so.
Differences in clouds and cosmic rays/solar ;during solar cycle; forbush events
(PS Forget 52 pickup. With punch cards, I seem to remember not wanting to play 520 pickup.)
David,
I think those are good directions to push forward in.
I have a very simple model that works to explain fat-tails in a number of situations. The fat-tail is mainly with respect to some quantity observed over time. The general idea is that the observable changes transiently in response to some stimulus. As a premise, suppose that the response is largely driven by a velocity or rate, r, that follows an exponential decline: F(t | r) = exp(-r*t)
Next consider that the rate is actually a stochastic variate that shows a large natural deviation. It will follow an exponentially damped PDF if the standard deviation is equal to the mean:
p(r) = (1/R) * exp(-r/R)
If we then integrate over r to obtain the marginal:
F(t) = ∫ F(t | r) * p(r) * dr
this results in F(t) = 1/(1+R*t)
I consider this a general dispersive result that demonstrates how a measurement which would normally show an exponential decline will switch to a 1/t decline with sufficient disorder. The 1/t dependence is a power-law decline, categorized as a fat tail because it only shows a median value and no mean value. (This is OK because the physical variate, the rate r, has a well-characterized mean and the higher-order moments are also bounded. The lack of moments is a common characteristic of reciprocated variates and is what causes endless consternation to statisticians. N. N. Taleb dances around this topic in his book The Black Swan.)
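The marginal integral above has a closed form that is easy to verify numerically. A quick check, with an arbitrary mean rate R chosen purely for illustration:

```python
import numpy as np

R = 0.5                         # mean of the exponentially distributed rate r
t = np.array([0.0, 1.0, 10.0, 100.0])

# Midpoint-rule evaluation of F(t) = ∫ exp(-r t) (1/R) exp(-r/R) dr over r >= 0.
dr = 1e-4
r = np.arange(0.0, 50.0, dr) + dr / 2.0      # midpoints; tail beyond 50 is negligible
pdf = (1.0 / R) * np.exp(-r / R)
F_numeric = (np.exp(-np.outer(t, r)) * pdf).sum(axis=1) * dr

F_closed = 1.0 / (1.0 + R * t)               # the claimed dispersive result
print(np.max(np.abs(F_numeric - F_closed)))  # discrepancy is tiny
```

The numerical marginal and the closed form 1/(1 + R·t) agree to well within the quadrature error, confirming the algebra of the derivation.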
If you look at the IPCC residence time impulse response curves or Archer’s curves, they are even more strongly fat-tail, fitting to a curve that looks more like 1/(1+k*√t) — which is the reciprocal of the square-root of time. My feeling is that the square-root of time dependence comes from a diffusional growth law mixed in with the dispersion. Fickian growth laws are not linear but grow slowly as a square root of time due to diffusion. It also could be due to a chemical rate equation that proceeds more slowly than first-order kinetics. In either case, the value of k generates the dispersion that smears the dynamics.
Now what the short-residence-time skeptics gloss over is the quick transient that one will see if this curve is plotted. Yes it does drop down relatively quickly, but the tail is significant and it won’t fall off anywhere near as fast as an exponential.
Comparisons to the Bern SAR response curves
So that is an applied mathematics argument as to how a behavior can change from a thin-tail exponential decline to a fat-tail power-law decline through the introduction of disorder or randomness. What this is not is any kind of critical phenomenon that most scientists ascribe to power-law behaviors. Research scientists tend to drool over critical phenomena and become disappointed when it gets explained to them that garden-variety disorder can lead to the same result.
Your comments on the almost ubiquitous prevalence of power laws remind me of Benoit Mandelbrot even more than Nassim Taleb. My own thinking accepts that only in part. I agree fully that the Normal distribution with its very thin tails is overused and that tails are often fatter over a wide range of values.
I’m, however, much more doubtful on the generic value of any specific alternative formula and any specific class of distributions. The empirical evidence is usually reasonably accurate over a narrow range of values, which allows fitting it with many different fat-tailed distributions. Formulations presented by Mandelbrot or by you provide useful parametrizations over ranges where they have been verified empirically, but have little predictive power outside that range.
As an example, Mandelbrot has looked at several distributions that are known to have a strict extreme limit, finding a power law that “predicts” breaking that limit with non-negligible probability. In such cases we know that the power law will fail when the limit is approached, but we don’t know where the failure starts to be significant. Finally we can ask: do we really have any power law in the actual distribution at all, or is the power law only consistent with the data as long as the accuracy requirement is very modest?
Similarly I don’t have any reason to doubt the practical value of your approaches, but I’m doubtful as soon as the approach is assumed to have a precise theoretical basis. It’s rather a good rule of thumb that fat tails are very common and can be parametrized with some power law over a limited range. There are many reasons for that. Many of the reasons have a nature similar to what you describe, i.e., the overall distribution can be considered as a combination of many different distributions, some of which have very large variances. The combination may be formed in various ways, one of which applies to the persistence of CO2 in the atmosphere.
More specifically, any agreement in far tails with the results of Archer is likely to depend on how Archer did his work rather than on real-world facts, because the empirical support for any conclusions on the far tails is statistically extremely weak. Relevant empirical data exists only from the very distant past, and the results on far tails are determined by the methods used to extract information from them. A very small change in the methods makes the outcome totally different.
Pekka,
What people miss is that just taking the reciprocal of an underlying stochastic variate is enough to cause a power-law fall off distribution. That is what I tried to show with my short derivation. Mandelbrot would be the last person to want to admit that something this straightforward could explain the ubiquity of fat-tails, as it would tend to marginalize the mystery behind his fractal story (it doesn’t matter anymore since he recently died).
Like Normal distributions and other well-accepted approaches, these have significant predictive power, but are curiously rarely applied. If you want to find more, you can look up the term ratio distributions in the applied mathematics literature.
http://en.wikipedia.org/wiki/Ratio_distribution
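The reciprocal-of-a-variate point is easy to demonstrate by Monte Carlo: if the rate r is exponentially distributed, the time T = 1/r has survival function P(T > x) = 1 − exp(−1/x) ≈ 1/x, a power-law tail with no finite mean. A sketch with an arbitrary unit mean rate:

```python
import numpy as np

rng = np.random.default_rng(7)
r = rng.exponential(scale=1.0, size=1_000_000)   # rates with mean 1
T = 1.0 / r                                       # the reciprocal variate

# Empirical survival function versus the ~1/x power-law prediction.
for x in (10.0, 100.0):
    empirical = (T > x).mean()
    predicted = 1.0 - np.exp(-1.0 / x)
    print(x, empirical, predicted)
```

Nothing exotic is required: a perfectly tame exponential rate, inverted, already produces the heavy tail and the missing mean that statisticians agonize over.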
I agree that Mandelbrot has been careful to avoid presenting anything like simple derivations for power law tails, but I don’t think that he has been as careful in commenting on the empirical evidence on the existence of power laws. Many of his examples have been rather poor from the empirical point of view, i.e. the power law is seen over too narrow a range of values and with insufficient accuracy to tell whether it’s really a power law at all or just some other fat tail.
Agreed, Mandelbrot and Taleb are horrible at actually trying to match their ideas to any empirical evidence. They are pretty sly at not being caught in a situation of having to defend some assertion (Taleb in particular in his books relies on hand-waving).
In many cases it just takes time to accumulate enough data to show a power-law dependence. The physicist/applied statistician Cosma Shalizi wrote an article some ten or more years ago claiming that web-link statistics did not show a power-law dependence. Over the ensuing years, as more evidence has become available, the actual statistics have converged to a power law over many orders of magnitude, with a very straightforward interpretation (IMO). That said, even if you say that it is a power law over 8 orders of magnitude, the wise guy will point out that it is not provable because it doesn’t extend to 9 orders…
Again the subliminal tendency by physicists is to reserve power-laws for critical phenomena and they don’t necessarily want to ascribe it to a pedantic explanation. That is my conspiratorial rationale, otherwise I can’t explain the dismissive attitude that many of them display.
Webhubtelescope
Thanks for clarifying issues.
What if we have multiple chemical, bio and human emissions?
e.g. temperature-co2 from arctic pulsations
Temperature-CO2 from the ocean.
Ocean algae – about 50% of total biomass net productivity.
Temperature & soil moisture together for:
Tropical vegetation & agriculture.
Temperature vegetation, trees
Temperature annular agriculture.
Then weight temperate into No. vs So. by land etc.
Then anthropogenic CO2.
The anthropogenic fuel consumption in turn varies by resource/country with growth/depletion curves and economic development. e.g. see
Hook et al. Descriptive and predictive growth curves in energy system analysis Natural Resources Research, Volume 20, Issue 2, June 2011, Pages 103-116, http://dx.doi.org/10.1007/s11053-011-9139-z
There should be some predictability there to fit CO2 and distinguish natural annular cyclic, vs natural solar/cosmic cyclic vs anthropogenic.
However, each would have stochastic disorder components on top of overlapping variable physical causes.
Would those together give the fat tail distributions you note above?
What then is the impact of the “fatness” on the long-term increase/decrease of atmospheric CO2?
Anything that shows dispersion or significant variation in rates can lead to fat-tails. The fat-tail is in those paths that show a slow uptake in CO2 or a long time to get to the sequestering points.
The impact of the fatness long-term is that the slow pathways continue to build up over time. That is what the convolution model reveals.
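The build-up claim can be illustrated with a direct convolution of a hypothetical emissions history against thin- and fat-tailed impulse responses. All numbers here are illustrative placeholders, not fitted values:

```python
import numpy as np

years = np.arange(300)
emissions = np.where(years < 100, 7.8, 0.0)   # GtC/yr for a century, then zero

thin = np.exp(-years / 10.0)                  # thin tail: 10-yr exponential decline
fat  = 1.0 / (1.0 + 0.1 * np.sqrt(years))     # fat tail: 1/(1 + k*sqrt(t)) form

# Airborne burden = emissions history convolved with the impulse response.
burden_thin = np.convolve(emissions, thin)[:300]
burden_fat  = np.convolve(emissions, fat)[:300]

# Two centuries after emissions stop, the fat-tailed burden is still large,
# while the exponential burden has all but vanished.
print(burden_thin[-1], burden_fat[-1])
```

The quick initial transient looks similar in both cases; the difference is entirely in what remains long after the source is switched off.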
David,
Production curves for oil have been discussed since the 1950’s – http://en.wikipedia.org/wiki/Hubbert_curve
The issue is that it is a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution. For instance – I commonly use a 10% ethanol blend in my Mercedes SLS AMG. It is made from Queensland sugar residue, is cheaper (not subsidised) and works better. The evidence is that substitution is happening as the oil price remains high.
There are multiple substitutions possible at some price point. For instance we could with cheap enough electrical energy take CO2 from the atmosphere, blend it with hydrogen from water and make a liquid fuel.
There are no fat tails – just a skinny tail trailing off in an imaginary function.
Your greenhouse carbon example occurs with unlimited nutrients, light and water. That does not happen in the wild. Terrestrial plants are commonly water limited. They respond to elevated carbon in the atmosphere by reducing the number and size of stomata – limiting gas exchange but also water loss. This is not necessarily a hydrologically or ecologically good thing.
Most of the recent warming happened in ‘climate shifts’ in 1976/77 and 1997/1998. NASA/GISS says that most of the rest was caused by cloud changes – especially in the tropics. Jim Hansen doesn’t believe them – but it fits into a pattern of decadal observations of cloud in the Pacific in particular.
I’m not sure about cosmic rays and clouds – difficult to distinguish from other solar modulated changes in e.g. the NAO, SAM, ENSO, PDO, QBO, PNA.
There are as well plenty of cloud nucleation sites over oceans in dimethyl sulphide emissions from phytoplankton – which in turn respond to changes in upwelling of frigid and nutrient-rich water at various locations around the globe – but prominently in the eastern Pacific.
Dropping your punch cards is not a recommended SOP.
Cheers
No, oil discovery and production is a stocks and flows problem, which is exactly how I laid it out in the analysis I call the oil shock model.
Chief
Re: ” a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution.”
May I encourage looking at both “multi-Hubbert” curves AND economic substitution.
Economics: Globally we see long term shifts from wood to coal to oil to gas over centuries to decades.
Peak response: M. King Hubbert modeled US and world oil production as a logistic curve. This fits pretty well for each geographic area. e.g. see Tad Patzek’s multi-Hubbert curves for US production.
For such Hubbert analysis, we need to think of a given resource for a given technology within a given economic regime. Then apply multi-Hubbert analysis to summing across multiple regions.
Constraints on availability raise prices which can push to substitution. Webhubtelescope then adds economic shock impacts.
With sufficient high demand and prices, that justifies development of alternative resources. However, each region/resource/technology still follows the Hubbert type model pretty well.
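For readers who want to experiment with the multi-Hubbert idea described above, here is a minimal Python sketch. The two cycles, their URR values, peak years and widths are purely hypothetical illustrations, not fitted to any real production data:

```python
import math

def hubbert_rate(t, urr, t_peak, width):
    """Annual production from a single logistic (Hubbert) cycle.
    urr    : ultimately recoverable resource for the region (hypothetical)
    t_peak : year of peak production
    width  : steepness in years (smaller = sharper peak)"""
    x = math.exp(-(t - t_peak) / width)
    return urr * x / (width * (1.0 + x) ** 2)

def multi_hubbert(t, cycles):
    """Multi-Hubbert total: sum independent cycles, one per
    region/resource/technology regime."""
    return sum(hubbert_rate(t, *c) for c in cycles)

# Two hypothetical cycles: a conventional resource peaking in 2005
# and a later, smaller unconventional cycle peaking in 2030.
cycles = [(2000.0, 2005, 15.0), (800.0, 2030, 10.0)]
production_2005 = multi_hubbert(2005, cycles)
```

Each cycle integrates to its URR, so summing cycles across regions reproduces the "peaks everywhere" picture while still allowing later resources to extend total production.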
For a good overview, see: Tad Patzek in Peaks Everywhere
Forecasting World Crude Oil Production Using Multicyclic Hubbert Model
Ibrahim Sami Nashawi, Adel Malallah and Mohammed Al-Bisharah Energy Fuels, 2010, 24 (3), pp 1788–1800
When Kuwaitis get very similar results, it is time to wake up and take notice.
So why don’t you just run along and do that?
You say you can do quite a lot, but you haven’t shown anything you claim to be legitimate.
Compliments on your irony, though.
Yes, you are.
Hal asks:
This question presupposes that the IPCC “mainstream” view is correct, i.e. that Earth’s natural CO2 cycle is in “equilibrium”, which is perturbed over longer periods only by human CO2 emissions.
IOW, it presupposes that the recent suggestion by Professor Murry Salby regarding natural causes for changes in atmospheric CO2 levels is false.
But back to Hal’s question: how long does CO2 “remain in the atmosphere”?
Tom Segalstad has compiled several independent estimates of the residence time of CO2 in the atmosphere, arriving at an average lifetime of around 5 to 7 years.
http://folk.uio.no/tomvs/esef/ESEF3VO2.htm
Is this “residence time” of CO2 in the atmosphere as reported by Segalstad equal to the “residence time in our climate system”? The mainstream view (IPCC) does not believe so.
The Lam study cited by Judith states:
http://www.princeton.edu/~lam/TauL1b.pdf
As Willis Eschenbach has pointed out on other threads, model outputs are only as good as the assumptions, which were programmed in.
Since “there exists no observational data to validate this value”, it is meaningless as empirical scientific evidence.
But it does show the basis for the IPCC model assumptions.
IOW, if the actual value turned out to be significantly different from 400 years, then the IPCC model projections for future temperature rise, based on various assumed model scenarios and storylines (assuming these are correct), would be too high or too low.
The first IPCC assessment report (Houghton et al., 1990) gives an atmospheric CO2 residence time (lifetime) of 50-200 years [as a “rough estimate”]
Table 1 of the IPCC TAR WG1 report says that the CO2 lifetime in the atmosphere before being removed is somewhere between 5 and 200 years with the footnote:
This appears rather “loosey-goosey” to me. Yet the IPCC models have apparently used a “lifetime” of 400 years as the assumed value, as is suggested by the Lam study cited above.
Zeke Hausfather presented data at a recent Yale Climate Forum, which points to a half-life of CO2 in the climate system of around 80-120 years, with a suggested long tail: i.e. ~80% gone within 300 years and a small residual remaining thousands of years.
http://www.yaleclimatemediaforum.org/pics/1210_ZHfig5.jpg
In an earlier exchange, Fred Moolten cited a model-based study by David Archer et al., which points to pretty much the same conclusion as reached by the ZH data, also pointing out that the CO2 lifetime has a long “tail”.
http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf
Let’s ignore the long tail for now and let’s assume that the half-life of CO2 in the climate system is 120 years, or at the upper end of Zeke’s curve.
In actual fact, no one knows what the residence time of CO2 in our climate system really is, because there are no empirical measurements to substantiate this, as Lam has stated.
Now let’s switch from model estimates to actual physical observations and check out the ZH estimate.
The first thing we observe is that there is absolutely no statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission (over the years this has varied from 15% to 88%, with the balance “missing”).
[In fact, there is a much better correlation with the change in average annual global temperature from that of the preceding year than with the human emission, which would suggest that the conclusion reached by Professor Murry Salby (that it is natural and temperature-related) is valid, but let’s ignore that for now.]
Since the annual human emissions show no correlation with the annual change in atmospheric concentration, we have to look at a longer-term average.
Over the past 10 years, humans have emitted around 305 GtCO2 from all sources (fossil fuels, cement production, deforestation, etc.).
[data from various sources: be glad to provide links if anyone is interested]
The mass of the atmosphere is 5,140,000 Gt.
So, if we assume that CO2 is a “well mixed GHG” and that the Earth’s entire carbon cycle is in “equilibrium” except for human emissions (as IPCC does), we should have seen an increase in atmospheric CO2 concentration of:
305 * 1,000,000 / 5,140,000 = 59.3 ppm(mass)
= 59.3 * (29 / 44) = 39.0 ppmv
Yet over this same time period we only saw
389.1 – 370.3 = 18.8 ppmv
IOW 18.8 / 39.0 = 48% of the CO2 emitted by humans “remained” in the atmosphere, and the remaining 52% (or 2.2 ppmv/year) is “missing”.
This would suggest that the half-life estimate of Hausfather is correct, and that the “missing CO2” is leaving our climate system. At a half-life of 120 years, this would represent an annual decay rate of 0.58% of the concentration or around 2.2 ppmv, which happens (by coincidence?) to be equal to the amount of “missing” CO2 today (based on the data cited above).
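The arithmetic above can be checked with a short Python sketch. All input figures are the ones quoted in the comment itself; nothing here is new data:

```python
import math

# Figures quoted in the comment above.
M_ATM = 5.14e6          # mass of the atmosphere, Gt
EMITTED = 305.0         # human CO2 emissions over the decade, GtCO2
M_AIR, M_CO2 = 29.0, 44.0

expected_ppm_mass = EMITTED / M_ATM * 1e6           # ~59.3 ppm by mass
expected_ppmv = expected_ppm_mass * M_AIR / M_CO2   # ~39.1 ppmv
observed_ppmv = 389.1 - 370.3                       # ~18.8 ppmv measured rise
airborne_fraction = observed_ppmv / expected_ppmv   # ~48% "remained"

# A 120-year half-life implies an exponential decay rate of:
decay_rate = math.log(2) / 120.0                    # ~0.58% per year
removed_ppmv_per_yr = decay_rate * 385.0            # ~2.2 ppmv/yr at ~385 ppmv
```

The ~2.2 ppmv/yr removal implied by a 120-year half-life does indeed match the ~52% "missing" share of the decadal emissions.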
The same calculation holds for the period from 1958 (when CO2 measurements at Mauna Loa were first recorded) to today, with a calculated 49% of the emitted CO2 “missing”.
So this raises the basic question, which is as yet unanswered: is the “missing CO2” disappearing from the climate system and, if so, where is it going?
The question concerning the ”missing CO2” remains a mystery, if we believe that human CO2 emissions have been the primary cause for increased atmospheric CO2 levels. [Of course, if it turns out that the postulation by Salby is correct, then this is all a rhetorical discussion.]
But let’s assume that the mainstream view on this is correct.
Where is the “missing” CO2 going?
No one really knows because there are no physical measurements enabling the calculation of a real material balance.
Some hypothesize that the ocean is absorbing most of it and it is being buffered by the carbonate/bicarbonate cycle there.
Others believe that much of it may be converted by increased photosynthesis, both from terrestrial plants and marine phytoplankton.
Since photosynthesis absorbs around 15 times as much CO2 as is emitted by humans, one can see that a slight increase in photosynthesis resulting from higher concentrations could well absorb a significant portion of the human emission.
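As a back-of-envelope illustration of that point (the flux magnitudes below are rough, commonly quoted figures in GtC/yr, not precise data):

```python
# Rough, commonly quoted global flux magnitudes (GtC/yr), not precise data.
npp_gtc_per_yr = 110.0    # net primary production, terrestrial + marine
human_gtc_per_yr = 8.0    # human emissions, mid-2000s ballpark

ratio = npp_gtc_per_yr / human_gtc_per_yr         # photosynthesis ~14x emissions
boost_needed = human_gtc_per_yr / npp_gtc_per_yr  # ~7% rise in NPP matches emissions
```

So a sustained rise of only a few percent in global photosynthesis would be enough to offset a large fraction of the human flux.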
On an earlier thread Pekka Pirilä mentioned the exchange rate between the upper ocean and the immense CO2 reservoir of the deep ocean as a possible long-term “hiding place”.
This reservoir is so vast that the increase from the “missing CO2” would hardly be noticeable, even if human emissions continued over centuries. In addition, there is a finite amount of carbon in all the fossil fuels on Earth; even if these were all burned and the CO2 ended up in the deep ocean, it would barely be noticeable.
So, in summary, irrespective of the Salby postulation (which, if validated, would make this a rhetorical question), I believe we are unable to answer the question of the “missing CO2” today, despite several alternate hypotheses.
And as a result, we are unable to answer Hal’s question.
If anyone here has any hard data to refute this conclusion, I would be delighted to see it.
Max
“Since photosynthesis absorbs around 15 times as much CO2 as is emitted by humans”
I have not previously seen that figure (my ignorance, I suppose)
Do we know this for reasonable certainty ?
ianl8888
As far as estimates of CO2 removed annually by terrestrial and marine photosynthesis, Wiki gives a rough estimate on the terrestrial portion, but these studies, which I cited to Pekka below, give an estimate for the marine portion, which is roughly equal.
http://www.pnas.org/content/100/17/9647.full
http://www.sciencemag.org/content/281/5374/237.full
Max
Max, nice post, thanks very much for this. Rob
Nicely written argument but several points need to be made. For a fat-tail response, any notion of a mean needs to be reconsidered. As I indicated in a comment upthread, power-law distributions do not show a mean or, for that matter, any higher-order moments. However they can mimic exponentials over the initial transient, and a “half-life” number does appear, but it has nothing to do with conventional notions of exponential decline. That is where Segalstad made his mistake.
Also significant is this point you made:
I would change this to state “The first thing we observe is that there is almost perfect statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission”.
If you do the convolution of the actual emissions with a fat-tail impulse response, the agreement is stunning. I have done this myself and don’t expect the skeptics to do the same because they don’t believe in the fat-tail residence time. (I also argued this upthread)
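The convolution being described can be sketched in a few lines. The response function and the emissions series below are hypothetical stand-ins chosen only to show the mechanics, not fits to the actual emissions record:

```python
import math

def fat_tail_response(t, k=0.03):
    """Hypothetical fat-tail impulse response: fraction of an emitted
    pulse still airborne t years later, ~1/(1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def airborne_excess(emissions, response):
    """Discrete convolution: the atmospheric excess in year i is the sum
    of all past annual emissions, each weighted by the response at its age."""
    return [sum(emissions[j] * response(i - j) for j in range(i + 1))
            for i in range(len(emissions))]

# Hypothetical emissions series growing at 2%/yr (arbitrary units).
emissions = [math.exp(0.02 * y) for y in range(100)]
excess = airborne_excess(emissions, fat_tail_response)
```

With a growing input and a slowly decaying response, the convolved excess tracks cumulative emissions closely over the transient, which is why the annual totals and the concentration rise correlate so well.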
Other than that, good try.
Manacker, missing CO2 is your own invented uncertainty. There is no debate in the science community, or even the blogosphere, about missing CO2. Ocean uptake and acidification, of course, accounts for it. The carbon cycle explains the flows and balances well. It would actually be more surprising if none of the emitted CO2 went into the ocean, because an equilibrium is maintained at the air/water interface, and I don’t think you know about this part from reading what you wrote.
Sorry, JimD, there are lots of hypotheses but no empirical data that verify your statement that the “mystery” of the “missing CO2” has been solved. [If you have such data, please correct me.]
WebHubTelescope
I’m less interested in the “fat tail residence time” (a rather hypothetical discussion IMO) than in the current “rate of decay”. If, indeed, the 120-year half life is correct, as postulated by the data presented by Zeke Hausfather, which I cited, this would point to a theoretical annual reduction equivalent to 0.58% of the concentration, which just happens (by coincidence?) to be the average annual level of “missing” CO2.
The observation that the amount “remaining” in the atmosphere seems to correlate on an annual basis with the change in global temperature from the previous year (more “remains” in atmosphere if temperature has warmed, less if temperature has cooled) would point to the suggestion that there is a temperature correlation and that the ocean is absorbing more or less depending on temperature changes (Salby?)
There are apparently still a lot of unknowns on CO2 residence time in our atmosphere.
Max
Manacker, you need to point to someone who sees the carbon budget as not being closed. Where are people discussing this mystery you speak of?
Manacker, check my analysis on CO2 rise and you can see where it fits in. This is the result:
trend
Max,
I skip now totally our disagreement on the reliability of the main stream view on the persistence of CO2 in atmosphere over the first 100 or 200 years. I only note that I keep to what I have written earlier.
Concerning the role of the deep ocean, I’m still looking for any analysis of its real effective size and of the reliability of the estimates of the rate of exchange of CO2 with the deep ocean.
One comment on its size is that the total amount of carbon in deep oceans is given as approximately 40,000 GtC, or more than 50 times that of the present atmosphere. In comparison, the total amount of fossil fuels underground is given in IPCC material as 4000 GtC. This is certainly very inaccurate, and whether that much can ever be extracted and burned is certainly questionable. If the Revelle factor of the deep ocean is 10, then 4000 GtC added to the deep ocean would be in balance with an atmosphere that has doubled its concentration from the level in balance with 40,000 GtC, most probably the preindustrial level. That is, however, almost certainly a gross overestimate of the ultimately achievable extraction of fossil fuels. Even half of that is a high estimate.
Whether the Revelle factor of deep ocean is 10 or significantly different is unknown to me. Estimating that would require fair knowledge on the chemistry (specifically pH and buffering) of deep ocean.
An additional question is the speed of transfer of CO2 to deep ocean, which is likely to be slow with an effective time “constant” of hundreds to thousands of years, but again not well known.
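Pekka’s balance argument can be written out explicitly. This is a minimal sketch: the Revelle factor of 10 and the 4000 GtC figure are the assumptions quoted in the comment, not established values:

```python
# Assumptions quoted above: Revelle factor 10, 40,000 GtC in the deep
# ocean, 4000 GtC total fossil carbon (itself called an overestimate).
R_REVELLE = 10.0
DEEP_OCEAN_GTC = 40000.0
ADDED_GTC = 4000.0

rel_dic_change = ADDED_GTC / DEEP_OCEAN_GTC     # 10% rise in dissolved carbon
rel_pco2_change = R_REVELLE * rel_dic_change    # ~100%: atmospheric CO2 doubles
```

The point is that the buffering chemistry (the Revelle factor) amplifies a modest relative change in dissolved carbon into a much larger relative change in the equilibrium pCO2, so the deep ocean is a large but not unlimited sink.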
Pekka
Let’s address your points one at a time.
I doubt that we have a disagreement on the “persistence of CO2 in atmosphere over the first 100 or 200 years”. I simply stated that there are no empirical data to substantiate this, as the Lam study concluded (and you have been unable to show me any such empirical data).
The role of the deep ocean as a sink for future carbon emissions could be immense and the chemical and biological processes involved largely unknown or poorly quantified. We apparently agree.
We disagree on the amount of carbon remaining in all the optimistically estimated fossil fuels inferred to be in place (whether extractable or not).
I have cited 2010 estimates from the World Energy Council, which tell me that this could reach a maximum atmospheric CO2 level of around 1,000 ppmv – you have cited other estimates, which put this at a higher level.
On the other points, we apparently agree.
Max
Max,
My point was not an actual disagreement, but I wanted to emphasize that the amount of carbon that the deep ocean will absorb, given enough time, is badly known. It’s not as huge as the present amount of carbon would indicate when the Revelle factor is not taken into account, but it’s certainly very large even so. When a low value is used, the share of CO2 remaining in the atmosphere in balance with the oceans may be around 20%; for a larger estimate, the share of atmospheric carbon would be smaller.
I would like to find scientific work that has studied this question, but I haven’t seen such. The answer also influences the reasonableness of ideas to store CO2 from sequestration in the deep ocean. This idea was discussed widely at one stage, but not recently. I have failed in my search for a proper justification for this change. Underground storage alternatives are certainly preferred at first, but the suitability of the deep ocean for storage is an interesting question as well.
Every year, from May to September/October there’s a 5 – 6 ppm drop in atmospheric CO2 (in dry air). The efflux from the atmosphere is very large during this period.
It can change 2ppm in a day and quite regularly changes by 1-2ppm in a week at Mauna Loa.
Yes, but it’s only when there’s no wind that CO2 concentration can rise above “background” concentration. With wind, it falls to the background level quickly. Real, measured CO2 can change over 100 ppm in a few hours.
http://www.rug.nl/ees/onderzoek/cio/projecten/atmosphericgases/lutjewad2/data/index
This animation is interesting:
http://www.esrl.noaa.gov/gmd/ccgg/globalview/co2/co2_intro.html
Nick,
Any argument that relies on the Earth “knowing” anything is a bit suss, IMO.
If you aren’t convinced by my few lines of argument , maybe you should take a look at:
http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-ts.pdf
I’m not saying anything different to what the IPCC have already said. They make the point that to reduce CO2 concentrations, human CO2 emissions have to be reduced by more than half.
Atmospheric CO2 concentration will be reduced according to the change of climatic factors. When the cooling really gets going (SST decrease, sea ice increase), CO2 will be reduced. Human CO2 emissions are almost irrelevant.
Edim,
You’re obviously of the school of thought that carts push horses rather than horses pull carts. Your opinion is noted but I think we’ll have to agree to differ on that point.
Yes, we will agree to differ on that point.
By the way, in your analogy temperature is the horse and CO2 is the cart.
Dr Curry: Between them Nick Stokes (August 24, 2011 at 6:08 pm) and Fred Moolten (10.39 pm) seem to have answered your question, at least to a ballpark figure. If annual anthropogenic CO2 emissions are sufficient to increase atmospheric content by, say, 3 ppm per year, but the actual increase is, say, 1.5 ppm, then the remainder is being absorbed by sinks at a rate of 1.5 ppm per year. If this process is occurring when CO2 is at around 390 ppm, while the background before notable anthropogenic emissions was about 280 ppm, then it will take (390-280)/1.5 years (so about 73 years) for the anthropogenic addition so far observed to be absorbed by sinks. However, as Fred has pointed out, if emissions ceased now it would take longer than that to revert to the background value, as the decline would likely be exponential rather than linear.
With the above scenario, if emissions were halved now, the atmospheric level would stabilise at the present value.
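The ballpark above reduces to a few lines of arithmetic. The 3 ppm/yr and 1.5 ppm/yr figures are the assumed values from the comment, and the linear extrapolation deliberately ignores the slowdown Fred points out:

```python
# Assumed rates from the comment: emissions alone would add 3 ppm/yr,
# observed rise is 1.5 ppm/yr, so sinks absorb the difference.
emission_rate = 3.0
observed_rise = 1.5
sink_rate = emission_rate - observed_rise          # 1.5 ppm/yr absorbed

current, background = 390.0, 280.0
# Linear extrapolation only; the real decline would slow as the
# excess shrinks, so this understates the true drawdown time.
years_linear = (current - background) / sink_rate  # ~73 years
```

It also makes the stabilisation claim visible: halving emissions would make the input equal the current sink rate, leaving the concentration roughly flat.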
The mean residence time of CO2 in the atmosphere is scientifically interesting, but a separate issue. It may be about 5 years.
There are plenty of weaknesses in the IPCC account of climate science, but this issue is not favourable ground for climate skeptics to attack the orthodox position.
You continue to make the impossible assumption that sources and sinks aren’t variable. You persist in simple calcs – simple linear concepts – for complex systems. It is all a very silly charade.
Have you actually done the convolution calculation yourself?
The concept that you and others seem to miss is that probability and statistics in the large can often simplify complex systems.
Using probability models allows us to model this variability. You seem to be very conflicted in your stance.
The mad theory that you perpetuate is that data is needed to define probabilities and statistics.
I had to look this up.
‘In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions.’
http://en.wikipedia.org/wiki/Convolution
No it is just nonsense – to think that we could apply probabilities without experience has no possible meaning. The probability that it varies x is this y that and z a herd of giraffe – great use for the infinite improbability generator.
Why do you think you are entitled to bother me with nonsense?
Definitely annoyed and incoherent –
The mad theory that you perpetuate is that data is not needed to define probabilities and statistics…
I have had enough of your insanity as well.
It may not be your fault that you haven’t been exposed to the approach. I have noticed that certain branches of engineering and science never introduce the topic of convolutions. I am not exactly sure why this is, but it is certainly short-sighted. Convolutions form the basis for everything from the Central Limit Theorem (applied probability) to linear and nonlinear response functions. The fact that it gets used to explain both deterministic and stochastic behavior makes it a very general approach.
You may want to bone up on the topic. Climate scientists use the approach all the time because they realize that forcing functions are not idealized delta impulse functions and thus they need to convolve the input with the response function to model the result.
You might want to be a little less condescending to people like Pekka.
‘More specifically any agreement in far tails with results of Archer is likely to be dependent on how Archer did his work rather than on the real world facts, because the empirical support for any conclusions on the far tails is statistically extremely weak. Relevant empirical data exists only from the very distant past, and the results on far tails are determined by the methods used to extract information from the far tails. A very small change in the methods makes the outcome totally different.’
‘In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of random variables is the convolution of their corresponding probability mass functions or probability density functions respectively.’ A simple concept involving the sum of two or more nominally random variables.
The key term however is having empirical support for observations of what Pekka calls the far (as opposed to fat) tail. Without observation there is no rational basis for applying probabilities.
Show me first the variability of the specific CO2 compartments – and then the sum of those in the atmosphere? Rather – don’t bother because I won’t believe you without more detailed data than we have available. We can fit a curve to anything – or make one up out of blue sky – but it is just maths and not science.
The Taylor series approximations for exp(-k*t) and a power law like 1/(1+k*t) are exactly the same to first order.
That explains why the short-residence crowd gets the time constant wrong.
The next issue is that the convolution of random variables can go to one of two classes of stable distributions as a central limit: one is thin-tailed and the other fat-tailed. The Normal distribution is the thin-tailed stable distribution, and it requires thin-tailed distributions for the input variates. The other analytic stable distributions are the Lévy and Cauchy, which both have fat tails and require fat-tailed inputs. The Cauchy PDF has a power-law tail which follows an inverse-square dependence.
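The point about matching transients but diverging tails is easy to show numerically. Here k is an arbitrary illustrative rate, not a fitted CO2 parameter:

```python
import math

k = 0.01  # arbitrary illustrative decay rate, 1/years

def exp_decay(t):
    return math.exp(-k * t)        # thin-tail: ~1 - k*t for small t

def power_law(t):
    return 1.0 / (1.0 + k * t)     # fat-tail: also ~1 - k*t for small t

# Nearly identical over the initial transient...
early_gap = abs(exp_decay(10) - power_law(10))
# ...but the power law retains vastly more at long times (the fat tail).
late_ratio = power_law(1000) / exp_decay(1000)
```

Both curves start off as 1 - k*t, which is why a short observation window cannot distinguish them; only the long tail separates an exponential from a power law.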
Is this a game? Throw in a few inapplicable functions and pretend it is meaningful? Fool the sceptics because they are idiots and won’t know any different?
Listen buddy, I am serious as a heart attack about this stuff. I have no idea what your deal is, but you quote something and I can’t tell if you are referring to some imaginary friend. I do the best I can to respond.
Passionate about applying functions to a problem with no empirical base. One of them might fit – but how would you know?
I keep saying that and you refer somewhere else to volcanic emissions – without reference to anything – but they are so small, and the problem remains the old one of too many unknowns in too few equations. We need more knowns and not more functions.
Now there is no solution in what you say – just another theorist driven mad by the climate wars with a line of enquiry that no one appreciates.
What are you talking about? The empirical base is all around us. We are soaking in it. Egad man, do you not see the record of CO2 increase. Do you just want to close your eyes and stick fingers in your ears?
That is the role of a probability analysis, as one tries to understand the situation with limited information.
You do not understand how impulse response functions work. I surmised this when you had to look up what a convolution was. The records of a volcanic event can be used to judge the response function independent of the size of the event. Maybe you can also learn something about signal processing.
It must make you very angry that I am a citizen scientist. My interest in the environment is broad reaching and this is confirmed in the fact that I spent only 17 pages of my 750 page book on energy on the topic of CO2 rise and residence time (I didn’t write anything on AGW temperature rise since I am still learning). You like to cast aspersions my way but all I can say is that I have a serious interest in this subject and am not beholden to anyone’s agenda. I am also a fan of human behavior and I find it interesting to see how wound up some people can get when another person just talks.
‘That is the role of a probability analysis, as one tries to understand the situation with limited information.’
I think I am getting a clue as to the role of ‘probability analysis’ – it is filling in the gaps with imaginary data and then believing it absolute certainty.
Just getting up to speed, eh? Nowadays weather forecasting is 100% probability analysis. All insurance is probability analysis. All betting is probability analysis. These all use limited information because they are trying to anticipate something that will happen in the future, and no behavior will repeat in exactly the same way.
Weather forecasts are based on initialised models – accurate for a week at most. Long range forecasting of rain is sometimes expressed as a broad probability – if we have a La Nina there is a probability of higher than average rainfall. This is based on correlation with persistent patterns of SST. Insurance is based on demographics and the law of large numbers. Gambling is either a zero sum game or the house takes a slice.
You have a single variable – the concentration of CO2 in the atmosphere -that you think you can divine by means of an ontological argument alone.
That is what you think it looks like can be described by a function of some sort – and it must be right because it is based on what you think it should look like.
I can call black 17 – but it doesn’t mean I am right. That’s called gambling?
OK, so it looks like you do believe in the utility of probability.
Ontologically speaking, the environment is ruled by disorder. Tell me something that doesn’t show disorder in the environment and we can stop using probabilities.
No – weather and climate are not predicted using probability at all. You are conceptually all over the place in trying to defend a nonsensical, impractical, misguided and deeply unscientific approach. You are defending solutions without data.
The world is ruled by cause and effect – entropy may occur but it tells you nothing of how energy or matter moves through the world. There is nothing random even in the fall of the roulette ball – it is all deterministically chaotic. We define the statistics of events where we cannot identify cause and effect and it works in simple applications – climate is far from simple.
For as long as I can remember, weather forecasts are presented with probabilities. For example, probability of precipitation (POP) is routine. This is all so common sense to me and so I started looking into this and yes, some years ago “The American Meteorological Society endorses probability forecasts and recommends their use be substantially increased.”
http://www.ametsoc.org/policy/enhancingwxprob_final.html
AccuWeather has a thing they call AccuPOP and forecasters seem to use ensemble averaging.
You do not seem to know the origins of the master equation, the Fokker-Planck equation, the Navier-Stokes equation and the entire class of problems that introduce the concept of diffusion. All of these are at the core based on probability and on conservation of mass, i.e. through the analysis of divergence and gradients. You seem not to realize that a Monte Carlo simulation needs to draw from a probability density function to even work. I suppose you are going to tell me that no one uses Monte Carlo, egad.
Once again you are engaging in a moot philosophical discussion. Yes, unless you get to the quantum level, actions are deterministic. Most physicists and scientists realize this and then move on and apply practical ensemble approaches.
I can engage in my own philosophy and find that most of the most ardent determinism followers are also religious dogmatists who believe everything is preordained and there are no shades of gray. I’ve worked with these people and gone to school with them, and I find that there is no reasoning with them. I kind of doubt this applies to you, but you never know.
That would imply every other non-human CO2 process is identical from year to year. Hard to believe.
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf
The annual CO2 growth rate varies between 0.3 and 3 ppm. In 1998 it was 2.98 ppm – the maximum growth (El Niño). In 1999 it was only 0.9 ppm. I hope nobody hides the coming decline.
year ppm/year
1998 2.98
1999 0.90
2000 1.76
2001 1.57
2002 2.60
2003 2.30
2004 1.55
2005 2.50
2006 1.73
2007 2.24
2008 1.64
2009 1.89
2010 2.42
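A quick check of the spread in these figures (the 13 values are copied directly from the list above):

```python
# Annual Mauna Loa growth rates (ppm/yr) copied from the list above.
growth = {1998: 2.98, 1999: 0.90, 2000: 1.76, 2001: 1.57, 2002: 2.60,
          2003: 2.30, 2004: 1.55, 2005: 2.50, 2006: 1.73, 2007: 2.24,
          2008: 1.64, 2009: 1.89, 2010: 2.42}

mean = sum(growth.values()) / len(growth)                         # ~2.0 ppm/yr
var = sum((v - mean) ** 2 for v in growth.values()) / len(growth)
std = var ** 0.5                                                  # ~0.54 ppm/yr
```

The year-to-year scatter (standard deviation about 0.5 ppm/yr) is a quarter of the mean growth, which is consistent with ENSO modulating the sinks around a steady underlying rise.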
Thanks, Edim, for these figures. The observed pattern seems reasonable for short-term fluctuations (due to regular seasonal and less regular changes in the balance of CO2 sinks and sources) superimposed on a steadier, larger-amplitude background trend (the addition of new CO2). It’s like waves which modify the water level short-term by small amounts while the larger, longer-term trend is due to the rise or fall of the tide.
And unless the Mauna Loa data are fudged, there is extra CO2. The important thing is (as Tall Bloke suggests below): does it matter?
Skeptics and critics of the climate orthodoxy might be better employed in assessing whether the increased CO2 is reflected in worryingly – or even measurably – increased surface temperatures.
From May 15th to 21st CO2 went up 1.84ppm
From July 17th to 23rd CO2 went down 1.34ppm
ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_weekly_mlo.txt
How do you explain short-term (7-day) fluctuations like this? Do all the trees hold their breath for a week and then exhale all at once?
It would be interesting to plot those fluctuations against wind direction. My WAG is that when the sea surface is cold, less airborne co2 will be blown in off the ocean. How localised is the effect? Dunno.
The daily data I found for Antarctica shows 2ppm fluctuations in a day. I suspect temperature isn’t that different day to day there.
Tallbloke,
On this site:
http://www.carboeurope.org/education/schoolsweb.php
you can plot and download some CO2 and wind speed (and other) data.
And if you choose station/school StellingWerf College, Oosterwolde and
-start date ~12/2008
-end date ~1/2011
You will see nice seasonal CO2 variation and DECREASING CO2 concentration!
Do you know how to plot that data, or do you want me to do it for you again?
You are welcome.
http://img847.imageshack.us/img847/4835/noaaco2.gif
It’s the wind and mixing. Measured CO2 can change over 100 ppm in only a few hours.
CO2 is heavier than air – does it settle and get more concentrated when the wind drops?
I think so. There may be some other causes.
The concentration of CO2 would be altitude dependent without convective mixing, but the diffusion that works in that way is very weak and slow. Even a very weak convective and turbulent mixing counteracts that so strongly that the effect becomes insignificant in the troposphere and rather weak in the stratosphere.
This 1985 paper describes one set of measurements up to an altitude of 35 km.
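To put numbers on why settling would matter only in a purely diffusive atmosphere, here is a barometric scale-height sketch. The 260 K is an assumed mean tropospheric temperature, chosen only for illustration:

```python
# Barometric scale height H = R*T/(M*g) for each gas separately.
# T = 260 K is an assumed mean tropospheric temperature.
R_GAS, g, T = 8.314, 9.81, 260.0
M_AIR, M_CO2 = 0.029, 0.044    # kg/mol

H_air = R_GAS * T / (M_AIR * g)   # ~7.6 km
H_co2 = R_GAS * T / (M_CO2 * g)   # ~5.0 km; CO2 would pool lower down
```

In diffusive equilibrium CO2 would fall off with a noticeably shorter scale height than air, but as Pekka notes, even weak convective and turbulent mixing overwhelms this separation throughout the troposphere.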
I cannot believe that people want to tackle this extremely complex and hitherto poorly understood problem with primitive equilibrium models, Henry’s laws and “boxes” with time constants.
Consider just this paper: http://www.ess.uci.edu/~jranders/Paperpdfs/2001ScienceBehrenfeld.pdf.
We learn there that the NPP (Net Primary Production of oceanic and terrestrial plants) is around 110 billion tons of carbon per year, which is about 14 times the CO2 emissions.
We learn further that the NPP is not only highly variable, by amounts roughly equivalent to the CO2 emissions, but also sensitive to oceanic oscillations.
We have here nothing that “averages out” and this is no simple physics with Henry’s law.
About half of it is phytoplankton. This is a huge living carbon storage governed by biology and metabolism, depending on parameters like light availability (e.g. cloudiness), nutrient availability (e.g. oceanic currents), salinity and temperature.
This can’t absolutely be treated by a “box” with one time constant.
It is a huge stock-plus-flow, poorly understood system with no conservative parameter, chaotically fluctuating on large time scales (biennial, decadal and more) at orders of magnitude equivalent to the variable we are talking about.
Of course one must take this paper with a huge grain of salt. The observations cover only a short time period and the models don’t agree precisely with each other.
My purpose is merely to tell to interested readers that the CO2 fluxes and their dynamics especially for larger (decadal and multidecadal) time scales are very far from naive 2 or 3 “independent box” models each described by one time constant.
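For concreteness, the kind of “one box, one time constant” description being criticized here reduces to a single exponential decay of an atmospheric excess. A minimal sketch, with all numbers illustrative assumptions rather than fitted values:

```python
# The simplest "one box, one time constant" model: an atmospheric excess
# over equilibrium decaying as a single exponential with time constant tau.
# All numbers below are illustrative assumptions, not fitted values.
import math

def one_box_excess(excess0_gtc, tau_years, t_years):
    """Excess atmospheric carbon (GtC) remaining after t_years."""
    return excess0_gtc * math.exp(-t_years / tau_years)

# e.g. a 200 GtC excess with an assumed tau of 50 years
for t in (0, 50, 100, 200):
    print(t, round(one_box_excess(200.0, 50.0, t), 1))
```

A model of this form has no room for the variable, biology-driven fluxes described in the comment above; that is precisely the objection being made.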
I agree with Tomas Milanovitch’s assessment of the situation.
http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104528
In any case, before we get worried about the residence time of co2 in the atmosphere, we need to work out whether changes in co2 levels are worrying anyway.
The magnitude of the total energy contained in the ocean is such that minor fluctuations in radiative balance between the components of the longwave flux are pretty much irrelevant except on extremely long time scales.
http://judithcurry.com/2011/08/19/planetary-energy-balance/#comment-104533
The problem with your argument is that the biological carbon storage of oceans is actually very small, only of the order of 0.5% of the carbon of the atmosphere and almost negligible compared to the inorganic carbon of surface ocean. Because it’s so small, it cannot contribute to the changes in atmospheric CO2.
The biologic processes that remove carbon from the surface ocean and move it down to deeper ocean are important, but the uncertainty in the net transfer is not so large that it could change the overall conclusions.
The biosphere of land areas is much larger and interacts strongly with the carbon in soil. How much these reservoirs may have changed can be estimated, and limits obtained such that the conclusions cannot differ much from the mainstream view. On the annual level the variations are large, but over longer periods they cannot be more than a small fraction of the increase in atmospheric carbon.
Pekka you are being silly now.
The variation of the yearly fluxes is of the order of the total CO2 emissions and the storage is 15 times that amount.
You are using in the other direction another silly argument which says that 2 ppm emission per year is so small compared to the carbon in the atmosphere that it can’t possibly matter.
I am sure that you agree that the second argument is silly so now you only need to realize that the first is equally so.
I hope you DO realise that a process that removes from the atmosphere 15 times as much CO2 as you put into it, and is highly variable, will definitely have an influence on the RATE at which the CO2 concentration in the atmosphere changes.
And as for how variable it is, just read the paper I linked. It is not so long.
The continuous exchange has a large volume, but over longer periods it cannot move more than the reservoirs can take. The ocean’s 3 GtC of biological carbon is really small: its total amount is recirculated in less than a month. Thus it cannot have much influence even on the seasonal level, and really no effect on annual or longer periods.
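The “less than a month” figure is plain stock-over-flux arithmetic. A quick check using the approximate figures quoted in this thread (~3 GtC standing marine biomass, ~50 GtC/yr marine production, roughly half of the ~105 GtC/yr global NPP); treat both numbers as assumptions:

```python
# Turnover time of a reservoir = standing stock / throughput flux.
# Figures are the approximate values quoted in this thread, not measurements.
marine_biomass_gtc = 3.0     # standing stock of ocean biota (GtC)
marine_npp_gtc_yr = 50.0     # marine net primary production (GtC/yr)

turnover_years = marine_biomass_gtc / marine_npp_gtc_yr
turnover_days = turnover_years * 365.0
print(f"marine biomass turnover: {turnover_days:.0f} days")  # ~22 days
```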
The variations in net uptake by the continental biosphere and soil appear to be the main reason for year-to-year variability, but that variability cannot continue strongly in the same direction for long, because the related reservoirs cannot change that much.
I may have a different opinion on, who is silly and leaves essential factors out of consideration.
“The ocean’s 3 GtC of biological carbon” “Its total amount is recirculated in less than a month.”
I would be interested in reading your references if you happen to have them handy.
This is interesting. The 105 petagrams is 105 gigatons of carbon, or about 315 gigatons equivalent carbon dioxide reduction due to photosynthesis per year. The oceans release a net 50 gigatons of carbon per year. So this is headed to the downwelling debate issue, where it’s the net that counts. :)
Plankton savor the
Sultry ultry-violet.
Layers of turtles.
=========
The uv and shorter wavelength impact on the deeper ocean is interesting. But that small percentage, tail if you will, is evidently not as popular as other tails.
Pekka Pirilä
You write to Tomas Milancovic:
It is correct that the carbon sink (storage) in the oceans is estimated to be only a fraction of that on land (including vegetation and soils), but the rate of CO2 absorption (which impacts the global carbon balance and hence the changes in atmospheric CO2) is roughly equal.
See:
http://www.pnas.org/content/100/17/9647.full
http://www.sciencemag.org/content/281/5374/237.full
Max
Max,
Your quotes confirm what I have written.
It might confirm what you say. It rather depends on how quickly it is incorporated into the food chain. The carbon is still locked in biomass regardless of where on the food chain it lies.
Pekka
Right.
It basically confirms that photosynthesis (terrestrial and marine) removes roughly 15 times the annual emissions from humans.
Max
But it also tells that this cannot affect the CO2 content of the atmosphere nearly as much. In particular the amount of carbon in the marine life cannot vary much, because it’s always so small that there’s almost nothing to vary.
How can it be that the gross rates are brought up time after time, although only net rates matter, and net rates cannot change without corresponding changes in the amount of stored carbon?
Understanding this cannot be so difficult.
Pekka
Gross rates?
Net rates?
What counts is the rate of CO2 conversion on an annual basis (or over any other multi-seasonal time scale).
This rate of conversion is estimated to be roughly 15 times the rate of human emissions over the same time frame.
If this rate of conversion shifts (as a result of higher atmospheric or oceanic CO2 concentrations or any other factor), it can become significant.
What then happens to the terrestrial plants or phytoplankton is another story. Do the phytoplankton end up in the marine food chain, in some cases re-creating CO2 which is absorbed in the ocean’s buffering process or released to the atmosphere or in other cases going into sea shells which end up eventually sinking to the ocean bottom?
Who knows?
I’m afraid that the unknowns here exceed the knowns.
Max
I can only repeat:
It cannot be so difficult to understand.
There may be a climatic impact through UV interaction with stratospheric ozone. There may be a climatic impact through UV interaction with oceanic phytoplankton. If both exist, might not they interact in concert with the UV variability?
Hi Pekka,
You are in my world now. The sequestration of carbon in oceans is by two processes – the so called biological and chemical pumps. http://earthguide.ucsd.edu/virtualmuseum/climatechange1/06_3.shtml
The chemical pump is rate limited by the supply of calcium from rock weathering – which is influenced by temperature, groundwater moisture, carbonic acid in rain and mechanical breakdown by roots and fungi. There is no compelling reason to suggest that this is static.
The biological pump is limited by macro- and micro-nutrients – including (important for carbon sequestration) silicate for diatoms and carbonate for coccolithophores. The ecosystem is light limited – reliant on phytoplankton for primary production as the basis of the food chain. The difference between this soup of micro-organisms on the surface of oceans and the terrestrial plants can be explained in terms of life cycle. Some terrestrial plants can live for a thousand years – when they die, most of the carbon is returned to the atmosphere through the respiration of microscopic heterotrophs. The life cycle of oceanic plankton is a matter of days – whereby some sinks into the depths, where it forms thick organic, silicate and carbonate layers at the ocean bottom. The micro-organisms with silicate and carbonate shells sink relatively quickly.
The abundance of these organisms varies especially as nutrients are returned to the surface in deep ocean upwelling. The major area for this is in the region of the Humboldt Current off the coast of South America.
Upwelling does change considerably – firstly in the ENSO cycle leading to short term booms and busts in oceanic and terrestrial biology but also in decadal, centennial and millennial timescales that we know of. The well known Pacific Decadal Oscillation involves decadal changes in upwelling in the Pacific north east. The frigid and nutrient rich upwelling is super-saturated with carbon dioxide – but also leads to blooms in carbon fixing organisms.
All in all – I suspect that an assumption that these systems don’t change over time cannot be substantiated.
Rob,
While I’m certainly not an expert in your field, I’m aware of what you describe, and I have tried to formulate my statements so that they are valid over a wide range of variability and uncertainties of the type you describe.
Hi Pekka,
It is a pleasure to hear from you. I hope you are well and happy and take care to stay so.
Cheers
Hi Rob,
I’ll try to maintain my physical and mental health better next week, which may lead to little or no contribution to these discussions over that period.
I hope that you’ll also take care.
Chief,
I believe that when you read up on what limnologists are doing you will find a new dimension to the carbon sink question.
Dang, Chief, I wanted to put my question here about the possible interactive climate effect of UV on oceanic phytoplankton and stratospheric ozone.
========
kim, Right on.
Hi Kim,
The micro-organisms release dimethyl sulphate into the atmosphere – and so provide their own sunscreen and are thus impervious to UV. Then it becomes very cloudy and rains a lot, blocking IR and causing the stratospheric ozone to cool, in turn inducing a runaway cooling effect with NAO and PDO feedbacks and the inception of the next glacial – due in about a week. Don’t forget to rug up.
Cheers
Do you mean Dimethyl Sulphide?
BTW, this has a very short residence time.
Pekka
That is a large assumption you are making.
Thank you for your reply above
When I read the 15x figure, I knew it would cause angst
Depends on the type of phytoplankton.
“Until now, it was thought that all the photosynthetic algae and bacteria living in the ocean drew carbon dioxide out of the air and used it to build sugars and other carbon-rich molecules to use as fuel. But two new studies by researchers at Stanford and the Carnegie Institution show that Synechococcus, a type of cyanobacteria (formerly called blue-green algae) that dominates much of the world’s oceans, has evolved a mechanism that short-circuits photosynthetic carbon-dioxide fixation while still producing energy. The alternate approach is found in regions of the ocean where some of the ingredients necessary for traditional photosynthesis are in short supply.
“The amount of carbon dioxide being drawn down by the phytoplankton in nutrient-poor oceans might turn out to be significantly lower than we thought,” said Shaun Bailey, a postdoctoral researcher working in the Carnegie Institution’s Department of Plant Biology with Arthur Grossman, a staff scientist at the institution and a professor, by courtesy, in Stanford’s Biology Department.”
http://news.stanford.edu/news/2008/april2/plant-040208.html
Thanks Tom, a good starting point for actually discussing the issues, except that Chief has already tried to bring in the stocks-and-flows model with variable rates, but it appears that some do not want to discuss these issues. One you left off your list is the acidic effect of releasing nutrients, which tends to increase the rate-limiting variable, for a net effect of increased flux; that would indicate that a static constant for the fat-tailed Bern model tends to overestimate the residence time. And being fat-tailed means that small changes can greatly increase or decrease the residence time. Nor do some seem to appreciate that such stiff equations, in a model well past our ability to assume or measure, mean that such claims are speculative, neither verified nor validated.
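For readers unfamiliar with it, the fat-tailed Bern response mentioned above is usually written as a constant plus a sum of exponentials. A sketch using the coefficient set commonly quoted for the AR4-era Bern2.5CC model (a0 = 0.217; ai = 0.259, 0.338, 0.186; taus = 172.9, 18.51, 1.186 yr) – treat these values as assumptions to be checked against the original source:

```python
# Bern-style impulse response: the fraction of an emitted CO2 pulse still
# airborne after t years, modeled as a constant plus three exponentials.
# Coefficients are the commonly quoted AR4 Bern2.5CC values; treat them
# as assumptions, not as authoritative.
import math

A0 = 0.217
A = (0.259, 0.338, 0.186)
TAU = (172.9, 18.51, 1.186)   # years

def airborne_fraction(t_years):
    """Fraction of a CO2 pulse remaining airborne after t_years."""
    return A0 + sum(a * math.exp(-t_years / tau) for a, tau in zip(A, TAU))

for t in (1, 10, 100, 1000):
    print(t, round(airborne_fraction(t), 3))
# The constant term A0 is the "fat tail": ~22% of a pulse never decays in
# this parameterization, no matter what the shorter time constants do.
```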
Here is Railsback’s take on residence time. It is certainly interesting.
http://www.gly.uga.edu/railsback/Fundamentals/AtmosphereCompV.jpg
The range of 2-10 years is reasonable considering the unknowns involved.
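Note that the 2-10 year figure is a molecular residence (turnover) time: atmospheric stock divided by gross exchange flux. That is a different quantity from the adjustment time of an excess, which is governed by the much smaller net fluxes. A quick check using round numbers consistent with this thread (~800 GtC airborne, ~150 GtC/yr gross uptake; assumptions for the arithmetic, not measurements):

```python
# Residence time of an individual CO2 molecule = atmospheric stock /
# gross outflux. Round numbers are assumed for illustration only.
atmos_stock_gtc = 800.0        # carbon in the atmosphere (GtC)
gross_uptake_gtc_yr = 150.0    # gross uptake by land + ocean (GtC/yr)

residence_years = atmos_stock_gtc / gross_uptake_gtc_yr
print(f"molecular residence time: {residence_years:.1f} years")  # ~5.3
# This sits inside the 2-10 yr range in the chart, yet says nothing about
# how long an *excess* concentration persists, which depends on net fluxes.
```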
This thread is better described as one discussing the current understanding of CO2 lifetime in the atmosphere. Salby’s paper, while dismissed by many, has not actually been read. Until it is available, it is not reasonable to claim this is resolved.
Even Phil Jones of CRUgate was forced to admit that there has been no significant global warming since 1995. After all of the shenanigans, involving data corruption and data gone missing, the global warming house of cards collapsed in the UK.
We then learned that the raw data for New Zealand had been manipulated; and, NASA’s data is the next CRUgate: satellite data shows that all of the land-based data is corrupted by the Urban Heat Island effect.
Manipulation of the data is so bad that the recent discovery concerning a weather station in the Antarctic where the temperature readings were actually changed from minus signs to a plus signs to show global warming almost comes as no surprise.
And then, there was a peer-reviewed study showing the ‘tarmac effect’ of land-based data in France where only thermometers at airports–in the winter–showed any warming over the last 50 years. Since then, the problem of data corruption due to continual snow removal during the winter at airports where thermometers are located–while all of the surrounding countryside is blanketed in snow–has been shown to extend far beyond the example in France (e.g., Russia, Alaska).
In reality, there essentially has been no significant global warming in the US since the 1940s. The only warming that can be ferreted out of the temperature records is in the coldest and most inhospitable regions on Earth, such as in the dry air of the Arctic or Siberia where going from a -50 °C to a -40 °C at one small spot on the globe is extrapolated across tens of thousands of miles and then branded as global warming.
Warming before 1940 accounts for 70% of the warming that took place after the Little Ice Age ended in 1850. However, only 15% of the greenhouse gases that global warming alarmists ascribe to human emissions came before 1940. Obviously, the cause of global warming both before and after 1940 is the same: solar activity during that period was inordinately high. It’s the sun, stupid. Now we are in a period where the sun is anomalously quiet; and now we are in a period of global cooling, and have been for almost a decade.
And what about the measurement of atmospheric CO2? We learned that the CO2 readings are based on measurements taken on the site of an active volcano (Mauna Loa) and have been completely fabricated out of whole cloth by a father and son team who have turned data manipulation into a cottage industry for years. (e.g., “Time to Revisit Falsified Science of CO2.” by Dr. Timothy Ball)
Wagathon,
While there is good reason to question the amounts of CO2 in the atmosphere and the accuracy with which they are measured, and even to question the consensus view regarding long-term CO2 residence times, I think attributing motives to the Mauna Loa group is out of place.
Additionally, CO2 is measured at several points and these results seem to concur with Mauna Loa.
Skeptics do not need to argue against the GHG/Tyndall theory to point out that the idea of a climate crisis caused by CO2 fails.
The lack of crisis – the lack of any meaningful trend lines in climate events – makes that argument.
The juicy bits – climategate, Mann’s ‘fudge factor’ hockey stick, the lack of OA, the obvious political bias of many AGW promoters, the utter lack of any actual mitigation policy/technology, the profiteering, etc. etc. etc. – are all icing on the cake.
If Salby’s paper, when finally available for actual review, holds up, then great.
But that will not change the basic point, whether or not Salby’s paper survives: the world is not facing a climate crisis, and additionally the AGW community has offered nothing in the realm of reality to mitigate this supposed crisis in the first place.
hunter
Thanks for a very concise summary of the situation.
Max
Do you appreciate by how many parts per million the atmospheric CO2 levels at Mauna Loa can vary–in a single day?
Wagathon,
Yes, but if it trends up or down the daily fluctuations are not important.
Last time I checked, the other CO2 stations around the world corroborate the trend, but that has been awhile.
Now that is not directly addressing the change in CO2 measurement regimes from the 19th century to the present. That may be worth revisiting.
And the persistent underlying problem of data-source credibility that you bring up is also important.
But think on this: even if the climatocracy is getting it wrong, and even more are indulging in the noble-cause corruption that climategate showed, what have they actually demonstrated?
Nothing that is actually out of historical ranges of climate.
If there is systemic book cooking in the AGW promotion industry, it will be found out over time. Climategate certainly gives us some pretty strong hints. But be patient. It will come out in the end.
But even before then, AGW still fails based on simply critically reviewing what is claimed.
Facts are facts. “Carbon dioxide is 0.000383 of our atmosphere by volume (0.038 percent). Only 2.75 percent of atmospheric CO2 is anthropogenic in origin. The amount we emit is said to be up from 1 percent a decade ago. Despite the increase in emissions, the rate of change of atmospheric carbon dioxide at Mauna Loa remains the same as the long term average (plus 0.45 percent per year). We are responsible for just 0.001 percent of this atmosphere. If the atmosphere was a 100-story building, our ant