by Judith Curry
A new report from the IPCC implies that “climate exceptionalism”, the notion that global warming is a problem like no other, is coming to an end. – Economist
In spite of the spin being put on the WG2 Report by the IPCC spokespersons and other advocates for climate alarmism and mitigation, there is some thoughtful analysis going on, and I’ve selected articles from The Economist and The Atlantic for discussion.
The Economist
The Economist has an article entitled Climate Change: In the Balance. The article discusses the WG2 Report. Excerpts:
The report describes three different sorts of problem. The first are those in which climate is the dominant influence, so that no human action other than stopping it changing will have an effect. The second are those in which the climate’s influence is modest and where the news is not entirely bad. The third are the ways a changing climate alters which species (both natural and agricultural) thrive where—which from a human perspective can be both good and bad.
Rising sea levels are an example of the first sort of problem. Another example of a problem of the first sort is ocean acidification.
The second sort of problem, in which the climate’s influence is more modest and manageable, includes its effects on health. By and large, the report says, the bad impacts will outweigh the good, but in neither case is climate the dominant influence on mortality or morbidity. Public health and nutrition matter more. Malaria cannot spread if it has been exterminated.
The third category, the way a changing climate alters species’ ranges, is in some ways the most intriguing. To the surprise of a lot of conservationists, for example, global warming does not seem to have caused many extinctions. Roughly half of studies of likely cereal yields over the next ten years forecast an increase, whereas the other half forecast a decline. Forecasts for the 2030s are even more sobering: twice as many predict a fall as a rise.
Dividing up the effects of climate change in this fashion leads to different ideas about how to respond. Defending low-lying cities against a rising sea level is difficult and expensive, and it is impossible to adapt to ocean acidification. These problems would best be dealt with (if at all) by attacking the cause: ie, by cutting carbon-dioxide emissions.
Problems in the second category, however, can be approached in other ways. As the report itself says, “the most effective vulnerability reduction measures for health…are programmes that implement and improve basic public health [like] the provision of clean water.” Such measures would be beneficial even if there were no climate change.
The third category lies somewhere in between. It requires measures that should be undertaken anyway, but need to be tweaked because of the climate. Farmers are always trying out new crop varieties, but increasingly those varieties will have to be drought-resistant. That may mean choosing between different aims, for there is often a trade-off between drought resistance and yield.
This way of looking at the climate is new for both scientists and policymakers. Until now, many of them have thought of the climate as a problem like no other; and best dealt with by trying to stop it (by reducing greenhouse-gas emissions). The new report breaks with this approach. It sees the climate as one problem among many, the severity of which is often determined by its interaction with those other problems. And the right policies frequently try to lessen the burden—to adapt to change, rather than attempting to stop it. In that respect, then, this report marks the end of climate exceptionalism and the beginning of realism.
The Atlantic
The Atlantic has an article The UN’s new focus: surviving, not stopping climate change. Excerpts:
But in a 2007 article for Nature, a team of academics gave three reasons for why the “taboo on adaptation” was gradually disappearing:
1. The “timescale mismatch”: Even if world leaders take decisive action to cut emissions (a big “if”), it won’t have an impact on the climate for decades, and greenhouse-gas concentrations will continue to increase in the meantime.
2. The emissions fallacy: People are vulnerable to the climate for reasons other than greenhouse-gas emissions, including factors like socioeconomic inequality and rapid population growth along coasts.
3. The demands of developing countries: While wealthy countries account for most greenhouse-gas emissions, poor countries suffer the most damage from climate change. And these developing countries want the international community to help them become less vulnerable to the extreme climatic events they’re facing now, rather than arguing over emission targets that will theoretically protect them in the future.
The IPCC’s early climate reports in the 1990s barely mentioned climate-change adaptation. But that changed in the panel’s 2001 edition, which noted that “adaptation is a necessary strategy at all scales to complement climate mitigation efforts.” The IPCC spent two pages discussing “adaptation options” in its 2007 study, and this week has devoted more than four chapters to the strategy, including a graph that shows our ability to adapt to climate change in three eras: the present; the near-future we’ve committed ourselves to based on current emissions; and the distant future we still have the capacity to shape.
This shift is a positive development for two reasons. First, adaptation measures are less politicized than mitigation measures. People may not agree on the science of climate change, but uncertainty about the future is no excuse for failing to prepare for the worst. “The dam of orthodoxy is cracking,” Simon Jenkins wrote in The Guardian on Monday. “If Rome is burning, there is no point in endlessly retuning Nero’s fiddle.”
Second, preparing for the worst actually presents major opportunities for the private sector and local governments. In its report this week, the IPCC is indeed calling for action—but not in the form of grand international declarations or promises. “Among the many actors and roles associated with successful adaptation, the evidence increasingly suggests two to be critical to progress; namely those associated with local government and those with the private sector,” the report states. The implicit message: Citizens should stop waiting for world leaders to legislate climate change away—because that can’t be done. Instead, individuals and communities need to show entrepreneurial initiative and figure out how best to survive in an increasingly volatile climate.
JC reflections
The ‘end of climate exceptionalism’ was first articulated (as far as I can tell) in this article by Andrew Lilico.
I find the Economist’s separation of impacts into three categories to be very interesting, although I’m not sure exactly how I would carve out categories 2 and 3. Category 1 is the only category where CO2 mitigation in principle makes sense – the timescales of the impacts are slow and long, with unambiguous losers (and no winners). However, our understanding of ocean acidification and its impacts is in its infancy (see this previous post). Anthropogenic sea level rise is an unknown fraction of total global sea level rise (some question whether there is evidence of anthropogenic acceleration); more significantly, sea level rise from climate change is only a fraction of the total in many vulnerable locations, with geologic subsidence and land use (especially ground water extraction) dominating.
For CO2 mitigation to make sense given the uncertainties in sea level and acidification impacts, it would need to benefit the whole host of potential adverse impacts from extreme weather events, public health, species extinction, etc. Attribution of these adverse impacts to AGW is weak, with both winners and losers, and a large number of confounding factors. The WG2 Report instead focuses on adaptation measures in concert with other natural and societal factors: measures that we can implement NOW for benefits that we can receive NOW, whether climate variability/change is anthropogenic or natural.
I like this statement from The Atlantic:
First, adaptation measures are less politicized than mitigation measures. People may not agree on the science of climate change, but uncertainty about the future is no excuse for failing to prepare for the worst.
The question then becomes NOT what is causing climate change or how we can prevent it, but rather: How much resilience can we afford? I suspect that Andrew Lilico may be right in that the WG2 report reflects a fundamental change in the climate change debate.
A reminder that the WG3 Report on mitigation is forthcoming. As far as I know, there haven’t been any leaks on this one yet, but I am inferring from some Twitter comments that the costs of mitigation may be higher than previously estimated. Stay tuned.
IMO, this is a much healthier and more realistic place for the public debate regarding climate change and what to do about it.
‘However, our understanding of ocean acidification and its impacts is in its infancy’
…and is likely to remain so until at least the basic terminology is sorted out into something rigorous.
The ocean is not ‘acidifying’. It is ‘neutralising’.
The tiny changes involved tend towards making its chemical characteristics more like pure water than its current slightly alkaline state.
Amen! But “acidification” is much more scary. It conjures thoughts of going swimming in H2SO4.
The ocean is acidifying. If it sounds scary, it probably does scare most people who understand the implications. The whole “neutralization” meme is a straw man.
Of course, wild-eyed schemes to deal with the problem of fossil carbon by raising the price of energy scare me a lot more, and probably most sensible people.
@AK
No strawman here. Unless somebody’s managed to change the fundamentals of chemistry in the last few years. Did I miss it?
Simple fact is that the oceans are alkaline (pH>7), and even if all the ‘free’ carbon in the world were burnt and turned into CO2 they will remain so. There just isn’t enough carbon and carbonic acid is too weak to ever make the oceans acidic.
By contrast, much ‘fresh’ water *is* acidic. And it’s amazing how much life thrives in both environments…and can survive in both.
@Latimer Alder…
Chemistry based on “fresh” water doesn’t apply to water with as many ions as sea water. Even with “fresh” water, a pH of 7 is just an arbitrary line. Acidity is a measure of how many H+ ions there are, and any time something increases the concentration of H+ ions it’s acidification. Even salty water with a pH of 10 or 11 is an acid. Just not a very strong one.
@Ak
And at pH 8 there are 100 times more hydroxyl (alkaline) ions than acidic. The solution is alkaline overall and shows alkaline chemistry. At pH 9 the ratio is 10,000:1 and at pH 10 it is 1,000,000:1.
pH=7 is not at all an ‘arbitrary’ line. It represents the point where the concentration of hydroxyl ions is equal to that of the acidic ions. The two balance out and the solution shows neither acid nor alkaline characteristics. We call this ‘neutral’, as in pure water.
Below pH7 the acidic ions are in the majority and the solution displays acidic chemistry (as in some fresh water, ‘acid rain’ etc).
The oceans are usually somewhere between pH 7.7 and pH 8.2.
This is not hard stuff. In the UK we certainly understood it at Chemistry ‘O’ level. I doubt it’s changed since I passed that examination in 1971.
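The hydroxide-to-hydrogen-ion ratios quoted above fall straight out of the water ion product. A quick sketch, assuming the pure-water value Kw = 1e-14 at 25 °C (seawater’s effective Kw differs somewhat, so treat the numbers as illustrative):

```python
# Water ion product at 25 C (pure-water value; an assumption here,
# since seawater's effective Kw is somewhat different).
KW = 1e-14

def oh_to_h_ratio(pH):
    """Ratio of hydroxide to hydrogen-ion concentration at a given pH."""
    h = 10.0 ** -pH   # [H+] in mol/L
    oh = KW / h       # [OH-] in mol/L, from Kw = [H+][OH-]
    return oh / h

for pH in (7, 8, 9, 10):
    print(f"pH {pH}: [OH-]/[H+] = {oh_to_h_ratio(pH):,.0f}")
# pH 7 gives 1 (neutral), pH 8 gives 100, pH 9 gives 10,000, pH 10 gives 1,000,000
```

The same arithmetic gives the “10 times as many H+ ions at pH 7 as at pH 8” figure below: each whole pH unit is a factor of ten in [H+].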
“Acidity is a measure of how many H+ ions there are…”
Latimer understands that very well. There are approximately 10 times as many H+ ions at pH 7 as there are at pH 8. And CO2 is not changing the other ions in seawater.
Low internal pH is common to both eukaryotic (that includes you and me) and prokaryotic cells. That is, it is more acidic inside the cell than it is outside.
@Latimer Alder…
The ratio doesn’t matter for most biochemistry. All that matters is the concentration of H+ (or OH-) relative to what specific enzymes respond to. That’s why “acidification” is the correct term.
Less alkaline has the same number of letters as acidification but it conveys more information. Less basic has fewer letters and conveys more information as well.
The problem is that global warming science is in the business of providing less information and more FUD so acidification gets the nod. It isn’t complicated.
“Less alkaline” and “less basic” don’t convey any more chemical information than “acidification”. However, they carry a false message of security.
If people are really worried about public misunderstanding of “acidification”, why don’t they stress that the vinegar most people toss on their salads is also an acid, so it isn’t really that big a deal? Rather than pushing what amounts to an implied lie: that more CO2 is making the oceans safer.
Of course they are more descriptive. Are you stuipid? Less alkaline or less basic implies pH >7 that has moved closer to 7. Acidify could be any reduction in pH.
So could “less alkaline”. Just like “acidify”, it applies to the entire range: a solution with a pH of 1 is less alkaline than one with a pH of 2.
And speaking of “Are you stuipid?”,
I hope you’re just playing “stuipid”, but I’m really beginning to wonder. And if you are (just playing), that’s a very deceptive form of argument. Totally dishonest, IMO.
@michael hart…
Anybody who knows anything about biochemistry knows that a variety of trans-membrane enzymes and receptors depend on specific differences between internal and external concentrations.
It’s sheer denial to insist that because more acid conditions exist somewhere, acidification of the external environment won’t have any effect.
AK, people have looked at the gene expression in E. Huxlei at different concentrations of CO2. The only difference they found was that carbonic anhydrase expression went DOWN.
This suggests that increased CO2 made the environment more benign so the cells had to do LESS work maintaining the pH difference between the interior and the environment, consistent with what you would expect from my comment above.
@michael hart…
So what? They looked at the response of one strain of bacterium, and from that you decide that acidification doesn’t matter? How foolish!
Even if it makes things easier for one strain, that doesn’t preclude a destructive effect. What if that strain runs wild because “the cells had to do LESS work maintaining the pH difference between the interior and the environment”? While that’s probably very unlikely, in general changing that pH will change the relative fitness of many, probably most, species in any ecosystem. Not only that, but as newly successful species expand their population and/or range, they will undergo other evolutionary changes allowing them to invade niches they formerly couldn’t compete for.
Overall, a change to any general factor such as pH, pOH, pNa+, or pSO4, will tend to cause ecosystems to re-balance, often suddenly. While the overall biomass might remain unchanged (improbable), the populations of (almost all) species humans depend on will have a much greater chance of declining than expanding. (Because any species humans depend on has a very large population. Else it couldn’t be very useful.)
AK | April 4, 2014 at 1:18 pm |
“So what? They looked at the response of one strain of bacterium, and from that you decide that acidification doesn’t matter? How foolish!”
So provide an example of a negative response, fercrisakes butthead.
I hadn’t realized you indulged in such blatant straw man arguments. As an example of what I actually mentioned, kudzu.
More generally, why should I go to the trouble to dig up peer-reviewed research, and review it myself to assure that it’s actually relevant and makes proper use of its refs, just so some ignorant f00l can dismiss it as “pal reviewed”?
There’s only one instance of kudzu on this page. And we were talking about ocean acidification. The beneficial example given was a bacterium, Emiliania huxleyi, a photosynthetic plankton. You come back with kudzu, a pea plant that has jack schit to do with the ocean becoming a tiny bit less basic. Are you playing with a full deck, AK?
Getting transplanted to the new world was beneficial for kudzu. As for the rest of the environment…
More straw men.
Awfully trusting, aren’t you, when you call me “stuipid”? As it happens Emiliania huxleyi is a Coccolithophore, “an armour-plated, photosynthesising single-celled” Eukaryote. IMO my mistake was (marginally) justifiable: I assumed (and we both know what that does) that “E. Huxlei” referred to Escherichia, as in E. coli. But you were looking things up; why did you follow suit?
AK-
Help me understand your concern about ocean acidification. From what I have read, the pH in the ocean at any specific location can vary more over the course of a month than it would be impacted by AGW in years. Doesn’t this seem to indicate that the ocean can easily adapt to the long-term pH change?
@AK
You’re sure “acidification” is the “correct” term, eh?
Please sign my petition to ban hydroxylic acid. Ingesting it kills thousands every year, and it’s found in every municipal water supply in the U.S.! Something must be done! But it’s a powerful industrial solvent, and, so far, giant corporations have managed to convince Congress to take no action!
Latimer, you can adapt to the language in use or you can leave the pub.
It’s not the pH that really matters anyway; it’s the solubility of calcium carbonate, which goes up markedly with “ocean neutralization”.
The tiny changes can be significant; that’s an active area of research.
@Rob Starkey…
The same concern as with “global warming” and, most important IMO, simple increases in pCO2: ecosystem destabilization. In the case of the ocean, humankind gets most of its seafood by simply harvesting high-population species from the native, “wild” ecosystems.
If a sudden ecosystem reorganization takes place, the identity of species that are “high-population” is highly likely to change. There might well be replacement species, but it could well be necessary for humans to engineer some new way to harvest, process, and store them. Or they might be too poisonous to be practical. In any event, such sudden changes would impact our economy.
Now, most of the variation you refer to is either very local, or seasonal. Expanding the local adaptations to cover a larger fraction of the ocean would almost certainly result in sudden, substantial, eco-reorganization. As for ecosystems adapted to such seasonal changes, many species will be adapted to using those changes as signals for season-appropriate behavior. Such behavior is often associated with substantial changes to populations of plankton (phyto- and zoo-). By causing such changes to be touched off at inappropriate times, such global changes have a significant chance of also causing a sudden, substantial, eco-reorganization.
As I said above, the thought of trying to “solve” the fossil carbon problem with “solutions” that substantially raise the price of energy seem like a much greater risk to me. But that doesn’t mean the risk should be ignored. It should, IMO, be addressed with low-regrets options such as greater incentives for research, especially into profitable industrial methods that can remove CO2 from the atmosphere or ocean surface.
In fact, I’ve frequently expressed the opinion that the whole “global warming” thing is being used as a stalking horse by people pursuing an effectively socialist agenda. Not so much in trying to solve the problem at all, as in the solutions they always seem to reach for: higher energy prices that would require lots of subsidies for the needy, an almost ideal breeding ground for bureaucracies and “public service” organizations easily subverted by methods used by Lenin, among many others. Also an expanding rat’s nest of incomprehensible regulations, a similar breeding ground.
But when you deny the problem, IMO, you actually serve such people’s purpose, since it’s easily demonstrated that the problem exists, and such deniers make a useful tar with which to smear more sensible people (such as Roger Pielke Jr., or our hostess).
AK | April 4, 2014 at 4:17 pm |
“If a sudden ecosystem reorganization takes place, the identity of species that are “high-population” is highly likely to change. There might well be replacement species, but it could well be necessary for humans to engineer some new way to harvest, process, and store them. Or they might be too poisonous to be practical. In any event, such sudden changes would impact our economy.”
Or there could be a population boom in desirable species, making harvest easier and seafood less costly.
You just don’t know. What you are doing is handwaving and making up just-so stories for which you have no evidence, not unlike a parent telling a child about the bogeyman so they stay snugly wrapped up in bed at night.
You don’t really understand this or how you come across to others, do you?
Nobody knows. However, as I pointed out, economically significant human usage of marine resources requires populations that are already high. Thus, the chance that they’ll get reduced is much larger than the chance that they’ll get even bigger.
And, btw, it’s not a “just so story”. Not until it’s so, anyway, and by then it’ll be too late. If it happens, and I agree the chance of any particular scenario happening is pretty small. Which is why I support only “low-regrets” efforts to address the risk.
What? To deniers who’ll take any opportunity to rationalize sticking their heads in the sand? Who cares? They won’t listen anyway.
I know how averse you are to actual topical observations getting in the way of your opinions so I beg forgiveness in advance.
http://link.springer.com/article/10.1007/BF00334344
Coral growth in high-nutrient, low-pH seawater: a case study of corals cultured at the Waikiki Aquarium, Honolulu, Hawaii
Abstract
Fifty-seven species of hermatypic corals have been maintained and grown in high-nutrient seawater at the Waikiki Aquarium, Honolulu, Hawaii. In this study we document the chemical conditions of aquarium water in terms of dissolved nutrients and carbon. Aquarium water is characterized by concentrations of inorganic nutrients that are high relative to most natural reef ecosystems: SiO3∼200 μM; PO4∼0.6 μM; NO3∼5 μM; NH4∼2 μM. In contrast, concentrations of organic nutrients are lower than most tropical surface ocean waters: DOP ∼0.1 μM and DON ∼4 μM. The incoming well-water servicing the facility has low pH, creating over-saturation of carbon dioxide. The coral communities in aquaria took up inorganic nutrients and released organic nutrients. Rates of nutrient uptake into aquaria coral communities were similar to nutrient uptake by natural reef communities. Coral growth rates were near the upper rates reported from the field, demonstrating corals can and do flourish in relatively high-nutrient water. The growth of corals does not appear to be inhibited at concentrations of nitrogen up to 5 μM. Statements implying that corals can only grow in low nutrient oligotrophic seawater are therefore over-simplifications of processes that govern growth of these organisms. Some basic guidelines are given for maintenance of coral communities in aquaria
Riiiiight!
The article is talking about growth in aquaria, where intervention can remove destructive predators (grazers?). Not relevant to the subject at hand. Another straw man.
Oh dear another straw man. Got our panties in a wad over them now do we, dear?
bob droege
Yep.
And as more carbonate is dissolved, pH increases ever so slightly.
Ain’t ocean chemistry grand, bob?
And, as Latimer Alder has pointed out to you, the teeny weeny bit of CO2 we puny humans are able to put into the ocean by burning the paltry amount of fossil fuels left on our planet is totally insignificant when compared with the complex chemistry of the entire ocean and the vast amount of carbon already there.
But hey, if you want to worry about “ocean acidification”, go right ahead. It’s a free world and there are all kinds of imaginary hobgoblins out there for easily frightened people to worry about.
Max
“the teeny weeny bit of CO2 we puny humans are able to put into the ocean by burning the paltry amount of fossil fuels left on our planet is totally insignificant when”
Oh dear it’s the “tiny human CO2 emission” argument again. Might want to check this out.
http://upload.wikimedia.org/wikipedia/commons/1/1c/Carbon_Dioxide_400kyr.png
Max,
The more you reply to me, the more I learn that you really do not know anything about anything.
Adding CO2 to the ocean, which lowers the pH and increases the solubility of calcium carbonate, doesn’t at the same time increase the pH. Ain’t buffer chemistry grand?
The ocean chemistry isn’t all that complex: it’s pretty much a simple carbonate buffer solution, and any freshman chemistry student is taught how to do the calculations. In my experience most general chemistry students have a hard time with that, though.
Further into the details though, the increase in solubility of calcium carbonate makes it more difficult for the small animals that the ocean food chain depends on to build their little houses.
Read about the hobgoblins here
http://icesjms.oxfordjournals.org/content/65/3/414.full
they are real
Not so much the average pH, but where the saturation for calcium carbonate occurs and where it will be changing if we continue to use the oceans and the atmosphere for our dumping grounds.
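The “freshman chemistry” buffer calculation bob mentions is essentially the Henderson-Hasselbalch relation applied to the CO2/bicarbonate pair. A minimal sketch, using the freshwater pK1 of 6.35 and a representative 100:1 bicarbonate-to-dissolved-CO2 ratio (both assumptions; seawater’s effective pK1 is lower, so the absolute numbers are illustrative only):

```python
import math

PK1 = 6.35  # first dissociation constant of carbonic acid (freshwater value; an assumption)

def buffer_pH(bicarb, co2):
    """Henderson-Hasselbalch: pH = pK1 + log10([HCO3-]/[CO2])."""
    return PK1 + math.log10(bicarb / co2)

print(buffer_pH(100, 1))  # 100:1 ratio -> pH 8.35
# Doubling dissolved CO2 at fixed bicarbonate lowers pH by log10(2), about 0.30
print(buffer_pH(100, 2))  # -> pH ~8.05
```

In the real ocean the bicarbonate and carbonate pools shift as well, which is why the observed surface-ocean change since pre-industrial times is nearer 0.1 pH units; but the sign, and the buffer logic, are the same.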
@bob droege
Well thanks for the advice, old buddy, but I think I’ll just stay in the pub and continue to point out the misuse of language in this connection.
Any cynic (were one to be present) might speculate that the incorrect use of acidification rather than neutralisation is prompted by the unpleasant vision the former might conjure up of taking a dip in sulphuric, or surfing on a fuming wave of conc. nitric. For that is about the limit of the general public’s understanding of acids.
But I’m sure that such cynical thoughts would be unworthy and that you, Bob, can reassure us on that matter.
“Oh dear it’s the “tiny human CO2 emission” argument again. Might want to check this out.
http://upload.wikimedia.org/wikipedia/commons/1/1c/Carbon_Dioxide_400kyr.png”
Splicing measurements onto proxies to “hide the decline”, as it were, are we?
Latimer,
I think the problem is that we need better chemistry education of the masses.
Or to get the American Joe Sixpack to enjoy a pint of bitter rather than the sweeter-than-it-used-to-be local brew, Budweiser, not to be confused with beer.
Bob Droege said:
I think the problem is that we need better chemistry education of the masses.
That’s unrealistic. Education in basics like chemistry is going backwards in the developed (supposedly ‘enlightened’) countries. It is being replaced by studies in political correctness, media studies, ‘communication techniques’, spin, propaganda techniques and doomsayer adjectives.
It would be much better to expect the climate scientists to tell the truth, the whole truth and nothing but the truth. Stop misleading. Stop scaremongering. Stop doomsaying. Stop exaggerating the potential downside risks (consequences and probabilities) and downplaying the potential benefits. And stop lying.
Then perhaps those who are climate alarmists because of their allegiance to left-wing ideologies might follow the lead of the new morals and ethics displayed by a new, honest breed of climate establishment.
More likely they’ll abandon climate and start looking for some other excuse to impose their agenda.
And good riddance.
-That’s unrealistic. Education of basics like chemistry is going backwards in the developed (supposedly ‘enlightened’ countries). It is being replaced by studies in political correctness, media studies, ‘communication techniques’, spin, propaganda techniques and doomsayer adjectives.-
It would be a great and useful education if it were actually replaced by such studies, but instead such “studies” are pre-school level.
About the only truth to the idea of learning everything you need to know in kindergarten: http://www.peace.ca/kindergarten.htm
@bob droege
‘I think the problem is that we need better chemistry education of the masses.’
Amen to that brother!
It would be nice if a certain member of my household knew the difference between bleach, washing up liquid and carpet shampoo and had not assumed that because they are all ‘chemicals’ they can be used interchangeably! Bye bye the seals on my brand new top of the range carpet cleaner. Hello expensive repair.
She was expensively ‘educated’ a while back and, while knowing every saint’s day, her rosary, the names of all the popes and every indignity carried out by Cromwell, she knows f… all about science of any type. Science was not taught to nice Catholic girls in Dublin in the 1970s.
But this unhappy episode illustrates my point. Huge swathes of the population do not have even basic science. And that ain’t going to change anytime soon – no matter how desirable we can all agree it would be.
So responsible scientists – who wish to honestly and openly report their work to the public who (in the vast majority of cases) employ them – need to recognise that ignorance in the language they use to discuss it. The term ‘acidification’ can conjure up unhappy and scary visions in the public’s mind. The term ‘neutralisation’ does not.
Faced with that choice, and those connotations, which term would you use to discuss the issue (if there is one) with the public? Why would you make the choice you do?
@kneel
The ‘increase’ in CO2 may be large in proportion, but the absolute amounts remain very small.
Your graph shows that a while back 9,997 molecules in every 10,000 in the atmosphere were not composed of CO2. Now it is 9,996.
Please recall that the commonly used units (ppm) are parts per million. A change of 100 ppm, sufficient to cause apoplexy among the more excitable alarmists, represents just 1 part in 10,000.
If you are a football fan you can imagine there being 4 away fans at Preston North End home games (average this season 10,136) rather than 3. It really ain’t a lot.
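The parts-per-million arithmetic in this comment is easy to check. A quick sketch using the attendance figure quoted above (the football analogy is just a way of visualising parts in 10,000):

```python
def ppm_to_count(ppm, population):
    """How many members of `population` correspond to `ppm` parts per million."""
    return ppm * population / 1_000_000

attendance = 10_136  # Preston North End average home gate, as quoted above

print(ppm_to_count(280, attendance))  # pre-industrial ~280 ppm -> ~2.8 "fans"
print(ppm_to_count(400, attendance))  # ~400 ppm -> ~4.1 "fans"
print(ppm_to_count(100, 10_000))      # a 100 ppm change is 1 part in 10,000 -> 1.0
```

Rounded to whole supporters, that is the 3-versus-4 away fans in the comment.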
lolwot and bob droege
Go ahead and get all excited about the anthropogenic ocean neutralization hobgoblin.
There is a helluva lot of carbonate down there, guys.
And (in comparison) not a helluva lot of fossil fuel carbon.
Go figure.
Max
PS Thanks for the free chemistry 101 lesson, bob, but I really don’t need it, especially not from you. (Been there, done that.)
Latimer
You said;
“But this unhappy episode illustrates my point. Huge swathes of the population do not have even basic science. And that ain’t going to change anytime soon – no matter how desirable we can all agree it would be.’
Now substitute the word ‘science’ for ‘history’ and we can see how so many people, including ones on this blog, politicians and scientists, fail to put the current climatic era into a wider context.
The climate did not begin in 1979 with satellites or 1880 with GISS. History tells us of benign times, which we have been in for at least a century, and of wild and turbulent times, such as during the transition from the tranquillity of the MWP to the climatic see-saw that was the LIA.
Unfortunately, human observations of our climate and the extremes we can observe are disqualified by science, although not by those people who take a more considered view.
tonyb
Tonyb, to have a non-historic view of climate is to have no view at all. It is like determining a World Series after one ball is pitched, or a test match from one ball bowled.
If climate is not long-form, what is? I think it’s fine to dismiss paleo, past records and old docs as inadequate – obviously they aren’t all that flash – if you then cease to talk about climate. Words such as “record” and “unprecedented” have always been dodgy, but once you dismiss all points of comparison past a very recent date you should cease to use those words altogether, since they can then only serve as emotive and manipulative hype. (They usually do so anyway.)
Of course, some elements of past climate are just impossible to dismiss or ignore. One can point out time and time again the well known fact that sea level rise began in the late 1700s and hasn’t had quite the vigor since the 1860s. The response is simply to ignore the giant fact and then continue talking about the mathematical and physical intricacies of sea level rise in relation to AGW. The giant fact can go sit in the corner. If it continues to interrupt the consensus it will be sent to the special room with the MWP. That’s where naughty facts get their handles painfully straightened. Enough said.
@LA: If you are a football fan you can imagine there being 4 away fans at Preston North End home games (average this season 10,136) rather than 3. It really ain’t a lot.
So Latimer, by this reasoning you should have no objection to increasing each kg of your body weight by 100 mg of potassium cyanide, right?
Since the first 1.5 mg will kill you, you may as well be hung for a sheep as a lamb and take the whole 100 mg.
Interesting that you were allowed to graduate as a chemist. Have UK standards improved any since then?
Vaughan Pratt,
An individual single example of the Ebola virus is sufficient to kill a susceptible human being – oh hell, let’s make it ten.
You can probably work out the percentage weight relative to a one hundred kilogram person.
What is your point? In a deterministic dynamical system, an arbitrarily small change to an input may cause an unknown arbitrarily large change to the output. So what? Waffling about unknown unknowns does not help much.
Tell me something of benefit, that I don’t already know. Anything else is not terribly useful, is it?
Live well and prosper,
Mike Flynn.
@MF: In a deterministic dynamical system, an arbitrarily small change to an input may cause an unknown arbitrarily large change to the output. So what? Waffling about unknown unknowns does not help much.
Absolutely right. Your quarrel is not with me but with Latimer, who seems not to understand this.
Max.
“PS Thanks for the free chemistry 101 lesson, bob, but I really don’t need it, especially not from you. (Been there, done that.)”
You mean you flunked chemistry before?
Cause now you seem to be confused as to whether we are adding an acid or a base to the oceans.
@vaughan pratt
Since the supposedly malign influence of CO2 is manifested only through secondary or tertiary effects of its existence, and those take a very long time to appear, while KCN is a direct poison, the two circumstances are not at all analogous.
And I’m very surprised that somebody who purports to have been a Professor at a relatively well thought of university (albeit in the USA), did not grasp this elementary point within a nanosecond.
But just in case it has escaped you, a small quantity of KCN poisons you immediately by interrupting the oxygen supply within your body. It is a direct poison…and I would counsel you to avoid the stuff.
CO2, by contrast, is not a poison (at least at the atmospheric levels we speak of here). Its effects arise, supposedly, because it is a greenhouse gas, and theory (though not much observation) suggests it could raise the planetary temperature by some trivial amount that might have some adverse indirect effects on one’s health.
And theory (though again with startlingly few observations) suggests that increased atmospheric CO2 might alter seawater pH marginally in the direction of neutral, and that after many years the sealife of the earth may collapse, leading to the eventual extinction of humanity (or some other such alarmist scenario). Or, since the alarmist scenarios are pretty much evidence-free, it may not.
Since I am in mellow mood, I won’t charge for this brief tutorial, but hope that you will understand the lessons and avoid such an error of understanding in the future.
Toodle pip.
Latimer, you may have overlooked Mike Flynn’s point, ” In a deterministic dynamical system, an arbitrarily small change to an input may cause an unknown arbitrarily large change to the output.” The small change need not be cyanide for Mike’s point to hold.
For an analogy closer to the case at hand, consider a meter-thick wall of clear glass. Now thicken it by 0.1 mm. By your reasoning this increase of one part in ten thousand can’t make much difference to its opacity. While this would be true if the 0.1 mm thick additional layer were more of the same glass, if it were black plastic it would make a considerable difference to the opacity.
Oxygen and nitrogen are transparent to the 10-micron radiation that cools the Earth by passing to space. CO2 by contrast is relatively opaque at those wavelengths. If the amount of CO2 added to the atmosphere is a tiny fraction of the amount of oxygen and nitrogen, it does not follow that the change in opacity of the atmosphere at those wavelengths is tiny.
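Vaughan Pratt’s glass-wall analogy can be made quantitative with the Beer–Lambert law. A minimal sketch follows; the absorption coefficients are illustrative round numbers of my own choosing, not measured values for glass or any real film:

```python
import math

def transmission(layers):
    """Beer-Lambert: fraction of light transmitted through stacked layers.
    Each layer is (absorption_coefficient_per_m, thickness_m)."""
    tau = sum(alpha * d for alpha, d in layers)  # total optical depth
    return math.exp(-tau)

# Illustrative numbers: weakly absorbing glass plus a thin, strongly
# absorbing film one ten-thousandth as thick.
glass = (0.1, 1.0)   # 1 m of glass, alpha = 0.1 per metre
film = (5e4, 1e-4)   # 0.1 mm film, alpha = 50,000 per metre

print(transmission([glass]))        # ~0.90: the glass alone is nearly clear
print(transmission([glass, film]))  # ~0.006: the thin layer dominates
```

With these made-up numbers, a 0.01% increase in thickness cuts the transmitted light by a factor of roughly 150 — which is the point of the analogy: fractional abundance by itself says nothing about fractional change in opacity.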
Vaughan Pratt:
Your argument about opacity may seem plausible ex ante; however, radiosonde data show that the optical depth of the atmosphere hasn’t changed detectably in the last 60 years.
What optical depth are you referring to, John? Aerosol, cloud, visible light, UV, IR, FIR, some combination thereof?
Depending on which notion of optical depth the radiosondes were measuring, the atmosphere could change from completely transparent to completely opaque at the wavelengths of Earth’s thermal emission without changing the atmosphere’s optical depth at all.
A paper or other citation relevant to the past 60 years would be helpful. A dataset even more so.
@vaughan pratt
Good…you have dropped the snark, so normal discussion can resume.
My football point – which I think you have over-analysed far beyond what was intended – was at heart a simple arithmetic one about values, the changes in them, and how they are reported.
Example: ‘There has been a 40% increase in complaints this year!’ Scary, perhaps. ‘Complaints remain at less than 1 in 100,000 transactions.’ Reassuring. And yet they refer to exactly the same data.
To make a judgement you need both the absolute and relative values. Our correspondent Kneel referred us to a graph showing a ‘dramatic’ increase in CO2 (c 40%). The football analogy was to remind us all of the absolute values too.
That’s it. End of point. Please do not read any more into it, find ‘conspiracy ideation’ in it, or criticise my alma mater on this basis. To do so would be to imagine hobgoblins that ain’t there.
Toodle everso pip.
Vaughan:
The optical depth at issue is, of course, in the IR range. For a poster presentation see: http://climateclash.com/ferenc-miskolczi-the-stable-stationary-value-of-the-earths-ir-optical-thickness/
Hi Judy – I agree that the the new WG2 report represents a much needed broadening of the assessment of social and environmental risks. As we summarized in our article
Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairaku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. Extreme Events and Natural Hazards: The Complexity Perspective Geophysical Monograph Series 196 © 2012. American Geophysical Union. All Rights Reserved. 10.1029/2011GM001086. http://pielkeclimatesci.files.wordpress.com/2012/10/r-3651.pdf
“We discuss the adoption of a bottom-up, resource-based vulnerability approach in evaluating the effect of climate and other environmental and societal threats to …critical resources. This vulnerability concept requires the determination of the major threats to local and regional water, food, energy, human health, and ecosystem function resources from extreme events including those from climate but also from other social and environmental issues. After these threats are identified for each resource, then the relative risks can be compared with other risks in order to adopt optimal preferred mitigation/adaptation strategies. This is a more inclusive way of assessing risks, including from climate variability and climate change, than using the outcome vulnerability approach adopted by the Intergovernmental Panel on Climate Change (IPCC). A contextual vulnerability assessment using the bottom-up, resource-based framework is a more inclusive approach for policy makers to adopt effective mitigation and adaptation methodologies to deal with the complexity of the spectrum of social and environmental extreme events that will occur in the coming decades as the range of threats are assessed, beyond just the focus on CO2 and a few other greenhouse gases as emphasized in the IPCC assessments.”
Roger Sr.
I am a huge fan of this perspective Roger Sr. Ultimately, we will have to include a globally aligned, long-term multi-gas strategy to get GH gases under control, but a bottom-up resource based vulnerability perspective is the practical first order of business.
R. Gates: but a bottom-up resource based vulnerability perspective is the practical first order of business.
I agree with you there. The examples I write about most commonly are flood control and irrigation in California, Texas and the Indus Valley. Reduction of CO2 output should certainly not come first in those areas.
What percentage of world governments look at the management of their current infrastructure in this manner? Why will the future be any different?
R. Gates
This “more inclusive way of assessing risks” proposed by RP Sr. certainly makes more sense than the (past?) myopic IPCC fixation on GH gas mitigation (principally CO2).
And the good news could well be that the postulated future action, which you refer to as “a globally aligned, long-term multi-gas strategy to get GH gases under control”, may turn out not to be necessary at all, if new economically viable and environmentally acceptable non-fossil-fuel energy alternatives are discovered and developed in the meantime (as is likely, IMO).
So this approach truly buys us the time we need and allows us to adapt to those climate challenges that nature (or anyone else) throws at us, if and when it becomes apparent that such challenges could become imminent.
And we can do this without wasting our resources and efforts on mitigation actions, whose efficacy we cannot guarantee and whose unintended negative consequences we are unable to assess today.
Truly a win-win.
Max
Roger Pielke Sr. has been right on this issue for a long time. And he has been beaten unmercifully for the crime of being right by thugs such as Michael Mann.
He has remained a gentleman throughout. When his blog went inactive, the thugs transferred their attention and hostility largely to our hostess. Who has remained a gentlewoman throughout.
You can tell something–not everything–about a person’s character by their enemies. You can tell more by watching their reactions when they are attacked unfairly.
I’m glad to see the Economist moving back towards the saner approach to this issue that they had back in the day when they were defending Lomborg. I am hoping that at some point major publications will take a moment to credit those who refused to get caught up in the attempt to stoke hysteria.
+1 Tom
@TF: I’m glad to see the Economist moving back towards the saner approach to this issue that they had back in the day when they were defending Lomborg.
The Economist is not an individual, it’s a newspaper (as it likes to call itself). To which Economist reporter or editor are you attributing the onset of sanity?
(I try to read every issue cover to cover, time permitting.)
@MRM: The examples I write about most commonly are flood control and irrigation in California, Texas and the Indus Valley. Reduction of CO2 output should certainly not come first in those areas.
Quite right. Section 3.1 of the Wikipedia article on the tragedy of the commons cites global warming as an example.
Seems to me we’ve heard these “rational” economic arguments before:
Conclusion The scientific/economic/moral dialog that counts is coming later this spring.
Fan quotes Buck and is a Fan of Dr Strangelove. Nice touch. Finally we have something in common.
The reality is that Big Green has killed hundreds of millions of people with the DDT ban, and now they are freezing seniors to death with super-expensive “green” bird-chopping electricity.
Two billion? Slightly off. According to the Ehrlichs’ group, the number is more like five billion.
“Some of you may die, but it’s a sacrifice I am willing to make.” – Farquaad (Shrek).
A Fan of *MORE* discourse: Conclusion The scientific/economic/moral dialog that counts is coming later this spring.
No. It began at least as early as Bjorn Lomborg’s writing.
Turgidson: Mr. President, I’m not saying we wouldn’t get our hair mussed. But I do say… no more than hundred and fifty million people killed, tops. Uh… depended on the breaks.
How many people will die if investment is directed away from water (cleaning, flood control, irrigation) and crop breeding to solar farms and wind farms? Turgidson’s views are irrelevant to this discussion. At any time, there is only a finite amount of time, labor, money and attention, and the question now is which avenues of investment will cost fewer lives in the foreseeable future.
Based on past records of flooding, the Three Gorges Dam has now saved millions of people from drowning and starvation. An equally costly investment in solar farms and wind farms is extremely unlikely to have done as well.
California, by contrast, reduced its water storage capacity while building up its solar and wind generating capacity, and is suffering more from the current drought than it had to – with higher electricity prices added in to punish the relatively poor.
Our hostess writes “How much resilience can we afford? ”
If the climate sensitivity for a doubling of CO2 from recent levels is indistinguishable from zero, then we don’t need to “afford” any “resilience”.
Well said.
The AGW scare served a useful purpose, showing the public how absolutely dogmatic and unscientific is “consensus science.”
We are all deeply indebted to Al Gore, the UN’s IPCC, Michael Mann, et al. for exposing this aspect of tax-funded science.
Thank you for adding some complexity to the issue of sea level rise. Rise does not equal CO2 increase, and conversely a drop in CO2 does not eliminate sea level rise threats. Just one area of the US illustrates the point: the USGS says that over half of the “sea level” rise in the Chesapeake Bay region is due to subsidence, and that throughout the US 80% of subsidence is due to groundwater extraction. As urbanization expands on a global scale, groundwater recharge areas disappear, lowering rates of precipitation retention. The millions of acres of pavement and concrete replacing natural areas also accelerate watershed drainage, adding to the natural runoff from the continents.
As JC stated, the apportionment of AGW to the total sea level rise is unknown.
Even without any AGW sea level rise, what we have been experiencing for X years from natural forces and other impacts from man would have eventually caused coastal problems. Adaptation and mitigation need to make this part of the calculus in finding solutions and in estimating the timing of all such actions.
“…but I am inferring from some twitter comments that the costs of mitigation may be higher than previously estimated.”
And yet I strongly suspect that the true cost of mitigation would in fact be several orders of magnitude higher than any current estimates …especially given that they’re not likely to do anything.
If you allow bureaucrats to command and regulate either adaptation or mitigation, the cost will be your liberty. It is in their self-interest.
FoxNews.com. “GOP Lawmakers Push EPA to Ax Proposed Water Rule amid Outcry from Farmers.” April 4, 2014. http://www.foxnews.com/politics/2014/04/04/gop-lawmaker-moves-to-block-epa-water-proposal-amid-outcry-from-farmers/
I’d say this shift in attitude by the IPCC is good news indeed. This climate exceptionalism stance is pretty well illustrated by a video sent to me by a friend, here’s the link:
http://www.bing.com/videos/search?q=youtube+terrifying+video&qs=n&form=QBVR&pq=youtube+terrifying+video&sc=0-16&sp=-1&sk=#view=detail&mid=A05F9335930482F95432A05F9335930482F95432
You may need to pick the one called “The Most Terrifying Video You Will Ever See”… for some reason the above link doesn’t take you directly to the video. It’s Microsoft, so that should probably be expected :-(
I sent my friend the following response (I offer this only because it’s relevant to this thread):
“I watched the video, thanks for sending. The video was well done and the guy presenting seemed like a well-meaning guy, as are, I think, most of those who have been possessed by the AGW religion, including yourself. However, the argument presented in the video while effective emotionally is very flawed… I will demonstrate why and how….
Let’s apply the logic presented in the video to another catastrophic scenario–Earth impact by a large asteroid. The result of such an Earth impact is very well understood: complete devastation of the planet, the elimination of many (if not all) species, climate change on a far greater scale than anything predicted in the most dire AGW scenario. If we follow the logic of the video, we must now at all costs develop comprehensive technologies for detection and then deflection of asteroids, because if we don’t the result COULD be far too disastrous to contemplate. Far more important than solving AGW, because the potential impact is so much greater.
Now, you are likely to say that the probability of a large asteroid impact is astronomically low, and maybe the effects would be relatively modest. To that I have two responses:
1. Really? Actually, the probability is perhaps not as low as you might think, and in fact not very well understood at all. There are serious efforts to track known objects, but I don’t know of any credible estimates of the probability that we might miss a large one until it’s too late. I will, out of curiosity, dig into this question a bit further, but it might take me a while to get to it, and I wanted to respond to you today.
2. So, in the case of an asteroid impact you would probably say that we really need to understand the probability of a catastrophic event and just how bad it could be, before we craft a plan to avert the disaster. I agree.
I’ll also offer another analogy – the so-called “Cheney 1% doctrine”, which is, to paraphrase: if there is a 1% chance that an adversary has the capability of striking the US with a nuclear weapon, we must strike first and wipe out the threat whatever the cost, since the potential outcome is so catastrophic. This logic is identical to that used in the video. [Whether or not you lend any credibility to Suskind’s book is irrelevant to my point.] I’d imagine that you wouldn’t be in favor of the Cheney 1% doctrine.
So, I think you probably get the point by now, but I’ll go a bit further. When analyzing any potential solution to a problem like this one must ALWAYS consider the entire spectrum of potential outcomes and their probabilities. This necessarily generates a spectrum of potential actions… the real world is not binary like the guy’s table.
Finally, the other problem with the argument in the video is one of context. He sets up a simple game universe in which there is only one problem to consider–AGW. The real world is far different, of course. There are many problems in need of solution that will require scarce resources. Problems must be prioritized and resources allocated, cost benefit analyses must be done in this process. For instance, shouldn’t we take away all those resources devoted to AGW and direct them to asteroid impacts? And there are a myriad of other problems: increasingly antibiotic-resistant bacteria, clean fresh water shortages, cancer, the ever-growing list of welfare entitlements…. the list obviously goes on…
When the resource allocation part comes around, here is where, of course, it inevitably gets political. Governments will tax their citizens to pay for the solutions; thus it gets political, by definition. Government taxation is essentially at the point of a gun, and individual choice is not an option (at least in the functioning democracies), so a lot of people tend to take it seriously and want to be convinced of what the money is paying for. And many of us do not trust governments to make wise decisions about many of these matters and will continue to embark upon critical review of these issues.”
My point in posting this here is that I think this is a pretty good refutation of the “exceptionalist” doctrine, so I find it very encouraging to hear of this new attitude coming out of the IPCC.
NASA – Near Earth Object Program
“With over 90% of the near-Earth objects larger than one kilometer already discovered, the NEO Program is now focusing on finding 90% of the NEO population larger than 140 meters.”
“As of April 01, 2014, 10855 Near-Earth objects have been discovered. Some 863 of these NEOs are asteroids with a diameter of approximately 1 kilometer or larger. Also, 1462 of these NEOs have been classified as Potentially Hazardous Asteroids (PHAs).”
The wild card is previously unknown comets: they swoop in from way out on very long-term orbits and could give us only minimal warning if we were very, very unlucky.
Statistics say you can rest easy, but one must keep in mind that they never saw the recent Russian meteor coming at all.
http://neo.jpl.nasa.gov/
I really wonder whether that guy actually believes what he’s saying, or whether he’s cynically manipulating people. Exactly the same argument can be used to prove that we should all buy lottery tickets, and accept Jesus Christ as our savior.
Tom, even a 40 kT impact on a nuclear power could cause a retaliatory launch before anyone realized that a rock, not a bomb, caused the explosion.
I think we are no longer on hair-trigger alert nowadays. They track launches, so they have a pretty good idea whether one originated from an unfriendly.
The geological record of the last million years doesn’t show any sign of a major asteroid strike. On that basis the probability of one in the next century is less than one in 10,000. And probably way less.
In order to argue that global warming presents even less of a problem for the coming century than a major asteroid strike you need to show that the concern currently being voiced by the geophysical community has a 9999/10000 chance of being wrong.
Given that you’re just one person and the attendance at the annual Fall Meeting of the American Geophysical Union is over 20,000 scientists, if you’re not a geophysicist then it seems to me that it’s more likely that you’re the one who’s wrong. The AGU membership itself is around 60,000, and that’s without counting the European Geophysical Union membership.
If Climate Etc. had 20,000 regular contributors it might be a different story. Admittedly I have to use my toes as well as my fingers to count them, but so far I haven’t run out of toes.
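The “one in 10,000” figure above can be reproduced with a simple Poisson sketch. The inputs are my reading of the argument (no major strike in the last million years, treated as an upper bound of one strike per million years), not a published risk estimate:

```python
import math

years_clean = 1_000_000  # record length with no major strike observed
horizon = 100            # the coming century
rate = 1 / years_clean   # generous upper estimate, strikes per year

# Poisson process: probability of at least one event within the horizon
p_strike = 1 - math.exp(-rate * horizon)
print(p_strike)  # ~1e-4, i.e. about 1 in 10,000
```

Since the actual rate is presumably well below the upper bound, the true per-century probability would be “way less”, as the comment says.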
“3. The demands of developing countries: While wealthy countries account for most greenhouse-gas emissions, poor countries suffer the most damage from climate change.”
No, currently there is a net benefit from emitting CO2. Only when the developing countries emit as much as the developed countries do the benefits turn into damage (according to Tol). That is the real catch-22, which is never said out loud.
Hans Erren
Yes.
The concluding premise of Tol’s study is that global warming is beneficial for humanity up to around 2.0 to 2.5C warming above today’s level, and that this level will only occur late in this century, using the 2xCO2 TCR estimate of IPCC.
It is unlikely IMO that the “TCR estimate of IPCC” is understated (why should IPCC understate it?).
It is conceivable, on the other hand, that this estimate is overstated (after all, IPCC is selling potentially deleterious AGW).
So the 2.0 to 2.5C limit could actually occur later than projected by Tol.
If we accept the UN (and US Census Bureau) projections on population growth, there will be 10.2 to 10.5 billion inhabitants by 2100.
Between 1970 and 2010 the globally averaged per capita CO2 generation from fossil fuels increased by around 10%.
But this was not evenly distributed.
“Industrially developed” nations (North America, Europe, Japan, Aus/NZ) actually saw a decrease of around 3% in the per capita CO2 generation over this 40-year period, while the rest of the world saw an increase of 53%.
The decrease in per capita CO2 emissions in the “industrially developed” nations has accelerated since 1980 (energy conservation measures, etc.), while that in the rest of the world has increased (primarily due to the rapid industrial growth in nations like China, Brazil, India, etc.)
CO2 from the second major source, deforestation, comes primarily from the third world today. This has remained fairly flat since the 1970s and now stands at around 15% of the total human CO2 emission.
If we ASS-U-ME that per capita CO2 emissions will continue to grow, and that they will increase by 30% by the end of this century, we arrive at a cumulative CO2 emission from today to 2100 of around 4,000 Gt and an atmospheric CO2 concentration of around 650 ppmv by then.
The Tol study is based on a more rapid CO2 growth scenario, reaching around 750 ppmv CO2 by 2100.
This equals “growth” of around 350 ppmv above today versus a more reasonable 250 ppmv above today.
So the CO2 growth assumptions used as the basis for the Tol study are arguably also exaggerated, which would push the “breakeven point” (between beneficial and harmful warming) further into the future.
But the key here is that the added CO2 will not come from the already “industrially developed” countries (whose per capita CO2 emissions are declining today), but from the developing nations, as you pointed out.
Max
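Max’s back-of-envelope can be reconstructed roughly as follows. Every input here is my own round-number assumption (2014-era values and standard conversion factors), so this is a sketch of the arithmetic, not his actual calculation:

```python
# Population and per-capita emission assumptions (mine, not Max's exact figures)
pop_now, pop_2100 = 7.2e9, 10.35e9  # UN mid-range projection for 2100
percap_now = 5.0                    # tonnes CO2 per person per year, approx.
percap_2100 = percap_now * 1.30     # the assumed +30% growth by 2100
years = 86                          # 2014 to 2100

emit_now = pop_now * percap_now / 1e9     # Gt CO2 per year today (~36)
emit_2100 = pop_2100 * percap_2100 / 1e9  # Gt CO2 per year in 2100 (~67)

# Linear-growth approximation: cumulative emissions = average rate x time
cumulative = (emit_now + emit_2100) / 2 * years  # ~4,400 Gt CO2

airborne_fraction = 0.45  # share of emitted CO2 staying in the atmosphere
gt_per_ppmv = 7.8         # Gt CO2 per ppmv of atmospheric concentration
ppmv_2100 = 400 + cumulative * airborne_fraction / gt_per_ppmv
print(round(cumulative), round(ppmv_2100))
```

With these assumptions the sketch lands in the same ballpark as the figures quoted above: cumulative emissions a little over 4,000 Gt CO2 and an atmospheric concentration in the region of 650 ppmv by 2100.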
Manacker,
Excellent point. It must be repeated endlessly until it is widely understood, acknowledged and accepted.
The relevance to policy is that for GHG mitigation strategies to be successful, they must be successful in the developing countries. Therefore, our policy focus should be on developing technologies that are cheap for developing countries.
Since fossil fuel energy is responsible for 70% of GHG emissions, policies must focus mostly on providing cheap alternatives for fossil fuels in developing countries.
Since electricity will provide an increasing share of global energy (and will displace fossil fuels for some transport and some heating, and produce liquid fuels) as time progresses, electricity – i.e. cheap electricity – needs to be a main focus of policy.
As always, the obvious way forward is to remove the impediments that are preventing nuclear power from being a cheap source of energy for developing countries.
@HE: No, currently there is a net benefit from emitting co2.
With CO2 having climbed in the past hundred years to 33% higher than it’s been in the past million years (10,000 times as long in case you’ve mislaid your calculator), we should start a pool.
What proportion of geophysicists would agree with your statement?
I’ll start with less than 5%. I may well be beaten by those claiming less than 2%. I rather doubt those claiming less than 10% have any chance of beating me.
The problem with taking the “C” out of CAGW is that it removes the impetus for total government control. Governments are funding the research for the most part to justify their continued power grab. Having someone come along and tell them it is NBD, destroys the justification for their actions. So they will not fund it or admit it exists.
Scientists need to wean themselves from the government teat so that they can let the data lead them to the conclusions, not government-sponsored conclusions leading to a search for supporting data.
Considering sea levels, the Egyptians, Carthaginians, Greeks and Romans had the correct solution: local common sense.
We can improve it slightly. Take unadjusted local and regional records as far back as we can find. Apply some statistics, noting outliers. Figure what you wish to protect against (1, 2 or 3 sigma). Add a safety factor. Consider the cost. Balance the cost of protection against the cost of failure.
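The “pick 1, 2 or 3 sigma and balance cost against failure” recipe above can be sketched as a tiny expected-cost minimisation, assuming a normally distributed hazard. The cost numbers are hypothetical placeholders:

```python
from math import erf, sqrt

def p_exceed(sigma_level):
    """Probability a normally distributed hazard exceeds the design level."""
    return 0.5 * (1 - erf(sigma_level / sqrt(2)))

def expected_cost(sigma_level, cost_per_sigma, failure_cost):
    """Build cost grows with the protection level; expected loss shrinks."""
    return sigma_level * cost_per_sigma + p_exceed(sigma_level) * failure_cost

# Hypothetical: protection costs 1 unit per sigma, failure costs 100 units.
best = min((1, 2, 3), key=lambda s: expected_cost(s, 1.0, 100.0))
print(best)  # 3: with failure this expensive, build to 3 sigma
```

With a cheaper failure the same search picks a lower sigma, which is the whole point of balancing the two costs rather than fixing a protection level in advance.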
I had an erosion problem on my lot (steep). Solution: Buy some stakes ($10). Cross the wash with logs and branches held by stakes. Fill with brush. Cover with fallen leaves. Leave the lower lot natural.
Result: Erosion stopped. Compost remains. This solution has been known a long time.
Sykes, Frank. Humus and the Farmer. London: Faber and Faber Limited, 1946.
Turning the Economist is a far cry from turning the British governments present policies, or Kerry, Obama, and the EPA.
Reading the AR5 WG2 SPM closely, the scary stories are still there, but in less falsifiable ways. The UNFCCC is turning to spending on adaptation (a $485 billion Green Climate Fund wish by 2020) because Kyoto mitigation failed and they know there is no hope of getting mitigation in Paris in 2015. But this way the UN still gets oodles of adaptation money to play with, adaptation conferences to host, studies to staff… and a reason to perpetuate the IPCC.
Reading parts of the underlying WG2 report (done with 3.4.2.5 and related extinction sections; much more to go on crops and other matters), the AR5 WG2 scientists have climbed down greatly and are presenting more nuanced, balanced meta-analyses. Too many got burned in AR4 (Himalayan glaciers), and too many are not climatologists with vested interests in climate models. They are biologists, ecologists, agronomists, and public health experts practicing legitimate science, who cannot afford to be tarred by the increasingly ill repute into which most of climate science per se is falling as a result of the pause, erroneous ECS estimates, repeated exposure of paleoproxy scientific malfeasance, and the like.
Rud,
Still planning some work on changes to past temp records?
Hard to believe all the old records in HadCRUT3 and GISS have to be adjusted cooler than measured. That has the effect of increasing the warming trend without warming the observations.
Scott
Scott, done. Judith has the postable draft, and emailed last week that she liked the book version submitted for her review first (not postable on CE, since it uses footnotes rather than hyperlinks, and it is too long and complicated for a mere guest post). It is a lot of work to turn a draft book essay into a blog post, so I don’t volunteer until she gives a scientific and thematic green light.
I suspect she is extremely busy, plus overrun by AR5 WG2. Plus all the policy developments of adaptation versus mitigation, which are a BIG deal.
Regards
Rud
Thanks. Look forward to reading it
Scott
As I mentioned before, it is premature before WG3’s Mitigation of Climate Change report due out this month to say that the IPCC view has shifted in any way. Wait for that, and then see if there is a shift.
On adaptation and resilience, these are expensive, so what about a carbon tax to subsidize these expenses with a premium added on using the culprit? Otherwise the expenses would come out of regular government budgets by decreasing other programs or increasing taxes. Carbon taxes would also support transitions to cleaner energy sources and fuel efficiency as part of a mitigation strategy.
I suspect that the IPCC ‘view’ (as reflected by Pachauri and chief spokespersons) won’t have shifted; however, I don’t see them as driving the public debate on climate change anymore.
The IPCC is not supposed to have a view in the sense in which I understand your text.
WG3, in particular, is not about the need to mitigate, but about options available in case there’s will to mitigate.
I would expect that the report is really written in the spirit I state above. The need is covered by WG1 and WG2.
The outline
http://www.ipcc-wg3.de/.files/WGIII-Outline-AR5.pdf
tells on what we should expect.
Perhaps I have been on this website too long. By IPCC view, I mean the view ascribed to the IPCC by the readers of their reports. It is a subtle difference, but many here think of the IPCC as a single-minded thing, even though it represents thousands of views in its reports.
Thinking a little more on what I expect to come, WG3 is likely to add a further step towards discussing the climate change as a complex problem in need of complex action that must be balanced with what is done for other major issues like sustainable development taken more generally and poverty.
I think climate change can be considered a positive feedback to problems with energy, water and food resources, health, security, resilience and sustainability, that would have occurred anyway with increasing population demands, especially with growing development. It can’t be separated, but mitigation also mitigates this feedback to other problems.
Jim,
There are opportunities for double dividends. There are even books, like one by Roger Pielke Jr, that propose solving the whole issue through those.
Unfortunately what’s most efficient for one goal is seldom most efficient for another, and there are also many cases where the different goals lead to conflicting requirements.
Pekka, I guess my point was the counterpoint to Pielke’s bottom-up approach in which he would attack each problem one at a time after prioritizing them based on risk. Mitigation is a top-down approach that helps with many problems and reduces many risks at the same time, and you don’t have to prioritize them. However, mitigation is a simple idea (minimize integrated emissions) with many component pieces needed to make it work, and these need prioritization in terms of effectiveness and replacement potential.
Yup, deal with the existing risks that we’ve neglected.
@JimD: By IPCC view, I mean the view ascribed to the IPCC by the readers of their reports.
What do you mean by “the view”? Readers of the IPCC reports have an enormous variety of views they ascribe to the IPCC. Do you mean the average view? And if so how would you define that, and how would you measure it?
Vaughan, the premise of this main post is of a shift in the focus of the reports. I called this a shift in the “IPCC view” mainly in the context of this perceived shift. If there was no IPCC view in the first place, it would be impossible to perceive a shift. However, I am not convinced of any shift in WG2 except to show broader concerns, and I say wait for WG3 to see if that shifts.
Jim D: Mitigation is a top-down approach that helps with many problems and reduces many risks at the same time, and you don’t have to prioritize them.
One of the uncertainties is whether mitigation on the scale to have a hoped-for effect will in fact reduce any risks at all. If all of the fossil fuels burned to date have a total effect less than a 0.5C mean rise, then total elimination of all fossil fuel use will be futile in terms of improving climate, but very costly in reducing agricultural output and other kinds of wealth. And if reduced sunspot activity is going to reduce mean temperature independent of anything else that humans do, then we probably want to keep burning fossil fuels, assuming that to be effective in raising mean temperature.
To me, the biggest problem with mitigation is that it looks likely to be completely futile.
Matthew Marler, there is some debate as to whether we have used 15% or as little as 5% of the potential fossil reserves. Mitigation by leaving even a significant fraction of what remains in the ground would have a large effect compared to using it all. The difference is in the hundreds of ppm, which is several degrees.
Jim D: The difference is in the hundreds of ppm which is several degrees.
That is the part that is uncertain, the “is several degrees”. It’s based on the abstract and counterfactual notion of “equilibrium”, and might be cancelled by increased cloud cover.
Matthew Marler, however you want to look at it, several hundred ppm is a major knock to the system, including just going by paleoclimate evidence. It usually takes tens of millions of years to change CO2 by this much.
Jim D: however you want to look at it, several hundred ppm is a major knock to the system,
That is not what the debate above is about. The debate is over: (1) does increased CO2 inevitably cause a deleterious temperature increase, given the climate as it is now? (2) will dramatic reductions in fossil fuel use prevent that otherwise inevitable rise?
With a great deal of money, labor and time to be invested, there are good reasons to bet that the answer to both questions is “No”.
WG2 tells you that there are large risk differences in almost all aspects, between a 2 C and a 4 C rise, as seen in the SPM.2 Table 1. If there is one bottom line to what it contributes to the discussion, it is this point about the temperature sensitivity of risk.
An item on WG3 due out soon.
http://www.huffingtonpost.com/2014/04/05/ipcc-meeting_n_5097022.html
Perhaps you should concentrate on Huffington Post. Please bother to read an article you recommended to our attention in an earlier post (http://www.huffingtonpost.com/2014/04/04/wind-power-emissions_n_5087308.html), boasting that wind power cut 4.4% of 2013 emissions.
That is one strange article. It only discusses installed windmill capacity in megawatts – but of course an installed turbine does not reduce any emissions when it stands idle. The savings come from megawatt-hours, and those numbers are sorely missing. It reads like an advertisement for a wind energy industry association. Did you notice?
CG, it is easy to evaluate saved emissions from windfarm power over a year, which is what they did. This was for 2013.
Of course they have a right to publish totally irrelevant numbers. And I have a choice: maybe they are merely incompetent, or maybe they lie on purpose.
Jim D: WG2 tells you that there are large risk differences in almost all aspects, between a 2 C and a 4 C rise, as seen in the SPM.2 Table 1. If there is one bottom line to what it contributes to the discussion, it is this point about the temperature sensitivity of risk.
More assuming that which is in doubt: (1) will increased cloud cover prevent/reduce warming? (2) will mitigation efforts reduce CO2? (3) how much of an effect does anthropogenic CO2 have on climate changes?
“3. The demands of developing countries: While wealthy countries account for most greenhouse-gas emissions, poor countries suffer the most damage from climate change.”
Way back in 2008 China emitted 23% of CO2.
http://www.epa.gov/climatechange/ghgemissions/global.html
“In 2012, the largest contributors of greenhouse gases were China with 27%, the United States with 14% and the European Union with 10%.
On an individual basis, China and the EU were at the same level, with 7.7 tons of carbon gas emitted per person and year.”
http://www.cnn.com/2013/11/19/world/greenhouse-gases/
The Atlantic should be ashamed of itself.
It’s another gravy train. How scared can you make the populace? Time will tell, but here is another way for government to take taxpayer money and redistribute it.
Dr. Arrhenius’ Greenhouse Effect Theory — What do most of today’s Climate Scientists think about this theory? Among Climate change scientist skeptics, does ppm not really matter (i.e., too many other things that this theory doesn’t account for)? (Thank you Manacker for your kindness with the prior link).
Don’t ask a scientist, ask an engineer. Why? Because we use GHE physics to build stuff that works. Stuff you use every day.
The evil fossil fuel industry and deniers – i.e., those who are destroying the Earth with their poisonous CO2 – do not believe that, e.g., a doubling of atmospheric carbon dioxide will result in an 8°C rise in the average global temperature of the Earth, as calculated by Arrhenius in the 1800s. The IPCC doesn’t believe it anymore either. However, the IPCC and global warming alarmists will continue to believe that there is some rise – e.g., a 0.5°C rise – as opposed to believing there is no rise at all with a doubling or even a tripling, because atmospheric CO2’s absorption bands are already fully saturated.
Steven Mosher: Don’t ask a scientist, ask an engineer. Why? Because we use GHE physics to build stuff that works. Stuff you use every day.
Be sure to let us know when you engineers build a climate. Or even an accurate climate simulator.
“we use GHE physics to build stuff that works”
Like?
Andrew
Don’t ask a scientist, ask an engineer. Why? Because we use GHE physics to build stuff that works. Stuff you use every day.
and we test with computer models, make prototypes, again and again, until we make it work, and then offer it to you to buy (not force it on you). Even so, after long testing and scrutiny things do fail. I’ve done it, and even when it works you may not sell it.
Climate so called ‘engineering’ is something else, we have to buy it (through taxation) working or not, want it or not.
LOL @ Mosher’s comment.
Dear Wagathon — The intro to your post wasn’t exactly objective, as Arrhenius dramatically lowered his initial estimate. But your last statement is helpful to us laymen trying to understand. Are you saying that the overwhelming view of today’s climate scientists (or at least skeptics) is to reject Arrhenius’ theory because “atmospheric CO2’s absorption bands are already fully saturated”? (Thus, ppm doesn’t really matter.) Thanks.
Stephen, Wagathon is wrong, of course. Arrhenius knew, and today’s climate scientists know, that the effect still goes as the log of the CO2 concentration. Arrhenius additionally postulated a constant relative humidity, which a simple equilibrium model would still do today, and therefore had water vapor as the main positive feedback. Only his numerical tables for radiative effects, and his data for earth’s atmosphere, were crude. He also knew about albedo feedbacks at higher latitudes, and so had a very credible method for seeing the effects of adding CO2 on the surface temperature.
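The logarithmic dependence described here is easy to illustrate with a few lines of code. This is a minimal sketch using the commonly cited simplified fit for CO2 radiative forcing, roughly 5.35 * ln(C/C0) W/m^2 (a standard modern approximation, not Arrhenius’ own tables):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing fit, in W/m^2, relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing grows by the same increment for each doubling, not linearly with ppm:
for c in (280, 560, 1120):
    print(c, "ppm ->", round(co2_forcing(c), 2), "W/m^2")
```

Each doubling adds the same ~3.7 W/m^2, which is why "ppm" alone is a misleading scale: going from 280 to 560 ppm has the same forcing effect as going from 560 to 1120 ppm.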
‘We use GHE physics to build stuff you use everyday’
You mean like Coca-Cola?
There’s so much uncertainty involved in the business of estimating the average temperature of the globe in 30-50 years. Initially, Michael Mann was uncertain whether the MWP (Medieval Warm Period) and LIA (Little Ice Age) ever existed. There is uncertainty even now about the logarithmic effect that puts a lid on CO2’s contribution to global warming (“The relationship between temperature and CO2,” according to Dr. Timothy Ball, “is like painting a window black to block sunlight. The first coat blocks most of the light. Second and third coats reduce very little more. Current CO2 levels are like the first coat of black paint.”); and Mann is suing reporter Mark Steyn and fellow academic Tim Ball for having the temerity to question his scientific integrity. And now, because global warming has stopped, there is considerable uncertainty about how much climate change is actually due to natural causes, like changes in solar activity and the negative feedbacks of water vapor and clouds. The real question is: how large do uncertainties have to be to make our estimates worthless?
Mosher: “because we use GHE physics to build stuff that works. stuff you use every day.”
Does it quit working for 35 years or 15 years every 60 years or so?
Wagathon, you write “The real question is, how large do uncertainties have to be to make our estimates worthless?”
I have enormous difficulties with this question. If we agree there are uncertainties, then, surely, by definition, we don’t know how large they are. Not only that, but as long as they remain uncertain, we can never know how large they are.
This is my whole point in emphasising The Scientific Method. What physicists established in the 17th century was the way to proceed to solve problems by eliminating the uncertainties. When you have a measurement, then you are certain as to how accurately you know the value. If you cannot eliminate the uncertainties, then it follows that you must be uncertain of what is happening.
So as long as climate sensitivity relies on estimates, it follows that no-one can be certain how large the effect is, of adding CO2 to the atmosphere from recent levels. As long as there are uncertainties, then any values associated with climate sensitivity cannot be anything other than guesses.
If Arrhenius was around today (having the benefit of more data), how would he discuss the “Pause”? (other than obvious things like volcanoes, aerosols, etc.)
Bad Andrew, and others:
The greenhouse gas effect – CO2 retards the free escape of LW radiation to space – is founded in radiative transfer theory. In short, radiation at all wavelengths is affected by the medium it travels through.
Take X band radiation. We want to know how X band (radar) radiation propagates through the atmosphere. Will it pass through H2O, or be absorbed or reflected? If we are building a satellite to transmit information to the earth, we will want to know how the atmosphere affects the power received.
http://dl.acm.org/citation.cfm?id=1943639
http://dl.acm.org/citation.cfm?id=2470477
Suppose we are building a stealth aircraft and we want to know how visible it is in the IR region. Well, we use radiative transfer. We create a signature of the aircraft (emissions due to aeroheating, hot metal, and burning fuel).
We then use radiative physics to calculate how those watts will transfer through the atmosphere (say from 50K AGL) to the ground where a sensor sits. And then we will calculate the effects of rain, clouds, dry air, tropical atmospheres. We can even vary the amount of CO2 in the air. Why? Because we want to test whether adding CO2 to the exhaust of the aircraft will lower its IR signature.
We would also use radiative transfer in designing certain parts, like an exhaust nozzle:
http://www.global-sci.com/galley/ICCP7-21.pdf
More down to earth: if we are building a CO2 detector for a building, we would design a system where a detector measures an IR signal. If CO2 increases, the signal will be affected. Why? Because CO2 is relatively opaque to IR (that’s the heart of the greenhouse effect).
http://en.wikipedia.org/wiki/NDIR
In summary:
When radiation passes through a medium (say the atmosphere) its transmission is governed by physical laws. Depending on the frequency and the molecules in the atmosphere you will find “windows” – a window is a frequency band that is transparent to the radiation; it passes without attenuation.
http://en.wikipedia.org/wiki/File:Atmospheric_Microwave_Transmittance_at_Mauna_Kea_(simulated).svg
When you design a system that needs to ‘see through’ the atmosphere, you try to center it on these windows.
You are driving down a road at night. There are low-lying clouds on the road. Your lights (radiation) hit the fog and reflect back in your face. You are blind. Why? See radiative transfer theory.
However, you could attach a radar and “see through” the clouds; radar can see through some clouds. Why? See radiative transfer theory. What if you use an IR sensor on the front of your car – could it see through the clouds? A little bit. How much? Just consult radiative transfer theory for the answer, or look to the engineers who build FLIRs.
Note that they use MODTRAN. MODTRAN used to be classified, but now it’s online. Want to estimate the first-order, no-feedback impact of doubling CO2? Use MODTRAN. That’s GHE engineering.
http://www.flir.com/uploadedFiles/ENG_01_FOG.pdf
We also use radiative transfer theory to design covert com channels. We center on a frequency where the atmosphere absorbs a lot of the wattage. That gives me a short-range com channel that can’t be intercepted outside a certain range. Since highly classified data is transmitted over these channels, the engineers who work on them only use the best physics. That would be GHE physics. No links for these sorts of things; you have “no need to know”, as we say in the classified world.
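The “windows versus absorption bands” picture above is essentially Beer–Lambert extinction. Here is a toy sketch; the optical depths are made-up illustrative values, not MODTRAN output:

```python
import math

def transmittance(optical_depth):
    """Beer-Lambert law: fraction of radiation passing straight through a medium."""
    return math.exp(-optical_depth)

# Hypothetical optical depths for two frequency bands (illustrative only):
bands = {"atmospheric window": 0.1, "strong absorption band": 5.0}
for name, tau in bands.items():
    print(f"{name}: {transmittance(tau):.1%} transmitted")
```

A band with small optical depth passes almost everything (~90% here); a strongly absorbing band passes under 1%. That difference is what sensor and com-channel designers exploit, in either direction.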
If he were around today, I would hope Arrhenius would be the first to say that we are immediately lost to science if we follow the Left down their reductionists’ road to climate prediction, because down that road we also must believe that the Earth actually has an average temperature. Temperature is what is known in science as an intensive variable. Simply put, the average of an intensive variable has no meaning. Just imagine taking an average of all telephone numbers in NY to understand what the concept ‘NO MEANING’ actually means.
Stephen, Arrhenius was interested in equilibrium states. I don’t know if he would have been influenced by time variations in how we got there. The equilibrium state is the problem more solvable by basic physics than time variations in geophysical fluids, which are just like noise in the system.
Jim D, the equilibrium state for 120,000 out of every 130,000 years is ice age.
Mosher: “because we use GHE physics to build stuff that works. stuff you use every day.”
Does it quit working for 35 years or 15 years every 60 years or so?
############################
Of course not. But understand what the greenhouse effect is.
1. GHGs are relatively opaque to IR: water, CO2, etc. This is engineering.
2. When you add GHGs you raise the ERL (effective radiating level). This is engineering.
3. When you raise the ERL the world radiates from a higher and hence colder level. This is engineering.
4. When you radiate from a colder place, the rate of loss is less. This is engineering.
5. When energy in exceeds energy out, balance must be restored. This is engineering.
And then……
The energy that doesn’t escape will:
A) heat the atmosphere And/or
B) get stored in the ocean and/or
C) melt ice.
Figuring out the ins and outs of A, B and C – that’s science now. There is no guarantee how it will move between A, B and C. The GHE says that if you retard the release of radiation, the system will have to do some combination of A, B and C: not all of them, not all at the same time, not all in the same amount. That’s the tough problem. ‘A’ can stop if B or C accelerate.
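The bookkeeping behind A, B and C is just conservation of energy: whatever imbalance is retained has to be split among the available heat sinks. A toy sketch, with made-up partition fractions purely for illustration (not observed values):

```python
def partition_imbalance(imbalance_w_m2, fractions):
    """Split a top-of-atmosphere energy imbalance among heat sinks.

    fractions maps sink name -> share of the imbalance; shares must sum to 1.
    """
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return {sink: imbalance_w_m2 * f for sink, f in fractions.items()}

# Illustrative only: a 0.6 W/m^2 imbalance with the ocean taking most of it.
split = partition_imbalance(0.6, {"atmosphere": 0.02, "ocean": 0.93, "ice_melt": 0.05})
print(split)  # the ocean term dominates
```

The “tough problem” is that the fractions themselves shift over time; the constraint is only that they account for the whole imbalance.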
Now there you go — it’s more than A, B & C; otherwise, oceans would never… cool.
Mosh
Isn’t there a d and an e?
D. Will create more biomass
E. will heat up the soil
Tonyb
… and the cumulative effect of A, B, C, D, E &etc.
When you radiate from a higher place, the rate of loss is higher too because the sphere is bigger.
Steven Mosher you write “The energy that doesn’t escape will.
A) heat the atmosphere And/or
B) get stored in the ocean and/or
C) melt ice.”
I don’t think so. The energy will escape, otherwise it will accumulate indefinitely, and the world will get hotter and hotter. What happens is that the energy that you are referring to, escapes to space, but, because of the insulating effect of the extra GHGs, it takes a higher surface temperature to force the energy into space.
The question is, how much does the surface of the earth have to heat up in order to force the extra energy into space? That is the question which physics cannot answer.
… taking a step back, any changes in energy radiated into space is the result of the change in the amount of energy from the sun that is absorbed by the Earth.
“sunshinehours1 | April 4, 2014 at 4:47 pm |
When you radiate from a higher place, the rate of loss is higher too because the sphere is bigger.”
Net loss is decreased. That’s just engineering.
“Jim Cripwell | April 4, 2014 at 4:48 pm |
Steven Mosher you write “The energy that doesn’t escape will.
A) heat the atmosphere And/or
B) get stored in the ocean and/or
C) melt ice.”
I don’t think so. The energy will escape, otherwise it will accumulate indefinitely, and the world will get hotter and hotter. ”
#######################
To return to equilibrium, yes, the surface would warm. You just made the rest of the argument. Thanks, Jim.
So, you will just continue to ignore the cumulative effect.
“Jim D
The equilibrium state is the problem more solvable by basic physics than time variations in geophysical fluids, which are just like noise in the system.”
Sometimes one doesn’t know whether to laugh or cry.
So a dynamic, steady state can be treated as an equilibrium state as the cyclical components are ‘ just like noise in the system’.
Steven, you write “To return to equilibrium yes the surface would warm. You just made the rest of the argument. Thanks Jim,”
I am not sure why you’re thanking me. I have always agreed that as you add more CO2 to the atmosphere the surface will warm. What no-one, and I mean no-one, knows is HOW MUCH the surface warms. My guess is that for a doubling of CO2 from recent levels, the increase in temperature is negligible.
Steven Mosher: 1. GHGs are relatively opaque to IR. water, C02, etc. This is engineering.
2. When you add GHGs you raise the ERL. This is engineering
3. When you raise the ERL the world radiates from a higher and hence
colder level. This is engineering.
Where is building the stuff that works?
4. When you radiate from a colder place, the rate of loss is less. This is engineering.
5. When energy in exceeds energy out, balance must be restored. This is engineering.
And then……
The energy that doesn’t escape will:
A) heat the atmosphere And/or
B) get stored in the ocean and/or
C) melt ice.
You always stop short of the details, such as: what is the balance of radiative and non-radiative transfer from the surface? How much warming and where, and over what time span? And so on, and so on.
Steven Mosher: To return to equilibrium, yes, the surface would warm. You just made the rest of the argument. Thanks, Jim.
Again with the counterfactual equilibrium claims.
Steven Mosher
Don’t forget the energy that never makes it into our system, because it gets reflected back out by clouds (that’s not included in your version).
On average that’s around a third of the incoming solar radiation.
We know from ISCCP observations (Pallé et al.) that the global monthly mean cloud cover decreased by around 4.5% between 1985 and 2000. As a result the Earth’s reflected SW radiation decreased by the equivalent of around 5 W/m^2 (= heating of our planet).
This coincided with a period of observed rapid global warming.
After 2000 the cloud cover increased, which coincided with a period of observed slight global cooling.
Coincidence?
Hardly.
It’s not as simple as you paint it Mosh.
CO2 is not the only “climate control knob”.
Max
DocMartyn, an equilibrium world with double CO2 is on average so much warmer that even the noise in the global surface temperature can’t span the range, so, yes, the noise is insignificant if you are planning on doubling the CO2.
JimD: DocMartyn, an equilibrium world with double CO2 is on average so much warmer that even the noise in the global surface temperature can’t span the range, so, yes, the noise is insignificant if you are planning on doubling the CO2.
You missed his point completely. If extra CO2 or a little warming increases cloud cover, then the “much warmer” that you speak of will not occur.
Matthew Marler, my statement is also true of the lower proposed sensitivities. 1.5 C is far above natural variability certainly as regards decadal averages, or even individual years with massive El Ninos. Sensitivity to doubling is just a different scale of things.
I come to this blog to try and learn, or at least understand. Dr. Arrhenius advanced a theory which has been updated with better information that didn’t exist a hundred years ago. What I’m trying to understand are the “science” arguments by skeptics that a current trajectory to 1,000 ppm is irrelevant. Wagathon says the saturation of absorption bands make the use of ppm irrelevant. Is this “the” science argument that ppm is irrelevant?
Stephen, you write “Is this “the” science argument that ppm is irrelevant?”
My answer is no. I believe in The Scientific Method; one needs to validate all hypotheses against measured data. There is no measured data to show that as more CO2 is added to the atmosphere from recent levels, anything happens to climate at all. No-one has measured a CO2 signal in any modern temperature/time graph, despite the FAR promising that one would be measured by 2002. Further Beenstock et al have looked for a signal in the temperature/time graph and found none. If there is no CO2 signal, then it follows that it is likely that the amount of CO2 in the atmosphere does not affect global temperatures.
Tolstoy wrote that every unhappy family is unhappy in its own way.
Similarly, every skeptic is a skeptic in his own way.
One may contradict physics, another declares that he’s the guardian of the right way of doing science, the third…
Dear Jim Cripwell — I hope the following comes across as respectful, as this sure is my intent. For the sake of discussion, would it be objective to add one word to your statements — there is no “current” signal? My question continues to be with the GHE theory of Dr. Arrhenius. If we had trajectories of 2,000, 3,000, ppm, this still wouldn’t be a concern? The theory of Dr. Arrhenius at any ppm trajectory level can not be considered relevant without a “clear signal” first? Thanks.
Another way to ask my question regarding GHE theory: based on the current understanding of science, how would a skeptic answer the following? At levels of 1,000 ppm, what would be your best guess on resulting temperatures: (A) a very significant increase; or (B) we have no idea what, if any, increase there would be?
Jim D
I think you are missing the point here.
You (and Mosh) are concentrating your attention on the outgoing LW impact of added GH gases, primarily CO2 (i.e. the greenhouse effect).
This effect results in around 3.7 W/m^2 forcing with a doubling of CO2.
At the same time you are totally ignoring the incoming SW impact of added cloud cover (i.e. the albedo effect).
Clouds reflect around one-fourth of all incoming solar radiation, on average.
If cloud cover increases by 5% this has the same forcing impact as a doubling of CO2.
There was an observed reduction of cloud cover over the period 1980-2000, which coincided with a period of sharp warming. Since 2000 the cloud cover has recovered partially, which coincided with a period of slight cooling. (Palle et al. Earthshine data)
IPCC has conceded that clouds remain the “largest source of uncertainty”. However, despite this “uncertainty”, it considers the effect of clouds simply as a “feedback” to (anthropogenic) forcing, and has ignored any natural forcing effect from clouds themselves.
I think this is a basic mistake, as we have seen from the Palle data.
It could be that clouds are the “control knob” over 30+ year periods or longer, rather than just GH gases.
If so, all the estimates of 2xCO2 transient climate response or equilibrium climate sensitivity may be grossly exaggerated.
Max
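Max’s comparison of cloud and CO2 forcings can be roughed out in a few lines. The numbers below follow his stated assumptions in this comment (~340 W/m^2 global mean insolation, clouds reflecting about one-fourth of it); they are not from any published cloud dataset:

```python
mean_insolation = 340.0                # W/m^2, global average incoming solar (assumed)
cloud_reflected = mean_insolation / 4  # "clouds reflect around one-fourth" (Max's figure)
delta = 0.05 * cloud_reflected         # a 5% relative change in cloud reflection
co2_doubling = 3.7                     # W/m^2, forcing from a doubling of CO2

print(f"5% cloud change: {delta:.2f} W/m^2 vs 2xCO2: {co2_doubling} W/m^2")
```

Under these assumptions a 5% change in cloud reflection works out to roughly 4 W/m^2, which is indeed of the same order as the 3.7 W/m^2 from a CO2 doubling – which is the point of the comment, whatever one concludes about whether clouds actually varied that much.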
Stephen Segrest
In your search for the empirically validated “CO2 signal”, you must be very careful not to overlook other factors, which may be influencing our climate, for some of which we do not understand the mechanisms all that well at present.
Look at statistically significant (i.e. 30-year+) periods of good CO2/temperature correlation, to be sure, but do not overlook periods of similar length during which there was no correlation (the mid-century period of slight cooling, the current pause, etc.).
Only if you have a good understanding of why there was no CO2/temperature correlation during these periods can you begin to develop a good understanding of what the empirically validated “CO2 signal” really is.
Theory is very nice.
But nothing beats empirical evidence from actual physical observations or reproducible experimentation.
And this evidence is very slim so far (Jim Cripwell’s position, as I understand it).
A word of caution: be skeptical of rationalizations, which attempt to “explain” periods of no CO2/temperature correlation, but are not based on empirical data.
A second word of caution: be skeptical of subjective interpretations of dicey paleo proxy data from carefully selected periods of our planet’s geological past – especially if these are based on the “argument from ignorance” (i.e. “we can only explain this if we assume…”).
Your journey will be a slippery path.
There will be those along the way that promise you all sorts of false wisdom based on theoretical deliberations and model outputs.
There will be some that claim (without any real justification) that the whole greenhouse effect is simply hokum, dreamed up by politicians who want to tax and control energy.
Be skeptical of both groups.
Insist on empirical evidence following the accepted scientific method before accepting any hypotheses cloaked as scientific truth.
Lots of luck.
Let me know what you find out – because I’m searching, too.
Max
Stephen, you write ” there is no “current” signal?”
I tried to say the same thing with the use of the expression “modern temperature/time graph”. As to speculating as to what would happen at 2000 ppmv or 3000 ppmv, I have no idea. All I can say is that, to date, no CO2 signal has been measured. My concern is with our politicians wasting my taxpayer dollars. There is no sign that adding more CO2 to the atmosphere from current levels has any significant effect on climate.
Stephen Segrest
To your specific question of how a skeptic would estimate the warming impact of atmospheric CO2 levels of 1000, 2000 or even 3000 ppmv, as compared to today’s measured CO2 concentration of around 400 ppmv – let me give you my response, as a skeptic of the “CAGW” premise as specifically outlined by IPCC.
In AR4, IPCC tells us that a doubling of CO2 would result in warming of somewhere between 1.5 and 4.5C, when “equilibrium” has been reached. The mean value is commonly taken as 3C.
This value is suspect, in my opinion (as a rational skeptic.) I have concluded that it is very likely to be exaggerated.
It is based on predictions of climate models, rather than actual physical observations. These are obviously only as good as the input assumptions.
Several recent partially observation-based studies suggest that the 2xCO2 ECS is closer to around 1.8C, rather than 3C.
These are based on (partially) observed transient climate response averaging around 1.35C, rather than around 1.9C, as predicted by the models cited by IPCC.
Even though these studies are partially observation-based (rather than simply based on model outputs backed by theoretical physics), there is still great uncertainty regarding natural forcing factors. In all of these studies, these have been assumed to be very low (only that portion of solar forcing, for which we understand the mechanism, namely solar irradiance, and a minor amount of natural variance have been considered), so there may still be an exaggeration in the estimated CO2 effect.
[Here is where a true skeptic, like Jim Cripwell, will say that there can be no empirically defined CO2 signal until all other factors are considered and excluded. IOW the CO2 signal may be there, but could be so small that it is insignificant.]
But let’s accept (for now) the results of the latest studies, rather than those cited by IPCC:
2xCO2 TCR = 1.35C
2xCO2 ECS = 1.8C
The question of “equilibrium” is a dicey one, in itself. It is based on theoretical physics and a bit of circular logic (see a 2005 study by James E. Hansen et al). In effect the reasoning went as follows:
– our models tell us that we should have seen warming over an extended time period of X (in this case 1.6C, from 1880 to 2003, I believe).
– however, we only saw warming of X/2 (around 0.8C), after deducting a small amount we attribute to increased solar irradiance
– physical theory tells us we should see a time lag until equilibrium is reached
– so let’s ASS-U-ME that the “missing” 0.8C is hidden “in the pipeline”, and we will still see this added warming in the future
So the “equilibrium” story is not that clear in itself in my opinion. But let’s leave this problem to the side and accept the values above.
Then it’s easy to calculate what the theoretical greenhouse warming above today’s temperature would be. For 1,000 ppmv:
Expected warming: 1.35C * ln(1000 / 400) / ln(2) = 1.8C
And at “equilibrium”: 1.8C * ln(1000 / 400) / ln(2) = 2.4C
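Max’s two formulas are straightforward to reproduce; note that the baseline in the log ratio is today’s ~400 ppmv, and the sensitivity values are the ones he adopts above, not settled figures:

```python
import math

def warming(sensitivity_per_doubling, c_ppm, c0_ppm=400.0):
    """Warming above today's level for a given CO2 concentration.

    Scales logarithmically: sensitivity_per_doubling degrees C per CO2 doubling.
    """
    return sensitivity_per_doubling * math.log(c_ppm / c0_ppm) / math.log(2)

tcr, ecs = 1.35, 1.8  # Max's adopted transient and "equilibrium" sensitivities
print(round(warming(tcr, 1000), 1))  # transient response at 1000 ppmv -> 1.8
print(round(warming(ecs, 1000), 1))  # "equilibrium" response at 1000 ppmv -> 2.4
```

The same function gives his later figures for 1130 ppmv (about 2.0C and 2.7C), since only the concentration argument changes.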
We are presumably talking about anthropogenic warming, most of which is coming from the combustion of fossil fuels, which we know are a limited resource. So let’s see if there are any constraints resulting from fossil fuel availability.
We’ve seen all sorts of “peak oil” and “peak fossil fuel” studies, which tell us these resources are limited to a few decades. But there is a more optimistic study out there by the WEC in 2010, which gives estimates not only for the “proven reserves” of coal/oil/gas, but also of the much higher “inferred recoverable total fossil fuel resources”.
Since fossil fuels are fully interchangeable with today’s technology, we can look at the total.
The amount of total fossil fuels estimated to still exist on our planet as of 2008 represented 85% of all the fossil fuels that were ever on our planet, i.e. we had used up 15% of the total.
The remaining fossil fuels are expected to last us 150 years at anticipated future usage rate.
From this information, it is quite easy to estimate the maximum CO2 level we could ever reach from combusting all the remaining fossil fuels. The level was 385 ppmv in 2008 and the first 15% got us from a pre-industrial 280 ppmv to 385 ppmv, so:
385 ppmv + (385 – 280) * 0.85 / 0.15 = 980 ppmv
The WEC study did not include methane gas produced from methane clathrate (hydrate) deposits. A 2011 study by Boswell and Collett estimates that there could be as much as 43,300 trillion cubic feet of potentially recoverable methane from clathrates. This would add around 150 ppmv to the above estimate if this becomes a real potential.
So we are looking at 1130 ppmv as an absolute upper limit from anthropogenic fossil fuel combustion.
At this level the above warming estimates would be 2.0C and 2.7C, respectively.
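The reserve-fraction arithmetic above can be sketched the same way. All the ppmv figures, the 15% burned fraction, and the 150 ppmv clathrate allowance are the comment’s own estimates (WEC 2010 and Boswell & Collett 2011), not independently verified numbers.

```python
import math

# Comment's figures: 385 ppmv in 2008, reached after burning ~15% of all
# fossil fuels, starting from a pre-industrial 280 ppmv.
PREINDUSTRIAL = 280.0
LEVEL_2008 = 385.0
FRACTION_BURNED = 0.15

rise_so_far = LEVEL_2008 - PREINDUSTRIAL
max_co2 = LEVEL_2008 + rise_so_far * (1 - FRACTION_BURNED) / FRACTION_BURNED
print(round(max_co2))  # ~980 ppmv from conventional fossil fuels

max_with_clathrates = max_co2 + 150.0  # rough methane-hydrate allowance

def warming(sensitivity, c_final, c_initial=400.0):
    """dT = S * ln(C_final / C_initial) / ln(2), per the comment's formula."""
    return sensitivity * math.log(c_final / c_initial) / math.log(2)

print(round(warming(1.35, max_with_clathrates), 1))  # ~2.0 C (TCR)
print(round(warming(1.8, max_with_clathrates), 1))   # ~2.7 C (ECS)
```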
You can calculate the CO2 warming, which could theoretically occur at CO2 levels of 2000 and 3000 ppmv, using the same formula, but this would be a bit too “airy-fairy” for me, since there is no way we can get to these levels from burning fossil fuels.
This is the best answer I can give you, to the best of my knowledge.
Someone else might come up with a different answer. If so, he/she should show you the basis used to arrive at it, as I did above
Hope this helps.
Max
Water vapor is the most significant GHG. The atmosphere could hold far more water vapor than it does. No one knows why it doesn’t. No one cares why it doesn’t. Its self-regulation is simply something climatologists take for granted. And yet, CO2 measured in ppm is important? Of course: it’s politically important; and that’s its only importance. Atmospheric CO2 has been far more plentiful over the geological record of the Earth. Dr. Will Happer testified before the U.S. Senate that, “the planet is currently starved of CO2, and has been so starved for several million years.”
-Stephen Segrest | April 5, 2014 at 7:15 pm |
Another way to ask my question regarding GHE theory. Based on the current understanding of science, how would a Skeptic answer the question? At levels of 1,000 ppm, what would be your best guess on resulting temperatures: (A) a very significant increase; or (B) we have no idea what, if any, increase there would be.
If a 1 C increase is regarded as a very significant increase, then (A),
but also (B).
What’s fairly certain is that it would probably take more than a century before global CO2 levels reached 1000 ppm.
If 1000 ppm were a bus stop, we’d be walking towards it, but it’s 10,000 miles away. Since we started measuring global CO2, it has gone up less than 100 ppm, and it should take more than a couple of decades before any known process could cause it to rise to 500 ppm.
And if CO2 were to rise to 500 ppm, it’s unlikely we’d get a 0.5 C increase in temperature.
But let’s assume the unlikely chance that in 20 years global temperature rises by 0.5 C and the CO2 level reaches 500 ppm. So by 2034 global CO2 would be 500 ppm and we’d have had an average increase in temperature of 0.25 C per decade, or 0.025 C per year, and in terms of CO2, more than a 5 ppm increase per year.
Within the first 10 years, everyone would be saying, “Gosh, the IPCC models look like they might be right.” And at 20 years, everyone would be saying, “Wow, looks like some of the IPCC models could have been underestimating global temperature.”
But what would be even more astonishing would be the rise in global CO2.
It would be like, “holy sh*t, what happened starting from 2014?”
But though alarming, this would have little effect upon anything: polar bears don’t die, sea level doesn’t rise in any significant manner, etc.
Just as the more than 0.5 C rise in temperature from 1850 to the present did not have any adverse effects.
And such an increase in CO2 would increase global crop production and further the greening of the world. Which is good, considering most predict human population will peak around 2050. And having more productive cropland allows more area for other things, like forests.
The point is that it would be alarming in terms of what could happen in the future, rather than as a result of anything up to 2034.
It seems to me the good news would be that by 2034, everyone would be aware of how stupid wind mills and solar farms are.
So we aren’t going to get a political stampede to make more wind mills.
So we can pat ourselves on the back for testing those ideas early, when it did not matter. That was very clever.
So with such distractions eliminated, it’s possible we might decide to take effective measures to lower global CO2.
First we’d look at China, which would probably be emitting 3 or 4 times more CO2 than any other country. China is now burning 4 times more coal than any other country, and maybe by 2034 it’s burning 10 times more coal than any other country. Or perhaps it has already depleted all its available coal.
But in any case, with the situation as described, we could make a pretty good case to demand that China reduce its coal use [assuming it hasn’t already reduced its coal use because: A: it’s bankrupting the country; B: they don’t have much left; C: they are a great nation, full of geniuses].
Now, China seems to be presently focused on relying more on nuclear energy, but if they are successful at doing this, it would be even harder to explain why we would have such a huge unexpected rise in global CO2.
So it seems, if we had such a wild increase, that China’s ever-increasing CO2 should be part of the reason. So a reasonable guess as to why this occurred could be that China tried and mostly failed to develop more nuclear energy. Maybe they built lousy nuclear reactors which subsequently killed a lot of people, so they were even dumber than the Soviets.
If so, some nation will have to re-train the Chinese so they don’t make unsafe
nuclear reactors.
In addition to making more nuclear reactors, there are other proven ways to reduce CO2 emissions, such as using more natural gas for electrical energy
production and for transportation needs.
So the short-term solution is much more fracking for natural gas.
The longer-term solution is mining the ocean for methane hydrates, which Japan and the US are currently trying to get beyond the experimental stage. That is around the stage fracking was at about 20 years ago. So within 20 years one could perhaps be commercially mining natural gas from the ocean in many parts of the world. If not, then more effort must be put into getting it to operational mining.
We should also stop the burning of wood for electrical production; burning wood is worse than burning most coal. And instead of burning wood we could bury it, and bury it to use as lumber in the future [or burn the wood as fuel, assuming CO2 levels somehow decrease as inexplicably as they increased]. So in the same way we have a national strategic oil reserve, we could have strategic
wood; and who knows, maybe we’ll have battleships made of wood in the future :)
Giving an alternative (more mainstream) view from manacker.
Effect of 1000 ppm: Using a 3 C sensitivity you get 5.5 C warming relative to 280 ppm, minus the 0.8 C we already have which means 4.7 C to go.
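Jim D’s figure follows from the same logarithmic formula used in manacker’s comment; here is a quick check (his assumed 3 C per doubling and 280 ppm pre-industrial baseline, not settled values):

```python
import math

# 3 C per doubling, warming at 1000 ppm relative to pre-industrial 280 ppm
total = 3.0 * math.log(1000 / 280) / math.log(2)
print(round(total, 1))        # ~5.5 C relative to 280 ppm
print(round(total - 0.8, 1))  # ~4.7 C to go, after the 0.8 C already seen
```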
Also manacker is convinced all fossil fuels will run out before 1000 ppm, when other estimates say it could reach twice that by accounting for new-found undeveloped resources, and new technologies for recovering them. We saw methane clathrates mentioned above, for example. This would be another of Hansen’s ‘game over’ resources if we tap it. The upper limit of fossil fuels is increasing almost as fast as we use them due to discoveries and technology.
Bottom line: 1000 ppm is where you don’t want to be. Stabilizing at 500 ppm is barely acceptable and leaves 75% of known fossil fuels in the ground.
Max and Jim D — Thank you for taking the time.
Pekka Pirilä
Tolstoy wrote
Happy families are all alike; every unhappy family is unhappy in its own way.
Similarly, every skeptic is a skeptic in his own way.
The science relating CO2 change to climate change is full of holes. Each skeptic directs attention to his or her favorite holes.
The ‘end of climate exceptionalism’ presumes that the Left (e.g., the EPA) will not continue to portray CO2 as a poison.
Wagathon
Don’t count on it. These guys have their own agenda (and it is “kill King Coal”, while slowing down oil and gas exploration/development as well).
Max
What! Kill King Coal? All that stored fossil sunshine. Oh what folly
to kill the golden goose that brought prosperity to human kind….
Enough to make jolly ol’ King Cole feel melancholy.
It has a gloss like crow feathers, an aroma like the best Lapsang Souchong…and it enables and powers whole civilisations. It is coal. It should only be used in massive and spanking-modern central facilities, because it is too powerful for anything less and too good to allow the waste of a single lump in an ageing or inefficient plant. Whatever eventually replaces coal will be pretty terrific, and certainly won’t be as luscious. That Belinda chick has got it right again.
High quality black will never peak because there is so much of it, centuries of supply just in NSW. And don’t worry about phase-out of coal. You can phase it out of your backyard, but someone somewhere will burn it – to make all that stuff you like to use and buy, even those ludicrous wind turbines.
Psst. You do like to own and use a lot of stuff, don’t you? No need to answer! Well, after you win your carbon neutral war with China, Japan, Korea, India etc to force them to stop burning coal you’ll still want own and use piles of stuff (in air-con comfort). Got your nukes online? Done enough terra-forming so every region is shaped like Norway or BC for hydro? Thought about all that?
You can play all you like at low-footprint lifestyles, minimalism, localism etc…but we’re not talking about expensive middle class hobbies here. After Asia, there’s just the pixies at the bottom of the garden left to manufacture all those piles of stuff you want to own and use. And it’s not just the hordes of aspirationals, the serfs thronging the shopping malls. It’s you and I who want to own and use piles of stuff, albeit with a measure of disdain and posh green ritual in some cases. You and I, all day every day, sit comfortably atop a civilisation enabled by coal.
Belinda
“Ol’ King Coal” is still waiting for his pipe (and his bowl), but he’s up to his eyeballs in fiddlers – at last count (according to concertmaster, Pachauri) there were 2,500 guys fiddling with the data.
A consensus cacophony of meaningless model outputs.
Max
JC says: “For CO2 mitigation to make sense given the uncertainties in sea level and acidification impacts, CO2 mitigation would need to benefit the whole host of potential adverse impacts from extreme weather events, public health, species extinction, etc.”
But I do not understand her:
– If climate sensitivity is not clearly in the range 1.5 to 4.5 K, then why is CO2 mitigation needed? And if climate sensitivity is clearly in the range 1.5 to 4.5 K, the IPCC needs to justify it.
– Next, if extreme weather has nothing to do with climate change, why do we need to worry (in terms of CO2 mitigation)? And if extreme weather is clearly a consequence of climate change, then the IPCC needs to justify it.
– The same for species extinction, etc.; except for
– that reference to public health. This is the only one of JC’s issues that makes sense to me. If we mitigate CO2, we reduce contamination, and we gain in public health; but could anyone figure out how to improve public health damaged by contamination without necessarily reducing CO2 emissions? Something like… living in the countryside, wearing masks, using electric cars… well, I have to leave you now; it seems that there are a lot of salesmen knocking at the door.
“Defending low-lying cities against a rising sea level is difficult and expensive”
Wrong.
Back in the 90s the EPA did a study on adaptation to a 1 meter sea level rise.
The estimated cost was something around $400B.
Spread that over 100 years. Let the market actually price the risk of living
in areas below 1 m ASL.
http://www.livescience.com/18997-population-coastal-areas-infographic.html
According to Nicholls, Tokyo has already dealt with rapid, greater than one-meter subsidence without noticeable strain (though at actual cost). The idea that a few extra millimeters a year spread over centuries will be a big deal is risible.
It is boring that this is so scientifically sedate, when what is at stake is simply human greed. See the book Windfall by McKenzie Funk.
An overview of the content here: http://bloombergview.com/articles/2014-02-24/profit-from-global-warming-or-get-left-behind
… another hypocritical anti-business diatribe.
I have made a list of recent articles on the realistic adaptation argument, including the Economist, Lilico and Atlantic ones and a lot of others. It seems to be an argument that is gaining strength.
“First, adaptation measures are less politicized than mitigation measures. People may not agree on the science of climate change, but uncertainty about the future is no excuse for failing to prepare for the worst. ”
Beware Trojans bearing gifts. This is a beginning, but it is also a neat little sleight of hand. Take for example the storm Sandy. The flooding in Manhattan and elsewhere was a failure of preparedness. Full stop. It had nothing to do with AGW- storms bigger than Sandy with surges bigger than Sandy have been a known likelihood long before the AGW bandwagon got rolling.
So this is about blame and, ultimately, a revenue source. Suddenly, the failure to prepare for an entirely predictable weather event is not at all the fault of local politicians whose job it was to prepare. Suddenly, the idea of a storm in New York is a brand new concept, brought on entirely by Exxon and those dastardly Republicans who refused to stop the weather by signing Kyoto. The cost of preparing for the predictable? Not something that anyone should have budgeted for over decades- (heaven forbid a real responsibility intrude on the urgent desire to spend our time policing soft drink cup size and managing income redistribution!).
No, now that we’re safely allowed to assume Exxon and your local power company controls the weather, suddenly we have this brave new middle-class tax that will (finally!) allow government to focus on the stuff it should have been focusing on for decades, but ignored in order to pursue the joys of handing out other people’s money.
Bunk
Adapt to climate? Yes, of course, man’s been doing that since he first walked on two feet. Suggest this is a brand new concept required by fossil fuels that is so unexpected that it needs new tax streams? No thanks.
Jeffn,
Exactly. This is just the latest “reframing” of the progressive CAGW agenda in reaction to the “pause” in reported temperatures.
The IPCC was formed as a political organization to provide a scientific gloss to the policy of governments assuming control of the energy sector. Progressive politicians are still avidly pursuing this agenda in virtually all developed western countries.
But the IPCC is anticipating its masters’ needs. The pause has put a crimp in the quality of the PR the IPCC can put out. If it continues, it will become damn near impossible to continue preaching that thermageddon is imminent.
So what happens to the research budgets, the NGO funding, and the imposition of ever larger “energy” taxes on the poor stupid voters?
The answer – adaptation.
Government is the hammer, the only tool in the progressive tool box. Adaptation is just another way of turning climate/weather into a nail. We may not be able to take control of the energy economy to decarbonize the planet, but maybe we can still keep control to “fund” adaptation.
Anyone who thinks the AR5 WGII report is a change in the goals or attitude of the IPCC is dreaming. It’s still a mouthpiece for progressive politicians, and is just, perhaps, in the process of changing tactics.
The primary danger is not taxation. It is government control of 80% of everything: Where you live, where you work, if you work, if you live.
CAA, ESA, IPCC etc.
“Beware of false prophets…” Mt 7:15
Well said and good points, but much of adaptation- like building flood control – is a proper function of government. Most think it is something the government is doing with those big budgets we fund. It’s when the stuff hits the fan that we learn they didn’t – they blew the money on pet projects and special interests or just handed it out like candy.
This is why snow removal stories resonate so much: “we gave you a giant sum of money, on paper you have allocated it to public works for snow removal, why aren’t you removing the snow? No, I wouldn’t rather talk about the opening of your latest combination arts collective and needle exchange. Remove the snow!”
Adaptation to anything climate change will throw at us over the next 50 years should have been in the government’s capital budget for the last 75 years and should already be funded.
Hasn’t CO2 been significantly higher during the evolution of most ocean life? I don’t find it plausible that acidification could be strong enough to be a cost; it is much more likely to be a benefit.
Sure, as long as you do it gradually.
A 0-60 mph time of four seconds can be exhilarating. Four milliseconds however would likely be fatal.
MIT researchers have very recently implicated the genus methanosarcina as the likely cause of the Permian-Triassic extinction event. Of the five biggest mass extinctions to date this is the biggest, with some 90% of all life on Earth going extinct.
Supposedly methanosarcina exploited nickel to pump massive amounts of methane into the atmosphere. Quoting the above UK article, “The rate at which carbon dioxide would have been released was on a par with the modern emissions caused by the burning of fossil fuels, but they would have gone on for several thousand years.”
Projecting forwards from the last several thousand years, whether the genus homo has the legs of methanosarcina is a nice question. Having the legs of Tina Turner may be good for only a century.
Judith Curry,
This is the second time I have encountered Andrew Lilico. First Faustino and now you. The economics of climate change seem to be wafting under the locked meeting door and, like a vapor, permeating the room with a distinctly different aroma than what was emanating from the cloistered multitude.
In particular, I think that Lilico has articulated the core ideology which has prompted such push-back from the skeptics on mitigation and the role of the UN; namely, “…placing authoritarian control into the hands of global agencies.” To do mitigation “right”, nations need to yield sovereignty, allowing enforcement from central governance. Ain’t gonna happen. Bad idea from the gitgo. Lots of examples where such systems go awry.
Thinking globally and acting locally. From The Atlantic quote:
“Second, preparing for the worst actually presents major opportunities for the private sector and local governments. In its report this week, the IPCC is indeed calling for action—but not in the form of grand international declarations or promises. “Among the many actors and roles associated with successful adaptation, the evidence increasingly suggests two to be critical to progress; namely those associated with local government and those with the private sector,”
To me this means, changing the conversation at the township level. Either influencing or electing township supervisors who will alter their focus from developing rules and regulations on where to locate 500 foot windmills (where the wind is intermittent anyway) to establishing urban service districts to reduce urban sprawl, extending setbacks from wetlands and flood plains, development designed for a walkable community, partnering with adjacent communities for transportation corridors, etc etc etc.
There are ways to be successful, grandiloquent isn’t one of them. CO2 mitigation is grandiloquent.
One of the articles considers the naturalness and the omnipresence of Climate Change.
If you want an answer to that question that makes all other answers to that question unnecessary, PLEASE, watch this video: http://www.bradshawfoundation.com/journey/
It is not “AN” answer to the question.
It is “THE” answer.
Every other answer to the question of the presence of climate change becomes unnecessary. This is the “oil” you can stop “drilling”.
Cheers,
A.
Think the IPCC, the Guardian and the Economist are squealing uncle? No way. You just have to get with the new thought-compartmentalisation, sillies.
And I’m sure we’ll be able to move between categories. Surely we’ll still have our heat-concealing oceans (vinegar flavour), sea level rises straight out of a Roland Emmerich movie, stalling Gulf Streams, rogue polar vortices…all the good old stuff. Alarmism won’t have to leave the building; there’ll just be a new floor added for Resilience. No exclusivism here. These categories are symbiotic, like all really cool stuff. You can have it all – and have taxes for it all. (Sorry, I didn’t mean to say “taxes”. I meant “market-driven solutions”, of course.)
So we’ll now have a money-gobbling adaptation industry to go with our money-gobbling CC industry. Studies will show how urgent it all is, the Guardian will quote the studies! A warning, though: many more committees and discussions will be needed…and no mucking about with silly video-conferencing. The Boeings are purring on the runways. You have your duty.
So assumptions about a still largely unknown and fantastically complex subject (climate, duh) will now be matched by simplistic assumptions about what seed, engineering, developments etc are needed as a response to the initial assumptions. (Golly, Monsanto, we were wondering if you’d like to develop drought-resistant strains? Do you do that sort of thing? We’ll pay, of course.)
It’s Resilience time, it’s White Elephant time.
Any adults left? Hello? Adults?
Yes mosomoso.
Down with those money gobbling adaptation industries out there.
Begone …UN, IPCC, Environment-Advisory-ter-Guvuhmint-
Swollen-Bureaucracies … that we may become genuinely
adaptable come wott may, globull warming or more likely
in the long run, cooling, we jest can’t say.
Serf on the littoral who may or may not classify as an adult.
Well said, as usual, Mosomoso. Climate change, anthropogenic or otherwise, is a problem for the world of a similar order to that of hip dysplasia in overbred German Shepherds, and its study should be funded accordingly.
We have been patient for decades and now that the meds have kicked in, we can see some promise that our patience and our labors will bear fruit in discussions of climate and climate science. The very fact that Alarmists have trumpeted “climate exceptionalism” for so long bears witness to their “circle the wagons” mentality.
Jim D. – You write
“Mitigation is a top-down approach that helps with many problems and reduces many risks at the same time, and you don’t have to prioritize them. However, mitigation is a simple idea (minimize integrated emissions).”
Unfortunately, mitigation where it is only meant to mean “minimize integrated emissions” is much too narrow an approach to reduce the wide spectrum of risks.
Also, in the bottom-up approach, you can deal with multiple threats by optimizing how resources are spent to improve resiliency.
For a tabular comparison of the top-down and bottom-up approaches, please see Table 1 in http://pielkeclimatesci.files.wordpress.com/2013/05/b-18preface.pdf
Roger P.
Hi Roger, that is an interesting table, but I think it lacks the cross-arrows that link climate change to the other factors which you list on the right. This is why I don’t put CO2 increase at the same level as something like sea-level rise or other stresses to agriculture, etc. By mitigating climate change, you also reduce the other risks that are amplified by it. As I mentioned earlier, climate change itself is a positive feedback to many of the risks or vulnerabilities that are growing anyway. Mitigation has a broadly beneficial trickle-down effect into many areas of risk, in my view. It is not selecting either top-down or bottom-up, because both are valuable.
Rising ocean levels? PAH! Where? The predictions were way off. The only accepted measurement I can find is 10 cm in a century.
There is a finite amount of water on the planet. Water floats above the crust, period.
AND it is very important to note: Where heaviest, the crust SINKS to accommodate MORE water when it exists. The Mariana Trench is the deepest part of the ocean BECAUSE the crust is heaviest there; the disk of heavy rock under the Great Lakes SANK when a mile of ice formed above it. It is STILL bouncing back, 11,000 years later, slowly rising, draining the Great Lakes, and increasing the height of the Niagara Escarpment (the edge of the disk of rock).
Melting sea ice has little impact on sea level, just as ice melting in a glass of juice does not. On the earth, the tendency for excess water will be to accumulate around the middle of the planet. But it all comes from somewhere, and will tend to level itself out.
A restriction in rotational flow like between south america and antarctica can raise an area of ocean.
A restriction in north south flow through the Bering strait can cause temperature shifts, as in the last ice age.
But the amount of water is fixed and finite. Its place in life, above the crust and below the atmosphere, is fixed and finite. The balancing across the surface is fixed, but not finite. The impacts on it are much greater than those isolated as being caused by humanity.
When the planet naturally varies back to cooler temperatures, all of the melting is gradually reversed. It is inescapable.
Sea level consideration is a red herring, distraction, inevitability, not related to humanity.
IMHO.
?;-)
A.
p.s. The rising rock under the Great Lakes affects our climate TODAY. It is an “aftershock” of a significant climate event 11,000 years ago. That shows the kind of time-line we have to deal in. The effects of 11,000 years ago have not finished making themselves felt today!
p.p.s. Ocean acidification. PAH again. How many here are old enough to remember the potential consequences of “acid rain”? Did we run any parades when WE prevented that problem? Or was it overblown? The bare forests that were pictured as representative of what was to become of all of Canada’s forests were not actually indicative of a real future, and when that future never came, we ignored that it was a fear we used to have. If our actions had prevented that acid rain, I can only imagine that someone somewhere would have measured that, been proud of the movement they initiated, and there would be PARADES! Therefore we did not prevent acid rain; therefore it was false. Now talk to me again about “ocean acidification”. And then tell me that the monsters in my closet, the boogie man, cooties, and all the other fables of my youth are really true.
Judith Curry says:
??? WTH
Adaptation is not “preparing for the worst”.
Adaptation is “dealing with what happens.”
There is no important functional difference between mitigation – “preventing the worst” – and this falsely defined adaptation – “preparing for the worst”.
Same scary stories based on politicized ‘science’ and BS models that do not verify, to define “the worst” for which we must now prepare.
Same call for increase in public sector control over the economy – including in many cases the very same policy demands… hmmmm … how to pay for all of this ‘adaptation’ to imaginary future conditions? Carbon tax? But of course!
Same excrement, different epoch.
Yes, but it may also be preparing for what’s expected to happen, or what’s possible to happen.
Nothing says that adaptation is limited to reacting after the fact.
Be sure to include asteroid strikes and GRBs.
Hey, an easy one for a change! What is likely to happen and possible to happen is what’s expected to happen and what is happening now: extreme heat, extreme cold, floods, droughts, storms.
The trick is in knowing how, when and where each will occur. (Note that I said “knowing”, not predicting or publishing.) Don’t tell me Africa and Oz will experience drought, or the US eastern seaboard will cop a hurricane – kind of already know that from a few centuries of historical records. More information or just your silence, either is okay.
I really wonder which is worse: paying a vast community of experts to state what is fanciful, or paying them to state what is obvious.
@mosomoso
“paying a vast community of experts to state what is fanciful, or paying them to state what is obvious”
They do both simultaneously with consummate ease … got to be a bargain, two for the same price :)
Pekka – “[Adaptation] may also be preparing for what’s expected to happen, or what’s possible to happen.” I differ a little. Too many things can possibly happen. We should only deal with threats that are likely to happen.
It might get me a lot of beating to comment on this.
“poor countries suffer the most damage from climate change”
I doubt that, because poor countries lose less value in disasters, and disasters are somehow evenly distributed relative to the possibility of being hit by one. People adapt.
You could ask why houses in tornado alley are built of cardboard instead of solid concrete. It is most likely the cheapest in the long run.
Hi Jim D. – You write
“By mitigating climate change, you also reduce the other risks that are amplified by it.”
By mitigating “climate change” do you mean specifically mitigating “emissions of GHGs'”?
If so, please be explicit, as the term “climate change” has not been clearly defined by the IPCC and others in this context.
Climate, however, is always changing, even when there was no human intervention. The “change” part is redundant.
I recommend we adopt mitigation/adaptation approaches that increase resiliency to climate, not just to any “change” and not just due to human involvement. This is why the bottom-up approach is more inclusive in my view. A top-down approach has not been working.
Roger Sr.
Roger, yes, to me mitigation is explicitly the stabilization of the atmosphere’s CO2 level significantly below where it would be with no mitigation from burning all available fossil fuels, which could make a difference of a factor of two in total CO2, between effective mitigation and burn-it-all policies.
Example of adaptation already happening.
I’m not so sure that SLR will be as big of a deal here in FL as many believe. Damage costs may actually decrease per capita in many areas with time.
Why?
Many current houses near the coast >50 years old are built closer to sea level. These guys are very prone to destruction from storm surge. Newer building codes have these structures built up 10 or more feet depending on 100 year storm surge estimates. They are built on stilts.
As we go through the next 100 years most of these old vulnerable structures will disappear leaving a more resilient infrastructure in place. It’s already been happening for over a decade.
If we get 1 m of SLR then these newer structures are still protected pretty well. As SLR occurs, the setbacks are moved and the codes are updated as time goes by.
Bottom line: Even though storm surges later this century may be up to 1m higher, they will already be met with a much stronger infrastructure.
People who live on the coast should have to assume the risk of doing so, whether it be increased insurance costs or higher property taxes to rebuild their washed-away roads on occasion.
“While wealthy countries account for most greenhouse-gas emissions, poor countries suffer the most damage from climate change.”
JAXA satellite data show us that ‘wealthy’ countries’ farms and countryside consume more CO2 than those ‘wealthy’ countries produce. The first world emits more CO2 than the third world, but consumes more CO2 than it produces, leaving it a net consumer of CO2 and the third world a net producer of CO2.
So if CO2 is causing damaging climate change, then the third world has only itself to blame and should pay damages to us in the first world.
Farms and countryside consume CO2 only when they store the carbon for a long time. That does not take place at a significant level, so there is practically no net consumption of CO2.
Forests that increase their biomass do consume CO2, but forests that lose as much biomass as they grow have no net influence.
@ Pekka
http://chiefio.wordpress.com/2011/10/31/japanese-satellites-say-3rd-world-owes-co2-reparations-to-the-west/
Amazing that he can make such a total error – or is it?
And with big uncertainty bars NOAA sort of agrees.
http://joannenova.com.au/2011/11/co2-emitted-by-the-poor-nations-and-absorbed-by-the-rich-oh-the-irony-and-this-truth-must-not-be-spoken/
A lot of effort has been put into improving the accuracy of data on those effects:
http://www.ipcc-nggip.iges.or.jp/public/gpglulucf/gpglulucf.html
This is a separate task also at IPCC. I happen to know a few people involved in these studies since 1990s. There are still uncertainties, but really small in comparison with that claim that contradicts simple logic.
It is probably because this is using solar wavelengths, so it only sees what is happening during the day, when vegetation absorbs CO2, while it emits CO2 at night. Maybe they didn’t account for that. Net over an annual cycle, plants do nothing to CO2, but they absorb in the growing season, as is seen in the annual cycles.
The results are actually quite interesting. The usual superficial responses from the usual suspects notwithstanding.
http://www.jaxa.jp/press/2012/12/20121205_ibuki_e.html
What we see is sequestration in high northern latitudes in the NH summer. Not entirely unexpected – this is where global stores of organic carbon are concentrated.
Natural flux to the atmosphere increases with temperature. As most of the temperature increase was quite natural, it begs the question.
J Martin,
Australia’s oceans absorb far more CO2 emissions than we produce. We are the world’s good guys. :)
The big emitters could pay us for the share of their emissions we abate for them. And we could take their nuclear waste too – both for a big fee of course.
Saying — “While wealthy countries account for most greenhouse-gas emissions, poor countries suffer the most damage from climate change” — is like saying…
While tall people see further over the horizon, short people suffer the most from falling off high heels.
Hi Jim D – Thank you for clarifying. The term “mitigation”, as you have defined it, refers to only a portion of the human role in climate (and not the natural climate at all). This is not mitigation of climate “change”, but mitigation of emissions.
The term “climate change” should not be implicitly (or explicitly) used by the IPCC and others when they are talking only about GHG emissions. The use of the terminology “climate change” for this limited reason, but meaning all of climate, is misleading policymakers.
Climate is much more than changes in the climate system from these GHG emissions. There are risks even if the climate statistics did not change. There are risks even if CO2 emissions could be ended immediately.
We have urged this broadening of the climate perspective and the human role in it, for instance in our articles:
i) Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union. http://pielkeclimatesci.wordpress.com/files/2009/12/r-354.pdf
[see the three hypotheses in this article]
ii) McAlpine, C.A., J.G. Ryan, L. Seabrook, S. Thomas, P.J. Dargusch, J.I. Syktus, R.A. Pielke Sr., A.E. Etter, P.M. Fearnside, and W.F. Laurance, 2010: More than CO2: A broader picture for managing climate change and variability to avoid ecosystem collapse. Current Opinion in Environmental Sustainability, 2:334-336, DOI: 10.1016/j.cosust.2010.10.001 http://pielkeclimatesci.wordpress.com/files/2010/12/r-355.pdf
The end of climate exceptionalism is needed, and I am really pleased Judy has posted on this.
Roger Sr.
Yes, it is my (mainstream) view that by far the biggest driver of the clearly seen climate change in the last century or two has been the nearly 2 W/m2 from increased CO2, and that this makes the effects of further emissions very predictable as they grow steadily towards providing 5–6 W/m2 of forcing by 2100.
Still a matter of political science not physical science:
There may be greater risks if CO2 is mitigated. We simply don’t know the sign (if there is one).
None of the quotes or Judith’s perspective mention the major benefit of GHG emissions – reducing the risk of, and/or delaying the time to onset of, the next sudden global cooling event. Another possible benefit of GHG emissions is a milder and more tranquil climate at higher average global temperatures [the rate of change and amplitude of climate oscillations has been much greater when the planet is at cold temperatures than when it is at warmer temperatures].
Why did no one mention these benefits of GHG emissions?
Indeed, if we are to believe the planet is super sensitive to emissions, then the risk of glacial inception must decrease with increasing CO2.
However, to truly understand the psychology of the warmists, one must acknowledge that since humans have evolved over dozens of interglacial cycles, it only logically follows that “ice ages” are good, that we are adapted to such cold climate conditions, and that we should not interrupt the natural cycles that have led to our evolution.
http://xanonymousblog.wordpress.com/2014/04/04/glacial-inception/
Someone did–e.g.,
~Richard Tol
Judy, I don’t know if you saw this from Richard Tol in The Conversation on Wednesday:
This would be in line with the Twitter comments. The whole article is worthwhile.
With Emanuel of MIT virtually admitting that climate change studies were based on 19th-century “pencil and paper” studies, one has to wonder whether anything the IPCC produces is worth reading. For example, in the middle of the 19th century, photons were unknown, nor was it understood how gases absorb heat with both kinetic and vibrational energy. It is certainly a wonder that so many scientists accept these ancient beliefs. The 1940 singularity and the on/off nature of climate change cannot be understood with 19th-century technology. Obviously, whether governments need to adopt adaptation or mitigation strategies depends on the accuracy of 19th-century science!
We can expect Eurocommunism will be as effective in adopting effective adaptation or mitigation strategies as the USSR was effective in competing with the USA.
Hi Jim D – If it were so simple, we would not be having the “pause” that is currently seen.
As to a mainstream view, you are correct that the IPCC view is as you frame it. However, this excessively narrow view is changing as exemplified in the WG2 report. There are also other examples that I would recommend you look at; e.g.
Kabat, P., Claussen, M., Dirmeyer, P.A., J.H.C. Gash, L. Bravo de Guenni, M. Meybeck, R.A. Pielke Sr., C.J. Vorosmarty, R.W.A. Hutjes, and S. Lutkemeier, Editors, 2004: Vegetation, water, humans and the climate: A new perspective on an interactive system. Springer, Berlin, Global Change – The IGBP Series, 566 pp. http://www.springer.com/earth+sciences/meteorology/book/978-3-540-42400-0
See Section E on Vulnerability.
National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp.
http://www.nap.edu/openbook/0309095069/html/
The latter NRC report includes the following text (and I recommend you read the entire Executive Summary):
“…the traditional global mean TOA radiative forcing concept has some important limitations, which have come increasingly to light over the past decade. The concept is inadequate for some forcing agents, such as absorbing aerosols and land-use changes, that may have regional climate impacts much greater than would be predicted from TOA radiative forcing. Also, it diagnoses only one measure of climate change—global mean surface temperature response—while offering little information on regional climate change or precipitation.”
The IPCC has almost completely ignored these assessment reports, but their message is becoming more visible. Mitigation, as you define it, even if successfully accomplished, is not going to eliminate all of the risks from climate that society and the environment are facing. The IPCC (and your view) is too conservative.
Roger Sr.
The pause implies that variability in the historical record has been missed, that rates and duration of warming are not outside our ability to deal with, and that attribution for feedbacks is wrong.
Roger, yes, WG2 has shown that the IPCC is well aware that the problems that exist are not solely from anthropogenically forced climate change, and that a variety of actions are needed to solve all of the world’s problems. Adaptation is needed even with mitigation, so the IPCC has to emphasize that its view is not one-track. WG3 will emphasize mitigation, because regardless of the pause, the warming rate since 1950 is the expected rate, being consistent with the main forcings of CO2, aerosols and other GHGs on the global climate. The CO2 forcing will double to triple by 2100, so even with adaptation continuing, mitigation is needed so that the adaptation isn’t quickly undone by faster changes in climate. I therefore see mitigation as an investment that is worth the effort in the long run by saving future adaptation. I realize that some are not yet convinced of the rising tide of CO2 effects because it is still in the background today, but that long view is where the mitigation needs are coming from.
the rising tide of CO2 effects
Given the present parlous and infant state of the models and science today, that could well be hundreds or even thousands of years away still.
The most exceptional aspect of our zeitgeist is the pseudo-scientific notion that man’s activity rules nature, rather than vice versa. Can hardly wait for nature to sweep away all such anthropogenic conceits.
The division or distinction between humans and nature is an artificial one or illusion. Humans are as much a part of nature as hurricanes or the sun.
It’s a shame that you find “illusory” the categorical difference between man’s ideas and palpable physical reality. A scientist you’ll never make!
When I observe the past as best I can, I see a correlation between a lack of adaptation and death. Further, humans eventually die due to the lack of ability to adapt to increasing inefficiencies in the components of the human body.
What to do, I hear you ask?
Whatever you think right and proper at the time, obviously. You may choose to listen to others, you may choose to hoe your own row, you might remove yourself from the rest of humanity in a fit of depression or pique.
Each of us looks at our surroundings and our future differently. Nature treats each of us differently, it appears. Some persevere through the US dust bowl of the thirties; some head off to Californiyay – to live on the San Andreas fault line. What could possibly go wrong?
Call me a fool – and assuredly some just can’t wait – but I choose to live in the tropics, close to the coast, pummelled from time to time by cyclones of varying intensity, with earthquakes ranging from subtle to terrifying thrown in at random, in case I become complacent.
Add in a highly variable annual rainfall, one road in and out, and a railway that cannot cope with extreme rain along its few thousand kilometers of track, and you might think that adaptation is difficult. Not at all.
Where else would I choose to live? I’m happy to stay where I chose to go years ago. Why change?
If you have the luxury of choosing where to live, you do the best you can. If you don’t, you have no choice but to adapt or die. Governments do their best, but the results from the dawn of civilisation have not been uniformly successful. Follow their advice if you wish, or even if you are forced to. It’s an example of adapt or die.
If you want to give your money to Warmists, go ahead. I’d rather spend mine on making my house a little stronger, and making sure I can keep the water off my head when it rains. It spoils my bouffant hairdo.
Live well and prosper,
Mike Flynn.
Mike, the problem is you can afford to fix your roof and many people can’t.
So you have to give more money to the government, so more government employees can afford to buy lap dances.
It’s the only known solution.
Some approaches might make us less safe. If you are a child or have an immune deficiency, a way to adapt is to practice pragmatism–e.g., you may want your hamburger cooked well. Or, you could move to a fascist country where the government requires all ground beef to be cooked well before it is sold. But, if you are poor the best adaptation strategy is to live in a rich country which means you wouldn’t want to live in a fascist country.
It would be soooo much easier if someone, just someone from the IPCC went to physics school and tried to understand thermodynamics.
There’s a thing called the Second Law of Thermodynamics (which hardly anyone seems to really understand) but in fact is not all that hard to grasp.
The law tells you things tend towards a state in which there are no unbalanced energy potentials. Got it? A molecule has energy – some of it in its kinetic energy (KE), some in gravitational potential energy (PE), plus some other energy which, for our purposes here, stays the same. But temperature is a measure only of kinetic energy.
Sooooo … if we have unbalanced energy potentials, then the sum of these two forms of energy (PE+KE) tends towards being homogeneous (uniform) at all altitudes in a planet’s troposphere. But PE varies with altitude. So there is a variation in KE, which means there is a thermal gradient (aka adiabatic lapse rate) – but that thermal gradient is really the state of thermodynamic equilibrium that the Second Law told you about. (This time look it up, and also the link therein to thermodynamic equilibrium.)
How are you going? Do you get it yet? Do you understand the significance? We don’t have to worry about carbon dioxide at all. All that “33 degrees of warming” has been done by gravity.
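For readers who want to check the standard formula this comment invokes, rather than its interpretation: the dry adiabatic lapse rate is conventionally Γ = g/c_p. A minimal sketch with textbook values (the figures are standard constants, not taken from the comment, and nothing here bears on the “33 degrees” claim):

```python
# Dry adiabatic lapse rate: Gamma = g / c_p,
# i.e. the temperature decrease per unit altitude for a dry air
# parcel rising adiabatically.
g = 9.81      # gravitational acceleration, m/s^2
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate_K_per_km = g / cp * 1000.0  # convert K/m -> K/km

print(round(lapse_rate_K_per_km, 2))  # 9.77
```

The observed environmental lapse rate (~6.5 K/km) is smaller, mainly because of moisture, which is one reason the dry value alone settles nothing.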
Laws of Thermodynamics. Back in the remote past, I heard the three laws expressed as:
1. You can’t win.
2. You can’t break even.
3. You can’t get out of the game.
(I see they have added a “zeroth”.)
The Cause of the Pause originates from thermodynamic Laws.
Remember that line.
“Anthropogenic sea level rise is an unknown fraction of total global sea level rise (some question that there is evidence of anthropogenic acceleration); more significantly, sea level rise from climate change is only a fraction of the total sea level rise in many vulnerable locations, with geologic subsidence and land use (especially ground water extraction) dominating in many locations.
For CO2 mitigation to make sense given the uncertainties in sea level and acidification impacts…… ” – JC
Quite a mixture of hand-waving and confusion here.
Steric sea-level rise is pretty well understood. And noting that other factors are involved (subsidence etc) is no cause for complacency – just the opposite. Regions with such factors will face a double-whammy with global sea-level rise compounding the problem, not ameliorating it.
And if Judith is thinking of the recent Bangladesh post when she refers to local factors “dominating” – the article she cited didn’t say that, and made it clear that AGW-related SLR would be even greater. But yes, very big problems – but no uncertainty monster making it all go away.
The main uncertainty around sea-level is whether we get very lucky with just 50cm or so, or if it goes into over 1m territory.
Adaptation is definitely required, but it’s already a bust in many places – there is no shortage of examples. The most glaring are places like Bangladesh, where mal-adaptation is the order of the day. But even in wealthy first-world countries (think Hurricane Katrina) hopeless mal-adaptation is widespread.
If we are mal-adapted to current realities, how will increased vulnerability rectify that? To say that it just will, is magical thinking.
Presto; they seem to be doing things to change the situation now because of the current climate.
http://www.telegraph.co.uk/earth/energy/solarpower/10744891/Energy-minister-vows-to-curb-the-spread-of-solar-farms.html
Sea levels are changing around the world. Some are going down, most are going up. This deformation of the world has been going on since time immemorial and it is a lot slower now than at times in geological history. Where I live, the ground is sinking 25mm a year. Fortunately we are 400m above sea level so it is not a worry. Remember how Charles Darwin identified the cause of coral atolls. They are millions of years old and guyots are the failures. If you build infrastructure on the current surface of an atoll, it will sink. If you build on an active river delta, it will flood.
People have to adapt, no matter what the cause of the sea level change. You can be like Canute’s advisors and think otherwise with anti carbon policies, or you can gradually start banning developments at risk. The latter is adaption. Most buildings have only 30-50 year life so an orderly retreat is possible if necessary.
Chris,
the life-span of individual buildings may make it seem reasonable (though many last longer, and infrastructure lifespans can be much longer), but taken altogether it’s a much greater problem, i.e., when you have large cities at or near the high-water mark.
Not sure how you even get to 50 cm over 100 years… it seems the data I see, with 2.5–3.0 mm/yr total rise, would make for 30 cm max.
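The arithmetic behind that back-of-envelope check can be sketched as follows (the rates are the ones quoted in the comment; the constant-rate linear extrapolation is the commenter’s assumption, not a forecast):

```python
# Linear extrapolation of sea-level rise from a constant rate.
def projected_rise_cm(rate_mm_per_yr, years):
    """Total rise in cm, assuming the rate stays constant."""
    return rate_mm_per_yr * years / 10.0  # 10 mm per cm

print(projected_rise_cm(2.5, 100))  # 25.0
print(projected_rise_cm(3.0, 100))  # 30.0
```

Getting to 50 cm or more thus requires the rate to accelerate well above 3 mm/yr, which is exactly the point in dispute.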
Michael
Why is it a problem to adapt in the 50 to 100 year timeframe? Cities hollow out, depopulate etc. By your rationale, Detroit is still a thriving metropolis. If people know they can’t get insurance for new structures, they will move. Any sensible government would have abandoned New Orleans long ago.
Michael,
Given that marine fossils are found 6000 meters below current guesstimated mean sea level and 6000 meters above guesstimated mean sea level, what leads you to believe you can tell the difference between the continents dropping and the sea level rising?
What about the numerous drowned cities? Were they due to excessive CO2?
It seems that the rest of the world’s population managed to survive somehow, without your advice. Likewise, regardless of your undoubted fervour, where is your proof that you can foresee the future better than I?
It is easy for me to predict floods, droughts, earthquakes, hurricanes, heat waves, cold snaps, tsunamis or even outbreaks of sanity amongst the Warmists. Can you do better? Of course you can’t!
Good luck with finding people to follow your advice on adaptation, once you figure out what it is. I’m sure you can convince them you’ll fight on – to the last drop of their blood, if necessary!
In the meantime, the rest of us will just have to adapt or die, won’t we?
Live well and prosper,
Mike Flynn.
“Any sensible government would have abandoned New Orleans long ago.” – Chris
That’s an excellent point, and yet New Orleans is still there.
That’s the problem with the wishful thinking around adaptation – it’s already a failure in many instances dealing with well-known, quantifiable risks, yet it’s going to be the saviour in an uncertain future full of even more risks.
Mike,
6000 m.
Think timeframes.
The main uncertainty is not whether we will get 50 or 100 cm. The main uncertainty is the spatial distribution of the rise. One of the biggest problems with sea level rise predictions is knowing where you will get 0 and where you will get 100 – not whether you will get 50 or 100. Suppose we know we will average 100; that doesn’t help you, because it’s more important to know where you will get 150 and where you will get 50.
This is especially true because the value of damage is also spatially heterogeneous. That is, if the Gold Coast sees 150 cm while unpopulated coastal areas see 50 cm, then the spatial dimension is even more critical than the vertical.
Where matters more than how much… up to a point.
And exactly what would mitigation have done?
Michael,
Are you a Warmist? I believe the Book of Warm gives instruction to the faithful on how to pack the maximum amount of patronising condescension into the fewest words, with the intent of being gratuitously offensive.
Unfortunately, in my case it’s wasted. I refuse to take offence at anything at all.
In any case, I had already thought about time frames, and a few other matters pertaining to sea level falls and rises relative to rising and falling tectonic plates. I appreciate you may not realise that the matter of sea levels is not straightforward, but Warmists tend to adhere like limpets to their preconceived notions even when they are demonstrably wrong.
For example, Warmists probably still believe that the Earth was created cold and has gradually warmed up. They probably believe that CO2 can increase the heat content of a body surrounded by said gas. They might even believe – and I know it sounds ridiculous – that the Earth is still warming as a result of increasing CO2 levels in the atmosphere!
So yes, your admonition to think time frames had already been considered. It appears I am several steps ahead of you, and have a better grip on reality. I realise you can’t help your obviously limited intellectual abilities, but I cannot provide any solutions.
Maybe you could start by learning a bit about Nature – that might be a reasonable first step. I hope it helps.
Live well and prosper,
Mike Flynn.
Mike,
If you had really considered timeframes seriously, there is no way you would have mentioned the completely irrelevant “marine fossils are found … 6000 meters above guesstimated mean sea level”.
All your long-winded bluster above does nothing but bear that out.
Michael,
Thank you for telling what I would have done if I had done something else.
I know what I thought and did. You don’t.
You might care to share the methods used for the last 150 year timeframe to establish vertical tectonic plate movements at tidal gauge station locations, so that accurate sea level variations can be measured.
If you can’t, others might justifiably think you are talking rubbish.
Live well and prosper,
Mike Flynn.
Mike,
Yes, no one knows about tectonic plate movement but you.
Maybe there’s a Nobel for you there somewhere?
Again, your timescales are way, way out.
Michael,
I take your point about time frames. I’ll be a little more specific.
During the decade 1900 – 1910, what was the variation in the global sea level?
During the same period what were the overall vertical variations for the individual tectonic plates, and upon which datum was this based?
During the same period, what vertical displacements were measured on the sea floors supporting the oceanic waters, and how were these measurements verified?
If you cannot provide answers to these simple questions in verifiable scientific terms, you might be adjudged by some to be a fool, a fraud, or purely delusional for claiming that sea levels have either risen or fallen due to global warming during the period 1900 – 1910.
I would be severely underwhelmed if I were offered a Nobel Prize for anything pertaining to man made climate change. This would place me in the same category as such luminaries as Al Gore, and the unsung heroes of the IPCC.
I do not aspire to become the Michael Mann of Climate Science.
Nominate me if you must. I will decline the offer. I hope you don’t mind.
Live well and prosper,
Mike Flynn.
Michael
There are a lot of tide gauge data out there, which go back to the early 19thC and some reconstructions that go back even further. I believe tony b has written a paper on historical tide data that go back several centuries.
From all these data we know that sea level has generally been increasing since we started emerging from the Little Ice Age.
This has not happened smoothly. Within the 20thC alone there have been decadal rates of increase of over +5 mm/year and other decades at −1 mm/year, with an average rate over the first half of +2.0 mm/year and the second half of +1.4 mm/year, for an annual average of around +1.7 mm/year.
Using a totally different method of measurement today, covering a totally different scope, the rate has been around +3 mm/year most recently.
Comparing this with the tide gauge record is fraught with some danger, but the current rate of SL rise appears to be well within the range seen over the past century.
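The averaging above can be reproduced with a quick sketch (the equal 50-year weighting is an assumption consistent with the comment’s first-half/second-half framing; the rates are the ones quoted):

```python
# Century-average sea-level rise from two half-century rates,
# weighted by their (equal) durations.
first_half = 2.0   # mm/yr, first half of the 20thC
second_half = 1.4  # mm/yr, second half of the 20thC

century_avg = (first_half * 50 + second_half * 50) / 100  # mm/yr
print(century_avg)  # 1.7

total_rise_cm = century_avg * 100 / 10.0  # implied total over the century
print(total_rise_cm)  # 17.0
```

That ~17 cm total for the 20th century is what makes the recent ~3 mm/yr satellite figure look high only relative to the century mean, not relative to earlier decadal swings.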
The Dutch have been adapting to increasing SL for centuries, and this works just fine. The same is true for Germany.
It should work for New Orleans as well, provided politicians there are as far-sighted as the Dutch.
Adaptation is the only proven solution to challenges resulting from rising SL.
Pretty simple, actually.
Max
Mike,
it’s amusing that you think you’ve discovered something novel.
But maybe you should check out the extensive literature on 20thC sea-level rise?
Archangel Michael so sad that you’re here
Powered by hatred
Ruled by fear
You direct the ‘skeptics’ to the science and they get all offended.
Hmmm.
Reblogged this on evemeredith.
“The question then becomes NOT what is causing climate change or how we can prevent it, but rather: How much resilience can we afford?”
The question should really be “You’re kidding, right?”
At 11,717 years since the end of the Younger Dryas, the Holocene is pretty much half a precession cycle old (presently we are at the 23 kyr part of the precession cycle, 11,500 being half). Seven of the last eight post-Mid-Pleistocene-Transition warmings to interglacial level (<~3.5‰ ¹⁸O/¹⁶O) have each lasted about half a precession cycle.
Which brings us to perhaps the most pertinent question ever asked by a hominid; what sort of weather/climate should one expect at a possible end extreme interglacial like the Holocene?
Boettger, et al (Quaternary International 207 [2009] 137–144) abstract it:
“In terrestrial records from Central and Eastern Europe the end of the Last Interglacial seems to be characterized by evident climatic and environmental instabilities recorded by geochemical and vegetation indicators. The transition (MIS 5e/5d) from the Last Interglacial (Eemian, Mikulino) to the Early Last Glacial (Early Weichselian, Early Valdai) is marked by at least two warming events as observed in geochemical data on the lake sediment profiles of Central (Gröbern, Neumark–Nord, Klinge) and of Eastern Europe (Ples). Results of palynological studies of all these sequences indicate simultaneously a strong increase of environmental oscillations during the very end of the Last Interglacial and the beginning of the Last Glaciation. This paper discusses possible correlations of these events between regions in Central and Eastern Europe. The pronounced climate and environment instability during the interglacial/glacial transition could be consistent with the assumption that it is about a natural phenomenon, characteristic for transitional stages. Taking into consideration that currently observed ‘‘human-induced’’ global warming coincides with the natural trend to cooling, the study of such transitional stages is important for understanding the underlying processes of the climate changes.”
Which means we might have a pretty significant anthropogenic signal to end extreme interglacial climate noise problem.
The projected anthropogenic signal from the IPCC’s Assessment Report 4 (AR4) (2007) Figure 10.33 from page 821 of Chapter 10, SRES marker series A1FI’s upper error bar, comes in at about +0.59 meters, about 2 feet, to 2099. To be prudent, we might round that up to +0.6 meters for the AGW “signal”.
This is the "worst case" unless someone has deciphered the equivalent in AR5 WG1. Substitute that worst case scenario here.
Ordinarily, one might be tempted to compare worst case to worst case first, but we will do it last. First we will compare the IPCC AR4 'worst case' scenario to the end of the last interglacial's "least case" score of +6.0 meters amsl:
http://www.uow.edu.au/content/groups/public/@web/@sci/@eesc/documents/doc/uow045009.pdf
Although this paper summarizes 12 global studies up to +45 meters amsl, if we go with just +6.0 meters amsl our prognosticated sea level rise of +0.6 meters amsl by 2099 comes in at a paltry 10% of the normal natural end extreme interglacial climate noise!
But it might just be worse than we thought. Sea level may have gone up as much as +52 meters amsl:
http://lin.irk.ru/pdf/6696.pdf
The very best we think we can do, climate change wise, is 1 to almost 2 orders of magnitude less than what most recently occurred anyway!
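The signal-to-noise comparison in this comment reduces to simple ratios; a sketch using the figures quoted above (0.6 m projected signal, and the 6.0 m “least case” and 52 m upper estimate as the natural-noise bounds; whether those bounds are the right comparison is the commenter’s premise, not established here):

```python
import math

# Projected anthropogenic sea-level signal vs. end-interglacial
# highstand figures quoted in the comment.
signal_m = 0.6        # rounded AR4 A1FI upper bound to 2099
noise_least_m = 6.0   # "least case" end-Eemian highstand
noise_most_m = 52.0   # upper estimate quoted

print(round(signal_m / noise_least_m, 3))              # 0.1  (i.e. 10%)
print(round(math.log10(noise_least_m / signal_m), 2))  # 1.0  order of magnitude
print(round(math.log10(noise_most_m / signal_m), 2))   # 1.94 orders of magnitude
```

Which is where the “1 to almost 2 orders of magnitude” figure comes from.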
Contracts for how to detect anomalous climate 'signal' 1 to 2 orders of magnitude below the 'noise' will not be issued in 2014 by the DOE or EPA.
So much for how much we can afford…….
I remember in 2012, riding my bike I saw road-crossing snakes and turtles in late March/early April. This year, not happening. Can anyone use reptile migrations as a proxy?
Wonder why McIntyre, though a member of the resource extraction fraternity, only applies “Hide the Decline” to climate? Because he knows all too well how effective creative accounting of the numbers can be in deceiving the public.
What truly staggering drivel. If oil was seen to be running out, oil shares would rise. So if McIntyre was indeed an oil puppet, that’s exactly what he’d be doing.
McIntyre is a master in the art of misdirection and projection, covering for his mining industry buddies.
Translation : he is a penetrating critic and analyst, for whom obfuscators and other frauds have no answer.
Told you all that fracking oil flows won’t last:
http://mobile.bloomberg.com/news/2014-04-03/old-math-casts-doubt-on-accuracy-of-oil-reserve-estimates.html
The new math is documented on my blog and also in one of the last dozen posts at The Oil Drum before that blog went belly up:
http://www.theoildrum.com/node/10221
The oil age is on its swan song, that’s for certain.
Web,
You say you believe oil is on the way out. May we take it then you have
1) put your money where your mouth is, and invested heavily in oil stocks and/or futures?
2) stopped wasting time agitating for complex and messy political interference on CO2, since all you need do now is sit back and wait for the unavoidable switch to wind and solar etc. as the oil runs out?
(2) would push everyone’s energy costs through the roof, but your foresight in (1) should help cushion you from that blow.
Webby
No one I know has ever believed that oil is an ‘infinite resource.’ Maybe in 30 to 50 years renewables or other power sources will be reliable and cheap enough to take up the slack, but in the interim fossil fuels are an important part of the global economy.
Do you have an estimate as to how long fossil fuels will last?
tonyb
Tuppence
Webby is caught between a rock and a hard spot.
On one hand, he warns us that we are about to run out of fossil fuels.
On the other he warns us that the added CO2 from burning all those fossil fuels is going to fry us all.
What’s wrong with this picture?
A dilemma.
Max
WHT,
Thank you for letting me know that you are taking credit for having calculated the future yet again. Does this mean that the future is more easily calculated using the old math, or the new math?
Does using new math show where the hidden heat is lurking? Maybe you used the wrong sort of math for your toy CSALT model – or have you discovered that continually revising the model can be done using pencil and paper?
What is the difference between the new math and the old math? I’m still getting 4 when I add 2 and 2. What am I doing wrong?
I’m sorry. I’m only partly serious.
Live well and prosper,
Mike Flynn.
40 years ago all the experts had estimates of known oil reserves, and we heard that the world would be running out in x number of decades. Every successive decade the estimates of known reserves have only gone up. I wouldn’t get my petticoat in a big twist just yet.
Manacker,
Actually, the spot I occupy is that of an analyst.
You simply don’t like the fact that I know what I am doing with respect to analyzing natural random processes.
The real problem is that Manacker doesn’t have the same set of skills so he lashes out by projecting his inadequacies on others.
The reality is that we are entering an age of alternative energies and it will be good to understand the natural variability of the climate around us.
Like this analysis of the ENSO SOI that I finished today.
http://contextearth.com/2014/04/05/the-chandler-wobble-and-the-soim/
I don’t see a rock and a hard place. These are not problems, they are opportunities.
Web
But do you have a response to the actual point put to you?
Namely: if oil is about to run out, why do we need to worry about trying to stop people using it?
Oil reserves: for over 100 years now, there has always been 50 years’ worth left.
It’s one of those constants with pride of place in Flexible Physics.
Tuppence,
You are arguing rhetorically. This is a science-based blog and what we do is look at the evidence and try to understand what the data is telling us.
What has happened is that predictions for fractured oil were based on physics and heuristics that only applied to conventional reservoirs. But of course these people should realize that shale beds have completely different characteristics and show a significantly different transient diffusional flow.
Minor write downs of estimates of yields of a minor component of the energy equation? Means that the end of the age of fossil fuels is imminent because of resource depletion?
The reality is – http://www.eia.gov/forecasts/ieo/more_highlights.cfm
The Aussie wrote:
What energy equation? It’s a continuity equation of flow. As usual, you never know what you are talking about.
And it takes a dude from land-locked Minnesota to figure out Australia’s SOI index:
http://contextearth.com/2014/04/05/the-chandler-wobble-and-the-soim/
More continuity equation stuff. You either understand the physics or you don’t. Easy to spot the poseurs like the Aussie.
The energy equation is an addition of diverse resources and a calculation of supply going forward – as in the EIA document. Shale oil at best is a minor component of global energy sources.
The conflation of the SOI and the Earth’s ‘wobble’ is an example of webby fantasy physics. It relies on gross assumptions that are nowhere justified except in the webby alternate reality.
watch this space. the math is too much for the aussie.
The mouth from Minnesota prattles about disjointed maths and crazy physics on a blog that no one reads or cares about.
I has wasted time on this blogospheric nonsense in the past and have no intention of attempting to disentangle Earth wobbles from the SOI through inapplicable and wildly fanciful math. About right?
Watch this space if you are at all interested in more crazy BS.
Web,
Again, please finally actually address the point put to you, not another of your own making designed to make the reader forget the original, as yet unanswered one.
Namely, you bang on endlessly about
– the imminent end of oil supplies (and coal?)
– the need to reduce CO2 emissions
Why, if the first is true, should we still worry about the second?
Simple enough. So please, no more evasions.
The analysis is needed because the practitioners of “Hide the Decline” actually exist in the oil industry. Wonder why McIntyre, though a member of the resource extraction fraternity, only applies “Hide the Decline” to climate?
Because he knows all too well how effective creative accounting of the numbers can be in deceiving the public.
The global decline of conventional crude oil production is real and will continue indefinitely. Other sources of liquid fuel will serve to prop up the decline by compensating for shortfalls. You see this with oil created from Canadian tar sands bitumen. That is not conventional crude oil, but a sludgy variation, which needs extra energy and other resources to process into a version that can pass for gasoline.
McIntyre is a master in the art of misdirection and projection, covering for his mining industry buddies.
Do you understand any of this at all?
The Aussie is starting to imitate the lolcats who say “I Can Has Cheezburger?”
It really must be fits of jealous rage that motivate the aussie to deny any amount of scientific progress not to his liking. There is actually nothing fanciful in any of the math that I do, more a dedication and care to grind out the analysis of the SOI waveform.
http://contextearth.com/2014/04/05/the-chandler-wobble-and-the-soim/
The obvious step is just to extract the oscillatory component as a quotient:
http://imagizer.imageshack.us/a/img690/9348/x8lm.gif
The reason that this approach was not discovered earlier (in comparison to the Chandler Wobble measurements) was that the nonlinear wave equation obscures the periodic elements underlying the SOI, overlaying what appears to be a random or “chaotic” waveform. In fact, this is not chaotic at all, but an anharmonic waveform that can be characterized using the appropriate mathematical tools.
The Aussie cannot accept the fact that underlying quasi-periodic factors can drive phenomena such as ENSO, thus crushing his ridiculous fantasies of a chaotically driven world.
There really is a reason for everything that happens on this grand a scale. A butterfly flapping its wings is no match for the forces exerted by the moon and the orbit of the earth as it continues to spin.
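The anharmonic-versus-chaotic distinction being argued here can be illustrated in a few lines of Python (a generic numerical sketch, not the analysis on the linked blog; the signal periods and the logistic-map parameter are arbitrary illustrative choices): a periodic signal pushed through a strong nonlinearity looks erratic yet still repeats exactly, while a genuinely chaotic sequence never does.

```python
import numpy as np

# Two commensurate sine waves (periods 64 and 256 samples) distorted by
# a saturating nonlinearity: the result is "anharmonic" -- visually
# erratic, yet still exactly periodic with period 256.
t = np.arange(4096.0)
x = np.sin(2 * np.pi * t / 64) + 0.7 * np.sin(2 * np.pi * t / 256)
y = np.tanh(3 * x)                          # nonlinear distortion
print(np.allclose(y[:3000], y[256:3256]))   # True: the signal repeats

# By contrast, a chaotic sequence (logistic map, r = 3.9) never settles
# into any repeating cycle.
z = np.empty(4096)
z[0] = 0.2
for i in range(1, 4096):
    z[i] = 3.9 * z[i - 1] * (1.0 - z[i - 1])
print(np.allclose(z[:3000], z[256:3256]))   # False: no periodicity
```

The point of the contrast is that "looks random" is not the same test as "is chaotic": strict periodicity survives any pointwise nonlinearity.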
The Aussie cannot admit to this fact because he inhabits that painful world of the reactionary wingnut, loyal only to his benefactor Rupert Murdoch. That’s where the unbridled rage comes from.
Webby
You are waffling.
The question was:
If we are about to run out of fossil fuels, why should we worry about CO2 emissions?
Max
Web,
Since you yet again avoid the question, I must yet again pose it :
You bang on endlessly about
– the imminent end of oil supplies (and coal?)
– the need to reduce CO2 emissions
Why, if the first is true, should we still worry about the second?
Simple enough.
So please, no more evasions.
… statement from The Atlantic: … adaptation measures are less politicized than mitigation measures …
As wrong as it is possible to be. The fundamental purpose of the IPCC always has been and remains political: agitation for world government (no surprise, as it is part of the UN).
And adaptation presents just as much of an excuse for world government as mitigation does. Especially given the official mantra that wealthy countries account for most greenhouse-gas emissions while poor countries suffer the most alleged damage.
Tuppence
Of course it is a myth that “wealthy countries account for most greenhouse-gas emissions, and poor countries suffer the most alleged damage”.
The first premise was once true, but it is no longer the case.
The “industrially developed nations” (Europe, North America, Japan, Australia and New Zealand) generated 56% of global CO2 emissions in 1970 (a per capita emission of 12.9 tons), but only 35% of global emissions in 2010 (a pc emission of 12.4 tons, or a 4% reduction in pc CO2).
Over the same time all other nations have increased their per capita CO2 emissions from 2.1 to 3.3 tons, an increase of 57%.
Overall, the per capita CO2 emission increased by around 10%.
The poor countries suffer most from the lack of an energy infrastructure that would give them access to a reliable source of low-cost energy, not from postulated “climate damage”.
They will undoubtedly continue developing their economies and, with this development, increase their per capita energy requirements (and CO2 emissions). At the same time they will improve the quality of life and average life expectancy of their populations.
The increase in CO2 emissions can be slowed down a bit by following Peter Lang’s suggestion of building nuclear power plants to cover the increased energy demand, where possible – or converting to natural gas, where this is abundant.
Max
Manacker,
To put some figures on this, ignoring limits of fossil fuel availability, according to GapMinder:
USA, Canada, Australia emit about 17 tonnes per capita. China emits 6.8 and India 1.8 tonnes per capita. Most of the world’s population emits less than 2 tonnes per capita.
If we assume all countries will reach the per capita emissions of the USA before the end of this century, China will increase its per capita emissions by a factor of three and most of the rest of the world by a factor of about 10. It is easy to envisage 10 billion people averaging 20 tonnes per capita by 2100 – unless there is a cheaper alternative to fossil fuels. That’s 200 Gt/a by 2100. Nordhaus (2008) estimated human emissions at 190 Gt/a in 2100 with no emissions-control policies (Table 5-6, p100 http://www.econ.yale.edu/~nordhaus/homepage/Balance_2nd_proofs.pdf).
Another way to estimate future emissions is using the Kaya Identity and projected growth rates for population, GDP/capita, energy/GDP, and emissions per energy use. Here is what Richard Tol says in his excellent online book “Climate Economics” https://sites.google.com/site/climateconomics/
and
Holding all rates of change constant except ‘carbon intensity of the energy system’, this rate needs to change from -0.01% p.a. to -4% p.a. (average) to cut global GHG emissions by 55% by 2050 and by 84% by 2100. So, without changing the other inputs to the Kaya Identity, -4% p.a. is the average rate of change of ‘carbon intensity of the energy system’ the world needs to achieve if we want to meet the global emissions-reduction targets being advocated.
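That -4% figure drops out of a one-line rearrangement of the Kaya Identity. Below is a minimal sketch; the assumption that the other three Kaya terms (population, GDP/capita, energy/GDP) together grow at about 2% p.a. is mine for illustration, not Tol's exact input, and "cut by 55%" is read as emissions falling to 45% of today's level:

```python
import math

def required_intensity_rate(other_rates_sum, target_fraction, years):
    """Under the Kaya Identity, the emissions growth rate is roughly the
    sum of the growth rates of population, GDP/capita, energy/GDP and
    CO2/energy.  Holding the first three fixed, solve for the carbon-
    intensity rate that leaves emissions at target_fraction of today's
    level after `years` years (continuous compounding)."""
    total_rate = math.log(target_fraction) / years
    return total_rate - other_rates_sum

# Other three Kaya terms together assumed to grow ~2% p.a.;
# a 55% cut by 2050 means emissions at 45% of today's level in ~40 years.
r = required_intensity_rate(0.02, 0.45, 40)
print(round(r, 3))   # -0.04, i.e. about -4% p.a.
```

The same function with different assumed growth rates shows how sensitive the required decarbonisation rate is to the other inputs.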
Peter Lang
I do not believe that it is reasonable to ASS-U-ME that the entire world will be emitting CO2 at the same per capita rate as the USA is doing today (16 tons/a) principally because the rate is coming down in the USA, as it is in all the “industrially developed nations”. A lot of this comes from energy conservation measures, increased automobile mileage, improved energy efficiency in power generation plus the switch to natural gas in the USA with a small amount coming from renewable power initiatives.
The global per capita rate increased by around 10% from 1970 to 2010. Population increased from 3.69 to 6.92 billion, while CO2 from fossil fuels increased from 14.9 to 30.8 Gt/a.
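Those figures are internally consistent; dividing the quoted emission totals by the quoted populations reproduces the roughly 10% per-capita rise:

```python
# Quick check using only the numbers quoted above (Gt/a and billions)
pc_1970 = 14.9 / 3.69        # ~4.0 tons CO2 per capita in 1970
pc_2010 = 30.8 / 6.92        # ~4.5 tons CO2 per capita in 2010
increase = pc_2010 / pc_1970 - 1
print(round(100 * increase, 1))   # ~10.2 (percent)
```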
CO2 from deforestation increased by around 10% from 1970 to the late 1980s and has decreased since then, back to the 1970 level. It is now at around 15% of the total. Almost all of this comes from the developing world.
Over the period 1970-2010 the pc rate in the “industrially developed” nations decreased slightly, from 12.9 to 12.4 tons/a, while the pc rate in the rest of the world increased by almost 60%, from 2.1 to 3.3 tons/a. A good part of this growth came from China.
So much for the recent past.
World population is projected to grow to 10.2 billion by 2100, with the “industrially developed nations” growing by around 30% and the rest of the world by around 50%.
By year 2100 it is reasonable to ASS-U-ME that the “industrially developed nations” will see a continued slight decrease in per capita CO2 emissions (assuming there are no radical new developments of economically competitive and environmentally acceptable alternates to fossil fuels by then), let’s say from 12.4 tons/a in 2010 to 11.5 tons/a by 2100.
Let’s ASS-U-ME that average global per capita CO2 emissions continue to grow at the same exponential rate as they did from 1970 to 2010. This would mean a 25% increase over 2010 by 2100. Let’s round that up to 30%.
On this basis the entire world would see an increase in the pc emission rate to 5.8 tons/a and the rest of the world would see an increase of 53% in the per capita CO2 rate, to 5.1 tons/a.
A cumulative total of around 4,000 Gt CO2 would be emitted from 2010 to 2100 and annual CO2 emissions would double to around 60 Gt/a by 2100.
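The cumulative figure can be checked with a short integration; this is a sketch assuming smooth exponential growth between the 2010 and 2100 rates, which is my simplifying assumption rather than the comment's method:

```python
import math

def cumulative_gt(e_start, e_end, years):
    """Cumulative emissions (Gt) assuming annual emissions grow
    exponentially from e_start to e_end Gt/a over `years` years:
    the integral of e_start * exp(r*t) dt with r = ln(e_end/e_start)/years."""
    r = math.log(e_end / e_start) / years
    return (e_end - e_start) / r

# ~30 Gt/a in 2010 doubling to ~60 Gt/a by 2100
print(round(cumulative_gt(30.0, 60.0, 90)))   # ~3,900 Gt, close to "around 4,000"
```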
That’s my “best estimate” for a “business as usual” scenario incorporating some “no regrets” initiatives such as those already being undertaken, but no major shift to nuclear, all-electric or hybrid automobiles, etc. and no radical new technology.
Richard Tol used a more pessimistic approach, in that he used IPCC projections for CO2 growth. If we ASS-U-ME, for example that the ROW will increase pc CO2 by 120% to 7.2 tons/a, we arrive at a total increase in pc CO2 of 80% (instead of 30%), or 3+ times the exponential rate of increase actually seen from 1970 to 2010. Total annual emission would be around 80 Gt/a by 2100, with a cumulative total of around 5,000 Gt CO2 emitted from 2010 to 2100 and CO2 concentration reaching around 720 ppmv by 2100.
This case seems on the high side to me, when compared to the past development of per capita CO2 emissions.
An even more exaggerated case would ASS-U-ME that by 2100 every man, woman and child in the entire world will have reached the same per capita CO2 emission level as the “industrially developed nations” have today. Total annual CO2 emissions would reach around 120 Gt/a by 2100, a cumulative total of around 8,000 Gt CO2 would be emitted from 2010 to 2100, and atmospheric CO2 concentration would reach 900 ppmv by 2100.
This case does not seem at all realistic to me, as I am sure you would agree.
Putting the global per capita CO2 rate by 2100 at the same level as the USA is today, would be downright absurd, IMO.
Max
Peter Lang
Forgot to mention that my “best estimate” for “business as usual” has CO2 concentration increasing to 650 ppmv by 2100.
Rest of comment is OK.
Max
Scientists are born. Good teaching helps make them. I got a microscope and a telescope with my Christmas/birthday money at ages 8 and 9. At age 19 I got to work with electron microscopy; at age 21 I was doing RNA analysis. At age 22 I got to program a computer for statistics. But it goes back to age 6, catching caterpillars and watching them make chrysalises, then butterflies.
So-called “teachers” don’t know anything about science or mathematics. I had a kid deemed the worst in his class in 4th-grade mathematics: “C-”. No kids got “D” or “F”. I switched him to home tutoring, and he eventually got an 800 on the SAT Subject Test Math Level II. I taught another boy math and found, “He’s really good”: PSAT M 66 starting 9th grade.
But his 9th grade teacher said, “You’re a C- student.” Right. You’re an F-minus teacher. Too many boys and girls are getting F- math and science teachers.
My “worst math student in my class” is now teaching overseas, from 9th-grade “Physics First” to AP Physics C for 11th-12th graders. His top students are going to Cambridge UK, Stanford, Duke, Caltech (if you call it Cal Tech you are ignorant) and Rice.
Stephen Mosher,
I suppose your global warming humour is unintended, which just adds to the effect. You have gone to a great deal of trouble apparently, to explain an effect that cannot be observed. Where is this global warming observed?
Certainly not at night, when the surface cools. Not in winter, where one expects each day to be a little colder. Not in Antarctica, where temperatures below the freezing point of CO2 are recorded at the height of summer.
Maybe after sunrise, in the summer, up until noon or thereabouts. That’s hardly surprising, unless you are a devout Warmist. Ah, you say, the GHE has been masked or hidden by natural variability! So the GHE varies at the whim of Nature, does it? How convenient. Flexible physics, the very best kind if you want to keep the money flowing.
As to the following –
“We also use radiative transfer theory to design COVERT com channels.
We center on a frequency where the atmosphere absorbs a lot of the wattage. That gives me a short range com channel that cant be intercepted outside a certain range. Since highly classified data is transmitted over these channels the engineers who work on them only use the best physics.
That would be GHE physics. No links for these sort of things, you have “no need to know” as we say in the classified world.”
You’re making this up. I know, you could share classified information with me, but then you’d have to kill me! Give me a break. This Secret Squirrel nonsense might impress a gullible Warmist, or a small child with a Captain Marvel Secret Code Ring from a cereal box.
If this is actually an April Fool’s Day joke, I apologise. I thought you were serious.
Live well and prosper,
Mike Flynn.
@ Mike Flynn
Mosher said: “We also use radiative transfer theory to design COVERT com channels.
We center on a frequency where the atmosphere absorbs a lot of the wattage. That gives me a short range com channel that cant be intercepted outside a certain range. ”
Agreeing with Stephen is a bitter pill to swallow, with little justifiable precedent, but in this case I’ll just have to suck it up and confirm that he is right.
Bob Ludwick,
I should apologise. People design perpetual motion machines, solar powered flashlights without batteries, and all sorts of other things.
The US military employed psychics to discover the whereabouts of Saddam Hussein’s weapons of mass destruction, and also spent millions of dollars on the program known as Stargate – remote viewing using ESP, which would give the US military and the CIA access to foreign secrets, both military and political.
I have no doubt they may have believed Mosher’s bizarre covert communication plan. Unfortunately, no trace exists, and everybody has been sworn to secrecy, apparently.
In the same vein as the crazy self heating properties of CO2, easy to claim, but not so easy to demonstrate. Impossible in fact. Ray Pierrehumbert and the IPCC give the insulating value of the atmosphere as roughly equivalent to one seventh of an inch of polystyrene. Not just the CO2, the whole atmosphere. I am almost inclined to believe the IPCC figure. How’s that for even handedness? No self heating in evidence, but certainly claimed.
So no, Bob, no doubt from me that anything can be designed, in theory.
Some Greek said that if you gave him a fulcrum, with a lever long enough, he could move the Earth. Perfectly true. And the point?
Live well and prosper,
Mike Flynn.
@ Mike Flynn
A bit of dead horse flogging here, and definitely unrelated to ‘Climate Change’, but while comm systems specifically designed for operation in the high absorption bands were originally used for highly classified comm systems, and still are, they are now available from multiple commercial sources. 60 GHz is very popular because of the high absorption by O2 at this frequency and systems are widely available. Here is a white paper on the subject if you are interested: http://www.sub10systems.com/wp-content/uploads/2011/03/White-Paper-Benefits-of-60GHz.pdf
As noted in the paper, 60 GHz is popular for satellite to satellite links. It can be used at very long range in space, but the links cannot be intercepted by ground sites because the O2 in the atmosphere eats the signal before it can reach the ground. Of course the fact that highly directive antennas are very small and lightweight doesn’t hurt either.
Bob Ludwick,
You’re not flogging a dead horse at all. Unfortunately, using a secure comms system which depends on absorption by a particular gas achieves about as much as depending on ESP.
If you can think of a practical example which I cannot implement more reliably or cheaply by means which do not depend on absorption of transmitted radiation, I will eat my words, and offer a fulsome apology.
I’ll stick with my original opinion. Stephen Mosher is talking nonsense. Engineers use the same sort of physics as any non Warmist does. Warmists use Warmist physics – the sort that doesn’t actually exist in reality.
I can’t read Stephen Mosher’s mind, so I have to believe he meant what he wrote. I’m still waiting for the explanation of the secret GHE physics, for which there are no links, of course.
I did read the promotional material you linked to. I wish them every success.
Live well and prosper,
Mike Flynn.
@ Mike Flynn
“If you can think of a practical example which I cannot implement more reliably or cheaply by means which do not depend on absorption of transmitted radiation, I will eat my words, and offer a fulsome apology.”
No matter whether I convince you or not, polite and non-abusive disagreement doesn’t require an apology.
First, I provided an example: satellite to satellite communications when it is desired to keep the existence and content of those communications private. As noted in the linked white paper, the 60 GHz band was selected for this mission specifically because the O2 absorption band would ensure that ground stations could not detect the existence of the links, never mind extract content. There may be other more advanced methods available now, but you can bet that they depend on two things: near zero attenuation, other than the 1/R^2 losses, in a vacuum and near infinite attenuation between space and the earth’s surface. Fiber optics need not apply.
Second, there are apparently quite a few earthbound missions for which short range, high capacity open air links are highly desired and security is only one of them. In many cases, the narrow beam width and high attenuation at 60 GHz makes the band highly re-usable. You can have a lot of links in the same geographic area with little danger of them interfering with each other. Because of the short range ‘feature’ the FCC has chosen to allow unlicensed operation of links in this band. In other words, if you want to set up a link at lower microwave frequencies at which the atmosphere is essentially transparent, the FCC requires that you jump through a whole bunch of ‘permitting’ hoops before you are allowed to operate it. At 60 GHz, because the links don’t travel very far, you simply go buy the hardware from one of the several suppliers, set it up, and commence operations.
Building to building comms in an urban environment is a common use. Yes, fiber optics can carry lots of data, but if you think that dragging a fiber optic cable between two buildings a quarter mile or so apart in an urban environment is easy and cheap compared to setting up a 60 GHz link between the same two buildings you probably haven’t tried it. Especially if the requirement is temporary.
Here are a few companies whose business depends on the existence of the O2 absorption band. Since they are in the business, their company blurbs describe missions for which taking advantage of it makes economic sense:
http://lightpointe.com/60ghzradios/airexstreamseries60ghzultralowlatency.html?gclid=CMGLytzxy70CFYc7OgodC1AADw
http://www.siklu.com/product/etherhaul600-v-band-radio/?gclid=CPj8vcbyy70CFaVQOgod3CkA0A (Contains a nice bullet list of applications)
http://www.bridgewave.com/products/tech_overview.cfm
If you can “……implement more reliably or cheaply by means which do not depend on absorption of transmitted radiation,” feel free to do so and become rich(er). There is clearly a market.
At any rate, transmissions at around 60 GHz are highly attenuated by O2, and there are a lot of commercial, military, and intelligence missions which take advantage of that fact. There may also be other high-attenuation bands used for similar purposes. I just knew about the 60 GHz one and used it as my example.
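To see why the O2 band is so range-limiting, here is a hedged link-budget sketch; the ~15 dB/km sea-level absorption figure, the transmit power, and the combined antenna gains are illustrative round numbers, not any vendor's specification:

```python
import math

def received_power_dbm(tx_dbm, gains_db, freq_ghz, dist_km, atm_db_per_km):
    """Friis free-space path loss plus a flat atmospheric absorption term.
    FSPL(dB) = 92.45 + 20*log10(f_GHz) + 20*log10(d_km)."""
    fspl = 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)
    return tx_dbm + gains_db - fspl - atm_db_per_km * dist_km

# A short urban hop at 60 GHz easily survives ~15 dB of O2 absorption...
short_link = received_power_dbm(10, 70, 60, 1, 15)    # about -63 dBm
# ...but at 10 km the extra ~135 dB of absorption (on top of only 20 dB
# more spreading loss) buries the signal far below any receiver's floor.
long_link = received_power_dbm(10, 70, 60, 10, 15)    # about -218 dBm
print(short_link, long_link)
```

Spreading loss alone grows only 6 dB per doubling of distance; the absorption term grows linearly in kilometres, which is what makes the band self-limiting in range.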
“We have to stop climate change” was the cry, even when that required using whatever totally draconian dictatorial powers we could muster across the planet!
When that didn’t work as the public got sick of the fear mongering particularly when nothing untoward was seen climatically or was experienced personally by the vast majority of the public, the message morphed into mitigation.
Well, that was going to cost an arm and a leg, mostly the public’s arms and legs. The realisation was that that message wasn’t going down very well either, particularly as those failed renewable-energy scammers got increasing publicity over their monstrous rake-offs of the taxpayer’s hard-earned money, plus the rapidly increasing energy costs across the board.
So there goes mitigation, which never really got much of a run in any case, being swamped by the usual alarmist hellfire and end-of-times cultists of the global-warming faith crying “unless somebody does something”.
Note that they personally never seemed to accept any responsibility for the situation they claimed mankind was wallowing in towards his own demise. Nor did they offer to become the ones who were doing something unless you call demanding total power over all other’s lives “doing something” in this case.
Now, seeing mitigation didn’t get very far with the hoi polloi and the huddled masses, the whole climate-science shebang is right in the process of morphing into the next newest invention of climate science, called “adaption”.
So please tell me: what the hell is so new about this remarkable discovery in climate science called “adaption”?
And please tell me: what exactly are we supposed to be adapting to that we as a species, indeed myriads of species on this planet, haven’t adapted to over the three point seven billion years life has existed on this third rock from the Sun?
Life on this planet has spent every second of those three-plus billion years adapting to something, including the ever-changing climate: for our species, Homo sapiens, during our very short sojourn amongst the planetary life forms, and for every other life form on this planet, every minute of every day.
Always, without fail, we are bombarded with demands to stop climate change, or to mitigate our influences on the climate, even if we aren’t definitely sure what those influences are, or how, or whether, they affect the climate.
And now we are to “adapt” to something apparently so different from anything before that we haven’t already adapted to it? And adapted very successfully, if you look at Homo sapiens’ numbers, and the rate of increase in those numbers from the twenty thousand or so breeding pairs at the bottom of the human bottleneck only eighty thousand years ago, some blink of an eyelid in time?
Never ever discussed or debated, and definitely not brought up by the alarmist scientists, particularly amongst those who claim to have all the answers in alarmist climate science:
What is it that the alarmists actually want in the way of global temperatures?
How much colder do they want global temperatures to go down to?
What average or regional averages of temperature are the goals of those demanding “we do something”?
What are the perfect ocean pH levels?
What is the perfect ice cover across the polar regions?
What is the correct sea level, seeing it goes up and down by 30 or 40 metres every few tens of thousands of years?
Lots of other climate what-ifs would fill out the great unsaid and carefully skirted “perfect climate” of the climate alarmists and climate science.
Assuming mitigation or stopping global warming actually worked: have the alarmists, and those oh-so-knowledgeable climate scientists with all that bravado and hubris so prominently displayed when they deliver the climate Gospel according to their interpretation, ever bothered to ask the public, who pay all their munificent stipends, just what that global public would like in temperatures, sea levels, polar ice, etc.? Let alone asked that same public, at a national or regional level, what the people of each of those entities would like for their own personal climate criteria?
Nope!
Climate science and scientists have all the answers. [/sarc]
The opinions of the public re their wishes for a specific type of climate don’t count.
The ever-changing climate, local, regional, national and global, is the great forcing: a forcing with sufficient power that life of every type had to adapt to it, and fast, or it perished.
The constant, rapid and often severe changes in the climate forced life to become flexible and adaptable, which in turn selected for increasing levels of intelligence, which itself led to ever greater adaptability to the changes being forced onto the planet’s life forms.
Without climate change there would be almost no incentive for our planet’s life forms to be forced to change once a level of predator and prey were sorted out.
Life would have most likely stayed at little more than that of a dull dreary bacterial level.
A changing climate upset every single predator/prey relationship, and the great arms race of life was on, exacerbated by a constantly changing set of environmental conditions. Each climate change of sufficient intensity upset any and every established species relationship, requiring a new and different set of relationships to be temporarily established until that, too, was altered or overturned by the next change and the need to adapt to new circumstances.
The implication arising from the pronouncements of the alarmist scientists is that what is being sought is a perfect, never-changing, unbelievably dull climate paradise, which would eventually lead to a stabilising of the forces that drive the competitiveness of life on this planet, and eventually to the slow demise and extinction of practically all higher life forms, until the planet was once again populated only by bacterial life.
Thankfully, that doesn’t seem like it will ever happen.
Just what the hell do you alarmist climate scientists, you greens and you alarmists want in the way of an actual hard-numbers climate?
The world, or at least some of us, wants to know just what exactly climate science is demanding and aiming to establish with its constant refrain of “doing something” about the climate.
If you don’t know what you want, or what your hard-number climate goals are, all of them, then shut up, hand back your grants and just let the global climate get on and do its thing.
Life has adapted to climate and a hell of a lot of other things for 3.7 billion years.
It, and we with it, will just keep right on adapting for billions of years more, known only to God, despite the posturings and shamanic pronouncements of the myriads of parasitic climate scientists.
Will AGW scientists now be able to adapt to our changing climate around the world today? You all seem to be stuck between a rock and a hard place, at least that is what it looks like to me. Your time is running out; better get real busy soon. The weather won’t pause for you either.
Steven Mosher April 4, 2014 at 4:15 pm said:
5. When energy in exceeds energy out, balance must be restored. This is engineering.
Steven, I think in the case of engineered equipment the desirable steady/stationary operational states are attained through the design of various sub-system components, based on the general concept that energy in = energy out at steady/stationary states. However, note that in the cases for which energy exchange is determined by temperature driving potentials, so that empirically-based heat-transfer-coefficient correlations must be employed (in contrast to energy-flux-constrained exchange), there is always some difference between the design states and the actual physical-world operation. Generally, sufficient margins to cover the differences are designed into the sub-system components to handle the lack of fidelity of the empirical correlations to the physical world.
There are no general fundamental principles that require balance to be restored. The requirement to operate at steady / stationary states leads to application of energy in = energy out in the design of sub-system components. Additionally, engineered systems frequently do not operate dead on the energy in = energy out point, and engineered control systems are generally employed to drive the system back toward a balanced state. Again, there are no natural fundamental principles that ensure / enforce the balanced state; engineered control systems are instead employed.
The Earth’s climate systems, on the other hand, being natural systems, are not subject to design constraints. And importantly with respect to energy considerations, the climate systems are open systems, such that radiative energy in and radiative energy out are not constrained to be in balance by any fundamental principles. Some of the sub-systems are open relative to both mass and energy, and the energy exchanges are not solely radiative. Just as you said, “When energy in exceeds energy out, balance must be restored,” you can also say, “When energy out exceeds energy in, balance must be restored.”
Relative to the Earth’s radiative-energy balance, it is solely a hypothesis that a state of energy in = energy out will eventually be attained. There are no general fundamental principles that require balance to be restored, and there are of course no engineered control systems to drive the systems back toward energy in = energy out. Note that it is almost never stated what the magnitude and frequency of the oscillations in natural systems will be as the systems seek balance.
What is the empirical evidence that the Earth’s climate systems have at any time been sufficiently close to radiative energy balance that the signal for restoration following an imbalance can be determined? An over-all global-average radiative-energy balance requires that there be energy- and mass-exchange balance between the sub-systems internal to the over-all system, to the degree that the sub-systems impact the over-all balance to greater or lesser extents. What is the empirical evidence that mass and energy exchanges between and within sub-systems have attained balance to the extent that the over-all balance is not measurably affected?
Weather is said to be chaotic. Climate is said to be a (some-kind-of-not-rigorously-defined) average of weather, and is thus also chaotic. One of the most notable features of chaotic response is that the trajectories are never, never repeated. This ever-changing nature of weather ensures that balance at the interfaces between sub-systems cannot be attained. And this, again to the greater or lesser extent that the over-all balance is affected, seems to imply that an over-all balance cannot be attained.
What is your basis for stating: “When energy in exceeds energy out, balance must be restored.” for the Earth’s climate systems?
Dan, You have to forgive Steven Mosher. From extensive discussions I have had with him over several years, he thinks that estimates are the same as measurements, and he never seemed to have heard of The Scientific Method, until I discussed it with him, very recently. So don’t expect much in the way of physics should he respond.
The 11-year sunspot cycle with an amplitude near 0.2 W/m2 is detectable in the temperature record, so it is a delicate balance and a detectable response.
Dan, in engineering we often use energy imbalance to do work. In that sense, when there is an energy imbalance, we use the fact that it would like to reach an equilibrium to get work out of the system. Mosher is simply pointing out that with an energy imbalance, regardless of being a natural one or an engineered one, the energy will find a way to get to equilibrium (balance must be restored). Not to say equilibrium will always be achieved, but that it is what the system will seek given the opportunity.
“When energy out exceeds energy in, balance must be restored.”
Or
“When energy in exceeds energy out, balance must be restored.”
——
But of course, balance is never restored, and hence the climate changes and is always cooling or warming, either rapidly or more slowly, depending on the forcings present. There is never a time when energy in exactly equals energy out, and that’s a good thing: it is the attempt (always attempting but never succeeding) to restore balance that makes for a wonderfully dynamic and living planet.
Hi Jim D. – You write
“…regardless of the pause, the warming rate since 1950 is the expected rate, being consistent with the main forcings of CO2, aerosols and other GHGs on the global climate.”
As you are undoubtedly aware, the heating is actually a result of radiative forcings + radiative feedbacks. This produces the radiative imbalance which can be diagnosed from ocean heat changes.
The IPCC estimates that the global average radiative imbalance is 0.59 W m-2 for 1971-2010, while for 1993-2010 it is 0.71 W m-2. These values are well below the radiative forcing that you present.
It is the radiative imbalance that we should be focusing on with respect to global warming as that is the net result of all of the radiative forcings (including CO2 and other human GHG and aerosol inputs but also natural forcings) plus the wide diversity of radiative feedbacks.
It is clear in even a cursory examination of the terms that make up the radiative imbalance that global warming is not as well understood as you indicate. We have a paper in process that discusses this issue.
What is more solid (and where I assume we would agree) is that the atmospheric concentration of CO2 is increasing due to human activity with a large part from fossil fuel combustion. This increase in concentration is having effects on biogeochemistry, including different responses by different plants.
We still do not know enough about this biogeochemical effect which by itself indicates we should seek to limit the elevation of atmospheric CO2 concentrations. This effect occurs regardless of the magnitude of global warming (or even if there were global cooling).
Moreover, there are other human inputs that significantly affect biogeochemistry. This includes nitrogen deposition and land use changes and land management practices. Thus even with this focus CO2 is just part of the issue.
As Judy has written, this is a wicked problem. Your focus on what you call the main forcings (CO2, aerosols and other GHGs) is much too narrow. We need, of course, to retain a concern regarding the added CO2 and other GHGs but this effect is not exceptional with respect to the wide range of other issues that we face.
Roger Sr.
Roger, as I suspect you should know, the radiative imbalance being less than 1 W/m2 with the forcing being near 2 W/m2 do not conflict with each other because the surface warming has already offset part of the forcing change. That is, the imbalance is driving only the remaining warming rate. In equilibrium with the new CO2 level, the imbalance will have dropped to zero, but at a much larger surface temperature. The imbalance only exists because of the inertia in the system, primarily from the oceans.
Regarding biogeochemistry, I would note that the warming in the last 30 years has already shifted by one standard deviation of the summer mean temperature in many land areas, and, at this rate, by the time we get to 2100, the summer mean temperature distribution will have shifted more than three of its own standard deviations, so its distribution will have almost no overlap with those of 30 years ago. That is, even the coldest summers of the future will rival or exceed the hotter summers of the past. This will locally necessitate major ecological shifts, because it will be a significantly changed climate.
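The “almost no overlap” claim can be checked for idealized distributions. Assuming the summer-mean distribution stays normal with unchanged variance (an assumption; the variance itself may also change), the overlap coefficient of two unit-variance normals whose means differ by d standard deviations is 2Φ(−d/2):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def overlap(shift_in_sigmas):
    """Overlap coefficient of two unit-variance normal distributions
    whose means differ by shift_in_sigmas standard deviations."""
    return 2.0 * norm_cdf(-shift_in_sigmas / 2.0)

for d in (1.0, 3.0):
    print(f"shift = {d:.0f} sigma -> overlap = {overlap(d):.1%}")
```

A one-sigma shift still leaves about 62% overlap; a three-sigma shift leaves about 13%, which is the quantitative content of “almost no overlap” under these idealized assumptions.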
… by the time we get to 2100, we’ll all be dead.
Says Jim with such extreme certainty. Extreme weather and certainty at unprecedented levels. I just wish I knew who would be in the NCAA finals Monday night.
“Says Jim with such extreme certainty”
Certainty is their distinguishing characteristic. Those warmists who admit to a degree of uncertainty simply fall back on the so-called precautionary principle. But that involves another kind of certainty, which is that mitigation will do more good than harm.
How they can be so sure is beyond me. I can’t even be anything close to sure that efforts at mitigation will do anything at all, except make life profoundly more difficult. Clearly, they’re very intelligent people.
“We still do not know enough about this biogeochemical effect which by itself indicates we should seek to limit the elevation of atmospheric CO2 concentrations. This effect occurs regardless of the magnitude of global warming (or even if there were global cooling).”
I don’t think we should encourage creating CO2 for the sake of having high levels of CO2. [Nor do I think it’s a useful way to warm a cooler world.]
But I don’t think we need a government which is seeking to limit the elevation of atmospheric CO2 concentrations.
We know enough about high levels of CO2, and most people live in elevated CO2 levels. If you sleep in a bedroom and live in a house, the CO2 levels are generally 2 to 5 times higher. Greenhouse growers add CO2 to make plants grow better, and it’s probably as important as other types of fertilizers.
Reblogged this on Power To The People and commented:
Climate Exceptionalism, aka the Climate #Extremes promoted by the likes of Ann Curry, has come to an end. No need to shaft the poor with “skyrocketing” fuel prices http://wp.me/p7y4l-lnm or whack Mother Nature http://youtu.be/5igyXyJKL_0 to “save the planet” from the trace gas CO2 that grows plants.
The historical facts are just this:
Through the study of dog’s responses to food, Pavlov discovered the powerful influence of feeding appetites on the behavior of animals.
After WWII, world leaders realized that humans might destroy the whole earth, including world leaders themselves, via nuclear annihilation. They took totalitarian control by:
1. Forming the United Nations on 24 Oct 1945, and
2. Hiding information about energy in cores of atoms and stars in 1946.
The good news is that the benevolent force at the core of the Sun – the force that made our elements and sustained the origin and evolution of life – is still in total control of planet Earth [1].
The bad news for egotistical world leaders is that acceptance of reality [1] will probably be as painful for them as for a camel “passing through the eye of a needle!”
1. “A Journey to the Core of the Sun,” Chapter 2: “Acceptance of Reality”
https://dl.dropboxusercontent.com/u/10640850/Chapter_2.pdf
Jim D.
Where has
“the warming in the last 30 years has already shifted by one standard deviation of the summer mean temperature in many land areas”
that is not due to land use change and/or siting quality issues?
On the forcing, as I have often written about, the use of surface temperatures is an inadequate metric to assess global warming. Not only is it a limited 2-D sample of a 3-D field, but it has a warm bias as we have shown with respect to minimum land surface temperatures.
Ocean heat change is the metric we should be discussing. There is no inertia in terms of heat.
On the forcing, if the climate system has warmed as you state, then part of the added CO2 forcing is no longer active, as it has been accommodated by the greater outgoing LW radiation to space. To bolster your view, you need to estimate and tell us what the current (2014) radiative forcing is, not what the change since preindustrial is, as the IPCC unfortunately does and as you present in your comment.
Roger Sr.
JImD
Having now looked at thousands of records of all types dating back to 1000 AD, I find it completely impossible to view the current era as out of the ordinary. It contains few of the extremes of some of the past, is not as warm as some other periods, and is generally not as cold as many others.
In other words we are currently in a benign climatic period. Why should you be surprised that we are currently warmer than we were at the end of the Little Ice Age?
Tonyb
Roger, the current forcing is the imbalance that you quoted, which is less than 1 W/m2, but it is growing at about 0.04 W/m2 per year due to emissions. I agree that the ocean heat content is a vital part of the system, because a lot, probably most, of the imbalance is due to energy going into the ocean heat content instead of directly warming the surface. However, I would not downplay the surface temperature because ultimately that is the way the forcing is balanced, and until it rises, there is further warming left in the system. The forcing goes into an interplay between the ocean heat content and surface temperature, but long-term ends up in the surface temperature. The most descriptive formula I have seen is
dF = dH/dt + dT/lambda
That is, a forcing change (dF) goes either into a rate of change of heat content (dH/dt) or directly into surface temperature change (dT). In the steady state, dH/dt goes to zero and lambda is the equilibrium climate sensitivity, but in transient states H and T both change, not only due to forcing but also due to ENSO for example where energy exchanges between them.
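As an illustration of how dF = dH/dt + dT/lambda behaves, here is a minimal one-box sketch; the forcing, sensitivity, and heat-capacity numbers are assumptions chosen only to show the shape of the transient, not estimates:

```python
# One-box energy-balance sketch of dF = dH/dt + dT/lambda:
# C * dT/dt = F - T/lam, where dH/dt = C * dT/dt.
# All numbers below are illustrative assumptions, not observed values.
F = 3.7          # forcing step, W/m^2 (roughly a CO2 doubling)
lam = 0.8        # assumed sensitivity lambda, K per W/m^2
C = 25.0         # assumed effective heat capacity, W yr m^-2 K^-1 (~200 m ocean)
dt = 0.1         # time step, years

T = 0.0
for step in range(int(500 / dt)):    # integrate 500 years forward
    imbalance = F - T / lam          # = dH/dt, the part going into the ocean
    T += dt * imbalance / C

print(f"T after 500 yr: {T:.2f} K (equilibrium lam*F = {lam * F:.2f} K)")
```

While T is below its equilibrium value lam*F, the residual shows up as dH/dt (ocean heat uptake); in the steady state dH/dt goes to zero and T settles at lam*F, which is the inertia argument made above.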
Tonyb, I am not surprised by the 0.8 C rise, especially since the climate forcing has changed by nearly 2 W/m2 since then, and not much of that was solar (which was the LIA recovery part). 2 W/m2 is equivalent to a solar intensity increase of 0.5%, so of course it has to warm.
He can’t hear you, Tony. Alarmist believers like jimmy dee have a need to convince themselves and evangelize to the rest of us that we are on the verge of an unprecedented temperature catastrophe; another 2C and we are toast. Thus the need for their iconic hockey stick and the parade of genuflecting imitators. The alarmista faithful are impervious to your historical records.
“Why should you be surprised that we are currently warmer than we were at the end of the Little Ice Age?”
Because the Little Ice Age is the end of an 8,000-year-old, rather smooth cooling trend (about 0.6°C over that whole period) that was suddenly reversed in the 20th century. The cause of the Holocene cooling trend is known: it’s the increasingly negative Milankovitch forcing. The cause of the reversal of the trend is also known… to some people, and should be a surprise… to other people. Hint: it’s not Milankovitch.
A smoothed cooling trend is not exactly the same as a smooth cooling trend.
Pierre
No, the main second phase of the LIA was suddenly reversed around 1700, when the coldest period of the LIA showed an uptick in a sharper hockey stick than the one in the 20th century. Phil Jones estimated that the 1730’s was only fractionally cooler than the modern era.
It then cooled and warmed again. Having cooled and warmed again various times from 1200AD to the 17th century
tonyb
I think you’re over-selling the 1730’s warming compared to the late 20th century a bit too much Tony. The period might have represented true LIA recovery, but hardly compares to the late 20th century:
http://www.ncdc.noaa.gov/paleo/globalwarming/images/last2000-large.jpg
Rgates
Please see figure 4
http://judithcurry.com/2013/06/26/noticeable-climate-change/
You cannot compare a 50-year-centred novel proxy with an annual instrumental record; the latter shows the variability and peaks and troughs that the paleo proxies miss.
Tonyb
Rgates
Here is my comment quoting Phil Jones that the 1730’s was the warmest decade until the 1990’s and that natural variability might be underestimated.
http://wattsupwiththat.com/2014/01/17/phil-jones-2012-video-talks-about-adjusting-sst-data-up-3-5c-after-wwii/#comment-1539164
It’s ok, apology accepted
Tonyb
Jim D. What evidence do you have that the radiative imbalance is “growing at about 0.04 W/m2 per year”?
In terms of balancing the heat budget, it is never in balance. dH/dt is never zero. There are large variations intra-annually as shown, for example, in
Ellis et al. 1978: The annual variation in the global heat balance of the Earth. J. Geophys. Res., 83, 1958-1962. http://pielkeclimatesci.files.wordpress.com/2010/12/ellis-et-al-jgr-1978.pdf
Shaun Lovejoy concludes that on long time scales variations actually increase [Lovejoy, S., 2013: What is Climate?: EOS, 94, No. 1, 1 January 2013, p1-2].
In terms of the formula,
dF = dH/dt + dT/lambda
unfortunately, it is too simplistic. First, what exactly is the “surface temperature”? Second, radiative fluxes to space do not depend just on the surface but also on the atmosphere above (as I am sure you know). If the Earth had no atmosphere (including clouds), the use of this formula would make more sense.
Moreover, the Earth does not have one temperature; it spatially and temporally varies as it radiates from the surface proportional to the 4th power of temperature. See the consequences of this at
http://pielkeclimatesci.wordpress.com/2008/02/18/spatial-variations-in-gmst-ii-eli-rabett-vs-dr-pielke-sr-guest-weblog-by-lucia-liljegren/
based on the analysis of Lucia Liljegren.
I realize that this formula is central to the IPCC worldview, but the formula
global annual averaged radiative imbalance = global annual averaged radiative forcings + global annual averaged radiative feedbacks
is not only more fundamental but is a way to build more consensus on the climate issue. There is no need, for example, to debate a lambda value; as with the heat budget in Joules, there are no lags. There is no reason to debate what has been called the “climate sensitivity” [which is really not climate but a “global annual averaged surface temperature sensitivity”].
Thus, if the interest is in the magnitude of global warming, it is the radiative forcing that should be the focus.
Roger Sr.
P.S. I noticed that you did not answer my question
Where is
“the warming in the last 30 years [that] has already shifted by one standard deviation of the summer mean temperature in many land areas”
that is not due to land use change and/or siting quality issues?
Roger, on your questions
The radiative forcing (not imbalance) is increasing by 0.04 W/m2 per year, and this can be computed from a CO2 increase of 3 ppm per year. Perhaps it is currently 0.03 W/m2, but will increase to 0.04 W/m2 soon. The imbalance would only increase at this rate during a pause in the surface temperature, but normally would increase rather more slowly as the surface temperature rises.
I agree dH/dt is never zero. It can be close if you average over long unforced periods, but certainly not at the annual scale due to ENSO, as I mentioned too.
In the formula, the surface temperature has to be a globally averaged effective radiative surface temperature because the other terms are in terms of radiative forcing. If talking about climate sensitivity (lambda), this is the only temperature that it makes sense to use. It defines lambda. In practice, for small changes, the mean surface temperature is a good proxy, because there would be a proportionality.
Your proposed formula is largely equivalent because the imbalance maps to dH/dt and the feedback is the Planck response that maps onto the radiative effective surface temperature term. The formula I use argues that the Planck feedback is dominated by changes in surface temperature. I haven’t heard of that being disputed. It also argues from energy conservation that the imbalance has to go into the total heat content.
The last point on temperature rise. I don’t think any land-use change can be correlated with the patterns of this rise. GISTEMP is good for this. This shows the distribution of the annual-mean rise since the base period of 1951-1980. The largest rises are in lower population density areas.
http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=3&type=anoms&mean_gen=0112&year1=2001&year2=2010&base1=1951&base2=1980&radius=1200&pol=rob
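The 0.04 W/m2-per-year figure quoted above can be checked against the common simplified CO2-forcing expression F = 5.35 ln(C/C0) (the Myhre et al. 1998 coefficient); the 400 ppm concentration below is an assumed round number for roughly 2014:

```python
import math

# Simplified CO2 radiative-forcing expression F = 5.35 * ln(C / C0).
# The coefficient is from Myhre et al. (1998); concentrations are
# illustrative round numbers, not precise observations.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

c_now = 400.0                                        # assumed ~2014 level, ppm
total = co2_forcing(c_now)                           # forcing since 280 ppm
annual = co2_forcing(c_now + 3.0) - co2_forcing(c_now)  # added by 3 ppm/yr

print(f"forcing since 280 ppm: {total:.2f} W/m^2")
print(f"added per 3 ppm/yr:    {annual:.3f} W/m^2 per year")
```

Under these assumptions the CO2-only total comes out near 1.9 W/m2 and the annual increment near 0.040 W/m2, consistent with the numbers both commenters quote.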
Jim D. On the positive radiative forcing from CO2, this increase (which I agree with you is there) would adjust over time as the outgoing LW increases. Thus, what is your best estimate of the radiative forcing in watts per meter squared due to added CO2 from human input that remains out of equilibrium in 2014? The 2013 IPCC was silent on this, and the 2007 IPCC report gave conflicting answers, as I have posted on in the past; i.e., they wrote in the WG1 SPM [http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf] both that it was
“Global average radiative forcing (RF) estimates and ranges in 2005”
and
“radiative forcing values are for 2005 relative to pre-industrial conditions defined at 1750”
Both cannot be correct.
The formula that I propose is entirely consistent with the dH/dt formula. The issue is in the interpretation of what “T” is. Please define your use of that variable (and not just “surface temperature”). Is it the skin radiating temperature, the temperature at 2 m, etc.?
I urge you to read the analysis by Lucia that I included in my last comment, as to why the spatial distribution of T matters if one uses the dH/dt formula.
On the alternative formula, we avoid these issues, as well as others [e.g.
i) that heating in the ocean is not sampled by the surface temperature; see
http://pielkeclimatesci.wordpress.com/2011/09/20/torpedoing-of-the-use-of-the-global-average-surface-temperature-trend-as-the-diagnostic-for-global-warming/
ii) concurrent trends in surface absolute humidity obscure the actual trends in heating; see Pielke Sr., R.A., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211. http://pielkeclimatesci.wordpress.com/files/2009/10/r-290.pdf
On the last point in your latest comment, you only presented “the annual-mean rise since the base period of 1951-1980” yet your original statement was
“the warming in the last 30 years [that] has already shifted by one standard deviation of the summer mean temperature in many land areas”
Please show analyses in terms of summer mean temperatures in terms of sigma deviations.
We have shown in our work that extreme values of temperature in terms of standard deviations averaged over the lower troposphere have increased in the summer, but I have not seen this for the surface nor for the mean:
Gill, E.C., T.N Chase, R.A. Pielke Sr, and K. Wolter, 2013: Northern Hemisphere summer temperature and specific humidity anomalies from two reanalyses. J. Geophys. Res., 118, 1–9, DOI: 10.1002/jgrd.50635. Copyright (2010) American Geophysical Union. http://pielkeclimatesci.files.wordpress.com/2013/08/r-341.pdf
Finally, I will be offline for most of the rest of the weekend, but I find our interactions very constructive and informative. Let’s continue next week.
Roger P.
Roger, as I answered last time, the imbalance represents the part of the forcing that has not already been countered by warming. This is very much harder to estimate than the total CO2 forcing since the beginning, or the rate of change of forcing due to emissions. There are two independent methods, one from net radiation at TOA from satellites, and the other from the rate of change of ocean heat content, but both suffer from noise and lack long-term or complete observations. These seem to converge on numbers between 0.5 and 1 W/m2 positive imbalance.
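The second of those methods amounts to a unit conversion. A minimal sketch, with an assumed uptake rate in the broad range of published 0-2000 m Argo-era estimates:

```python
# Convert an ocean-heat-uptake trend (J/yr) into a global-mean
# radiative imbalance (W/m^2) by dividing by Earth's surface area
# and the seconds in a year. The uptake rate is an illustrative
# assumed value, not a specific published estimate.
EARTH_SURFACE_M2 = 5.10e14
SECONDS_PER_YEAR = 3.156e7

def imbalance_from_ohc(joules_per_year):
    return joules_per_year / (EARTH_SURFACE_M2 * SECONDS_PER_YEAR)

rate = 1.0e22   # J/yr, assumed
print(f"{imbalance_from_ohc(rate):.2f} W/m^2")   # ~0.6 W/m^2
```

An uptake of 1e22 J/yr maps to roughly 0.6 W/m2, which sits inside the 0.5-1 W/m2 range quoted above.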
Yes, the spatial distribution of warming may give a different effective sensitivity for T defined as a surface mean temperature, and a mean T from T^4, but it would be unlikely that this would affect their proportionality factor for a few degrees of global warming. Both transient sensitivities could be obtained from analyses and compared, and it would be an interesting exercise. A T^4 based sensitivity would likely be higher if temperatures are changing faster in colder areas because those are less effective at emitting radiation for a given temperature rise.
The heating in the ocean is not sampled by the surface temperature, I agree, but it is a major part of the H term. It is needed to understand the pause in terms of energy conservation in a time of growing forcing, and a surface pause.
On the last point, the summer mean temperature change is given here.
http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=3&type=anoms&mean_gen=0603&year1=2001&year2=2010&base1=1951&base2=1980&radius=1200&pol=rob
The standard deviation is from the Hansen “perception” paper.
http://www.columbia.edu/~mhs119/PerceptionsAndDice/Fig2_comparison_v2.gif
Both the change and standard deviation are near 1 C for large continental areas. Hansen doesn’t have a plot of the ratio for a 10-year period, only individual years, because he was making a different point. 30 years comes from using a 2001-2010 minus 1951-1980 average, actually more like 40 years, come to think of it.
PS, Roger P, yes, glad for the interaction too.
Cart before horse.
For a given forcing, temperatures change faster in colder areas because of T^4. This implies lower sensitivity.
phatboy, in colder areas a given temperature change is less effective at offsetting a given radiative imbalance. Therefore you need a larger temperature change which is a larger effective sensitivity.
You start off with the forcing, not the temperature change.
For a given forcing, the temperature change has to be higher in cold areas in order to restore equilibrium.
But, conversely, in warmer areas equilibrium is restored with much smaller changes in temperature.
So, on average, implied sensitivity is less because of the T^4 effect
There are balancing effects.
For a given change in T, the change in T^4 is larger the higher T is. Another factor is the contribution of that to OLR, because the atmosphere is most transparent when the absolute humidity is lowest, i.e. in clear-sky polar conditions.
Those are just two of the factors, but they are enough to prevent drawing conclusions from qualitative observations, calculations based on actual conditions are needed to tell better.
Another question is the significance of unweighted global average surface temperature in comparison with other possible measures, both those based on a different way on surface temperatures and those based on other quantities like OHC. The average surface temperature is used mainly, because it’s the most obvious first alternative, not because it would be known as the most informative for further conclusions.
I checked and the actual effective radiative temperature varies from 230 K at the poles to 260 K in the tropics (255 K global mean). If the surface temperature change is concentrated mostly at colder latitudes, the effective global-mean sensitivity would need to be higher (just from fixed-lapse-rate no-feedback response) to balance a given forcing change because colder regions are less effective per degree in canceling the forcing. This is in addition to the area of higher latitudes being smaller per latitude degree. In some ways, polar amplification is an inefficient way to respond to forcing in terms of the magnitude of the surface temperature change.
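The no-feedback arithmetic behind this comment can be made explicit. Differentiating the Stefan-Boltzmann law gives the Planck response 4σT^3 (the extra emission per degree of warming), evaluated here at the effective temperatures quoted above (230, 260 and 255 K):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_response(T):
    """d(sigma*T^4)/dT = 4*sigma*T^3: extra emission per degree of warming
    at effective radiating temperature T (no-feedback, fixed lapse rate)."""
    return 4.0 * SIGMA * T**3

for label, T in (("poles", 230.0), ("tropics", 260.0), ("global mean", 255.0)):
    r = planck_response(T)
    print(f"{label:11s} T_eff = {T:.0f} K: {r:.2f} W/m^2 per K "
          f"-> {1 / r:.2f} K per W/m^2")
```

The colder end emits roughly 2.8 W/m2 per degree against about 4.0 at the warmer end, which is the sense in which a degree of polar warming cancels less forcing than a degree of tropical warming.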
Jim,
Remember also that almost none of the surface radiation in the tropics escapes to space. In the tropics the tropopause is also very high up and colder than elsewhere.
On the other hand, a sizable part of the surface radiation of high-latitude winter penetrates through the atmosphere.
Extratropical deserts are areas where the influence on the OLR may be larger, but one should check real numbers rather than speculate in any direction.
Jim D, you’ve got it exactly backwards, but if neither Roger nor Pekka can help you to see that, then I have no chance.
phatboy, T^3 is in the denominator of the Planck response, meaning sensitivity increases as T decreases. dF = 4σT^3 dT.
Sensitivity goes as dT/dF. Higher sensitivity means more Kelvin for each W/m2 forcing.
Jim D, the T^4 effect is the very reason why radiative forcing leads to much greater warming in very cold places than in very warm areas.
But, given that, by definition, at least half of global temperature is above average, where the effect is much less, the average temperature increase for a given forcing, and therefore the sensitivity, is less than it would have been without the T^4 effect.
And that’s before you factor in the effects which Pekka mentioned.
phatboy, no, if you want to cancel out an imbalance, the most effective place to add a degree of temperature is in the warmer areas because of the T^4 effect helping it. This is opposite of what the polar amplification does. It takes more degrees in colder areas to have the same effect on the imbalance.
Does it not seem that Jimbo has differentiated Stefan-Boltzmann wrt T and set –
d(j*)/dT = dF/dT
or j* = F
This is entirely wrong.
Robert Ellison, it is that differentiation that is relevant. The change in outward radiation per temperature increase is precisely the inverse of the Planck response sensitivity, which is the temperature change required per unit of added outward energy. It is how the no-feedback fixed-lapse-rate response works.
Jim D, the temperature change is a result of the forcing, not the other way round.
Yes, a degree change has a greater effect in warm areas than in cold areas, so you don’t need such a big temperature change to offset the imbalance.
And, given that colder areas constitute a smaller proportion of the planetary surface than warmer areas, the average tends in the direction of lower sensitivity.
I don’t know why you can’t see that.
What I wanted to say is that looking at surface temperatures does not give the full picture. Another approach is to look at OLR. CERES has produced material suitable for that. Satellite images from this site have a clear message:
http://ceres.larc.nasa.gov/press_releases.php
The strongest OLR comes not from the hottest moist tropics but from a little off that. High latitudes contribute less. What happens at the surface is a little different, but there’s a connection between surface temperatures and TOA radiation.
phatboy, for sensitivity it is the other way around. First you have the imbalance from doubling CO2, for example, or increase solar strength, then you have the temperature response, which would be more efficient from warmer areas, but needs to be larger from cold areas. If it is only warming in cold areas, the required response is larger, and that is higher sensitivity to that forcing.
Jim D, the temperature change comes from the forcing, NOT the other way around.
For a given forcing, you don’t need as big a temperature change in warm areas in order to balance the books.
I really don’t know how I can make it any clearer – so it’s up to you to think clearer.
Pekka, yes, the further assumption is that the change is not so large as to affect the general lapse rate, which links the surface temperature to the whole troposphere’s temperature profile, so in this sense OLR changes in proportion to surface temperature.
phatboy, without realizing it now you have started to agree with me. Smaller temperature change needed when it is warmer means less sensitivity, because sensitivity is temperature change per forcing.
Jim D, so now that you’re cornered, you try to transfer the blame onto me!
phatboy, I am still saying the same thing as I was at 10:37am, with which you initially disagreed. Who’s changed their view?
j* = σ T^4
=> dj*/dT = 4 σ T^3
Gives the instantaneous rate of change – the tangent to the curve – of emittance at a temperature. Not all that interesting.
Your formula assumes that:
dF/dT = dj*/dT => F = j*
This is not the case. They are different physical concepts and have different dimensions.
Robert Ellison, it is a central concept of climate change that a change in forcing is balanced by a warming to increase the outward radiation. This is how the no-feedback response works.
Forcing is defined as the nominal change in net flux with no change in surface temperature. So temperature is irrelevant to the formal definition. There is certainly a hyperbolic negative feedback – the Planck response – to increasing temperature.
Jimbo – the response is shown by the S_B equation for a grey body.
So both the blackbody formula and the equivalence (j* = F) are incorrect.
Robert Ellison, you need both forcing and temperature to define sensitivity.
F = α ln(CO2/CO1)
dT = λdF
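Putting the two formulas quoted above together for a CO2 doubling; alpha = 5.35 W/m2 is the commonly used logarithmic-forcing coefficient and lambda = 0.3 K/(W/m2) an illustrative no-feedback value — both numbers are assumptions for the sketch, not values asserted in the comment:

```python
import math

ALPHA = 5.35    # W m^-2, coefficient in F = alpha*ln(CO2/CO1) (assumed)
LAMBDA = 0.3    # K per W m^-2, illustrative no-feedback sensitivity

def forcing(c2, c1):
    """Radiative forcing for a change in CO2 concentration."""
    return ALPHA * math.log(c2 / c1)

dF = forcing(560.0, 280.0)  # a doubling of CO2
dT = LAMBDA * dF
print(dF, dT)               # ~3.7 W m^-2 and ~1.1 K
```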
Irrelevant to the wrongness of the formula you are using.
Robert Ellison, so lambda is sensitivity. I have shown you that dT/dF varies with temperature using the same S-B formula as you, and now you don’t understand? How can that be?
No – you and others have assumed that:
dF/dT = dj*/dT = 4 σ T^3
The change in emittance does not equal the change in forcing. They are distinct concepts with different units.
Do you not understand this? Is it one of your many blind spots?
Jim D, you originally used your (correct) argument of higher sensitivity in colder areas, because of the T^3 dependence, (incorrectly) to imply a higher global sensitivity.
Robert Ellison, the change in emission does equal the change in forcing after reaching equilibrium. That is what this is all about. First you have forcing, then the temperature changes, which changes the emission (your dj*) to balance the forcing change (dF). That is how climate change works in a nutshell. Forcing changes lead to temperature changes.
phatboy, Ellison showed you how you get the T^3 dependence by differentiating the T^4 equation.
Forcing doesn’t equal emittance for many reasons – first of all for a grey body with varying albedo. Too simple by far.
Robert Ellison, it does, otherwise you still end up with an energy imbalance after the emission has increased. Both are defined at the top of the atmosphere, are in the same units, and are basically measuring the same thing, changes in net radiation. Albedo doesn’t come into it because we keep that fixed for the no-feedback response.
Jim D, what’s that got to do with it?
Here’s your original comment, in full context:
phatboy, sure, and I stand by that one as well as the later 10:37am one I referred you to before. They are all saying the same thing.
Well then you’re wrong – end of!
‘Comparisons of global steric height trends based on different gridded fields of Argo in situ measurements show a range of 0–1mmyr−1 which can be lead back to data handling and climatology uncertainties. Our results show that GOIs derived from the Argo measurements are ideally suitable to monitor the state of the global ocean, especially after November 2007, i.e. when Argo sampling was 100% complete. They also show that there is significant interannual global variability at global scale, especially for global OFC. Before the end of 2007, error bars are too large to deliver robust short-term trends of GOIs and thus an interpretation in terms of long-term climate signals are still questionable, especially since uncertainties due to interannual fluctuations are not included in our error estimation.’ http://www.ocean-sci-discuss.net/8/999/2011/osd-8-999-2011.pdf
‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf
The more fundamental equation is the 1st order differential global energy equation.
d(W&H)/dt = energy in (J/s) – energy out (J/s)
Where W&H is work and heat – and is mostly heat in oceans – and the LHS is the radiant imbalance at TOA. Theoretically – it is possible to determine the radiant imbalance to first approximation from ocean heat. It is impossible to quantify this directly to the required accuracy from satellite observations – but the change in trend is much more accurate and the difference is equal to the change in the rate of ocean warming or cooling.
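As a rough sense of scale, a sustained TOA imbalance corresponds to an enormous annual energy gain when integrated over the Earth's surface. A back-of-envelope sketch, using a round illustrative imbalance of 0.5 W/m2 and assuming (as argued here) that most of it ends up as heat in the oceans:

```python
# Converting a TOA radiative imbalance into joules per year of
# ocean heat uptake. All numbers are round illustrative figures.
EARTH_AREA = 5.1e14        # m^2, surface area of the Earth
SECONDS_PER_YEAR = 3.156e7

imbalance = 0.5            # W m^-2, illustrative net imbalance

heat_per_year = imbalance * EARTH_AREA * SECONDS_PER_YEAR
print(heat_per_year)       # ~8e21 J per year
```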
In the period that von Schuckmann and Le Traon (2011) cover, energy in declined a little and energy out declined somewhat more, leaving a net warming of some 0.5W/m2. So the safer assumption is that the oceans did warm in the period. Looking more closely at the components of energy out, the change over the period was all in SW – less reflected SW – presumably due to secular changes in cloud cover associated with changes in ocean and atmosphere circulation.
Of immense significance is that there are both decadal changes in energy out and abrupt shifts associated with shifts in ocean and atmospheric circulation. Less cloud from the 80’s to late 1990’s, an abrupt shift post the 1998 climate shift and not much change since.
e.g. http://s1114.photobucket.com/user/Chief_Hydrologist/media/cloud_palleandlaken2013_zps3c92a9fc.png.html?sort=3&o=96
Over the full CERES record there is negligible trend in net energy out – it is perhaps unlikely that this will change any time soon in the current cool decadal mode. Energy in is declining from near the Schwabe cycle peak and perhaps longer term.
It seems the trend for the near term is less warming – or even cooling. Cloud cover suggests cooling from the late 90’s peak. Only ARGO can tell for sure which way – the sign of the differential. The next round of ARGO updates should be most interesting – if we can get past the problems of data handling.
What a beautiful discussion we have. Let me paraphrase a great insight of Lewandowsky’s,
The less we know the more urgent an immediate action becomes
(http://phys.org/news/2014-04-scientists-unmask-climate-uncertainty-monster.html)
and add my own imitation (the sincerest form of flattery)
The less we know the more we argue.
Seriously, I enjoy this blog a lot, and I have learned quite a few things.
I ran out of puff pursuing the threading. Sorry.
To Vaughan Pratt and Latimer Alder.
The problem with Warmists claiming – and rightly in my view – that the actions and interactions of a few molecules of CO2 can produce arbitrarily large and unknown effects on the Earth’s weather is that it leads to the inescapable corollary that a few molecules of anything can have the same magnitude of effect.
The problem is that we can’t track all the molecules involved. We don’t know the effects of the actions of each. We haven’t got the faintest clue what arbitrarily small changes to inputs to the deterministic dynamical system that comprises the Earth – from the centre to outermost reaches of the atmosphere, however defined – will eventually cause.
So, if you think it might rain, you might choose to take an umbrella with you. Or maybe not. If the more immediate danger seems to be the violent shaking of the building you’re in, whether due to earthquake, terrorist attack, meteorite strike, imminent structural failure, or reasons unknown, the likelihood of rain might vanish from your considerations for a time.
It is often the unforeseen that bites you on the backside, and by definition you can’t prepare for the unforeseen.
So you might choose to worry about what you consider worrisome, and figure you can worry about other things later. It seems to work for me, but your mileage might vary. Worrying on my behalf, while appreciated, is unlikely to help me, but thanks for the interest.
Live well and prosper,
Mike Flynn.
Dancing on thin ice might – rightfully – be defined as recklessness and someone perhaps more prudent might decide with admirable restraint.
OMG – stylistic pomposity is contagious.
Robert I Ellison,
Did you forget to finish your first sentence? Apparently stylistic pomposity is not as contagious as you might wish, if it involves sentence completion as part of the style.
If you need any assistance with pomposity in any form, I will be more than glad to help.
I’m not sure whether you need additional pomposity, though.
I’ll let you decide.
Live well and prosper,
Mike Flynn.
The sentence is as complete as I want to make it – Flynn. It is replete with cliché and a studied mannerism that is horribly pedestrian. It is redolent of a yokel writing to the papers with a formalism that he imagines is sophisticated – but merely shows a lack of real depth in letters and learning.
The problem with Warmists claiming – and rightly in my view – that the actions and interactions of a few molecules of CO2 can produce arbitrarily large and unknown effects on the Earth’s weather is that it leads to the inescapable corollary that a few molecules of anything can have the same magnitude of effect.
The idea that an industrial sh_tload of greenhouse gases can change the composition of the atmosphere considerably is not so inescapable. They are defined as greenhouse gases because of not so obscure interactions with photons of specific frequencies.
e.g. http://cips.berkeley.edu/events/rocky-planets-class09/ClimateVol1.pdf
I would suggest close perusal – if they can – by both DC and Flynn because making it up as they go along is obviously not working for them.
Flynn may of course call me a warmist all he likes. Webby and Gates can chime in with denialist at the same time. Yin and yang – the middle path might result in something collectively less insane.
The problem is that we can’t track all the molecules involved. We don’t know the effects of the actions of each. We haven’t got the faintest clue what arbitrarily small changes to inputs to the deterministic dynamical system that comprises the Earth – from the centre to outermost reaches of the atmosphere, however defined – will eventually cause.
The problem of course is statistical, involving not so arbitrarily large changes to the atmosphere and energy dynamic of the planet – and as I keep saying – unknown but potentially catastrophic changes in as little as 10 years in a climate that is wild. Good enough reason to invest in trade and development, restoration of agricultural soils, conservation of ecosystems and energy innovation, I would have thought. Even if pollution and resource depletion were not compelling factors.
This odd cooling Earth notion of Flynn’s is indeed correct. The Earth is cooling at the surface of the mantle at a rate of 0.05W/m2. The surface of the planet is warmed by some 165W/m2. Let me do the math here. The energy in and energy out are variable – as I suggest just above – leading to warming and cooling of the oceans and atmosphere.
So, if you think it might rain, you might choose to take an umbrella with you. Or maybe not. If the more immediate danger seems to be the violent shaking of the building you’re in, whether due to earthquake, terrorist attack, meteorite strike, imminent structural failure, or reasons unknown, the likelihood of rain might vanish from your considerations for a time.
Buildings are earthquake proofed, terrorism is hopefully averted, meteorites are scanned for, the risk of structural failure is vanishingly small with modern engineering and good maintenance – catastrophic failure is engineered out of systems other than in extremes such as the World Trade Centre. Similarly – dancing on thin ice – in a wild climate – is a game for clowns.
It is often the unforeseen that bites you on the backside, and by definition you can’t prepare for the unforeseen.
So you might choose to worry about what you consider worrisome, and figure you can worry about other things later. It seems to work for me, but your mileage might vary. Worrying on my behalf, while appreciated, is unlikely to help me, but thanks for the interest.
We might reasonably foresee much that is possible, most that is probable and all that is inevitable. Mind you I do worry about Flynn’s odd take on the world – but not much. He once told me that the locals bringing vegetation back to the Sahel only encourages them to survive and breed more.
e.g. http://www.telegraph.co.uk/earth/10176217/The-underground-forests-that-are-bringing-deserts-to-life.html
I guess we could stop it with a sh_tload of agent orange.
You are handling a wide spectrum of phenomena. Let me nitpick at some:
“The Earth is cooling at the surface of the mantle at a rate of 0.05W/m2.” It seems to be 0.09 now, but your point stands.

“We might reasonably foresee much that is possible, most that is probable and all that is inevitable.” That one is probably limited to natural phenomena; I don’t believe that you foresaw in 1994 what Russia, a proud guarantor of Ukrainian territorial integrity, would do in 2014. But even in natural phenomena, I don’t know of a climate scientist foreseeing the “pause”.
BTW, thanks for a link to Pierrehumbert’s 2009 class.
It was reasonably foreseeable – I talked to a hydrologist in 2003 who had made the same connection. In 2007 I was in print. The IPCC had missed it entirely even after the fact.
The ‘Great Pacific Climate Shift’ in 1976/77 was widely ascribed to global warming. The shift to cooler conditions post 1998 put paid to that but even so the timing of shifts is imponderable.
Robert I Ellison,
Thanks for joining the club worrying about the way I think. I appreciate it.
Live well and prosper,
Mike Flynn.
Flynn’s short journeys to wit’s end – at some stage adolescent petulance pales into tedium.
“In 2007 I was in print.” This was 8 years into a “pause”, not much of a foresight. But can I have a link, anyway? Thank you.
Can’t seem to link at the moment. Google Ellison and American Thinker.
It was obvious that the system had shifted in 2003 – as I said. Perhaps naively I imagined it would be obvious to the IPCC. What do you want for nothing?
The timing of shifts can’t be predicted beforehand.
e.g. http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00626.1
Manacker,
@ April 6, 2014 at 6:16 am
Thank you for your reply. My reply is a long comment so I am posting it at the top level (hoping for constructive discussion from the denizens)
I’ve seen your assumptions and calculations on this many times. However, in the spirit of challenging statements like yours, I like to consider what others are saying as well (especially economists who crunch these numbers). I am also influenced by past experiences where enthusiastic advocates for a particular solution or course of action cherry pick from their experience and omit things they are not familiar with.

I am very familiar with what the enthusiastic advocates for energy efficiency, renewable energy and carbon pricing were arguing in the early 1990’s. I was involved in it back then and still have much of the material. The Australian Bureau of Agricultural and Resource Economics (ABARE) was charged with providing the government with the projections of Australia’s CO2 emissions and also did what was recognised at the time as world leading work on carbon pricing (carbon trading and carbon taxes). The proponents of what could be achieved by energy efficiency and renewable energy were trying to persuade ABARE to use their (optimistic) inputs instead of the rates of growth the economists could justify. ABARE were happy to run the model using the advocates’ figures if they could support them. The advocates tried but couldn’t. ABARE ran their scenarios anyway.

ABARE produced projections for Australia’s CO2 emissions for the period 1990 to 2005. I’ve compared ABARE’s projections against what actually happened http://bravenewclimate.com/2010/08/22/abare-projections/ . ABARE’s projections have proven to be very good, especially given this was a new field of research in 1991. The advocates were way off the mark (this is not shown in my comparison, which compared ABARE’s 1991 projections for 2005 with the actual emissions in 2005).
So, what are the ways we can do sanity checks on your estimate and others?
1. Long term trend of humans’ increasing per capita energy consumption:
E=1000t^-0.4
where E = per capita energy consumption in MJ/d
t = years before present.
Using this rate, average per capita energy consumption in 2100 will be about five times what it is today.
If the energy is provided with fossil fuels, even with energy efficiency improvements, we are heading for at least a doubling of CO2 emissions from fossil fuels.
2. EIA projection is a 50% increase in emissions from 2010 to 2040. http://www.eia.gov/forecasts/ieo/more_highlights.cfm If this rate continued (linearly), we’d have a 150% increase to 2100 (an increase by a factor of 2.5), to about 80 Gt/a.
3. William Nordhaus (2008) using DICE estimated emissions would increase from about 87 Gt/a in 2015 to about 197 Gt/a in 2100 (Table 5-6), an increase of 110 Gt/a http://www.econ.yale.edu/~nordhaus/homepage/Balance_2nd_proofs.pdf . His estimates use economists’ projections of rates of growth for: population, GDP, energy use per capita and per GDP, fossil carbon limit (6000 Gt C) and more.
4. Richard Tol has also done sophisticated economic projections, and also done simple analyses using the Kaya Identity.
5. Roger Pielke Jr. explains the rates of decarbonisation that would be needed in this short post and the linked Nature paper http://rogerpielkejr.blogspot.com.au/2010/07/decelerating-decarbonization-of-global.html
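The arithmetic in point 2 is easy to check. A quick sketch, assuming a round 32 Gt/a base for 2010 (the base figure is my assumption for the illustration, not EIA’s):

```python
# Point 2 above: a 50% rise over 2010-2040, extended linearly to 2100.
base_2010 = 32.0                  # GtCO2/a in 2010, assumed round figure
rise_per_30yr = 0.5 * base_2010   # 50% of the base every 30 years

emissions_2100 = base_2010 + 3 * rise_per_30yr     # 2010 + 3 x 30 years
print(emissions_2100, emissions_2100 / base_2010)  # 80.0 Gt/a, factor 2.5
```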
So, it seems to me, without looking closely at the basis for your assumptions, that you are underestimating what global emissions will grow to by 2100 (if we continue to block the cheaper alternatives, such as nuclear power).
I do not believe the trend of increasing per capita energy consumption of the past 200,000 years will suddenly change. I think the most likely rate of growth of per capita energy consumption is what has been occurring for the past 200,000 years.
Therefore, (as you’ve seen me write many times) if we want to reduce emissions, we need to allow cheap, low emissions alternatives to fossil fuels. We need to remove the impediments we’ve imposed and let the markets and competition do what they do – i.e. provide what the markets want (which is cheap, reliable, secure energy for everyone and every business and industry in all countries).
I think the best guess is nuclear power will provide most of the world’s energy by 2100 (including producing most of the transport fuels). I doubt renewable energy will make a significant contribution to future energy supply.
I reckon nuclear power can cut the emissions from fossil fuels by 50% by 2100 and do so at a large economic advantage as well. I’ve written the basis for all this many times before, so won’t write it again here.
Peter Lang
Thanks for your detailed response.
Let me comment to your points.
First of all, I agree with you that a concerted switch to nuclear energy to replace coal for power generation would have a significant impact on CO2 emissions and, more importantly IMO, on real pollution from particulate carbon, sulfur compounds, metals, etc.
Now to your long-term per capita trend line.
The formula you cite ignores the saturation effect. We are seeing this today in the “industrially developed nations”, where per capita CO2 emissions are diminishing rather than increasing. A better measure is the shorter term trend. And this showed a 10% increase in pc CO2 emissions from fossil fuels from 1970 to 2010 (CDIAC data on CO2 emissions, UN data on world population). [The other major contributor, deforestation (~15% of total today), increased from 1970 to the late 1980s and has since returned to the 1970 level, despite an 87% increase in population.] If the past trend continues to 2100 we would see roughly a 30% increase in pc CO2 emissions by 2100 (not “five times what it is today” as your formula estimates). So, let me tell you as I see it: your formula is flat-out wrong as a predictor of future per capita CO2 emissions, as evidenced by the observed increase since 1970. If economists are using this formula, they are simply deluding themselves.
You predict a doubling of CO2 on the basis of this formula.
I’d say, based on a continuation of the observed past pc increase since 1970 and UN population growth estimates, a more reasonable level by 2100 would be 650 ppmv, which would represent an increase of 65% (rather than a “doubling”). Even if we ASS-U-ME that pc CO2 emissions will increase by 80% from today to 2100 (or three times the observed rate of increase since 1970), we arrive at an annual CO2 emission by 2100 of ~80 Gt/a, a cumulative CO2 addition since today of ~5,000 Gt and a CO2 concentration of around 720 ppmv. That would be a “worst case” upper limit, IMO.
You cite an EIA study which projects a “50% increase in emissions from 2010 to 2040”.
The UN projects a 28% increase in population over the same period (from 6.926 to 8.874 billion). So this would be a 17% increase in per capita CO2 generation over that period. Based on the past, I’d say this estimate is a bit on the high side (my estimate would be a 10% increase in pc emissions and a 41% increase overall, rather than 50%), but the difference is not great.
Now you fall into a logic trap when you write:
Population growth (the folks who are generating this CO2) is NOT expected to continue linearly at all. In fact, UN estimates have it levelling off at around 10.2 billion around the end of the century, or a growth of only 15% from 2040 to 2100. So your “increase by a factor of 2.5” is, again, based on bogus assumptions.
To your point 3.
CO2 emissions from all sources were around 34 GtCO2 last year. So assuming a starting point in 2015 (next year) that is already 87/34 = 2.55 times what it was in 2013 is silly. Don’t know where Nordhaus is getting his info, but, again, it is flat out bogus.
In addition, 197 Gt/a in 2100 with a projected population of 10.2 billion equals a pc CO2 rate of ~19 t/a for every man, woman and child on this planet, or 50% higher than it is today for the “industrially developed nations”.
I’m sure you can see how silly this projection is, Nordhaus or no Nordhaus. He has used a false starting point for his projection and then gotten bogus “estimated emissions” input info from DICE, to arrive at a bogus projection. It’s just that simple. GIGO.
Richard Tol’s study is much better, but it relies on someone else’s projections of future CO2 emissions, and hence atmospheric concentrations and temperature increase (IPCC?). Based on these assumptions, Tol arrives at 3.0C warming above today by 2100. Using IPCC’s 2xCO2 ECS estimate of 3C, this means atmospheric CO2 must be at around 790 ppmv by 2100. More GIGO?
Roger Pielke Jr. has an excellent article. Citing data from the Netherlands Environmental Assessment Agency he shows that the “rate of decarbonization” has slowed in recent decades. This should come as no surprise if one looks at world population, which increased by 87% from 1970 to 2010. RPJ discusses what would be required to reach an arbitrarily established “low stabilization target” of 450 ppmv. All makes sense (although the target is impossible to reach, no matter what we do). He concludes:
This is also undoubtedly true.
Then we come to your concluding statements:
Peter, we should not look at the per capita energy consumption trend of the past 200,000 years, but rather the trend over the past 40 years. The trend over the past 200,000 years has already changed: it was around a 10% increase over the past 40 years.
You add:
Agree wholeheartedly.
But you don’t need to use “scare mongering” based on bogus projections to get your point across.
And that is my point.
Max
Manacker,
Likewise, thank you for your response too.
Let me deal first with this key statement at the start of your response:
The long term trend (200,000 years) I was referring to was energy consumption per capita, not CO2 emissions per capita.
I agree that the world can cut CO2 emissions per capita, as I stated. But I disagree with those who argue that the world is likely to reduce energy consumption per capita over the long term. They cherry pick countries, regions or particular periods to support their case. I gave an example of the case of the ABARE projections of Australia’s CO2 emissions in the early 1990s and how the enthusiastic advocates for energy efficiency and renewable energy tried to persuade ABARE to change their inputs to include the activists’ beliefs of what could theoretically be achieved to reduce per capita energy consumption and CO2 emissions. Their beliefs were wrong by a country mile.
Can you give me a link to the site which shows that World CO2 emissions per capita are decreasing over a statistically significant period? IEA gives me these figures: (see p 125 “World key Indicators” here:
https://www.iea.org/co2highlights/co2highlights.pdf )
CO2 / population (tCO2 per capita):
1990 3.98
1995 3.85
2000 3.87
2005 4.22
2008 4.42
2009 4.29
2010 4.44
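Taking the IEA figures above at face value, the direction of the world per capita trend over the full 1990–2010 span is straightforward to compute:

```python
# IEA world per capita CO2 emissions (tCO2 per capita), as listed above.
pc = {1990: 3.98, 1995: 3.85, 2000: 3.87, 2005: 4.22,
      2008: 4.42, 2009: 4.29, 2010: 4.44}

change = pc[2010] / pc[1990] - 1   # fractional change, 1990 -> 2010
print(round(change, 3))            # ~0.116, i.e. a rise of about 12%
```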
I will come back to the remainder of your comment a bit later. I just wanted to get this possible misunderstanding sorted first (i.e. that the long term trend I was referring to was increasing per capita energy consumption, not per capita CO2 emissions).
Manacker
I didn’t deal with this properly in my previous comment. The long term per capita trend line is for per capita energy consumption. I do not agree with cherry picking short term trends, especially if you are simply picking out some countries, regions or economic groupings to support your point rather than using the total world figures. Developed countries, it could be argued, are in an anti-enlightenment period (as has happened throughout history as empires grow comfortable and complacent). The rate of development in the developed countries has slowed. For example, we stopped going to the Moon in 1972 and exploring beyond. When we start travelling to Mars and beyond we’ll use a hell of a lot more energy per capita. Energy use per capita will continue to grow as per the long term trend. You’ll need to provide a stronger argument than you have so far to persuade me otherwise on this.
Manacker,
There are lots of good points in your comment. For example, in 2007, leading up to the AR4 report, Nordhaus projected global CO2-eq emissions of 87 Gt/a in 2015. And yes, the latest estimate of global CO2 emissions in 2013 is 34 Gt/a. Of course, he didn’t anticipate the GFC and other corrections when he did these analyses. So, yes, his projection for 2015 is way too high (and possibly for 2100 too; I don’t know, but I do respect that he is a highly regarded world authority on the subject).
There are also cases where you have mixed apples and oranges and seem to have confused energy growth rates and CO2 emissions growth rates (and misunderstood what I was trying to say). I can’t go through them all. But here is an example of a misrepresentation:
The formula you referred to is a trend of per capita energy consumption over the past 200,000 years, not a predictor of CO2 emissions. I don’t understand how you could have misunderstood that from my comment.
Here is another:
I was clearly talking about CO2 emissions, not concentrations. You’ve done a pea and thimble trick; you asserted that I’d said “You predict a doubling of CO2” without stating that I was talking about emissions, then you addressed the comment by talking about concentrations. It’s concerning that you would do that.
Here is an example of mixing apples and oranges:
Pielke’s figures are for 1990 to 2007 but you’ve stated population increased by 87% from 1970 to 2010. Is this a case of mixing apples and oranges and misrepresenting the comparison? Is this a case of you doing what you are accusing me of doing and inflating figures to support your argument?
Also, just to be clear for other readers, what his ‘decarbonisation of the global economy’ refers to is CO2 emissions per GDP, not CO2 emissions per capita.
I’d also point out that some of the major reasons for the slowing of the rate of decarbonisation of the global economy since 1990 are: 1) blocking what was in the 1970s to 1990s an accelerating rate of rollout of nuclear power; 2) a slowing rate of development of hydro as the world’s most economically viable hydro sites are already built; 3) the rapid growth of fossil fuel consumption in Asia. This has reduced the proportion of the world’s energy that is supplied by nuclear and hydro; renewables have made an insignificant contribution, so have not made up for the reduction of low emissions energy supplied by nuclear and hydro.
I agree with this point. But neither should we understate the issue based on bogus projections. I haven’t mentioned damage function or impacts, so I don’t believe I am ‘scare mongering’. I am just trying to be realistic and caution against understating the situation. I am trying to persuade you to not understate the likely emissions projections if the world doesn’t have a cheap alternative to fossil fuels. Understating the emissions projections is just as bad as overstating it.
In short, I do suspect you are underestimating what global CO2 emissions would reach in 2100 if we continue to use fossil fuels to provide the current proportion of global energy. To change that substantially we need cheaper alternatives. The most prospective technology that could supply most of the world’s energy by 2100 is nuclear, IMO. But it is effectively blocked by policies the developed world has implemented which artificially make it far more expensive than it could and should be. That is my point. I don’t think it helps to gild the lily and understate the CO2 emissions projections, as I believe you are doing with the assumptions and simple analysis you keep repeating. I think you are giving a low-ball estimate and should say so each time you write it. I do agree that many of the figures I provided in my comment are on the high side, sometimes a long way on the high side. But my main point remains: unless we remove the blocks that are preventing the world from having cheap, low emissions energy, CO2 emissions will be much higher than they would be if we tackled this issue; and the sooner we do it, the lower emissions will be throughout the century. I’d add that, IMO, renewables will supply only a minor proportion of world energy by 2100.
Peter Lang
It does not make much sense for me to get into a p—ing contest with you about per capita CO2 generation from fossil fuels, but let me clarify.
First of all, I have NOT ASS-U-MED that pc CO2 generation or energy consumption is decreasing, as you write, but rather that it will continue to increase at the same exponential rate that it has done over the past several decades (1970-2010). As I showed you using CDIAC and UN data, it increased by around 10% over this period. CO2 from deforestation is the same today as it was in 1970, despite an 87% increase in population, so this is not tied to global human population and is unlikely to increase.
Using the same exponential annual rate of increase and adding in a bit, I would arrive at a further 30% increase from 2010 to 2100.
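Compounding that 10%-per-40-years rate forward from 2010 to 2100 gives about 24% on its own; “adding in a bit” then brings it to the 30% used here. A sketch:

```python
import math

# Extrapolating a 10% rise per 40 years (observed 1970-2010) to 2100.
rate_per_year = math.log(1.10) / 40            # continuous growth rate
growth_2010_2100 = math.exp(rate_per_year * 90) - 1
print(round(growth_2010_2100, 3))              # ~0.239, i.e. about 24%
```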
Using the year-by-year UN projection reaching a 10.2 billion world population by 2100, I arrive at around 60 Gt/a by 2100, a cumulative emission of around 4,000 GtCO2 and a CO2 concentration of 650 ppmv.
Even if I ASS-U-ME that the pc CO2 generation will increase by 3 times the past rate, or 80% from 2010 to 2100, I arrive at 80 Gt/a by 2100, 5,000 GtCO2 cumulative and a concentration of 720 ppmv.
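That worst case can be cross-checked against the IEA 2010 per capita figure of 4.44 tCO2 quoted earlier in the thread and the UN ~10.2 billion population projection; the 80% per capita growth is the stated assumption being tested, not a separate estimate:

```python
# Worst-case cross-check: 80% growth in per capita fossil-fuel CO2
# from 2010 to 2100, applied to the UN 2100 population projection.
pc_2010 = 4.44            # tCO2 per capita in 2010 (IEA figure)
pop_2100 = 10.2e9         # projected population in 2100

pc_2100 = pc_2010 * 1.8               # assumed 80% growth to 2100
total_2100 = pc_2100 * pop_2100 / 1e9 # GtCO2 per year
print(round(total_2100, 1))           # ~81.5, close to the ~80 Gt/a quoted
```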
The Tol study used CO2 data from IPCC, and based on the estimated temperature increase of 3C from today to 2100 and IPCC’s 2xCO2 ECS of 3C, this means that CO2 would have doubled compared to today, reaching 2*395 = 790 ppmv.
This is very much on the high side IMO, but any estimates that exceed this range are ludicrous.
The Nordhaus estimate you cited ASS-U-MEs that by 2100 every man, woman and child on this planet will be generating 50% more CO2 than the inhabitants of the industrially developed countries do today. This is absurd.
IPCC also has a silly case like this in AR5 (RCP8.5), with CO2 reaching over 1000 ppmv.
Economists should be careful accepting silly bases from politically motivated agencies like IPCC for making their studies.
You don’t need to resort to silly fear mongering based on absurd projections in order to sell your idea that we should convert to nuclear power in the future rather than build more coal-fired power plants.
Your point is valid without doing this and I would fully support your position, especially since it also eliminates real pollution from black carbon, sulfur, metals, etc. (which, unlike CO2, kills people).
There’s no need to oversell your story with bogus data, Peter.
Max
Manacker,
Thank you for explaining your position. I do admire your ability to explain it very clearly. And I believe I do understand what you are saying.
However, I am not totally persuaded. I do accept that the high estimates of around 700 ppmv CO2 concentration by 2100 are reasonable if we continue to block cheap alternatives to fossil fuels.
There are two reasons I am wary of accepting your figures. The first is that per capita energy use will continue to grow as it has. You haven’t really dealt with that. To me that is the crucial first step. The next step, about CO2 emissions and concentrations, is secondary. That step requires assumptions about the rate at which we decarbonise the global economy.
We cannot envisage how and for what future societies will use energy. We have no idea what the uses of energy will be in the future. We can imagine lots of things: e.g. space travel, mining to great depths to extract the resources we need, and many new consumer and industrial technologies that will demand ever more energy; these will far exceed (in total) the improvements in efficiency we achieve from the current technologies and appliances.
So, energy consumption will continue to increase. The rate of decarbonisation of energy is a secondary issue that I want to put aside until I understand whether we agree that per capita energy consumption will continue to grow approximately as it has done (with ups and downs in the growth rates, of course). Can we address that first, please?
Insults are unhelpful. I have your back-of-an-envelope analyses on one hand and a mass of highly regarded scientists and economists on the other hand. Then I do my own sanity check. And I bring my own experience, having seen how little progress has been made in the last 23 years compared with what, back in the early 1990s, the dedicated enthusiasts and advocates were asserting would be achieved by now. So, to restate, I do understand your simple assumptions and calculations, but I am not sure they are appropriate.
What would be helpful, if you felt so inclined, would be to point out what you believe are the fundamental flaws in the DICE-2013 analyses.
DICE 2013: http://www.econ.yale.edu/~nordhaus/homepage/
Some inputs for Kaya Identity based on 1950-2010 trends: http://www.econ.yale.edu/~nordhaus/homepage/documents/Prague_June2012_v4_color.pdf Look at slides 8, 15.
Below are two more references you may want to look at. The first summarises some of the key inputs used in DICE 2007, and the second explains the basis for these inputs and the calibration of them.
A Question of Balance, Table 7-1, p127 http://www.econ.yale.edu/~nordhaus/homepage/Balance_2nd_proofs.pdf
Notes on DICE-2007 … (calibration of input parameters)
http://www.econ.yale.edu/~nordhaus/homepage/Accom_Notes_100507.pdf
What are the major flaws in the inputs that are causing an over estimation of CO2 emissions to 2100?
Peter Lang
This exchange is getting boring so I will keep this last comment brief.
As expected, the economic evaluation methodologies used by Tol and Nordhaus are impeccable (after all, they are both renowned economists).
So your “argument from authority” is valid.
It’s just the CO2 assumptions they have been fed (by IPCC) which are exaggerated.
This is no surprise to me.
And, yes, the only economically viable answer today for reducing future CO2 levels significantly is going nuclear, as you suggest.
End of discussion, as far as I’m concerned.
Let’s move on to something else.
Max
Manacker,
Thank you for your answer. But I think you misunderstand the inputs to DICE. The ones that estimate CO2 emissions are not from IPCC. DICE calculates them from economic data, not from IPCC data. It is not until the model calculates CO2 concentrations, temperature changes and damages that some of the inputs are from climate science. That’s how I understand it, anyway.
I hope you won’t cop out, having made comments about me using bogus numbers.
Manacker,
Just for another quick sanity check, I inserted the figures from Slide 8 http://www.econ.yale.edu/~nordhaus/homepage/documents/Prague_June2012_v4_color.pdf into the Kaya Identity and calculated the emissions in 2050 and 2100. The growth rates given are the average growth rates from 1950 to 2010. I did not use the population growth rate. Instead I assumed round figures of 9 billion and 10 billion population in 2050 and 2100 respectively.
                                        | 1950  | 2010   | growth rate | 2050   | 2100
GDP/Pop (2005 $/person)                 | 2,744 | 9,856  | 2.2         | 23,536 | 69,868
CO2/GDP (tons/1,000,000 $)              | 852   | 497    | -0.9        | 346    | 220
Population (millions)                   | 2,557 | 6,869  | 1.7         | 9,000  | 10,000
Total CO2 Emissions (million tons CO2)  | 5,976 | 33,678 | 2.9         | 73,330 | 153,910
Factor increase from 2010               |       |        |             | 2      | 5
I calculated the emissions in 2050 and 2100 would be 2x and 5x the emissions in 2010.
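For readers who want to reproduce the table, here is a minimal Python sketch of the same Kaya-identity projection, compounding the 1950-2010 average growth rates forward from the 2010 values and using the round population figures of 9 and 10 billion (all numbers are from the table above; the function names are mine).

```python
# Kaya-identity sanity check: total CO2 = (GDP/pop) * pop * (CO2/GDP).
def grow(value, rate_pct_per_year, years):
    """Compound a quantity forward at a constant annual percentage rate."""
    return value * (1.0 + rate_pct_per_year / 100.0) ** years

def kaya_emissions_mt(gdp_per_capita, co2_tons_per_million_gdp, pop_millions):
    """Total CO2 emissions in million tonnes per year."""
    gdp_in_millions = gdp_per_capita * pop_millions  # GDP in $millions
    return gdp_in_millions * co2_tons_per_million_gdp / 1e6

for year, pop_millions in [(2050, 9_000), (2100, 10_000)]:
    years = year - 2010
    gdp_pc = grow(9_856, 2.2, years)      # 2005 $/person
    intensity = grow(497, -0.9, years)    # tons CO2 per $1,000,000 of GDP
    total = kaya_emissions_mt(gdp_pc, intensity, pop_millions)
    print(year, round(total), round(total / 33_678, 1))
```

This reproduces roughly 73,000 Mt CO2 (about 2.2x the 2010 total) for 2050 and roughly 154,000 Mt (about 4.6x, i.e. the "5x" in the table) for 2100.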
These support the figures in my sanity check that you disagreed with. I hope you may retract your comment about me using bogus figures.
Manacker,
Further to the above, I believe the disagreement can be brought down to a debate about the growth rates for GDP per capita and CO2/GDP. The average global rates from 1950 to 2010 were:
GDP/Pop (2005 $/person) = 2.2
CO2/GDP (tons/1,000,000 $) = -0.9
What are the appropriate rates from 2010 to 2050 and 2100, and what is the basis for them? William Nordhaus and Richard Tol have done lots to estimate the rate of change of these rates to 2100 and through to about 2500 (they have not used rates supplied by IPCC).
Peter Lang’s growth rates are consistent with an annual growth rate of 1.7-1.8% for emissions. This leads to about 7500 Gt CO2 between 2010 and 2100. This continuous growth is driven mostly by GDP, which I interpret as development, since the population is flattening. This amounts to 500 ppm added to the atmosphere between 2010 and 2100 resulting in nearly 900 ppm of CO2 in the atmosphere by then under this growth scenario. The forcing from this is 6.2 W/m2, so it is a little above the RCP6 scenario of the IPCC.
Jim D,
They are not my growth rates. They are the rates Nordhaus quoted for the world 1950-2010 (Slide 8 here: http://www.econ.yale.edu/~nordhaus/homepage/documents/Prague_June2012_v4_color.pdf ).
He uses lower rates and the rates decrease over time in DICE and RICE for his projections.
My point is that Manacker’s simple, back-of-an-envelope estimate is lower than most of the economists’ estimates. Manacker does not use rates of change but simply does a linear projection of past rates, so he is ignoring the effects of compounding. I suspect his estimates may understate the CO2 emissions that would occur unless we allow cheap alternatives to fossil fuels. That is the point I’ve been trying to make.
To clarify, Nordhaus (2008) projected CO2 concentration in 2100 at 686 ppmv.
And to correct my previous comment, Nordhaus’ projection of industrial CO2 emissions in 2100 is 190 “Billions of Metric Tons of Carbon per Decade, Industrial Sources“. Or 70 Gt CO2 per year.
Manacker,
Nordhaus’s figure is correct. I’d messed up the units. His units are in Gt C not CO2 and per decade, not per year. My mistake not his.
I’ll correct my statement as follows:
William Nordhaus (2008), using DICE, estimated industrial CO2 emissions would increase from about 87 GtC per decade in 2015 to about 197 GtC per decade in 2100 (Table 5-6), an increase of 110 GtC per decade. Converting to tCO2/a, this becomes:
William Nordhaus (2008), using DICE, estimated industrial CO2 emissions would increase from about 32 Gt CO2 per year in 2015 to about 70 Gt CO2 per year in 2100 (Table 5-6), an increase of 38 Gt CO2 per year.
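The unit slip being corrected here is just a constant factor, and the conversion can be sketched directly (44/12 is the CO2-to-carbon molar-mass ratio; dividing by 10 converts per-decade to per-year):

```python
# Convert Nordhaus's "Gt C per decade" into "Gt CO2 per year":
# multiply by the molar-mass ratio CO2/C = 44/12, divide by 10 years.
def gtc_per_decade_to_gtco2_per_year(gtc_per_decade):
    return gtc_per_decade * (44.0 / 12.0) / 10.0

print(round(gtc_per_decade_to_gtco2_per_year(87)))   # ~32 Gt CO2/a (2015)
print(round(gtc_per_decade_to_gtco2_per_year(197)))  # ~72 Gt CO2/a (2100)
```

The 2100 figure comes out at about 72 Gt CO2/a, consistent with the "about 70" quoted.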
This is a scenario that adds 4500 Gt CO2 and ends up with 700 ppm by 2100. Rather more conservative than the previous one.
Below I compare Nordhaus (2008) estimates of industrial CO2 emissions in 2015 and 2100 with the statements I made in my sanity check in my first comment (I’ve corrected my misquote of Nordhaus’s emissions estimates; units converted to Gt CO2/a).
Nordhaus estimates (Gt CO2/a):
2015 = 32
2100 = 70
Summary of the statements I made in my sanity check in my first comment:
1. “If the energy is provided with fossil fuels, even with energy efficiency improvements, we are heading for at least a doubling of CO2 emissions from fossil fuels.” [This statement is correct, not bogus]
2. “EIA projection is 50% increase in emissions from 2010 to 2040. http://www.eia.gov/forecasts/ieo/more_highlights.cfm If this rate continued (linearly), we’d have 150% increase to 2100 (increase by a factor of 2.5) to about 80 Gt/a.” [This statement is roughly correct, not bogus]
3. “William Nordhaus (2008) using DICE estimated emissions would increase from about [32] Gt/a in 2015 to about [70] Gt/a in 2100.” [This statement is correct, once the units are corrected].
So, I suggest, apart from my misquote of Nordhaus, the ball park figures I gave initially, that Manacker said are bogus, are not bogus. In fact, I tend to believe that Nordhaus’s and Tol’s projections are probably as good as any for the situation where we continue to block the prospective cheap alternative to fossil fuels, i.e. nuclear power.
Jim D and Peter Lang
Jim, your projection of 1.7-1.8% for emissions based on continuation of past CO2 growth rates, despite a dramatic slowdown in population growth, as projected by UN, makes no sense at all. Remember we are talking about human-caused CO2 emissions.
Peter, I see you now write that the Nordhaus estimate is based on 686 ppmv CO2 by 2100.
This is not much different from my “back of the envelope” figure of 650 ppmv and is certainly not absurd IMO. I thought you cited a much higher value (over 1000 ppmv) in your earlier comment, but maybe I misunderstood.
My comment (that the forecast was grossly exaggerated) was based on the data you listed in your earlier comment (mixing GtC per decade with GtCO2/year), and possibly by my misunderstanding your statements. Now that you have cleared this up, you can ignore my comment.
You say I have not used compounded (exponential) growth rates.
This is not correct. The compounded annual growth rate (CAGR) of per capita CO2 emissions (1970-2010) was 0.25% per year. I used a slightly higher CAGR of 0.3% per year going forward to 2100, tied to the projected annual population. This results in roughly a 30% increase over the period.
Using my simplified per capita CO2 emission approach, one would need to use a CAGR of pc CO2 emissions of 0.6%, or twice the rate I used, to arrive at a CO2 concentration of 686 ppmv by 2100, as Nordhaus has estimated.
So, based on these data, the Nordhaus (2008) CO2 projection (tied to anticipated GDP growth) checks roughly with my “quickie” reality check (tied to population growth plus an increase in pc CO2 emission), except that his estimate ends up with a somewhat higher CAGR of pc emissions to 2100.
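The compounding in the "quickie" check above is straightforward to verify; the rates are those quoted in the exchange, and the code itself is just an illustration:

```python
# Compound annual growth: factor by which a per-capita emissions index
# rises over a period at a constant CAGR.
def compound_factor(cagr_pct, years):
    return (1.0 + cagr_pct / 100.0) ** years

print(round(compound_factor(0.25, 40), 2))  # 1970-2010 observed: ~1.11
print(round(compound_factor(0.30, 90), 2))  # 0.3%/a to 2100: ~1.31 (+30%)
print(round(compound_factor(0.60, 90), 2))  # 0.6%/a to 2100: ~1.71
```

So a CAGR of 0.3%/a gives the roughly 30% rise by 2100 stated above, and doubling the rate to 0.6%/a gives about a 70% rise.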
Now to Tol.
According to Tol, the projected warming by 2100 (per his Figure 1) is 3.7ºC above the temperature in 1900 (his baseline), or 3ºC above the 2010 temperature. He states his bases: change of temperature benchmarked to 0.04ºC per year and change since “pre-industrial” (1750) = 1.0ºC. Figure 1 shows annual increase growing from 0.02ºC in 2010 to 0.038ºC by 2100 and flattening out asymptotically to a maximum of 0.04ºC (his “benchmark”?).
At the 2xCO2 equilibrium climate sensitivity of 3ºC, as predicted by the IPCC climate models, this would mean CO2 level of 2*390 = 780 ppmv (at “equilibrium”).
This is high, but not unrealistic IMO. And it appears that Tol does not tie temperature increase to CO2, GDP or anything else other than a “benchmark” rate increase of 0.04ºC per year.
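The back-calculation from warming to CO2 level used here can be sketched as follows; it assumes the standard logarithmic forcing relation, an ECS of 3°C per doubling, and a 2010 baseline of about 390 ppmv, as stated in the comment:

```python
# Invert dT = ECS * log2(C / C0) to get the CO2 level implied by a
# given equilibrium warming above the 2010 baseline.
def co2_for_warming(delta_t_c, ecs_c_per_doubling=3.0, c0_ppmv=390.0):
    return c0_ppmv * 2.0 ** (delta_t_c / ecs_c_per_doubling)

print(round(co2_for_warming(3.0)))  # 780 ppmv: a full doubling over 2010
```

A 3°C warming with a 3°C ECS implies exactly one doubling, hence the 780 ppmv figure.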
So our “disagreement” really isn’t one, Peter, now that the numbers are cleared up.
The difference between my 650 ppmv CO2 and Nordhaus’ 686 ppmv CO2 is minimal.
Tol does not specify a CO2 level for 2100 in his study, but based on the warming he estimates, it is somewhere around 780 ppmv.
My “quickie” reality check would say this is probably on the high side, since it would require the pc CO2 emission to increase at 1.15% CAGR, over four times the rate actually observed from 1970-2010. It also would mean that every man, woman, and child on this planet would be emitting as much CO2 per capita as the inhabitants of the “industrially developed” nations do today, around 12 t/a. A stretch, IMO.
But I’ll repeat the point I made: anything in the range of 650 to 790 ppmv would still lie within my “quickie” reality check; levels around 1000 ppmv would not (because they imply that every man, woman and child on this planet will be emitting twice as much CO2 per capita as the inhabitants of the “industrially developed” nations do today, around 26 t/a).
I’m sure you would agree that this is not a reasonable projection, especially in view of the fact that the “industrialized nations” are gradually reducing their per capita emissions, and of the second constraint: there most likely aren’t enough fossil fuels left on our planet to even reach these levels.
I hope this now clarifies our apparent misunderstanding, so we can end this exchange and move on to something else.
Max
Manacker,
I posted another comment before I’d seen this latest comment.
Thank you for responding.
I’d just like to correct this statement:
That is not correct. The Nordhaus estimate is not based on 686 ppmv; 686 ppmv is an output of the analyses, which include projections of growth rates for global population, GDP per capita, energy consumption per unit of GDP, and emissions intensity per unit of energy consumption. All these are derived from inputs from independent authorities, not from IPCC.
I agree the 686 ppmv is close to your estimate of 650 ppmv. But my point is that your figure is on the low side. That was my point. As you said to me upthread:
I have not been talking about ppmv. That is your conversion. That requires a lot of information from IPCC and climate scientists about the carbon cycle. I have stayed away from that (mostly). However, I have on many, many occasions in the past pointed you to Nordhaus’s figures in the tables in Chapters 5 and 7. I have mentioned and quoted the Nordhaus figures in response to your rough estimates many times in the past (possibly over 20 times). I am surprised you haven’t followed up and looked at the tables in Chapter 5. In fact, I am surprised you have never read the book “A Question of Balance” to understand what is involved in the projections. http://www.econ.yale.edu/~nordhaus/homepage/Balance_2nd_proofs.pdf
Your comments on Tol are about temperatures and concentrations. That was not what I raised with you in my original comment. The comment was about projections of energy consumption and CO2 emissions to 2100 for the case where there is no cheap alternative to fossil fuels. You may want to read Tol’s on-line book, and if interested build the model yourself as per his guidance. ‘Climate Economics”: https://sites.google.com/site/climateconomics/
I agree!
There was never a big gap between what we’ve been saying. But you frequently wrote up your simple calculation, which gives a somewhat low-ball estimate. In the past, I frequently pointed this out to you, but you’ve kept posting the same figures, many times. I felt it was worth pointing out, again, nicely, that your figures are low-ball compared with authoritative sources. I was trying to say to you, but nicely:
[My bold]
I would be grateful if you will retract your assertions that the figures I stated are bogus.
Peter Lang
While I sincerely believe in civil and rational debate, I am not too sensitive about “hurt feelings” on my part. I hope you aren’t either.
My “back of the envelope” reality check is obviously not a replacement for a detailed economic study, such as the ones made by Tol or Nordhaus.
But it helps me separate studies that reflect a semblance of reality from those that do not.
And I am not talking about economic losses/gains, expressed in future GDP percentages, discount rates, potential benefits of CO2 fertilization of crops or any of the other rather complex calculations involved in arriving at a detailed economic study of the potential future impact of AGW on human society.
I am simply talking about the basic assumptions on CO2 growth as a result of human emissions, which form the basis for the economic study.
These are without a doubt tied to population growth forecasts – a point that you understand full well, but some true CAGW believers (like Jim D) apparently do not.
IPCC AR5, for example, has a so-called “business as usual” worst case that has CO2 growing to more than 1000 ppmv by 2100 (scenario RCP8.5).
This does not pass my quickie “reality check”, if UN population projections are anywhere near correct, because it ends up with every man, woman and child on this planet emitting twice as much CO2 by 2100 as inhabitants of the “industrially developed” nations do today on average, which is flat out ludicrous, no matter how one looks at it. It also bangs up against another possible constraint: the amount of fossil fuels still left on our planet.
I now know that this is not what you were talking about, since you have corrected your GtC/decade versus GtCO2/year mixup, which would lead to CO2 emissions exaggerated by a factor of 3x and these high CO2 levels.
After this correction, I see that you were not projecting such high CO2 concentrations (which I referred to as “bogus”, because IMO such high estimates for emission rates and future concentrations definitely would be).
Our estimates are not that far off after this has been corrected.
So there is nothing “bogus” in your projections now.
OK?
Max
Manacker,
Thank you for a sort of weaselled half-retraction, with plenty of excuses and caveats, for your uncalled-for, incorrect assertion that my data were bogus.
I guess where this ends up is that your back-of-an-envelope estimate understates the likely emissions increase to 2100 in the absence of a cheap alternative to fossil fuels. Some might say your estimate and method are bogus; luckily that doesn’t offend :).
I would have thought you would have, at some stage, checked the Nordhaus reference yourself (I even gave you the link and page number).
Yes, I incorrectly mistook the units (as Gt CO2 per year when Nordhaus gives them in GtC per decade), but you have gone to some length to avoid a genuine retraction of your assertion of “bogus data”, and even in your last comment you continue to try to confuse the answer by talking about concentrations (and now temperature) when these were not raised in my comments. My comments were about energy consumption and emissions, not concentrations. And the ratio I stated for emissions in 2100 / 2010 (i.e. a doubling of emissions from 2010 to 2100) was correct from the start – it was correct whether the units were GtC per decade or GtCO2/a. So, basically, you asserted each of my statements was bogus on the basis of my one incorrect use of units for the Nordhaus estimated emissions in 2015. Careless on my part, I accept. But it didn’t call for your long, disparaging rant saying my whole comment was bogus.
It’s a pity you weren’t prepared to make a simple retraction.
But it’s done now. Thanks for the partial mea culpa.
Peter Lang
Let’s summarize and cut the “weaselly” crap.
Per capita energy consumption increased by ~14% from 1970 to 2010 (EIA data on energy, UN data on population)
The fossil fuel percentage decreased from 91% to 86%, so fossil fuel consumption increased by ~9% over the period.
This checks with per capita CO2 emissions from fossil fuels, which increased by roughly 10% over the period (CDIAC data on CO2, UN data on population).
You posted a formula for long-term per capita energy consumption, with the comment:
I pointed out that “five times” by 2100 is much higher than what would be expected from a continuation of the observed compounded rate of increase from 1970 to 2010. Using this compounded rate, the per capita CO2 emission would be about 30% higher in 2100 than today (1.3 times) not 5 times.
I did NOT use the word “bogus”
You pointed out that EIA projects a 50% increase in emissions from 2010 to 2040 and:
I pointed out that population growth rates are NOT expected to “continue linearly” from 2040 to 2100, but rather to reduce considerably and then flatten out, so that using a “linear growth rate” assumption to make future projections was “bogus”.
You then stated:
I pointed out that “87 Gt/a in 2015” was already 2.5 times higher than the actual 2013 emission from all sources of around 34 Gt/a, so that this assumption, as well as the projection of “197 Gt/a in 2100”, which is based on the same error, was “bogus”.
You have since corrected your numbers (you had the wrong units). That takes care of the “bogus” assumption, as I have written and I have retracted it.
You mentioned the Tol study but did not cite any numbers. I mentioned that the study concluded 3C warming above today by 2100, which implies a doubling of CO2 to 790 ppmv, which I found to be on the high side. I questioned Tol’s input data on CO2 increase, suggesting there might be a case of some GIGO. We’ve since cleared that up.
You cited an interesting Pielke study on decarbonization. I pointed out that the study was interesting, but with rapidly increasing population it would be very difficult (if not impossible) to expect any real decarbonization. I do not believe that we had any substantive disagreement on any of this, although it seemed to bother you that I cited the 1970-2010 population increase, rather than one for the period 1990-2010 (the period used by Pielke, which also shows a rapid population increase).
You then added:
My response was
I stick with that (it is confirmed by EIA, CDIAC and UN data, as I point out above).
After that I pretty much agreed with your conclusions on replacing future coal with nuclear, etc., but added that one should not “use ‘scare mongering’ based on bogus projections” to get a point across.
This apparently set you off.
We have gone through this all ad nauseam. The “bogus” reference was to the highly exaggerated projections you made based on mixing up GtC/decade with GtCO2/a, which results in an exaggeration of ~3 times in projected emissions. You have corrected this. I have accepted your correction and withdrawn the “bogus” reference.
As a more minor point, I also took issue with using a 200,000 year per capita energy requirement correlation as the basis for future emissions, when a more recent 40-year correlation shows a totally different trend. I stick with my point on that.
And the third point was that using a “linear” rate for emissions without taking the declining population growth rate into account would give a false conclusion. Again, I stick with my point on that.
As to my sentence on using “scare mongering based on bogus projections”, I stick with that as a general concept as well – but I agree that since you corrected the mixup cited above, it does not apply to you, personally.
Can we end this discussion now and move on to something more productive?
Otherwise, let’s move it off-line so we don’t bore others.
Max
Manacker,
Another long rant, and still dodging retracting your (wrong) assertions that my statements were “bogus”.
Let’s summarise and cut the weasel crap.
Your back-of-an-envelope calculation is simplistic and underestimates the increase in CO2 emissions from 2010 to 2100 for the case with ‘No Controls’ and no cheap alternative to fossil fuels. Some might call it bogus.
Best estimates from the leading world authorities like Nordhaus are for emissions to double or more from 2010 to 2100, i.e. CO2 emissions more than double from about 32 Gt CO2/a to about 70 Gt CO2/a.
That’s the summary without the weasel crap.
By the way, you did use the word “bogus” not just once but repeatedly!
And lastly, you’ve shown you are incapable of making a retraction of your unjustifiable derogatory comment.
I’d already accepted the discussion was closed after my last comment and that you have no intention of retracting your slur. I don’t know where the argument could be moved off-line to. But you can stop any time you like. It remains as a minor annoyance for me, not because of the original discussion but because of your long rants trying to defend your comment instead of making a simple retraction.
Manacker,
Below are some relevant data points from RICE 2010 http://www.econ.yale.edu/~nordhaus/homepage/NYRB_RICE.htm . You can use these to calibrate your back-of-an-envelope calculation and, by using the Kaya Identity, you can understand where you are going wrong. Basically, CO2 emissions are calculated from GDP x CO2/GDP (called emissions intensity) (converting tC to tCO2 where applicable).
                                   | 2010  | 2100   | Ratio 2100/2010
Population (millions)              | 6,797 | 8,953  | 1.32
Output ($trill)                    | 68.3  | 461.5  | 6.75
World emissions intensity (sigma)  | 0.133 | 0.042  | 0.31
Industrial emissions (GtC/a)       | 8.986 | 19.207 | 2.14
Industrial emissions (Gt CO2/a)    | 32.9  | 70.4   | 2.14
You can calculate the emissions from GDP x emissions intensity. Nordhaus uses GtC, so you need to convert to GtCO2.
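A short sketch of that calculation, using the RICE-2010 figures listed above. The interpretation of sigma as tonnes of carbon per $1,000 of output is my inference from the numbers, and the small differences from the table come from the rounded sigma values:

```python
# Industrial emissions = Output * emissions intensity, then convert
# carbon to CO2 with the 44/12 molar-mass ratio.
# With output in $trillion and sigma in tC per $1,000 of output, the
# product comes out directly in GtC per year.
def industrial_emissions_gtco2(output_trillion, sigma_tc_per_1000usd):
    gtc_per_year = output_trillion * sigma_tc_per_1000usd
    return gtc_per_year * 44.0 / 12.0

print(round(industrial_emissions_gtco2(68.3, 0.133), 1))   # ~33.3 vs 32.9 listed
print(round(industrial_emissions_gtco2(461.5, 0.042), 1))  # ~71.1 vs 70.4 listed
```

Both results land within about 1 Gt CO2/a of the RICE-2010 values, so GDP x sigma does reproduce the table.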
Points to note:
1. the projected CO2 emissions in 2100 are 70 Gt CO2/a
2. this is 2.14 times the 2010 emissions
3. He uses 8.953 billion in 2100, so less than you are using
4. the Output (or GDP) is the main driver of CO2 emissions growth
5. Output (or GDP) is 6 times the 2010 GDP
6. I said energy use would be some 5 times higher (so probably in the right ball park)
These figures from RICE 2010 support the figures I stated in my comments that you called bogus (as I acknowledged, I gave one set of emissions figures in wrong units, but the ratio of 2100 / 2010 was still correct, and this was just one of many things you called bogus).
Your simple retraction for repeatedly calling my multiple lines of evidence “silly” and “bogus” would be most welcome.
I acknowledge my error in misquoting the units Nordhaus used in Table 5-6 here: http://www.econ.yale.edu/~nordhaus/homepage/Balance_2nd_proofs.pdf
Peter Lang
You apparently have a problem.
You posted some screwed up data (mixing up GtCO2/year with GtC/decade) and I called this a “bogus” estimate.
You corrected your mistake and I retracted my “bogus” comment.
You also posted a 200,000 year progression on human energy consumption as the basis for projecting future energy use. I pointed out that this correlation did not work for the period 1970-2010 and, as a result IMO, gave “bogus” numbers for the future.
Our numbers basically agree now: CO2 is likely to reach somewhere around 650 to 750 ppmv by 2100.
This is in basic agreement with Nordhaus (686 ppmv) and Tol (not shown, but back-calculated to around 790 ppmv, based on temperature increase).
You postulate that switching to nuclear would reduce these estimates and I agree.
So I really can’t figure out what your problem is.
Can you explain it in one or two sentences (if there still is a problem).
Max
Manacker,
I’d say it is you that has a problem. You have a problem admitting when you are wrong and with retracting. On the one hand, you want to make a long statement of cherry-picked explanations making it look as if I’d been wrong, when in fact it was you. I admitted my one error, but you called “bogus” on all my points, all of which were substantially correct. The ratio of 2100 to 2010 emissions was still correct where I used the wrong units, and you could have checked the reference to spot my misquote of the units.
It’s the fact that you called “bogus” repeatedly throughout the comments, are unwilling to clearly state a retraction for that, and are continuing to try to muddy the waters to cover up and make it look as if your back-of-an-envelope calculation is OK. It’s not. It’s bogus.
So, why do you have a problem making a clear, unblemished retraction of your wrong assertions (many of them) of “bogus” and “silly”? And why do you not admit that your back-of-an-envelope calculation is incorrect? It is bogus!
Peter Lang
Thanks for your response (comment #517597, April 8, 2014 at 6:07 pm).
My response to you (comment # 517650 below) crossed with your latest comment and has probably addressed all your points.
But since you were kind enough to list your specific grievance with me, let me address this specifically in the hopes of ending this exchange with both of us feeling vindicated.
First, I have re-iterated several times that I do not think that any of your projections are “bogus” with two exceptions:
1. The error you made in confusing GtCO2/year with GtC/decade and the resulting exaggerated forecast for the future. (You have corrected this and I have withdrawn my “bogus”.)
2. The extension of the 200,000-year energy consumption correlation to arrive at an energy consumption in 2100 that is “5 times the energy consumption of today”. We are still discussing this. EIA data show that total per capita energy consumption increased by around 15% from 1970 to 2010, with fossil fuel consumption increasing by around 10% over this period. Extrapolating this rate exponentially to year 2100 gives a much lower estimate than “5 times the current value”. My point was (and is) that a more recent, shorter-term trend line gives a better picture of what will happen in the future than a 200,000-year trend line. (But I have already conceded that I do not think your estimate is “bogus”; I just think it is exaggerated.)
Peter, I don’t really care “who was wrong”. We probably both were in one point or the other. But the important thing is: You have my retraction of the “bogus”.
And, if you wish, my apology for using a term that made you angry. I’ll not use this term again.
My “back of the envelope” calculation, tying the trend of future per capita CO2 emissions to the actual past record, is not a replacement for a detailed economic study (such as Tol or Nordhaus), but simply a quick “reality check” I use to see if claims out there make sense (in this business a whole lot do not, as you know).
It tells me that any CO2 concentration for year 2100 between around 650 and 790 ppmv (including the ranges of Tol and Nordhaus) could be reasonable on a “business as usual” basis, based on UN population growth estimates for the future. IMO 790 ppmv is already stretching the limits, but values of 1000 ppmv or higher (such as IPCC’s RCP8.5 worst-case “business as usual” scenario) are gross exaggerations, which do not pass my “reality check”.
So you have my retraction of “bogus”, my apology for using the term “bogus”, my “mea culpa” and my statement that my “back of the envelope” calculation is simply a “reality check” I use to weed out unrealistic claims, not a detailed economic study in itself.
See you on the other thread.
Cheers,
Max
‘The ‘end of climate exceptionalism’ was first articulated (as far as I can tell) in this article by Andrew Lilico.’
Lilico’s article appeared in the Guardian on March 25th. Fred Pearce, writing in Yale Environment 360 the day before, wrote this:
Central to that new take is setting climate change in a context of other risks, uncertainties and mega-trends such as poverty and social inequality, urbanization, and the globalization of food systems. What some call “climate exceptionalism” — the idea that climate change is something of an entirely different order to other threats faced by the world — has been rooted out.
I wouldn’t claim Pearce was first but he was twenty-four hours ahead of Lilico. In newspaper terms twenty-four hours is equivalent in length to the Pleistocene.
Is your point that this is a very short time? I.e. like a blink of an eye on geological time scales (2 million years in 4.5 billion years = 4E-4, or 38 seconds in a day)? Is that the point you intended to make?
Judith Curry; an excerpt:
‘JC reflections:
”The ‘end of climate exceptionalism’ was first articulated (as far as I can tell) in this article by Andrew Lilico.”’
My quote from the article:
”Our first step in adapting to climate change should be to accept that we aren’t going to mitigate it. We’re going to have to adapt. That doesn’t mean there might not be the odd mitigation-type policy, around the edges, that is cheap and feasible and worthwhile. But it does mean that the grandiloquent schemes for preventing climate change should go. Their day is done. Even the IPCC – albeit implicitly – sees that now.”
JC comments; http://judithcurry.com/2014/03/24/econtalk-christy-and-emanuel ; an excerpt:
”In case you missed it the first time around, my EconTalk interview with Russ Roberts can be found [here] .
‘So, everybody has biases, but it’s the job of scientists to try to weed those out and be as objective as we possibly can. So I view it as this is part of the scientific process, to try to weed out our biases and be as objective as we can. And when you hear disagreement and debate about an issue such as climate change, again the conflicts are not only over science, but they are also about values. And they are also about politics. And sometimes these things get hopelessly mixed up. And I would argue that in the climate change debate, values, politics, and science have gotten rather muddled. Not just in the public debate but even in the minds of scientists. So again there’s no simple solution.’”
The quote above from JC’s statement is an appropriate description of the present muddled situation concerning so-far fruitless attempts to find a working solution for potential actions to control threats of global warming believed to be caused by anthropogenic CO2 emissions. This mess seems to be caused by certain politicians, ideologists and scientists who are one-sidedly specialized in anthropogenic warming of climate. That prevents the genuinely cross-disciplinary approach that is needed to find a solution workable enough for the actions needed. In my opinion, that is why ‘there’s no simple solution’ available. There are a lot of biases, which make it difficult to get any properly working solution. As JC says, it is the job of scientists to try to weed those biases out and be as objective as we possibly can.
The first challenge is to make clear whether or not the increase of anthropogenic CO2 emissions can really be blamed for recent global warming. In one of my earlier comments I have written that, already soon after AR4, I described how to get rid of the prevailing climate exceptionalism:
”In an op-ed in the Finnish magazine Materia 3/2008 I stated (an excerpt of the conclusion, translated into English):
a) Endeavours to control the current warming of climate by curtailing anthropogenic CO2 emissions do not seem to have appropriate bases.
b) Instead you have to concentrate on a kind of research work which covers the entirety of the problem of climate warming well enough. Above all it must include potential natural causes of warming.
c) As a first priority of measures you have to regard potential mitigations concerning ‘the sort of catastrophes that history shows’, as well as any new natural climate change or extreme weather event. In the cases where sufficient mitigation is not possible, you have to learn to adapt to these circumstances.
d) As to energy policy, the first priorities are to protect the availability of energy that is competitive and produced cleanly enough, and to promote a use of energy that is reasonable and good enough for the welfare of mankind.”
Later on, by using pragmatic logic, e.g. in my comment http://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992 I have proved:
1) All CO2 emissions from CO2 sources to the atmosphere and all CO2 absorptions from the atmosphere to CO2 sinks together determine the CO2 content of the atmosphere. As, for instance, the recent share of anthropogenic CO2 emissions has been only about 4% of total CO2 emissions, the anthropogenic share of the recent atmospheric CO2 content of about 395 ppm has been only 16 ppm at most.
2) As the recent annual increase of CO2 content in the atmosphere has been about 2 ppm, its anthropogenic share of only 4% has been 0.08 ppm.
3) The recent increase of about 2 ppm of CO2 in the atmosphere has been dominated by warming of sea surfaces, especially in the areas where the sea-surface CO2 sinks are. Even the anthropogenic share of 0.08 ppm is mainly controlled by the warming of sea surfaces. As the annual increase of anthropogenic CO2 emissions has been about 6% at most, that is to say about 0.5 GtC, the mere annual increase of anthropogenic CO2 emissions causes only an increase of 0.005 ppm of CO2 in the atmosphere.
4) Long-term trends of CO2 content changes in the atmosphere follow climate changes, and not vice versa. This has been the average situation during the last 100 million years; the same is true of glacial and interglacial periods; and especially during the last century the increase of CO2 content in the atmosphere seems to be related to warming of the sea surface in the areas where the sea-surface CO2 sinks are. The warming of these sea-surface areas seems to be related to periods when El Niño events are dominating.
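The proportional-share arithmetic in points 1 and 2 above can be written out in a few lines. This is only a sketch of the commenter’s own calculation, under his assumption that the anthropogenic share of gross emissions (about 4%) applies directly to atmospheric concentrations:

```python
# Reproduce the commenter's proportional-share arithmetic (points 1-2).
# Assumption from the comment: anthropogenic emissions are ~4% of gross
# CO2 emissions, and that share is applied directly to concentrations.
anthropogenic_share = 0.04
co2_content_ppm = 395.0     # recent atmospheric CO2 content
annual_increase_ppm = 2.0   # recent annual increase

anthro_content_ppm = anthropogenic_share * co2_content_ppm       # 15.8, "16 ppm at most"
anthro_increase_ppm = anthropogenic_share * annual_increase_ppm  # 0.08 ppm

print(anthro_content_ppm, anthro_increase_ppm)
```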
All of this proves that anthropogenic CO2 emissions have not dominated the recent increase of CO2 content in the atmosphere, and that even the recent total increase of CO2 content in the atmosphere has not controlled the recent global warming. There is no reason to curtail CO2 emissions. The proper way is to learn to adapt to threatening extreme weather events.
The main bias to be weeded out is the belief according to which anthropogenic CO2 emissions control the global warming.
Hi Jim D. – I am back on line. With respect to your estimate of the radiative imbalance as between 0.5 and 1 W/m2 positive imbalance.
Levitus et al. [2012] reported that since 1955, the layer from the surface to 2000 m depth has had a warming rate of 0.39 ± 0.031 W m-2 per unit area of the Earth’s surface, which accounts for approximately 90% of the warming of the climate system. Thus, if we add the remaining 10%, the 1955-2010 radiative imbalance = 0.43 ± 0.031 W m-2. This does have uncertainty values assigned to it.
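The step from the 0-2000 m ocean number to the whole-system imbalance is a simple proportion; a quick check, assuming the ~90% ocean share quoted above:

```python
# Scale the 0-2000 m ocean warming rate, which accounts for ~90% of
# the climate system's heat gain, up to the full radiative imbalance.
ocean_rate_wm2 = 0.39   # Levitus et al. [2012], per unit Earth surface area
ocean_share = 0.90      # fraction of system warming in the 0-2000 m layer

total_imbalance_wm2 = ocean_rate_wm2 / ocean_share  # ~0.43 W/m2
print(f"{total_imbalance_wm2:.2f} W/m2")
```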
For the TOA analyses, Stephens et al. [2012] report a value of the global average radiative imbalance [which Stephens et al. call the “surface imbalance”] of 0.70 W m-2, but with an uncertainty of 17 W m-2!
The more robust estimate is close to your lower value of 0.5 W/m2.
You also wrote in one of your earlier e-mails that
“by far the biggest driver of the clearly seen climate change in the last century or two has been the nearly 2 W/m2 from increased CO2,”
What do you estimate is the water vapor feedback in Watts per meter squared (which also must be positive)? This water vapor feedback is supposed to amplify the effect of the added CO2.
How do you reconcile the radiative imbalance diagnosed from Levitus’s work, or even your values of 0.5 to 1 W m-2, in terms of each of the radiative feedbacks and forcings (i.e. using the 2014 IPCC values)? Can you show where they sum to the radiative imbalances that we both report?
On the other issue of the summer mean temperature change, where you refer to Hansen’s work, you did not present evidence regarding the siting quality of the locations where he concludes there have been large increases. The satellite lower-tropospheric temperature data does show some warming, but of a magnitude that is not as high (in terms of standard deviation) as in the surface analyses.
Our work on siting quality (although for other geographic locations) shows systematic biases when the siting quality is poor. As a necessary condition to accept Jim Hansen’s analysis as robust, at a minimum they should provide photographs of the observing locations, as we have urged for years; e.g., see
Davey, C.A., and R.A. Pielke Sr., 2005: Microclimate exposures of surface-based weather stations – implications for the assessment of long-term temperature trends. Bull. Amer. Meteor. Soc., Vol. 86, No. 4, 497–504. http://pielkeclimatesci.wordpress.com/files/2009/10/r-274.pdf
Are the stations that he uses unaffected by local changes in their immediate area? (Note: this issue is distinct from larger-scale land use change, which can have a non-local effect on temperatures.)
We have concluded that local changes in the observing site over time (even in otherwise pristine locations) can result in non-spatially-representative sampling. This is due to a vertical redistribution of heat, which is in addition to any deeper-layer change in heating, if that occurs. We discuss this in our paper
McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., 117, D14106, doi:10.1029/2012JD017578. Copyright (2012) American Geophysical Union. http://pielkeclimatesci.files.wordpress.com/2013/02/r-371.pdf
In our paper we wrote
“it is likely that part of the observed long-term increase in minimum temperature is reflecting a redistribution of heat by changes in turbulence and not by an accumulation of heat in the boundary layer. Because of the sensitivity of the shelter level temperature to parameters and forcing, especially to uncertain turbulence parameterization in the SNBL, there should be caution about the use of minimum temperatures as a diagnostic global warming metric in either observations or models.”
In the higher latitudes, whenever the surface layer is stably stratified (which is all day for large parts of the year), this would, based on our conclusions, lead to a warm bias when the data are used to interpret the effect of global warming.
Hi Roger, normally I am in US time zones, but it is currently late where I am, so a quick response for now. The imbalance depends on the time period you average over, so these numbers will never be comparable unless we have made sure that is true. To me, the imbalance from the recent OHC increase is the best estimate because satellite data are noisy. I don’t have numbers other than the ones you started with which sound right to me.
I would estimate water vapor feedback from transient sensitivity and that crudely from temperature and CO2 increases since 1950. From this we get TCRs of 2 C per doubling, implying ECS somewhat higher and therefore water vapor feedback about doubling the CO2 effect.
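That “crude” transient estimate can be sketched numerically, assuming logarithmic CO2 forcing. The concentration and warming figures below are illustrative round numbers of my own, not values supplied in the comment:

```python
import math

# Crude transient climate response (TCR) from warming and CO2 rise
# since 1950, assuming forcing scales with log(CO2).
# Illustrative assumptions: ~311 ppm in 1950, ~395 ppm recently,
# ~0.7 C of warming over that span.
co2_1950_ppm = 311.0
co2_now_ppm = 395.0
warming_c = 0.7

doublings = math.log(co2_now_ppm / co2_1950_ppm) / math.log(2.0)  # ~0.34
tcr = warming_c / doublings                                       # ~2 C per doubling
print(f"TCR ~ {tcr:.1f} C per doubling")
```

With these inputs the crude estimate comes out close to the 2 C per doubling quoted above.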
Hansen’s plots are global maps based on the GISTEMP analysis. The ratio of temperature change to standard deviation holds as much for populated and unpopulated regions, and is quite uniform and significant over continental areas like northern Canada and parts of Russia where little is expected from population effects.
After analyzing the carbon cycle in AR5 my reaction was similar to my thoughts on the same graphic in AR4. The math is very simple, however if I have erred or made an incorrect assumption I would be grateful if someone would point it out.
In order to do a fact check on IPCC AR5 regarding the human CO2 in the atmosphere, we go to AR5 Chapter 6, Fig 6.1, page 120, where the IPCC claims 240 billion tonnes (GT) of human carbon out of 829 GT total. Adding all the net exchanges of carbon between the land, the water and the atmosphere, including human emissions, we have 207.1 GT entering the atmosphere and 203.5 GT leaving it: a net increase of 3.6 GT, or 0.43%, per year.
Now, at this rate of exchange, the entire quantity of carbon in the atmosphere is exchanged in 4.1 years. Because some CO2 may be absorbed in days and some in years, the 4.1 years represents the length of time it takes for 100% of the quantity of carbon present in the atmosphere on a given day to be recycled, irrespective of when it was generated.
The calculation is as follows. There are 1497 days in 4.1 years, so 829 GT divided by 1497 days gives us 0.556 GT absorbed on the first day (this number can be double-checked by dividing 203.5 by 365). On the second day there is 829 – 0.556 = 828.444 GT left of the original carbon; divide that number by 1497 and you get the old-carbon loss on the second day from the original 829. This calculation is repeated every day for 30 years (in an Excel spreadsheet, of course). Now, as a check sum, calculate the amount of carbon added to the atmosphere after day 1. If 207.1 GT of carbon is added in a year, divide 207.1 by 365 days and you get the daily addition, which works out to 0.5674 GT. Therefore after one day 0.556 GT is absorbed and 0.5674 GT added. After the second day the amount of new carbon is 0.5674 – 0.5674/1497 + 0.5674, since some of the new carbon will have been absorbed. When both of these programs are run in Excel, it can be seen that the total quantity of carbon increases slightly.
So consider the 240 GT of human carbon which is supposed to exist in the atmosphere today, as per AR5: after 1 year we lose 21.5% of the original carbon, or 51.5 GT of the original human carbon.
That means that 51.5 GT of human carbon was removed in the first year. But fossil fuels and cement only produce 7.8 GT a year. Thus we have a massive disconnect in the math.
The numbers work out something like this. The loss percentages are cumulative and represent the loss of the old carbon which existed on day 1; there is also a loss of new carbon each day. The quantity of old carbon remaining in the atmosphere is cut in half every 2.85 years.
Year 1 loss 21.5%
Year 2 loss 38.4%
Year 3 loss 51.7%
Year 4 loss 62.1%
Year 5 loss 70.2%
Year 10 loss 91.1%
Year 15 loss 97.3%
Year 20 loss 99.2%
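The Excel iteration described above is a discrete-step exponential decay, and the table can be reproduced in a few lines; a sketch using the comment’s own figures (829 GT of atmospheric carbon, a 4.1-year exchange time, 1497 days):

```python
# Reproduce the commenter's spreadsheet: each day, 1/1497 of the "old"
# carbon still remaining from day 1 is absorbed.
DAYS_PER_YEAR = 365
EXCHANGE_DAYS = 1497  # 4.1 years x 365 days

def cumulative_loss_pct(years):
    """Percent of the original day-1 carbon absorbed after `years` years."""
    remaining = (1.0 - 1.0 / EXCHANGE_DAYS) ** (years * DAYS_PER_YEAR)
    return 100.0 * (1.0 - remaining)

for yr in (1, 2, 3, 4, 5, 10, 15, 20):
    print(f"Year {yr:2d} loss {cumulative_loss_pct(yr):.1f}%")
```

The printed losses track the table above to within a couple of tenths of a percent (small differences come from rounding in the original daily iteration), and the implied half-life, 4.1 × ln 2 ≈ 2.84 years, matches the “cut in half every 2.85 years” figure.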
It should also be stated that this is an approximation, since some of the human carbon absorbed by the plants and water will be recycled; but since there is such a vast amount of natural carbon in the plants and water, the dilution effect renders the recycled human carbon insignificant.
Now, with a 7.8 GT/year emission rate and a 4.1-year recycle time, the amount of human carbon in the atmosphere due to fossil fuels and cement is 32 GT. This can easily be demonstrated by the two Excel programs, one for the loss of old carbon and one for the gain in new carbon. At 2.12 GT per ppm, that gives us a human impact of 15 ppm. This assumes a constant emission rate.
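The 32 GT figure above is just the steady-state relation stock = inflow × residence time; a sketch with the comment’s numbers:

```python
# Steady-state "human" carbon stock under constant emissions:
# stock = emission rate x residence time.
emissions_gt_per_yr = 7.8   # fossil fuel + cement, GT carbon per year
residence_yr = 4.1          # the comment's recycle time
gt_per_ppm = 2.12           # GT of carbon per ppm of CO2

steady_state_gt = emissions_gt_per_yr * residence_yr  # ~32 GT
steady_state_ppm = steady_state_gt / gt_per_ppm       # ~15 ppm
print(f"{steady_state_gt:.0f} GT ~ {steady_state_ppm:.0f} ppm")
```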
Thus the IPCC are dramatically wrong and cannot justify their number which defeats their entire hypothesis.
It should also be noted that the 32 GT of carbon is relatively constant, since it is only affected by the increase in human emissions. For this exercise, human emissions were held constant; thus a small increase in the 15 ppm will occur with increasing emission levels. An increase in human emissions cannot possibly explain the 3.6 GT/year increase in the carbon content of the atmosphere, since the total human output is only 7.8 GT from fossil fuels and cement. Although it can be proven that CO2 can no longer affect global temperatures, 15 ppm is definitely not going to have any measurable effect.
It should be emphasized that all the data used came directly from IPCC AR5 and was not edited in any way.
Actually the IPCC have perfected the bait-and-switch game. If you search for Dr. Vincent Gray, IPCC peer reviewer, you will find out how they do it.
The research papers quoted, and there are thousands, are all serious and honest papers, but they merely chronicle the past and, virtually exclusively, the negative events caused by warming. It is the IPCC editors who insert the reason for the warming, which is always human emissions, although they never offer any proof of how it can be human emissions. The original authors rarely give, and certainly never prove, the cause of the warming; that is done strictly by the editors, and the authors never get to read the final version.
So the net result is that human influence is always blamed but never proven.
Now, when it comes to peer review, only the editors evaluate the reviews, and all those that do not meet their political aims are trashed.
The system is incredibly corrupt, but since they are part of the UN and are following the UN’s share-the-wealth agenda, they have a vast amount of money to spend on their propaganda, and they are giving the news media the alarmist fodder which makes them rich.
The real problem comes when they start making predictions, which is virtually all you hear about. These predictions are based on research papers which they have edited, so the well is poisoned.
In addition, one cannot prove a prediction will not occur. Yet their predictions of 20+ years ago have all failed, which is why they are in trouble, so they have to ratchet up the rhetoric to distract people from their failed past.
Jim D.. Thank you for your reply (even more so if you are in a late night time zone :-)).
When you have the opportunity, please give me your best estimate of the water vapor feedback in Watts per meter squared (since 1950, perhaps) and how that conforms with the radiative imbalance diagnosed from the ocean heat content change. Then list out all of the feedbacks and forcings and show that they are (within the uncertainty) equal to the estimates of the radiative imbalance (and let’s use 1950 or 1979 as the data period).
On the surface temperature issue, the siting bias is independent of population change. It is an issue of how spatially representative the observing site is. If, for example, vegetation grows right next to the observing site (perhaps because it is fenced and grazing animals cannot feed on it), there will be a trend over time due to this effect. In this example, snow would melt more quickly due to the lower albedo of the vegetation. Both effects will change the local vertical mixing of heat even if there were no larger-scale heating.
This is a major, unaddressed issue in high-latitude continental surface observing sites. The “homogenization” of the data used by GISS, CRU, NCDC etc. has not completely addressed this issue. As a start, please show us (or refer us to) photographs of the high-latitude sites used to construct these analyses. From the ones I have seen photographs of, they are often poorly sited.
Roger
Thanks, Roger, now you are asking me for things that whole research groups are needed to provide, and each item would be a publishable paper. I go by what has been published. I have yet to see anyone find any systematic bias in GISTEMP, especially since 1950, which is the span I use it for. Satellite temperatures are broadly consistent with patterns of temperature change in the last 40 years, and this alone rules out urban effects. Similarly for water vapor feedback: in a transient climate with the tropical oceans not warming fast, we see most of the warming in continental and polar areas, so not much water vapor increase has been seen yet, and some would say the relative humidity has decreased. This is just consistent with the distribution of the warming. There is a recent mechanistic paper by Armour et al. 2013 (http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00544.1) that explains how this can happen. Basically the continental areas have less thermal inertia, and so respond to the changing forcing more quickly. From CRUTEM4 and HADSST3 we see that in recent decades the land warming has been twice that of the oceans, as would not be surprising for external forcing of both, as seen in ten-year running means.
http://woodfortrees.org/plot/crutem4vgl/from:1950/mean:120/plot/hadsst3gl/from:1950/mean:120
The Arctic is warming even faster because of its albedo feedback from sea-ice loss. These are consistent pictures for regional responses to a strong external forcing.
Hi Jim D. – There are values of the radiative forcings and feedbacks already published. This includes the water vapor feedback, which is supposed to be amplifying the added CO2 forcing. When one adds up the forcings and feedbacks, one finds a significant mismatch with the diagnosed radiative imbalance [which is well less than the sum]. We have a short paper in the works on this. This should have been a focus of the IPCC WG1 report, but was not.
On the surface temperature issue, we and others have published on this. For another example, see our paper
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841 http://pielkeclimatesci.wordpress.com/files/2009/11/r-345.pdf
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655. http://pielkeclimatesci.wordpress.com/files/2010/03/r-345a.pdf
When you dig into the data analyses of GISS and the others, there remain a variety of unaddressed issues.
In order to move forward with a discussion of these issues, I would like to see specific rebuttals of the issues we raised (e.g. in the McNider et al. and Klotzbach et al. papers). Saying that
“The Arctic is warming even faster because of its albedo feedback from sea-ice loss.”
does not really address this issue. We hypothesize that a significant fraction of the reported surface warming is not due to a deeper tropospheric warming but due just to a vertical redistribution of heating. Tell us why this hypothesis should be rejected.
Roger Sr.
Roger, if I understand your mechanism for more warming over land, it is focused near the surface because that is where the enhanced greenhouse effect would be expected to have the most effect, especially at night or in winter, and it explains why the lower troposphere shows less warming than the land surface. This is also consistent with the lower thermal inertia I mentioned, because the land surface can respond to the forcing quite quickly. As such, it is a real surface warming effect, and not instrumental bias.
The obvious difference between land and oceans is the availability of water and resultant evaporative cooling.
e.g. http://users.monash.edu.au/~dietmard/papers/dommenget.land-ocean.jcl2009.pdf
Reblogged this on Letstalk_AboutClimateChange.
Jim D. I agree this vertical redistribution of heat is a real issue, and does result in local warming near the surface when the surface layer is stably stratified.
However, as we discuss in our paper, while this can occur directly due to added CO2, it can also occur for other reasons unrelated to this issue. If all of this temperature increase is assigned to added CO2, then its contribution is overstated; which means, according to our analyses, including the Klotzbach et al. paper, that the magnitude of global warming is too high when this dT is used in
dF = dH/dt + dT/lambda.
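One way to see the point numerically: in dF = dH/dt + dT/lambda, if part of the observed dT is only a near-surface redistribution, the lambda inferred from the same forcing and heat uptake shrinks. The numbers below are illustrative assumptions of mine, not values from the thread:

```python
# Energy-balance bookkeeping for dF = dH/dt + dT/lambda, with lambda
# in K per (W/m2). All inputs are illustrative, not from the thread.
dF = 2.3     # W/m2, assumed total forcing
dH_dt = 0.6  # W/m2, assumed ocean heat uptake

def inferred_lambda(dT):
    """Solve dF = dH/dt + dT/lambda for lambda, given a warming dT in K."""
    return dT / (dF - dH_dt)

lam_full = inferred_lambda(0.85)           # all 0.85 K counted: ~0.5 K/(W/m2)
lam_reduced = inferred_lambda(0.85 * 0.8)  # 20% treated as redistribution: ~0.4
print(f"{lam_full:.2f} vs {lam_reduced:.2f} K per W/m2")
```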
Roger Sr,
Roger, to make that argument you need to find a natural process that has been warming the surface, and a reason that the recently added CO2 isn’t effective even though the timing correlates. This double explanation is an issue that comes up again and again. If not CO2, what else is doing this, whether we are talking about global warming or related energy-budget effects? And then the explanation also has to say why not CO2, because that is what the majority thinks it is, and they won’t be swayed without a convincing argument that it is not. If there is a reason skepticism hasn’t caught on, I think it is this one. You almost have to prove a negative, which I realize is tough.
Regarding the use of these land dT values in the formula, I think this is a case where the lapse rate will not follow the surface fully, so the outgoing radiative effect of this type of surface warming may be muted, leading to a higher effective sensitivity in that formula.
There are really three questions here.
The land sea contrast – which is dominated by differences in evaporative cooling.
An alternative to CO2 warming in the 1976/1977 period – which the satellite data suggests is dominated by cloud radiative effects.
What the so-called consensus on AGW means – which the emergent paradigm of abrupt climate change replaces with something of significantly greater explanatory power.
Robert Ellison, yes, the land has to warm more because it has fewer ways of losing energy, being limited in the moisture available to evaporate. It also lacks much thermal inertia, allowing a quick response to forcing compared to the ocean.
I don’t know what you think is special about 76/77.
Your “emergent abrupt climate change” won’t get anywhere without a mechanistic explanation. If it is Tisdale’s El Nino energy source it is DOA from application of the first law of thermodynamics (energy conservation). El Ninos release stored ocean energy, and don’t create any of their own.
Abrupt climate change works through the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation, the Southern Annular Mode, the Arctic Oscillation, the Indian Ocean Dipole and other measures of ocean and atmospheric states. These are measurements of sea surface temperature and atmospheric pressure over more than 100 years which show evidence of abrupt change to new climate conditions that persist for up to a few decades before shifting again. Global rainfall and flood records likewise show evidence of abrupt shifts and regimes that persist for decades. In Australia: less frequent flooding from early last century to the mid-1940s, more frequent flooding to the late 1970s, and again a low-rainfall regime to recent times.
Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.
It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. ‘Our interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.
Changes in cloud radiative effects are associated with changes in atmospheric and ocean circulation. That you can’t allow yourself to see this is not my problem Jimbo.
Those are about a tenth or two of a degree. I would not call them climate change. They are in the range of natural variation, somewhat smaller than ENSO, for example. I don’t know why anyone would think this is important for climate.
It is some 1.9W/m2 increase in net forcing between the 80’s and 90’s (IPCC 2007).
R. Ellison, and how did IPCC explain that, or was the old 80’s satellite data too noisy to make anything of? Clouds decreased in the 90’s, and this was during warming, so it was probably a positive feedback to forced changes, because it was too weak to have caused much of the warming. However, people promoting negative cloud feedback won’t thank you for bringing this to their attention, because it is opposite to what they would have hoped for.
IPCC (2007) s3.4.4.1 – based on Wong et al (2006) as well as the ISCCP-FD data.
‘ Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’
e.g. http://s1114.photobucket.com/user/Chief_Hydrologist/media/cloud_palleandLaken2013_zps73c516f9.png.html?sort=3&o=97
It is utter nonsense to suggest that the changes are other than low frequency climate variability – or that they are not climatologically very significant.
e.g. http://s1114.photobucket.com/user/Chief_Hydrologist/media/ProjectEarthshine-albedo_zps87fc3b7f.png.html?sort=3&o=73
Earthshine changes in albedo are shown in blue, ISCCP-FD in black and CERES in red: a climatologically significant change before CERES, followed by a long period of insignificant change. http://www.bbso.njit.edu/Research/EarthShine/
Laughable nonsense to think that such large changes work both ways as a CO2 feedback.
R. Ellison, I thought you knew clouds don’t act spontaneously being short-lived. They are always responding to something else, whether it is climate change or natural variability that typically comes from ocean circulations.
Changes in cloud radiative effects are associated with changes in atmospheric and ocean circulation.
What’s your point Jimbo?
@Jim D
What exactly are you implying by this?:
“… clouds don’t act spontaneously being short-lived.”
Are you saying you don’t think clouds play a role in global warming? I disagree. Individual clouds might be short-lived, but the total cloud cover over the entire earth might be slowly changing monotonically. Increasing OHC could be causing this. A change in total cloud cover will change earth’s albedo.
The effects of ENSO are a zero sum game when it comes to secular warming.
We can analyze the oscillations that provide the effect and see that quite clearly.
I mean clouds are a response to other factors. Sometimes they act as a positive feedback, such as they would be if they were responding to the warming in the 1990’s. Clouds don’t change and drive ocean circulations for example. It is a simple statement that clouds are not a forcing, but can be a feedback. Some have not seen this fact yet. On the other hand, aerosols modify clouds and can be a forcing. More aerosols do change the climate. So when you see cloud albedo changing, consider whether it is really aerosols, or whether the clouds are responding to something else.
Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture. http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf
Are ocean and atmospheric shifts not understood?
Huh. And here I thought the exceptionalism was all on the part of the fossil lobby: everything’s a pollutant except CO2; everything the government spends on private enterprise is a subsidy except for the fossil industry; everything is overregulation except regulations protecting fossil; physics is fine, except where radiative transfer shows AGW.
Jim D | April 7, 2014 at 4:51 pm | says: “to make that argument you need to find a natural process that has been warming the surface and a reason that the recently added CO2 isn’t effective even though the timing correlates. This double explanation is an issue that comes up again and again. If not CO2, what else is doing this [?]”
A physical explanation of multidecadal and longer-term natural variability is not necessary to refute the contention that CO2 drove the sharp rise in global temps in the last quarter of the 20th century; merely an empirical demonstration of unconnectedness fully suffices. On the contrary, it behooves the AGW camp to explain why the CO2 signal either LAGS temperature or is incoherent with it!
http://woodfortrees.org/plot/hadcrut4gl/from:1950/mean:12/offset:0.05/plot/gistemp/from:1950/mean:12/offset:-0.05/plot/esrl-co2/scale:0.01/offset:-3.3
I would not call this unconnected. It is 60 years worth of connected.
What you show is a systematically biased, manufactured GAT index that has been devised to demonstrate a “trend connection” to the Mauna Loa CO2 record. In reality, GAT took a deep dip in the third quarter of the 20th century, rose sharply in the fourth quarter, and has been going sideways since. That’s what renders it incoherent with the monotonically rising CO2 concentrations at multidecadal frequencies.
On a physical basis, CO2 at present levels already absorbs nearly all of the IR that it is capable of absorbing; only minor “pressure broadening” of its absorption lines is left. In any event, that trace gas pales by comparison with water vapor in total IR absorption, which, in turn, plays second fiddle to moist convection in warming the atmosphere. You’ve got nothing scientifically solid to support the AGW attribution.
-Jim D | April 7, 2014 at 5:44 pm |
http://woodfortrees.org/plot/hadcrut4gl/from:1950/mean:12/offset:0.05/plot/gistemp/from:1950/mean:12/offset:-0.05/plot/esrl-co2/scale:0.01/offset:-3.3
I would not call this unconnected. It is 60 years worth of connected.-
Connected, except last, oh say, 15 years
John S., you are not believing the message in the data because it doesn’t fit your worldview, so you, as is typical, look for conspiracies that altered it. Actually this just shows an obvious connection that also fits expectations from AGW. The pause doesn’t show up very well in these longer records either, yet the skeptics do believe the pause part of these lines.
gbaikie, tough to see the pause as different from all the other negative periods in the last 60 years, isn’t it? Do you think it is special in some way or could it be just another pause about to end?
Jim D | April 7, 2014 at 6:30 pm says: “John S., you are not believing the message in the data because it doesn’t fit your worldview, so you, as is typical, look for conspiracies that altered it.”
Patently unequipped to distinguish between bona fide data and manufactured indices, you resort predictably to ad hominem projections and attributions based upon fitted “trends.” Apparently it hasn’t occurred to you that postal rates have risen no less monotonically than CO2. I’m only interested in scientific evidence, not classic AGW twaddle.
What makes people write these kinds of statements, when anyone can go to the UChicago MODTRAN calculator and see that the truth is the opposite?
If someone claims that the MODTRAN engine is a black box, he can go to Science of Doom, pick up the openly available Matlab code of an essentially equivalent calculator, and verify what it’s based on.
It’s understandable that people disagree on many things, but why come out in public and present claims that are easy to prove wrong, and that have been proved wrong innumerable times?
Pekka:
Here’s a comparison of MODTRAN output for CO2 at 300ppm and at 600ppm: http://commons.wikimedia.org/wiki/File:Modtran_DoubleCO2_NewEquilibrium.gif. If you think you can better refute my other statements that you cite, have a go at it. I’ll respond as time, and interest, permit.
John S, “time permitting” one can clearly see the temperature increase as the spectrum creeps upward to emitting more photons with higher wavenumber. Higher wavenumber photons are more energetic and so produce a higher gray-body emitting temperature.
Do you understand how statistical mechanics works?
With MODTRAN, try looking down at the troposphere from 20 km. Double to 600 ppm and again to 1200 ppm, and each time you get 3.4 W/m2, which looks more like the expected log function than near saturation to me, but perhaps you think these magnitudes represent minor effects on climate, and that would be where your disconnect from reality is.
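The log-versus-saturation argument can be checked against the widely cited simplified forcing expression of Myhre et al. (1998). A minimal sketch, using that approximation rather than MODTRAN itself, so the per-doubling constant (~3.7 W/m2) differs slightly from the 3.4 W/m2 quoted above:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2.
# Illustrative only; MODTRAN's full band calculation gives ~3.4 W/m^2 per doubling.
def co2_forcing(c_ppm, c0_ppm):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same increment: logarithmic behavior, not saturation.
print(round(co2_forcing(600, 300), 2))    # ~3.71 W/m^2
print(round(co2_forcing(1200, 600), 2))   # same increment again
```

Whatever one thinks of the attribution argument, the arithmetic shows why repeated doublings give equal forcing increments rather than a vanishing one.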
-Jim D | April 7, 2014 at 6:33 pm |
gbaikie, tough to see the pause as different from all the other negative periods in the last 60 years, isn’t it? Do you think it is special in some way or could it be just another pause about to end?-
I tend to think it might be nearing its end. I expect the coming summer to be warmer, or the next summer not to be as cool as recent ones have been in southern California. But I don’t think it’s a big issue.
But it is possible that this pause could be longer lasting, and one big variable is low solar activity. Currently we are at the second peak of the solar max, a second peak predicted by some who are watching the sun, so it’s comforting that there is some ability to get some things right about the sun.
But the same people seem to be predicting lower solar activity to come, and not just the very predictable return to solar min. And of course this solar max has been very weak.
Even setting solar activity aside, there is also the factor that we are somewhere near the level of warming we had during the Medieval Warm Period. We have risen from the cooler period of the Little Ice Age, and there seems little reason to think we will rise to much higher temperatures.
Another big variable is volcanic activity. We seem to be in a longish quiet period of low volcanic activity, and I don’t know how long this will continue.
Odds could favor a continuation of mild volcanic activity, but of course all hell could break loose.
Generally I don’t think we should have another super El Nino starting this year; perhaps this year could build toward such an event in a few years, but I tend to think it’s unlikely.
What self-styled “savants” here totally miss is that MODTRAN is a purely computational exercise in RADIATIVE transmissivity, not a comprehensive, proven model of the climate system. It assumes that nothing aside from the composition of the atmosphere changes. That’s fine for determining the opacity of the atmosphere and signal transmission, but not for determining surface temperature, which is governed primarily by NON-radiative processes of thermal energy transfer, as shown by the Bowen ratio in numerous in situ experiments. The computed 3.4W/m^2 increase for doubling of CO2 is a drop in the bucket compared to the cooling effect of highly variable evaporation, which increases exponentially with surface temperature.
John S., a forcing of 3.4 W/m2 is like a 1% solar increase. This is not a drop in the bucket. The sun doesn’t change anywhere near this much in millions of years.
My point was only that the original statement of John S concerning saturation of GHE is totally contrary to the facts and that’s proven by the calculations.
If he wishes to argue about feedbacks or something else, he can do that, but that does not make his statement any more true.
The first two frames of MODTRAN results clearly show that the CO2 absorption band does NOT deepen as the concentration is doubled from 300ppm; the only change is on the “wings” of that band. This is entirely consistent with near-saturation and “line broadening.” While the computed loss of signal transmission for the ASSUMED clear-sky “standard atmosphere” is 2.68W/m^2, no detectable change of opacity has been found in the REAL atmosphere during the last 60 years (see: http://climateclash.com/ferenc-miskolczi-the-stable-stationary-value-of-the-earths-ir-optical-thickness/).
In any event, it is a mistake to treat increased atmospheric opacity as a
“forcing” in a system that has coupled NON-radiative mechanisms available
for transferring heat from surface to atmosphere. Unlike the genuine
forcing of insolation, which produces thermal energy, increased opacity
merely impedes direct radiative transfer. The resulting imbalance drives
moist convection, which produces clouds that invariably REDUCE the local
surface insolation, often by hundreds of W/m^2. The near-constancy of TSI
at TOA is not the relevant factor in climatic temperature variations.
What disciples of AGW dogma cannot explain is the Holocene optimum, when CO2 was consistently below 300ppm, and scores of surface temperature swings of ~1K on multidecadal and longer time-scales evident throughout the Holocene in the best proxy records. And simple fitting of cubic splines readily shows that the recent upswing in the last quarter of the 20th century displays a greatly different “trend” (even in biased indices) than “Keeling’s curve.” Imputing a direct causal connection between the two is beyond naive.
Manacker,
You said:
http://judithcurry.com/2014/04/04/end-of-climate-exceptionalism/#comment-516086
and in another comment later said:
[My bold]
I’d ask you to retract the assertions that the projections and data I quoted are bogus.
The numbers I quoted for energy and CO2 emissions are correct and were from authoritative sources – (with one already acknowledged misquote of units, but the ratio I stated of 2100 to 2015 emissions was still correct, i.e. about 2).
I suggest the simple back-of-an-envelope calculation you have been doing to project CO2 emissions to 2100 is flawed because you have not included the compounding effect of GDP growth rate in your projection. This dominates the projections.
Below I repeat (slightly edited) my last comment above in case you missed it.
Below I compare Nordhaus (2008) estimates of industrial CO2 emissions in 2015 and 2100 with the statements I made in my first comment which was a sanity check of your estimates of CO2 emissions in 2100. (I’ve corrected my misquote of Nordhaus units; I’d mistakenly quoted Nordhaus’s figures which are in GtC per decade as GtCO2/a; I’ve converted his figures to Gt CO2/a in the figures below).
Nordhaus projections of Industrial CO2 emissions in 2100 for the ‘No Controls’ scenario:
2015 = 32 Gt CO2/a
2100 = 70 Gt CO2/a
Increase = 2.25
Summary of the statements I made in my first comment; which was intended to be a rough sanity check of your back-of-an-envelope calculation:
1. “If the energy is provided with fossil fuels, even with energy efficiency improvements, we are heading for at least a doubling of CO2 emissions from fossil fuels.” [This statement is correct, not bogus]
2. “EIA projection is 50% increase in emissions from 2010 to 2040 ( http://www.eia.gov/forecasts/ieo/more_highlights.cfm ). If this rate continued (linearly), we’d have 150% increase to 2100 (increase by a factor of 2.5) to about 80 Gt/a.” [this statement is roughly correct, not bogus]
3. “William Nordhaus (2008) using DICE estimated emissions would increase from about [32] Gt/a in 2015 to about [70] Gt/a in 2100.” [ The increase is a factor of 2.25 to 2100. This statement is correct, not bogus – numbers in “[]” are converted to GtCO2/a].
So, I suggest, apart from my misquote of Nordhaus’s units, the ball park figures I gave initially, that you (Manacker) said are bogus, are not bogus. In fact, I tend to believe that the Nordhaus and Tol projections are probably as good as any for the scenario where we continue to block development and deployment of the most prospective cheap alternative to fossil fuels.
Peter Lang
We’ve cleared this all up (see my comment #517138 above).
It was precisely your misquote of Nordhaus’ units, which led to my “bogus” comment (plus possibly a misunderstanding on my part of something else you wrote).
But it’s cleared up now.
End of discussion.
We agree.
So let’s move on – OK?
Max
Manacker,
Thank you for a sort of weaselled half retraction – with plenty of caveats, excuses and diversions to muddy the waters – for your uncalled for incorrect assertion that all my data was bogus.
I guess where this ends up is that your back-of-an-envelope estimate understates the likely emissions increase to 2100 in the absence of a cheap alternative to fossil fuels.
Some might say your estimate and method are bogus.
I won’t pretend I am not annoyed by your inability to admit and retract. Here are some of your original comments:
Wrong! It’s a trend of the past 200,000 years. It incorporates all effects, unless you have a damned good explanation for believing that the trend will change markedly just while we are living.
I said: “we are heading for at least a doubling of CO2 emissions from fossil fuels.”
My figure is correct. Your 30% figure is changing the units and is misleading – i.e. bogus!
You’ve changed units again. Your figures are bogus. Mine are correct. You are continually underestimating or muddying the waters by confusing emissions and concentrations. The argument is about emissions, not concentrations.
I was talking about emissions, not emissions per capita. And the decreasing rate of population growth is already included in the figures to 2040. I agree the total to 2100 would be a little less, but it is mostly driven by GDP growth rather than population growth. I’d agree a more sophisticated projection would be a little less than 80 Gt/a in 2100, perhaps 75 or 70 Gt/a. But that is not a basis for calling it “bogus”, and certainly not given that your estimates are much rougher.
I said “William Nordhaus (2008) using DICE estimated emissions would increase from about 87 Gt/a in 2015 to about 197 Gt/a in 2100 (Table 5-6), an increase of 110 Gt/a”
I did not specify C or CO2. And the ratio is still correct; i.e. a factor of over 2.
You launched into a long rant using ‘silly’, ‘bogus’ and other derogatory terms repeatedly throughout. But you didn’t even bother to look at the reference before launching into the long rant.
At the end of all this I am left with the understanding that your back-of-an-envelope estimate is an understatement of the situation and could be called bogus. The roughly factor-of-2 increase in emissions from 2010 to 2100 that I’ve been saying is about right and therefore not bogus.
But I agree: let’s leave it at that.
As you say:
Peter Lang and Manacker,
After you shake hands, would you care to join me in a quick game of Whack-a-Warmist? It’s like Whack-a-Mole, but Warmists are thick, slippery, and Government funded.
It’s always good for a bit of light relaxation, or when there’s only cartoons on the telly.
Live well and prosper,
Mike Flynn.
Mike
Whack-a-Warmist sounds like good gentlemanly sport (open for ladies, as well, of course – but only as Whackers, not as Whackees, at least not by gentlemen, as I’m sure you would agree, being a man of high pedigree and sensibility, as your comments here demonstrate).
As far as I’m concerned, the discussion with Peter Lang is over.
We usually agree pretty closely on the CAGW hysteria.
It appears he mixed up some numbers in a comment, resulting in a three-fold exaggeration of CO2 emissions and a corresponding exaggeration in estimated CO2 levels, to which I reacted by saying these estimates are “bogus”. I admit that I may also not have fully understood all of his points.
Peter took exception to the “bogus” word.
We’ve cleared up the mixup, and I retracted the “bogus” claim.
So we’ve moved on.
Max
Jim D. We do have the evidence of a warm bias in the surface temperature trends. I sent you several of our peer reviewed papers that not only show in the observations, but present reasons why this is so.
This does not negate the importance of added CO2 as a radiative warming influence, but it does indicate that using the surface temperature trends as the metric to quantify global warming is flawed (and has a warm bias).
If we can agree that ocean heat content change is the preferred metric for this purpose (as a result of our set of comments), then we have accomplished something. [By the way, Jim Hansen agrees with this perspective.]
Roger Sr.
Roger, I would go back to this
dF = dH/dt + dT/lambda
to say that both H and T are important to keep track of. H alone doesn’t tell us how much of the forcing has already been cancelled by warming. After an El Nino, H may well go down and T will go up, so H alone would not give a correct impression of what just happened. It is much less intuitive than T, and also not linked directly to effects that are felt. Natural variability produces a lot of cancelling exchanges between these terms, and again looking at one term, such as the pause in T, gives a misleading picture as regards the total magnitude of forcing effects. People have to learn how to look at both and not be one-dimensional in evaluating climate change, otherwise they are missing half the picture.
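Jim D’s bookkeeping can be made concrete with toy numbers. A minimal sketch of dF = dH/dt + dT/lambda, where every value below is an illustrative assumption rather than an observed quantity:

```python
# Energy-balance bookkeeping sketch: dF = dH/dt + dT/lambda.
# All numbers are illustrative assumptions, not observations.
lam = 0.8   # K per (W/m^2): assumed climate sensitivity parameter (lambda)
F = 2.0     # W/m^2: assumed total forcing
dT = 0.6    # K: assumed surface warming to date

radiative_response = dT / lam    # forcing already cancelled by the warming (dT/lambda)
dH_dt = F - radiative_response   # residual imbalance still going into ocean heat
print(round(radiative_response, 2), round(dH_dt, 2))  # 0.75 1.25 (W/m^2)
```

The point of the sketch: with T flat (the "pause"), dH/dt must carry the rest of F, which is why looking at either term alone gives only half the picture.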
Roger Sr
Do you have references on the adjustments to GISS temperatures that discuss the raft of changes that turned a slight cooling trend into a warming trend? Steve Goddard and another recently documented the changes. I understand Rud Istvan is preparing a comment on this, and Steve Mosher says that BEST shows this is not accurately described. My concern is that undocumented adjustments for UHI impacts have cooled temperatures in the past. Another good reason not to look at land temperatures as a measure of warming.
Temperatures in the global ocean, though, are hard to document and don’t have accurate historical records. Only since ARGO is there any useful data. How can estimates from the usual suspects be used to move to a new metric?
Scott
Scott
I queried the 1940’s warming as presented at Hansen’s 1988 hearing and subsequent Giss revision.
The following wasn’t intended to be an article (yet) but here are some links I posted and Mosh gave a long reply to my query as posted at foot.
——– ———- —–
http://image.guardian.co.uk/sys-files/Environment/documents/2008/06/23/ClimateChangeHearing1988.pdf
see figure 1 for global 5 year mean
here is latest giss
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A.gif
temperatures seem to have warmed in later years and cooled in 1940’s
http://data.giss.nasa.gov/gistemp/graphs_v3/
hansen lebedeff 1987
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf
Steven Mosher | September 27, 2013 at 11:18 pm |
Sure tony.
First, it’s hard to reconstruct piece by piece all the changes that VARIOUS people made that result in the changes you see.
But let me have a whack.
First, understand that the GISS answers are the result of
Data input and Algorithm.
1. Data input.
There are two principal causes. First is the change in the core dataset. The moves through various versions of USHCN will result in changes because the processing of that data changed. Essentially the big adjustments for TOBS and other bits in the US.
By looking at datasets outside USHCN we can see that these adjustments are justified. In fact the adjustments are calibrated by looking at hourly stations close to the USHCN stations.
Next, the GISSTEMP algorithm will change the estimates of the past
as New data for the present comes in. This has to do with the RSM method. This seems bizarre to most folks but once you walk through the math you’ll see how new data about say 1995, changes what you think about 1945. There are also added stations so that plays a role as well.
2. ALgorithm side of things. You have to walk back through all the papers to to get an idea of the changes. But they do impact the answer.
The fundamental confusion people have is that they think that global indexes are averages. And so if Hansen averaged 1945 in 1987, then why does his average of 1945 change in 2012? Makes no sense, right?
Well, it does make sense when you understand that
1. These algorithms do not calculate averages. They estimate fields.
2. If you change the data (add more, adjust it, etc.)
3. If you improve the algorithm, your estimate of the past will change. It SHOULD change.
I’ll illustrate this with an example from our work.
To estimate a field we have the climate field and a correlation field.
When we go back in time, say before 1850, we make an assumption.
The correlation structure of the past will be like the structure of the present. A good skeptic might object: how do you know?
Well, the answer is: we don’t. That’s why it has to be assumed.
The structure could be different. I imagine somebody could say
“use this structure I made up.” Well, you could do that, you could calculate that. You could make a different assumption.. not sure how you would justify it. Therefore, if we get new data which changes our understanding of today, that will cascade and reform what we thought the past was, principally because of the uniformity assumption.
What is kewl is that there are a bunch of data recovery projects going on. With our method we don’t need long records. So,
I have predictions for locations in 1790. That prediction was made using a climate field and correlation field. There are no observations at that location. When the recovery data gets posted then I can check the prediction.
http://judithcurry.com/2013/09/27/95/#comment-388617
—— —– Hope this helps. For the life of me, I can’t see how you can change historic temperatures from the present, but what do I know?
tonyb
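Mosher’s point that an estimate of 1945 can legitimately change when new present-day or recovered data arrives can be illustrated with a toy correlation-weighted field estimator. This is a sketch under made-up assumptions (one spatial dimension, an arbitrary exponential correlation), not the actual RSM/BEST algorithm:

```python
import math

# Toy field estimator: predict the value at an unobserved location as a
# correlation-weighted average of observed stations. Purely illustrative.
def predict(positions, values, target, length_scale=2.0):
    weights = [math.exp(-abs(p - target) / length_scale) for p in positions]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Estimate at x = 0 from two stations:
est_before = predict([1.0, 3.0], [0.2, 0.6], target=0.0)
# A newly recovered nearby record changes the estimate at the SAME place and time:
est_after = predict([1.0, 3.0, 0.5], [0.2, 0.6, 0.1], target=0.0)
print(est_before != est_after)  # True: new data revises the field estimate
```

Because the estimate is a field prediction rather than a simple average, any change to the inputs propagates back through the whole reconstruction, which is the behavior tonyb finds puzzling.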
Peter Lang
NOTHING you wrote was “bogus”, except:
1. the screw-up on units, which led to a “bogus” forecast of future CO2 emissions that was much too high
2. the use of a 200,000 year human energy progression that forecasts:
If we ASS-U-ME the same energy mix to 2100 (no actions to switch from fossil fuels) this energy consumption rate would lead to an exaggerated rate of CO2 emission by 2100 (Nordhaus has roughly “2.5 times what it is today” and Tol has “roughly 3.5 times what it is today”).
Five times what it is today would mean 160 Gt/a, a cumulative emission (2013-2100) of 8000 Gt CO2, and a concentration of 870 ppmv in 2100. It would also mean that every man, woman and child on this planet is emitting 30% more CO2 per capita than the inhabitants of the “industrially developed nations” emit today. And this despite the observed fact that the per capita CO2 emission for the developed nations is actually declining year-by-year.
I referred to this estimate as “bogus” (i.e. unrealistic).
If we ASS-U-ME that the energy mix will change dramatically, so that in 2100 we are using a significantly lower fossil fuel fraction, then the forecast for CO2 is not “5 times what it is today”, but significantly lower and no longer unrealistic IMO (i.e. NOT “bogus”).
Other than these two things, everything you wrote makes sense.
You have corrected the error in units above and I have retracted my “bogus” statement. The same holds for the “5 times” estimate, if you are talking only about energy and not about CO2 emission rate.
So let me emphasize: I do NOT believe that you make “bogus” estimates, when you get the units right.
If I offended you, I am deeply sorry, but I meant no harm.
Now let’s close this off and move on before Judith cuts us off.
Max
Manacker,
Thank you for partial correction.
However, the real issue you haven’t addressed is that your back-of-an-envelope calculation is incorrect. It understates the emissions growth to 2100. Your method is bogus and giving bogus results. I’ve been pointing this out to you, very nicely, in some 10 to 20 previous comments over a period of about 2 years. Your continued harping on my virtually irrelevant, and since corrected, misquote of units in just one of several lines of support is a diversion. Let’s deal with the initial point I raised and have raised many times before. Your calculation method is wrong. It understates the emissions projections to 2100.
I’ll post a separate comment to try to get across more clearly why your projections are wrong. But first, I’ll deal, again, with the point you keep raising apparently in an attempt to divert from the issue I raised and remains unaddressed – i.e. that your method is wrong and is understating the emissions projected in the future.
The units misquote was corrected subsequently, but you continued and still continue to raise it as a diversion from addressing the issue I raised, which remains unaddressed – i.e. that your method is incorrect; it understates future emissions. Furthermore, your criticism is of little relevance, except as an excuse to divert from the real issue, because the ratio of a doubling of emissions by 2100 was the same with either units, and you could easily have checked the source – since I gave the link – instead of making a meal of that mistake.
I need to finish this off with a clear statement explaining why Manacker’s back-of-an-envelope calculation understates the emissions in 2100 – which was the point I was making in my initial comment.
My original comment was intended to point out that his back-of-an-envelope calculation underestimates the likely CO2 emissions in 2100 for the case where fossil fuels continue to supply the same proportion of total world energy as they did in 2010).
I provided a number of different sources to show that emissions would be about 2 to 2.5 times higher in 2100 than in 2010. Unfortunately, in one I misquoted units as Gt CO2/a whereas the units in the source (Nordhaus) are Gt C per decade. However, my misquoting of the units made no difference to the ratio (of 2100 / 2010 emissions) which, for the Nordhaus (2008) figures, was 2.25.
I now understand and can explain why Manacker’s back-of-an-envelope calculation underestimates emissions in 2100. Manacker’s method assumes emissions are proportional to population. That is not correct.
Emissions = GDP x emissions intensity.
So, emissions are proportional to GDP, not population. Emissions growth rates are proportional to GDP growth rates and rates of decline of carbon intensity of the world economy.
GDP grows faster than population and continues to grow even when population doesn’t grow.
The first two terms of the Kaya Identity calculate GDP – i.e. population x GDP per capita = GDP.
Manacker’s back-of-an-envelope calculation was not properly using the Kaya Identity.
I hope this will clarify why Manacker’s method is understating the emissions in 2100 for the scenario we are discussing – i.e. the scenario where there is no cheap alternative to fossil fuels.
QED
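The Kaya arithmetic above can be sketched in a few lines. The 2010 baseline and the 2100 ratios below are taken roughly from the RICE-2013 figures quoted elsewhere in this thread and should be treated as illustrative inputs, not authoritative projections:

```python
# Kaya Identity sketch: Emissions = Population x (GDP / capita) x (Emissions / GDP).
# Baseline and ratios roughly follow the RICE-2013 numbers quoted in this thread.
pop_2010 = 6.797e9                  # people
gdp_pc_2010 = 68.3e12 / pop_2010    # $ per person (output / population)
intensity_2010 = 0.133 / 1000       # tC per $ of GDP (0.133 kgC/$)

emissions_2010 = pop_2010 * gdp_pc_2010 * intensity_2010 / 1e9  # Gt C/a
print(round(emissions_2010, 1))     # ~9.1 Gt C/a, i.e. ~33 Gt CO2/a

# For 2100, population drops out once GDP is used directly:
# output ratio 6.75, intensity ratio 0.31.
emissions_2100 = emissions_2010 * 6.75 * 0.31
print(round(emissions_2100, 1))     # ~19 Gt C/a, i.e. ~70 Gt CO2/a
```

Note that the population ratio (1.32) never appears on its own: it is already folded into the output ratio, which is Lang’s point that GDP, not population, dominates the projection.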
Peter Lang
Thanks for your last comment.
You now question the validity of my quick “reality check” method.
Well, regardless of what “Kaya Identity” suppositions you cite, there is no doubt that
1. human CO2 emissions are related to human population (a no-brainer)
2. as per capita GDP increases, so does the relationship between human population, human GDP, human energy consumption and human CO2 emissions.
Over the period 1970 to 2010, the per capita human emission of CO2 (from fossil fuels) per CDIAC and UN increased by around 10%.
Over the same time, the per capita use of fossil fuel energy per EIA data also increased by 10%. This is obviously no coincidence.
Total energy consumption increased by around 15%, as a result of a shift away from fossil fuels to other sources, principally nuclear.
Check the data. They are out there.
From 1970 to 2010:
-population increased by 1.5% per year CAGR
-GDP (constant 1970 $) increased by 3.2% per year CAGR
-CO2 emissions increased by 1.8% per year CAGR
-per capita GDP increased by 1.7% per year CAGR, and
-per capita CO2 emissions increased by 0.25% per year CAGR (or around 10% over the entire period)
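The compounding in these figures is easy to check. A minimal sketch using the 0.25%/yr per capita figure quoted above over the 40 years 1970-2010:

```python
# Compound annual growth: 0.25%/yr over 40 years (1970-2010).
growth = 1.0025 ** 40 - 1
print(round(growth * 100, 1))  # ~10.5%, i.e. "around 10% over the entire period"

# Rough cross-check: emissions CAGR should be close to population CAGR plus
# per capita CAGR, i.e. 1.5% + 0.25% = 1.75%, near the 1.8% quoted above.
```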
Whether the 10% rate over 40 years will continue to increase exponentially, as it has since 1970, is a matter of conjecture. I have personally concluded from the data out there that the exponential rate is likely to accelerate somewhat, probably at a slightly faster rate than it did over 1970-2010.
Forecasts of global GDP are dicey.
Forecasts of global population are a bit less dicey.
So I prefer to tie my “reality check” on future CO2 emissions to human population, using an experience factor from the past to adjust for per capita increases resulting from higher GDP.
And with this approach I end up with 650 to 700 ppmv CO2 by 2100, right in line with Nordhaus’ estimate of 686 ppmv.
And I can weed out, as unrealistically exaggerated, any estimate of 1000 ppmv or even more, such as the latest IPCC AR5 RCP8.5 worst case “business as usual” scenario. This is a “bogus” estimate IMO.
Don’t waste your time badmouthing my simple “reality check” method. It works for me.
If you have a better method that works for you, by all means use it.
Max
Manacker,
I see you couldn’t let it go. I see you can’t admit that the method you’ve been using to estimate CO2 emissions is wrong and understates the emissions. Using your own term, your method is bogus. If you can’t acknowledge that, using your own term, your method is bogus, then you are also biased and lack objectivity.
I’ll be back to address your points.
Peter Lang and Manacker:
I usually like and read all posts from both of you.
I think you guys agree on most issues and I thought you had this worked out.
It seemed the issue was the word “bogus” and now it seems like it has turned into a Kaya issue.
But I don’t think I will ever know because I am a little bored with the constant back and forth and have started skipping over both of your posts.
Sorry – just saying.
RickA,
Thank you. I take notice. I understand these sorts of arguments annoy many other readers. When others do it, I just go past their comments. From my perspective, I greatly respect Manacker. He is one of the very best consistent contributors to Climate Etc. Manacker and Faustino would be two of the regular commenters I would be likely to try read on every visit to Climate Etc.
Unfortunately, in this argument, all that was needed was a simple acknowledgement that his back-of-an-envelope emission projection method understates emissions and withdrawal of his point about me using bogus data and exaggerating to make my point.
But instead, Manacker felt he had to continue to make a meal of my mistake with the units (even though I’d corrected them and explained, and the units were virtually irrelevant to the point I was making anyway). Secondly, he accused me of exaggerating and using bogus data to support the point I was making. He didn’t say ‘bogus’ just once about the wrong units; he extended and overreached by applying it to all the points I’d made.
Most importantly, the point of my original comment was to point out to him, nicely, that his back-of-an-envelope calculation of CO2 emissions is not correct and it understates the emissions projections compared with the authoritative economists’s projections.
I was trying to point out to him, nicely, that he is doing exactly what he then accused me of doing – i.e. exaggerating (understating) the projected emissions to support his arguments. I didn’t call his method “bogus” in my original comment, but in his terms that is what it is.
The key disagreement, from my perspective, is not the use of the term “bogus” but that his method is bogus and is underestimating emissions in the future. Instead of continuing to use it I’d urge him to use the Kaya Identity.
And I accept that the inputs vary according to which sources and economists you want to use. But the economists are best placed to provide them. GDP will grow faster than population, and the rate will accelerate, so the emissions growth rate will accelerate even with slowing population growth.
Manacker,
Disingenuous. Mostly you’ve been saying 650 ppmv, not 650-700 ppmv. But more important, we are not arguing about concentrations. The argument is about emissions not concentrations and about the factor by which emissions are expected to increase to 2100. As I said before, the conversion from emissions to concentration depends on inputs from climate science (the carbon cycle) and is much more uncertain than emissions projections. That is not what we’ve been arguing about. So, let’s stay focused to avoid getting distracted and avoid muddying the waters.
All estimates are uncertain, and of course some have greater uncertainty than others. But that’s not a valid reason for ignoring the factors that most influence the emissions projections. GDP and emissions intensity are the two inputs we need to estimate emissions. Population is one of several inputs to GDP. The first part of the Kaya Identity, which is the accepted basis for estimating CO2 emissions, estimates GDP from:
GDP = Population x GDP per capita.
Next step: emissions = GDP x emissions intensity (kg CO2 / $ GDP).
Economists project growth rates for population, GDP per capita and energy intensity.
GDP per capita grows faster than population, and continues to grow when population doesn’t grow.
So, you are making an error assuming that population is the main influence on future emissions. It is GDP that is the main influence, and population is just one factor influencing GDP growth rates.
Nordhaus RICE-2013 estimates CO2 emissions at 70 Gt/a in 2100. Below are the emissions and relevant input parameters extracted from RICE-2013 (sheet ‘Global’): http://www.econ.yale.edu/~nordhaus/homepage/NYRB_RICE.htm. The figures below are for 2010 and 2100 and the ratio of 2100 / 2010. The factors are:
Factor                            2010      2100    Ratio (2100/2010)
Population (millions)            6,797     8,953      1.32
Output ($ trillion)               68.3     461.5      6.75
Emissions intensity (kgC/$GDP)   0.133     0.042      0.31
Industrial emissions (GtC/a)     8.986    19.207      2.14
Industrial emissions (GtCO2/a)    32.9      70.4      2.14
Points to note:
Population in 2100 is projected in this analysis at 8.95 billion, lower than your estimate of 10.25 billion.
The population increase is estimated to be a factor of 1.32 over 2010.
Output (a GDP proxy) is projected to be a factor of 6.75 above 2010.
Emissions intensity is expected to fall to 0.31 of its 2010 value by 2100.
The projected CO2 emissions in 2100 are 70 Gt/a, a factor of 2.14 higher than in 2010.
These figures are consistent with the point of my original comment, and with my main point: your back-of-an-envelope calculation understates the emissions increase.
And to prevent confusion: all of this discussion is for the ‘No Controls’ scenario, with no cheap, fit-for-purpose alternative to fossil fuels.
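Peter Lang’s arithmetic above can be checked directly. The following is a minimal sketch (mine, not from RICE-2013 itself) that multiplies the quoted output and emissions-intensity figures per the Kaya identity to recover the emissions ratio:

```python
# Check, using the RICE-2013 figures quoted above, that the Kaya identity
# reproduces the 2100/2010 emissions ratio of ~2.14.
# Emissions = Output (GDP) x emissions intensity, so the emissions ratio
# is the product of the output ratio and the intensity ratio.

output = {2010: 68.3, 2100: 461.5}        # $ trillion
intensity = {2010: 0.133, 2100: 0.042}    # kgC per $ of GDP

emissions_gtc = {y: output[y] * intensity[y] for y in (2010, 2100)}       # GtC/a
emissions_gtco2 = {y: e * 44.0 / 12.0 for y, e in emissions_gtc.items()}  # CO2 = C x 44/12

ratio = emissions_gtc[2100] / emissions_gtc[2010]
print(emissions_gtco2[2100])  # ~71 GtCO2/a, close to the 70.4 quoted
print(ratio)                  # ~2.13, close to the 2.14 quoted
```

The small discrepancies come from rounding in the quoted intensity figures; the structure of the calculation is the point.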
Peter Lang
We shall see whether or not “the method I use is wrong”.
The jury is still very much out on that.
The past correlation with population is not all that bad, if one allows for the increase in per capita CO2 generation (and energy consumption).
This increased by 0.25% per year compounded from 1970 to 2010.
Will it continue to do so?
Who knows?
You don’t.
I don’t.
And frankly, neither does anyone else.
Our hostess refers to this as “uncertainty”.
But this gives me a good “reality check” on predictions of future CO2 emissions and concentrations.
I’ve assumed it would increase at a slightly higher compounded rate of 0.3% per year to arrive at 650 ppmv by 2100.
I’ve also run a separate case using an exponential rate of increase three times as high as the one we saw in the past (a booming economy based principally on the use of more fossil fuel energy), resulting in an 80% higher per capita rate by 2100 and a CO2 concentration of 700 ppmv.
My “reality check” tells me these estimates could have a reasonable probability of actually occurring, if the UN population projection turns out to be right.
On the other hand, I can see that the IPCC RCP8.5 worst case “business as usual” projection (which is used by alarmists, including IPCC, to frighten the public into agreeing to a carbon tax or some other such nonsense) is unrealistic, simply because it ASS-U-MEs that every man, woman and child on this planet will be consuming twice as much fossil fuel energy as the inhabitants of the “industrially developed” nations do today.
And that, Peter, does not pass my “reality check”.
Just that simple.
Max
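Max’s compounding method above can be sketched numerically. The base-year values below (2010 emissions of ~32.9 GtCO2/a and a 2010 population of ~6.9 billion, with the ~10.25 billion UN projection mentioned earlier) are my illustrative assumptions, not figures taken from his comment:

```python
# A sketch of the compounding arithmetic behind the "reality check" above.
# Assumptions (mine, for illustration): growth compounds over the 90 years
# from 2010 to 2100; 2010 emissions ~32.9 GtCO2/a; 2010 population
# ~6.9 billion; 2100 population the ~10.25 billion UN projection.

years = 90
pc_growth = 0.003                      # 0.3% per year in per capita CO2
pc_factor = (1 + pc_growth) ** years   # ~1.31, i.e. ~30% higher by 2100

pop_factor = 10.25 / 6.9               # ~1.49
emissions_2100 = 32.9 * pop_factor * pc_factor   # GtCO2/a
print(pc_factor, emissions_2100)
```

This lands at roughly 64 GtCO2/a; the exact value depends on the base-year population and emissions chosen, which is why back-of-envelope estimates in this range differ by a few Gt.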
Peter Lang
Now let me comment on the calculation you made, using the “Kaya Identity” based on GDP and other factors, which concluded:
Great!
My “quickie” estimate at a 30% increase in pc CO2 gets me to 60 Gt/a (1.8 times higher than in 2010), and the run with an 80% increase in pc CO2 gets me to 77 Gt/a (2.4 times higher than in 2010).
So the two methods give similar results.
And they are nowhere near the IPCC “scare” case RCP8.5.
SO WE AGREE!
Max
Jim D – Global heat changes must be in units of Joules. This is the physics unit for heating and cooling, i.e. “global warming”.
The use of surface temperature is incomplete for this determination. What is called “global warming” when using surface temperatures alone is really just a “global surface temperature trend”. There is no need even to use this latter metric to diagnose global warming if we have the much better characterization in terms of Joules.
I would also point out that you have not defined what T you mean when you refer to a surface temperature.
Roger Sr.
Roger, I was referring to the surface mean temperature, T, which is a measurable proxy for the thermal (Planck) response to the forcing. Ideally you could replace this with outgoing longwave radiation at the top, but that measurement is a lot more difficult. In the end, it is the response term, anyway, which is needed to complete the information. Normally we are in a state of incomplete response to the forcing change with an imbalance represented by the dH/dt term. T/lambda and the forcing term are in W/m2. H is in J/m2.
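The budget Jim D describes can be written down and integrated numerically. This is a minimal sketch under my own assumptions: the forcing value, lambda, and especially the effective heat capacity C linking H to T are illustrative, not taken from the comment:

```python
# A minimal numerical sketch of the energy budget described above:
#   dH/dt = F - T/lambda
# with F the forcing (W/m2), T the surface temperature response (K),
# lambda the sensitivity parameter (K per W/m2), and H heat content (J/m2).
# To close the system I assume (not from the comment) that T is tied to H
# through an effective heat capacity C for the ocean mixed layer.

F = 3.7          # W/m2, roughly a CO2 doubling (assumed for illustration)
lam = 0.8        # K per W/m2 (assumed)
C = 4.0e8        # J/m2/K, ~100 m of ocean mixed layer (assumed)

dt = 86400.0     # one day, in seconds
H = 0.0          # J/m2, start in balance
for _ in range(365 * 100):       # integrate 100 years
    T = H / C                    # temperature response implied by H
    imbalance = F - T / lam      # dH/dt, in W/m2
    H += imbalance * dt

print(T)  # approaches the equilibrium lambda * F, about 3 K
```

While the imbalance term is nonzero, heat accumulates (in Joules) even though T has not yet reached its equilibrium value, which is exactly the incomplete-response situation the comment describes.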
Scott – You asked
“Do you have references on the adjustments on GISS temperatures that can discuss the raft of changes in temperatures that turned a slight cooling trend to the heating trend?”
I recommend you ask GISS directly this question. The homogenization approach has never, in my view, been properly documented. They (GISS; NCDC; CRU; BEST) should present this homogenization