by Judith Curry
A few things that caught my eye this past week.
In the news
Earth Has 3 Trillion Trees; Losing About 10 Billion Per Year, According to Yale Study [link]
NOAA: “Absolute Global Sea Level Rise Is Believed To Be 1.7 – 1.8 Millimeters/Year” [link]
New from @AGU_Eos: What have we learned 10 years after #HurricaneKatrina? [link]
Good piece by Peter Neilley on forecasting accuracy improvements since Katrina… [link]
Arctic news
Something you don’t see every Epoch: SE Alaska panhandle is in a hurricane forecast track. [link]
Danish Meteorological Institute Data Show Greenland ice mass balance has increased impressively Since 2014 [link]
The state of the Greenland ice sheet in 2015 | [link]
Good news: Ice sheets may be more resilient than thought, say Stanford scientists in Geology. [link]
Scientists: Greenland meltwater “increasing,” but admit they need a new technology to accurately measure meltwater [link] …
New papers
Ocean heat content variability and change in an ensemble of ocean reanalyses [link]
Paper: Temperatures began rising 500 years ago due to increasing solar activity [link]
Svensmark’s 2015 update on solar-cosmic ray theory of climate [link] …
New article by Willie Soon: Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century [link]
“Emergent model for predicting the average surface temperature of rocky planets with diverse atmospheres” [link]
About science
Researchers find an unexpected link between brevity and popularity [link]
FabiusMaximus follow up on consensus kerfuffle with Rick Santorum and Politico [link]
Our brains are built for partisanship: [link]
About those dwindling trees, I noticed the other day that they're growing like Topsy here:
“In the United States, which contains 8 percent of the world’s forests, there are more trees than there were 100 years ago. According to the Food and Agriculture Organization (FAO), Forest growth nationally has exceeded harvest since the 1940s. By 1997, forest growth exceeded harvest by 42 percent and the volume of forest growth was 380 percent greater than it had been in 1920. The greatest gains have been seen on the East Coast (with average volumes of wood per acre almost doubling since the ’50s) which was the area most heavily logged by European settlers beginning in the 1600s, soon after their arrival.” ~MNN
That’s an example of how rich countries look after the environment much better than poor countries.
Therefore, policies should be aimed at making poor countries richer as fast as possible.
Therefore, policies should be economically rational
Therefore, we should:
– not waste money on policies that incentivise uneconomic, economically damaging schemes like carbon pricing and renewable energy
– encourage policies like free trade, globalisation, etc.
– welcome multi-national companies, as they are the best and fastest way to make poorer countries richer, as well as to increase the rate of economic growth in rich countries by providing cheaper goods and services that the populations of the rich world want
Absolutely.
Kill dictators …
+1
Peter, here’s encouraging news on Meltdown Proof Nuclear Reactors: http://junkscience.com/2015/09/05/meltdown-proof-nuke-reactors-i-vote-yes/
Thanks omanuel. Interesting. I've been following the IFR especially for a long time. But I think any of these new designs are decades away from being economically viable – probably not in my lifetime. Did you see this interesting post:
Molten Salt Fast Reactor Technology – An Overview
by Hubert Flocard? http://euanmearns.com/molten-salt-fast-reactor-technology-an-overview/
Hubert Flocard:
“A former student of the Ecole Normale Supérieure (St Cloud) Hubert Flocard is a retired director of research at the French basic science institute CNRS. He worked mostly in the theory of Fermi liquids with a special emphasis on nuclear physics. He has taught at the French Ecole Polytechnique and at the Paris University at Orsay. He was for several years a visiting fellow of the Lawrence Berkeley Laboratory and he spent a year as visiting professor at the theory department of MIT (Cambridge). He has worked as an editor for the journals Physical Review C and Reviews of Modern Physics (APS, USA) and Reports on Progress in Physics (IoP, UK). He has chaired the nuclear physics scientific committee INTC at CERN (Switzerland). When the French parliament asked CNRS to get involved in research on civilian nuclear energy, he was charged to set up and to manage the corresponding CNRS interdisciplinary programme. He still acts as a referee to evaluate research projects submitted to Euratom.”
The nineteenth century saw many advances which required cheap energy, and the expansion of agriculture into virgin territory. These two forces acted together to spur vast deforestation. Luckily for the eastern US forests, the switch to fossil fuels and eventual decline in cultivated acreage reduced those pressures. Mark Twain described this in “Life on the Mississippi”.
“Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature…. I suppose that St. Louis and New Orleans have not suffered materially by the change, but alas for the wood-yard man! He used to fringe the river all the way; his close-ranked merchandise stretched from the one city to the other, along the banks, and he sold uncountable cords of it every year for cash on the nail…”
“The Amazon is actually the least endangered forest in the world,” according to Patrick Moore, co-founder of Greenpeace (see, Confessions of a Greenpeace Dropout…). Despite all the warnings, “only 10 percent of the Amazon,” says Moore, “has been converted to date from what was original forest to agriculture and settlement.”
Wagathon,
Furthermore, if we didn't have fossil fuels, the Amazon and all other forests would be long gone by now – torn down for fuel and to grow food, with no tractors and no fertilisers to give us the productivity the world has achieved so far – just like the Europeans did to their oak forests before the start of the fossil fuel era.
It’s good to see actual numbers about the population and death rates of global trees.
Given that the 'death-count' is 10 billion per annum and the population is currently 3 trillion, all we now need is the 'birth-rate' of your average tree and, hey presto, we should be able to predict either (a) when the last tree on Earth will perish or (b) when our entire land mass will consist of nothing but trees.
I’m guessing that someone has, or is able, to do the math (not me though) otherwise we’ll end up being endlessly assaulted on TV by some Hollywood dude or dudette with the ‘fact’ that ten billion trees die needlessly every year and we should immediately send just three Euro/Dollars/Pounds or whatever to a charity of their choice.
Anyone have any ideas?
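One back-of-envelope answer, treating the headline figures as a constant-rate model. A minimal sketch in Python; the regrowth rate is a purely hypothetical placeholder, since that is exactly the missing number the comment asks for:

```python
# Crude linear projection from the headline tree numbers.
# The stock and loss rate come from the article; the regrowth rate
# is an invented placeholder -- the 'birth rate' the comment asks for.
population = 3.0e12        # ~3 trillion trees
losses_per_year = 10.0e9   # ~10 billion trees lost per year
regrowth_per_year = 5.0e9  # hypothetical, NOT from the study

net_per_year = regrowth_per_year - losses_per_year
if net_per_year < 0:
    print(f"Last tree in about {population / -net_per_year:,.0f} years")
else:
    print("Stock grows under these assumptions; no extinction date")
# With zero regrowth the stock lasts 3e12 / 1e10 = 300 years; any
# regrowth stretches that out, which is why the birth rate matters.
```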
About those short titles of highly cited papers.
Not a surprise. A title should not be an abstract.
The best book I have read concerning how to communicate (both written and verbal) is by Frank Luntz and is entitled “Words that work.” He makes the point about brevity and understanding very clearly.
George Devries Klein, PhD, PG, FGSA.
OT – Reiss messing up his own book in a Salon interview:
So, not “20 years” as per the original quote with the misremembered date (and not “40 years” as some corrections have it). A target date of 2030, and more than that, a condition of doubled CO2, which is way beyond current predictions for emissions. Furthermore, the “as warming progresses” implies that the remainder of the predictions are for some indeterminate time later.
Somewhere recently I crossed up two numbers – 2030 and 40 – from this quote from an “anonymous” commenter on a blog. I said 2040 when I should have said 2030.
I have the book and it most definitely does say 2030.
Also, the only sensible interpretation of the theoretical scenario the reporter was asking Hansen to consider is a doubling of CO2 in 1988. Given that, what would we be looking at in 2030?
Technocracy up…
https://sustainabledevelopment.un.org/post2015/transformingourworld
What should we be looking at in 2030? We should be looking FOR more ways to pay attention to what Hansen said, in 1988, 1990s, 2000s, and onward, and more ways to ameliorate and eliminate pollution that causes the problems.
You can’t leave out global ‘sustainability’.
Which means, all the more, we should pay attention to Hansen. Food production has been hammered by drought and unseasonable weather already. If what we’ve seen is just prologue to the more serious effects, we’ve got a lot of air cleaning to do to feed 9 billion people — let alone 11 billion.
How accurate are the population numbers? They must be an estimate for much of the world. Adjusted, homogenized… double counted immigrants? I can understand your fear but I don’t believe the numbers.
UN’s population estimators have been very accurate over the past 40 years. I have no reason to think they’re far off, now.
There is nothing to suggest population growth will significantly drop before we hit 9 billion people — if it does, it would probably be good (unless the cause were war or famine). As people get rich as a population, they have fewer children, and population growth drops. So lower population growth usually means there is a vibrant and growing economy and families are accumulating income and assets.
We’ve relied on technological miracles in the past to meet population growth. Norman Borlaug is dead, however, and we need to work harder now to make things go well.
Besides we have same sex marriage. That will do the trick.
No indication same sex marriage slows population growth.
Gotta stick with the facts.
No indication same sex marriage slows population growth.
Male gays aren’t going to add much. But the partners in a lesbian marriage could in principle reproduce at twice the rate of a heterosexual marriage.
So are there more male or female gay marriages? Way above my pay grade.
An assumption that same-sex weddings will change the rate of human reproduction rests on an unevidenced assumption that homosexuals now reproduce at the same rate as the rest of the population.
I don’t know of any good evidence on that issue one way or the other. With no baseline, and no new evidence to suggest that the baseline, whatever it might be, will change, it’s rather difficult to make a good case that a number we don’t know will move in any direction based on forces acting on that number that we also don’t know.
It will be an even better world when we are issued an erg limit. Thanksgiving. Uniformly.
Ed Darrell | September 5, 2015 at 12:45 pm says:
“We should be looking FOR more ways to pay attention to what Hansen said…”
Ed, first you irritate me with your Hansen-worship and then you are too lazy to spell out what he did say, sending us looking for it in the literature. As far as I'm concerned, Hansen was the worst thing that happened to NASA-GISS. When he went to England to call trains carrying coal "death trains" he should have been fired immediately. NASA has no business with that, but he was falsely represented as carrying NASA opinions. What he did there was to interfere in the internal affairs of a foreign nation for his own personal ideological reasons – namely, his pseudo-scientific belief that climate change is caused by anthropogenic carbon dioxide in the atmosphere.

Just to bring it up to date, the existence of the "pause" in warming for the last 18 years is sufficient proof that carbon dioxide does not cause greenhouse warming, and never has. In case you need a picture drawn about it: currently carbon dioxide in the atmosphere is increasing but global temperature is not. The Arrhenius greenhouse theory, however, requires that it should cause warming. Since there is no warming, Arrhenius has made a false prediction and belongs in the waste basket of history. But IPCC has been claiming that AGW – anthropogenic greenhouse warming – is caused by greenhouse warming which we must stop by not burning any fossil fuels.

All that greenhouse warming stuff got started by Hansen in 1988 when he claimed to the Senate that he had discovered greenhouse warming. What was his proof? you might want to ask. He claimed that he had observed a hundred year period of warming. He concluded that "…the earth is warming by an amount which is too large to be a chance fluctuation …the similarity of the warming to that expected from the greenhouse effect represents a very strong case." We are asked to trust his opinion, and that is just what IPCC has been pushing ever since. Fortunately the appearance of the hiatus/pause has set the record straight. Failure of the Arrhenius theory of greenhouse warming tells us that it is time to start putting an end to irrational and costly emission control and mitigation measures based on belief in the existence of an imaginary greenhouse warming.
Worshippers of Anything-But-Hansen may find it odd to discover that their (herb-induced we hope?) heightened sensitivities to the mere mention of Jim Hansen do not constitute Hansen worship in normal people, nor even someone who bothers to point out the obvious.
“Gee, Dr. Snow was right about the pump in that neighborhood. But if anyone suggests cholera might get into the water, let’s malign them by calling them Snow-worshippers! That’ll make everything better.”
Are there any scientists anywhere working to provide the information that, it seems to me, is most needed for rational policy analysis, such as probability distributions for:
1. time to start of next abrupt climate change
2. direction of change (to warmer or cooler)
3. rate of change
4. duration of change
5. total change
6. damage function (per region)
If there is research underway into these, are there any preliminary results or summary of authoritative results available that would be relevant for policy analysis?
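One concrete form those inputs could take is a set of named probability distributions with published parameters. A minimal sketch, assuming scipy is available; every number below is invented purely to illustrate the format:

```python
# Illustrative format for the requested policy-analysis inputs.
# All parameter values are invented placeholders, not research results.
from scipy import stats

inputs = {
    "years_to_next_abrupt_change": stats.expon(scale=500),  # mean 500 yr
    "prob_direction_is_warming": 0.5,                       # Bernoulli p
    "rate_c_per_decade": stats.norm(loc=0.2, scale=0.1),
    "duration_years": stats.lognorm(s=1.0, scale=100),      # median 100 yr
}

# Draw one Monte Carlo sample from each distribution.
sample = {name: (d if isinstance(d, float) else float(d.rvs(random_state=0)))
          for name, d in inputs.items()}
print(sample)
```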
Don’t forget the benefit function (per region) :)
Hi Jim2,
What we are missing is the economic cost of climate change (which could be a net negative cost, i.e. a benefit, of course). The damage function defines the costs/benefits per degree of change. Once we have the damage function we can estimate the economic costs and benefits of climate changes and – making an enormous assumption that it is correct – then we can estimate the net economic cost/benefit of a proposed policy, and the uncertainties, to keep the 'Uncertainty Monster' content :)
Richard Tol and William Nordhaus have been among the leaders in trying to estimate the damage function since the 1980s. However, the results have enormous uncertainties, and I am not aware that anyone has tried to define those uncertainties.
William Nordhaus gives a brief explanation of the Damage Function in his DICE-2013R User Manual: http://www.econ.yale.edu/~nordhaus/homepage/documents/DICE_Manual_103113r2.pdf
(Search for ‘damage function’)
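For readers who do not want to dig through the manual: the DICE damage function is a simple quadratic in the temperature change. A minimal sketch, with the coefficient quoted from memory of the DICE-2013R documentation, so treat it as illustrative and verify against the manual:

```python
# Nordhaus-style (DICE) damage function: damages as a fraction of
# gross world output, quadratic in warming. psi2 is quoted from memory
# of DICE-2013R -- check the linked manual before relying on it.
def damage_fraction(warming_c, psi1=0.0, psi2=0.00267):
    """Fraction of gross output lost at `warming_c` degrees C of warming."""
    return psi1 * warming_c + psi2 * warming_c ** 2

for dt in (1.0, 2.0, 3.0, 4.0):
    print(f"{dt:.0f} C -> {100 * damage_fraction(dt):.2f}% of output")
# A pure quadratic with positive coefficients can never return a net
# benefit, which is one reason the functional form itself is debated.
```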
Thanks Peter. I was being a bit facetious. I suppose damage function is fitting for the Dismal Science. I mean, after all, benefit function would have worked just as well and would have been a bit more upbeat.
Bah, all change is bad. Unless it is hopefully voted for. All change must be predetermined and agreed upon by consensus.
You assume the problem can be engineered. As Dr. Curry explains, the problem is a wicked mess.
Mosher,
As usual you say nothing. You assume that CO2 emissions do damage. If you can't estimate the damage, you're making baseless assumptions, or simply believing the CAGW cult members' beliefs.
If assuming “the problem can be engineered” is wrong, just confess that green energy developments aren’t useful unless they have an economic advantage over fossil fuels.
The entire principle of CO2 reduction is that we can reduce CO2 and global average temperature back to what it was during the Dust Bowl.
NOAA: “Absolute Global Sea Level Rise Is Believed To Be 1.7 – 1.8 Millimeters/Year.” On a long-term basis this corresponds to around 1.7″ in 25 years and 6.9″ in 100 years. That is, if the models are correct. These latest figures have been downsized from 3-5 mm per year. The “models” used include the consensus (IPCC) model of global temperature outlook due to manmade and non-manmade causes, and a model of sea level rise which must also take into account (using separate models) agricultural demand and hydrological cycles. For the U.S., severing California from the contiguous 48 states and shipping it out to sea would do great things for this problem. Perhaps China or Russia would be interested.
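For those checking the arithmetic, the unit conversion is a short loop:

```python
# Convert the NOAA trend (mm/yr) into inches over 25 and 100 years.
MM_PER_INCH = 25.4
for rate in (1.7, 1.8):
    for years in (25, 100):
        print(f"{rate} mm/yr x {years} yr = {rate * years / MM_PER_INCH:.1f} in")
# 1.7-1.8 mm/yr works out to about 1.7-1.8 inches per 25 years
# and 6.7-7.1 inches per century.
```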
You seem confident that melt and thermal expansion are linear. From where have you drawn this confidence?
redbbs, I didn't make a claim about melt vs. thermal expansion. Based on the thermodynamic/PVT properties of water, within the range we are talking about, thermal expansion vs. temperature is near linear for our purposes. Ice melt is a wicked problem involving the effects of global land and ocean heat balances and flows, etc.; in fact it is always an unsteady-state dynamic matter, not an equilibrium one.
You’ll notice that no period is stated. In fact, that NOAA site is just showing trends for a whole lot of tide stations over (it seems) the lifetime of the site. And they are saying, subtract 1.7-1.8 to get the rate of land uplift. So it is the trend over some long period in the past, not a current or future estimate.
The question is why Dr. Curry included it.
the tree paper was actually the biggest climate science news this past week. I only included one article discussing it.
Is there a paper related to the new estimates? Or is this just an internal NOAA tide gauge calculation unrelated to, say, CU's Sea Level Research Group estimates?
I think you’ve answered your own question.
DW, you have to track NOAA back to the underlying CU source, which has a table for the results of each of several studies. These use differing numbers of tide gauges, and cover differing periods. The only study covering only recent decades, using only gauges that had been uplift/subsidence calibrated by differential GPS, came up with a recent rate of 2.8 mm/yr, very close to NASA altimetry (until this year, 2.8 plus 'phantom' (because it does not affect shoreline rise) GIA of 0.3 for a total of 3.1, newly revised to 3.2). There is still the significant closure problem. ARGO-based thermosteric rise plus GRACE/ICESat estimates of ice sheet loss only add up to half to two-thirds of this rate including GIA (depends on who did the accounting). Just shows the uncertainty in all three estimates.
“Greenland ice mass balance has increased impressively Since 2014”
Tricky stuff from the NoTricksZone. Under the heading
“The continuing cold weather in Greenland from September 1, 2014, to August 2015 has led to strongly growing sea ice, as the accumulated mass balance of the Danish Meteorological Weather Service impressively shows:”
he shows a map of Greenland land area, saying “…from September 1st to now”. But he never says what “now” is. If you follow the link, you see something totally different.
Looking at the filename suggests "now" is some time on or before 23 August. Only a week of melt left, but still, why not wait for a full year?
But when you read the actual DMI site, they say, after Aug 31:
“Given that it doesn’t include ice losses by calving icebergs and ocean melting, the surface mass budget is usually strongly positive at the end of the year. 2015 was no exception – gaining around 220bn tonnes of new ice – but this is below the average of about 290bn tonnes.”
IOW, NoTricks shows just one side of the story. Not impressive at all.
Yep it seems precise use of terms is essential when it comes to Greenland. Substituting ice mass balance for surface mass balance is key to the confusion caused by NTZ.
Wordsmiths use ambiguity for a precise purpose.
Nick
In examining old sea ice records, DMI commonly concluded at August 31, thereby giving a false impression of the final sea-ice melt, as peak loss often occurred some weeks later than the end of August, as can be seen here
—— ——–
This link leads directly to ‘Arctic Ice’ by N.N.Zubov.
http://www.scribd.com/doc/70835248/90/Section-155-Seasonal-Fluctuations
Page 396 describes 10/10 ice concentration, yet further north very little ice-these extensive leads may not have been taken into account in ice charts where the prime means of observation was by ship. Figure 178, also table 118 on Page 458, concludes that the ice in the Russian sector is hugely variable year by year. In section 162 entitled ‘Warming of the arctic’ Dr Ahlman is mentioned once again.”
“Thus, for example, a vessel which attempted to traverse the Northern Sea Route at any cost in August 1936 would have gotten the impression that the route was completely impassable due to ice. On the contrary, a through passage of the Northern Sea Route in the second half of September of that year did not present particular difficulties. “
(Authors note; this illustrates the potential problems in relying on DMI sea ice charts which usually terminated in August)
—— ——
The above taken from my recent article
tonyb
Tony, yr link ‘ Arctic Ice’ has been deleted.
bts
@cr (quoting Zubov): “Thus, for example, a vessel which attempted to traverse the Northern Sea Route at any cost in August 1936 would have gotten the impression that the route was completely impassable due to ice. On the contrary, a through passage of the Northern Sea Route in the second half of September of that year did not present particular difficulties. “
Whatever the difference between August 31 and September 30 1936, which couldn’t have been much more than 0.3 teratonnes of ice, this plot
http://psc.apl.uw.edu/wordpress/wp-content/uploads/schweiger/ice_volume/BPIOMASIceVolumeAnomalyCurrentV2.1.png
suggests that the difference is dwarfed by the decadal downward trend of 3 teratonnes of ice per decade (rounding ice density to 1). Any through passage of the Northern Sea Route in the second half of September 1936 would have to be a considerably better passage today, even in the depths of winter 2015 and even with the ice build-up since 2010.
As far as trends go, standardizing on August 31 each year seems more reliable than trying to guess the date of peak ice each year.
hmmm
http://www.carbonbrief.org/media/434864/greenland-time-vs-mass.png
Thru April
http://www.arctic.noaa.gov/reportcard/images-essays/fig3.3-tedesco.jpg
Appears to be current.
Yup. DMI waits some months for GRACE data before estimating total mass. NASA also posts a GIS summary each January; the 1/15 estimate for 2014 was effectively no loss. The colder temperatures, more snow, and less melt indicated by DMI surface mass suggest Greenland will be found to have gained total ice mass this year. I'll bet not only that's so, but that the fact will not emerge until after Paris. Too inconvenient otherwise.
Will eventually fit the emerging theme of Arctic sea ice recovery in something like a 70-year overall cycle. Arctic ice observation from satellite began in 1979, coincidentally near the ice peak. Ignoring the 2012 cyclone, the Arctic ice nadir was 2007. For all intents and purposes, the NWP was shut this year, another good indicator of the cyclic upswing that appears to have begun. Essay Northwest Passage.
About that tree study… How does the scientific community underestimate the number of trees on the earth by such a huge margin? 400,000 vs. 3 trillion??? It’s not like they are hard to find. They are just sitting there, waiting to be counted. And what does this new 3 trillion number mean to the effect of trees in taking in CO2? It must have some significance.
http://www.antondewit.nl/wp-content/uploads/2013/11/Treebeard.jpg
Sometimes hard ter see the trees fer the woods..
“How does the scientific community underestimate the number of trees on the earth by such a huge margin? 400,000 vs. 3 trillion?”
It doesn’t say that “the scientific community” estimated 400,000 trees. It says someone (unspecified) once estimated 400 billion. Looks like this lot were the first ones nutty enough to try to specify and count what is and isn’t a tree.
Nick, they are using their own models based on tree density = f(land mass, arable land, mountain coverage, bodies of water, …); there is no such thing as a tree count.
You mean there is not a 97% consensus on what a tree is and how many there are?
The alarmists don’t like any good news. Right, nicky? Is that too many trees for you, nicky? You seem to be angry.
“there is no such thing as a tree count”
In the end, someone has to relate these variables to number of trees. And decide what is and isn’t a tree. Saplings? Shrubs? Suckers? Live? Healthy? Mallee clumps?
Scientists generally deal with biomass, which is a lot more quantifiable and meaningful. There was no incorrect estimate by the scientific community of tree numbers. Who cared? And I still can’t see why they should.
+1 Nick Stokes.
It’s nutty to count trees, says nicky.
says nicky:”Scientists generally deal with biomass, which is a lot more quantifiable and meaningful.”
Somebody ask nicky if estimates of biomass might have been affected somewhat due to the sigh-in-tists getting the estimate of the trees so freaking WRONG!
“There was no incorrect estimate by the scientific community of tree numbers. Who cared? And I still can’t see why they should.”
Well, because nobody cared. They were just interested in biomass, which is not affected by having the estimate of the number of trees off by two and a half freaking trillion specimens.
Case of biomass biass. )
I bet they didn't count my bonsai trees
Well, Don, you may occasionally take a glance at your own mass. Any idea how many cells you have? Does it matter?
A big brain like you should be able to do better than that, nicky. You know that 3 trillion trees weigh a lot more than 400 billion trees. Don’t be like willy.
Nick
If there are ten times more trees than expected, but many more have been cut down in the past, doesn’t that all have an impact on the carbon budget as sinks become sources and on a larger scale both ways than hitherto realised?
tonyb
“If there are ten times more trees than expected”
Again this nonsense meme. Who expected? Scientists had never made use of number of trees as a metric. The press release mentions someone who once previously made an estimate, but no-one has ever said who or when, or whether anyone took any interest.
People have long estimated the mass of carbon contained in trees, or more generally land vegetation. That is what is important for carbon budgets. Not only is the “number of trees” impossible to pin down for definition reasons (no-one here seems to have any idea of their criteria), but it is also hard to relate to actual carbon content.
For example, forestry practice often tries to replace a mass of patchy trees by fewer larger ones. In doing that they cut down many small trees. They are aiming to enhance wood content and probably succeed.
nicky, nicky: “People have long estimated the mass of carbon contained in trees, or more generally land vegetation.”
When you say “people” don’t you mean scientists, nicky? How have scientists long estimated the carbon contained in trees, if they haven’t had a clue about how many trees exist?
This is anti-science, nicky: “Looks like this lot were the first ones nutty enough to try to specify and count what is and isn’t a tree.” Isn’t there significantly more carbon in a tree than there is in a daisy, nicky?
Thanks to the efforts of these scientists, there is a peer reviewed paper that describes a very thorough method for estimating the number of trees and it says there are about 3 trillion, and the previous estimate was 400 billion. What is your problem with that, nicky?
Nick
How can misunderstanding the number of trees by such a great magnitude, now and in the Past, not have an effect on our understanding of the size of sinks and sources that constitute the carbon cycle?
Here is the previous estimate of forest carbon
http://www.unep.org/vitalforest/Report/VFG-12-Forests-and-the-carbon-cycle.pdf
Tonyb
“How can misunderstanding the number of trees by such a great magnitude, now and in the Past”
Tony, you go on about this, but don’t answer the simple question, who misunderstood the number of trees? What is your evidence?
You linked to a carbon cycle doc. There isn’t a single mention of number of trees. It is of no interest to climate scientists, who follow carbon mass. In fact this new paper was written by ecologists. It may be of some interest there. I don’t know why.
Don
“How have scientists long estimated the carbon contained in trees, if they haven’t had a clue about how many trees exist?”
How can you estimate your mass if you don’t have a clue how many cells exist?
Oh, I get what you are saying, nicky. Since I already know what my mass is, I don’t need to know how many cells are in my body. Ergo, whatever: We know how much mass a forest has, so we would be “nutty” to bother counting the trees. All forests got a lot of trees, by definition. We don’t care if the earth’s forests contain 400 billion trees, or 3 freaking trillion. A forest is a forest. Seen one, you seen ’em all.
You should write a critique of that “nutty” tree counting paper and send it to the journal, nicky.
Nick
The article I linked to showed various carbon sources. It included this
‘Since carbon emissions from deforestation represent close to one fifth of all anthropogenic greenhouse gases, an initiative was created at the Climate Conference in Montreal in 2006 to “Reduce Emissions from Deforestation and Degradation” (REDD). REDD carbon credits are at the moment included only on the voluntary market.’
It also said that the IPCC estimates varied very widely. Surely if the estimates vary so widely we need to know what the actual real data is before we can make a worthwhile estimate of likely sinks and sources and where they should be attributed? Or is a wild guess good enough for post-modern science?
It's alright, there's no need to reply, as you obviously consider that having a more accurate idea of an important part of the carbon cycle is of no importance. I shall have to wait to see if this gets traction elsewhere. Thanks anyway.
tonyb
Tony, nicky’s problem is that the “nutty” professors found way too many trees. It doesn’t fit the alarmists’ “It’s worse than we thought!” meme.
From Nick Stokes:
Impossible to count from a definition aspect.
Well Nick that will certainly come as a surprise to plant biologists.
“that will certainly come as a surprise to plant biologists”
Well, do you know what definition they use?
I expect they have one. The problem is, how can you define a set whose sum is of any use? Something that gives a unit count to tundra birch, redwood, banyan, treeferns etc; what use can be made of a sum? How does it tell you anything about total carbon storage, for example?
Well, one definition involves a single genetic individual. Which makes things difficult for species such as the American quaking aspen:
https://upload.wikimedia.org/wikipedia/commons/thumb/9/9e/FallPando02.jpg/1024px-FallPando02.jpg
One tree (“Pando”)
Nick,
Go into any decent bookstore (Powell’s Books in Portland is better than decent) and get a guide book on terrestrial plants and trees.
It is easy enough to communicate to 4th graders. Identify the species and then using the reference identify its characteristics.
Now you may look at a mature vine maple and call it a tree. I would have before learning otherwise. But whatever you or I might call it, it remains a shrub.
And it does make a difference. An acre of mature Doug fir, hemlock and western red cedar, with little understory growth, will contain significantly more biomass than an area of mixed shrub trees.
Timg56,
Um. Maybe not so easy. Are there 23,000 species or over 100,000?
http://www.ncsu.edu/project/treesofstrength/treefact.htm
http://tree-species.blogspot.com/2008/05/how-many-tree-species-in-world.html
(and just for them 4th graders) http://www.realtrees4kids.org/faq.htm
From the paper.
It's simple.
You start with a satellite image of the world.
It will have a resolution… say 1 km.
You have to use a categorization tool to say which grids are trees.
First big error.
Then you have to estimate trees per grid.
Second big error.
In this study they did the same thing, but they ground-"truthed" the trees per grid.
How did Galileo get the speed of light so horribly wrong?
Bad tools.
BUT he proved what he set out to prove… the speed wasn't infinite.
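A toy version of that two-stage estimate, with invented numbers, just to show where the two big errors enter:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1_000_000  # pretend 1 km grid cells covering the land surface

# Step 1 (first big error): classify which cells are treed. A random
# mask stands in for the satellite land-cover categorization tool.
treed = rng.random(n_cells) < 0.30

# Step 2 (second big error): assign trees per treed cell. The study's
# contribution was ground-truthing this density with field plots; here
# it is just an invented lognormal (median 50,000 trees per km^2 cell).
trees_per_cell = rng.lognormal(mean=np.log(50_000), sigma=1.0, size=n_cells)

print(f"Estimated total trees: {(treed * trees_per_cell).sum():.3g}")
# Only the two-stage structure follows the paper; every number here is
# a placeholder. Errors in either stage multiply through to the total.
```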
Huh, sounds somewhat similar to estimating "global mean temperature".
The bigger issue is how do they know the number of trees “at the dawn of human civilization” [whenever that was] to two sig figs [46%???]? Damn, I wish I was that good.
What’s a tree? Is the National Forest in Arizona, populated almost solely by saguaro, counted accurately anywhere? When does a cactus become a tree?
In Yellowstone, the trees burned by the 1988 fires by now have been replaced, most of them, by 20 or 30 smaller trees. Most of them will die as the new forests mature. How to count them? When 20-30 small trees do not equal one large tree in carbon, water conservation, or animal habitat, should they be counted at all?
How can one dismiss “the scientific community” when one has not been out with the rangers to estimate tree numbers on the ground, or later in the office, from satellite and aerial photos?
Unpredicted abrupt changes in the Middle East emission landscape, inverse to emission scenarios.
http://advances.sciencemag.org/content/1/7/e1500498
Good ol’ rock. Nothing beats rock:
http://static.existentialcomics.com/comics/sisyphusHappy.jpg
Source: http://existentialcomics.com/comic/29
Wrong Rock.
Paper beats rock, twit.
Comic
Arch,
That was for our resident total wit Willard.
If “solar variability has been the dominant influence on Northern Hemisphere temperature trends since at least 1881” – as I suspect and as reported in the paper by Willie Soon et al. – a social experiment is the most reasonable explanation for government policies that instead assume carbon dioxide is the dominant influence: http://tinyurl.com/pkbx2h6
Irreducible uncertainty in regional climate with initial condition problems (Hawkins)
http://link.springer.com/article/10.1007/s00382-015-2806-8
Low-frequency variability with an attracting orbit (ghil is a coauthor)
http://www.sciencedirect.com/science/article/pii/S0167278915001335?np=y
Re emergent models/ atmosphere or merely generally.
For 2 years after 2000 I studied and sold high powered CO2 flowing gas slab lasers mainly used to cut steel plate up to 1 inch thick.
There are many accounts on the Web about how CO2 lasers work. The allowed energy transitions between molecular states of CO2 are important.
Here is a brief description —
“The CO2 laser is a 3-level system. The primary pumping mechanism is that the electrical discharge excites the nitrogen molecules. These then collide with the CO2 molecules. The energy levels just happen to match such that the energy of an excited N2 molecule is the energy needed to raise a CO2 molecule from the ground state (level 1) to level 3, while the N2 molecule relaxes to the ground state. Stimulated emission occurs between levels 3 and 2.
The metastable vibrational level (level 2) has a lifetime of about 2 milliseconds at a gas pressure of a few Torr. The strongest and most common lasing wavelength is 10.6 um but depending on the specific set of energy levels, the lasing wavelength can also be at 9.6 um (which is also quite strong) and at a number of other lines between 9 and 11 um – but these are rarely exploited in commercial CO2 lasers.
Here are some of the more subtle details. (Skip this paragraph if you just want the basics.) As well as the 3 energy levels of CO2 I referred to, there is actually a 4th involved, about midway between the ground state and level 2. After emitting, the CO2 molecules transition from level 2 down to this 4th level, and from there to the ground state (because a direct transition from level 2 to the ground state is forbidden by quantum rules). Level 2 is actually a pair of levels close together, which is why there are 2 separate frequency bands that a CO2 laser can operate on, centred around 9.4 um and 10.4 um (i.e., just above and just below 30 THz). Each of these bands is actually composed of about 40 different vibration/rotation transitions with frequencies spaced about 40 GHz apart. The strongest transition is the one called 10P(20), which is about 10.6 um, so a CO2 laser with no tuning facilities normally operates at this wavelength. It is possible to select a particular transition (and hence frequency) using a diffraction grating instead of one of the mirrors. The exact transition frequencies were known to an accuracy of about +/-50 kHz back in 1980.
The helium in the mixture serves 2 purposes: (1) He atoms collide with CO2 molecules at level 2, helping them relax to the ground state; (2) it improves the thermal conductivity of the gas mixture. This is important because if the CO2 gets hot, the natural population in level 2 increases, negating the population inversion. ”
from http://www.repairfaq.org/sam/laserco2.htm#co2fgl
Those having trouble believing some parts of the science of CO2 in the atmosphere might benefit from reading a few items about CO2 lasers. One cannot assume all of these processes happen in the atmosphere, but some do have overlap, especially the behaviour of CO2.
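As a quick check on the numbers in that excerpt, the band frequencies and photon energies follow directly from the quoted wavelengths:

```python
# Frequency and photon energy for the CO2 lasing wavelengths quoted above.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

for um in (9.4, 10.4, 10.6):
    lam = um * 1e-6  # wavelength in metres
    print(f"{um} um: {c / lam / 1e12:.1f} THz, {h * c / lam / EV:.3f} eV")
# 9.4 um -> ~31.9 THz and 10.4 um -> ~28.8 THz: the two bands do sit
# just above and just below 30 THz, as the excerpt says. Photon
# energies near 0.12 eV are typical vibrational (thermal IR) scales.
```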
Hello Geoff. After reading a couple of your comments, it would seem that CO2 has indeed many properties that are relevant to the current AGW paradigm of CO2 forcing causing global average temps to rise over the past 2-3 decades. Would you care to comment more specifically on whether CO2 levels on their own would serve to increase global average temps, or whether other catalysts would be needed for this to happen?
No catalyst needed for CO2 to raise the global average surface temperature.
Though other properties of the atmosphere enhance this effect, including the presence of nitrogen, oxygen and argon that make up the major part of the atmosphere.
And then there are feedbacks, both positive and negative.
Peter – we don’t have any really good data over a few centuries that would allow us to know how much of any climate variable change is simply natural variability.
jim2
I think we can make some educated guesses for many countries. For example, the year 1540 is thought to have been the warmest in Europe for at least 500 years (and probably since the MWP), with the shoulder years either side providing similarly high temperatures and our worst recorded droughts (C. Pfister).
There were a number of contenders for coldest-ever winter, of which 1740 is probably the winner, according to Phil Jones.
So we can reasonably say that the warmest and coldest years sans enhanced CO2 were due to natural variability. We must surely comfortably exceed them on a consistent basis before we can start to surmise that it is man's, rather than nature's, feet on the throttle?
tonyb
An “interesting” selection. This caught my eye:
Let's do due diligence. Click on the link, and a rather dull NOAA page comes up:
This struck me as weird in two ways. Firstly, it’s not news, and secondly, the number quoted is much lower than the currently accepted rate of sea level rise, around 3 mm/yr according to my recollection.
So, being a sceptic, I thought I’d dig a little.
Reading the NOAA text carefully we find ”The graphs can provide an overarching indication of the differing rates of regional vertical land motion”
So, the data is presented as informing rates of land motion primarily, rather than sea level per se. Interesting.
Let’s dig some more. The main NOAA sea level page is here.
http://tidesandcurrents.noaa.gov/sltrends/sltrends.html
It recommends the University of Colorado’s page for a detailed explanation:
That site references Church and White (2011) as the source of the tide-gauge sea level rise data:
http://sealevel.colorado.edu/content/tide-gauge-sea-level
and gives the following figures:
1993-2009: 2.8 mm/yr +/-0.8
1900-2009: 1.7 mm/yr +/-0.2
So it appears that the NOAA site referenced, whilst not explicit, is considering sea level rise over the whole twentieth century, not current sea level rise as implied by Judith’s quote. Also, it appears that the number was provided to inform rates of coastal land subsidence/uplift rather than sea level.
So, now let’s discover why this rather old data is “news”. Googling the quote in Judith’s post brings up this headline at a site I was unfamiliar with, “NoTricksZone”
That post is a paranoid conspiracy theory as to the manipulation of data:
The NoTricksZone post is then referenced by several other science denying websites.
So. It turns out that this isn't news, it's not true, and it originated from a science-denying conspiracy theory website, then spread around the blogosphere. Yet here it is, promoted without comment by Judith.
Judith, you took great umbrage at being labelled a “disinformer”
Advice: If you don’t want to be labelled a “disinformer”, don’t disinform.
“Judith, you took great umbrage at being labelled a “disinformer”
Advice: If you don’t want to be labelled a “disinformer”, don’t disinform.” – vtg
And you’re surprised….. why?
This is Judith’s MO – scepticism and critical thinking switched off for anything that appears to support the anti-IPCC dogma.
Michael, where did I say I was surprised?
Michael, I think it’s just that you don’t want to see ANYTHING that goes against the CAGW religion. Given the sorry state of the science, that’s what it is, a religion.
jim2,
I understand that this is an article of faith with you, but you’ll have to forgive me if I don’t share your unshakeable belief in evidence free assertions.
Interesting post, but overly judgemental. You might have observed that every week Judith posts links without comment. Below the links is a comment section where anyone can, as you did, comment.
Judith linked to a NOAA site that currently is getting some attention, providing an avenue for discussion here. You provided some perspective on it. Seems like a win-win. Is any information linked to by disreputable people supposed to be off the table for discussion as irrevocably tainted?
the tree paper was actually the biggest climate science news this past week. I only included one article discussing it.
?
I'm confused. I appreciate it when Judith posts a link to some fashionable nonsense that is getting a bit of coverage, especially when no one has yet spoken up to correct the misinformation. At times when this has happened, I've addressed the issue being misrepresented, not critiqued Judith for bringing it up. I would think that when a pseudo-issue is getting a lot of attention, the "best" thing for her to do (when counter-arguments have not yet surfaced) is to provide limited links on the topic and open the discussion. I don't know why this troubles VeryTallGuy.
I'd liken it to a doctor running a blog where they provide links to other blog posts discussing the latest in why vaccines cause autism.
Planning, what Michael said
As far as I can tell, the issue (if any, besides warmunist muckraking,) is that it doesn’t seem to be in the news, except for some nonsense at a denier site. (“Denier”?)
However, that’s wrong: Expert: We’re ‘locked in’ to 3-foot sea level rise, Rising seas threaten to flood launch sites, NASA says. Both these fantastical “news” stories reference a NASA site: Sea Level Rise Hits Home at NASA. (The second one actually links to page 5 of the article, but if you scroll down from the link in the first one, you’ll get to the page the second links to.)
The link provided in the main post is simply to more balanced NOAA site.
BTW, from the NASA page linked above, and in the first “news” story:
This isn’t really very alarming, except to alarmists. In fact, IMO it’s a good thing if NASA is forced to abandon their infrastructure. It’s completely obsolete, and cheaper replacements could probably be built as floating structures, far from any land.
Actually I was planning a full post on the NASA sea level rise stuff, so I pulled that stuff out into a separate post, then decided there wasn’t enough there for an interesting post. The link to the NOAA post was mainly interesting in the broader context of the NASA stuff.
aplanningengineer,
You are thinking about this like a normal, rational, thoughtful person. That is why you have trouble understanding the point of view of some of the more obsessed posters here.
Engineer: Doubt that VTG is troubled. Merely seizing an opening to attack Dr Curry. I think she has explained that her week in review selections are based, not on her preferences, but on a rough count of the current news. I think she has also said that not much time is spent researching the selections. Her intent is to provide topics that can be debated. VTG rightly participated when he dug into the SLR topic but went off the rails when attacking Dr Curry: but nothing new there.
VTG: Re your previous comment on Dr Curry’s WSJ article – your supposed excerpt does not appear in that article. What’s going on inside the VTG skull?
Oh, but think what fun we could have debating just how “alarming” a 6-28 inch sea-level rise by 2050 would be?
How expensive it would be to add 28″ to a seawall:
http://earthobservatory.nasa.gov/Features/NASASeaLevel/images/turn_basin_sunrise.jpg
I still may get to a post on this, but I have some good topics waiting in the wings (that require some work on my part, however)
Be safe and extravagant, and add 3 entire feet.
VTG provides some insight that helps answer the question I raised earlier in the comments, though he did it in an effort to refute something Dr. Curry did not say.
Nevertheless, I can see where using a century (or more) of averaged sea level data would be helpful in estimating tectonic shifts in land elevations.
“I’d liken it a doctor running a blog where they provide links to other blog posts discussing the latest in why vaccines cause autism.”
Ah, the citation fallacy (I think I'll name it that).
Of course one wants to ask: if X links to Y, and Y links to Z, does X endorse Z? What does it mean to link to a page?
If science paper X cites Y, does X endorse Y? Well, no: X can cite Y to demolish Y, and of course Y can cite Z and X can support Z while demolishing Y. So what does it mean to link to a page? The simple answer is to ask the person who did the linking, if you are after the truth.
Linking is not generally endorsement. Whining about linking is basically whining about traffic. Don't send them traffic because you might get infected by their thinking. Instead create a filter bubble, stick with your kind, and then bemoan the decline in conversation.
Sea level rise? Get your inflatable rescue raft ready.
AK:
Many of NASA’s facilities are located in areas of significant land subsidence and/or coastal erosion.
For example, the beach near Cape Canaveral is part of the 40% of Florida’s coastline that is retreating (the remaining 60% is advancing).
http://www.dep.state.fl.us/oceanscouncil/reports/climate_change_and_sea_level_rise.pdf at Section II, page 4.
The Galveston-Harris County area (Johnson Space Center) is subsiding due to groundwater withdrawals, oil/gas production and ground compaction. http://hgsubsidence.org/subsidence-data/
Langley is in the Newport News/Hampton, Virginia area and is subsiding faster than Galveston. http://hamptonroads.com/2010/12/study-sinking-land-not-rising-seas-bigger-worry
The Michoud Assembly Facility is east of New Orleans — the only place in America subsiding faster than Hampton Roads.
And anything near San Francisco is driven by tectonics.
The link provided in the main post is simply to more balanced NOAA site. …
No, it’s not. When the guys at the University of Colorado and the guys from this little office at NOAA get together for a chat, they won’t be arguing with one another about 3 feet at 2100.
Speaking of the Colorado sea level group:
http://www.geosci-model-dev-discuss.net/8/1201/2015/gmdd-8-1201-2015.pdf
From the JCH link:
…
Sea level rise due to anthropogenic climate change constitutes a major impact to the world’s coastlines, low-lying deltas and small island states.
…
This impact thus far is extremely underwhelming. When might we expect to see this “impact?”
Steven Mosher | September 5, 2015 at 11:53 am |
“Ah the citation fallacy ( I think I’ll name it that)….Linking is not generally endorsement.”
Ah, the Mosher fallacy, the failure to READ HARDER.
Who said “endorsement”??
The Mosher fallacy is contingent on an ego greater than your abilities.
Get a clue.
“Advice: If you don’t want to be labelled a “disinformer”, don’t disinform.”
Good advice.
Perhaps you could try taking it yourself some time.
This is really very slimy. verytrollguy:
“Advice: If you don’t want to be labelled a “disinformer”, don’t disinform.”
You need to show that Judith had the intention to “disinform”, putz. She quoted the NOAA item, for which she provided the link. She didn’t quote or link to NoTricksZone, you clown. You made that crap up.
Judith: NOAA: “Absolute Global Sea Level Rise Is Believed To Be 1.7 – 1.8 Millimeters/Year” [link]
You quoted the NOAA above: “…given that the absolute global sea level rise is believed to be 1.7-1.8 millimeters/year.”
Where TF is the Judith disinformation? You are the disinformer. Hillary needs you. You are clumsy, but still an improvement over her bumbling.
We expect better of Australians.
Not sure why this is so contentious. IPCC AR5, pg 1139.
”It is very likely that the global mean rate was 1.7 [1.5 to 1.9] mm yr–1 between 1901 and 2010, for a total sea level rise of 0.19 [0.17 to 0.21] m. Between 1993 and 2010, the rate was very likely higher at 3.2 [2.8 to 3.6] mm yr–1; similarly high rates likely occurred between 1920 and 1950.”
http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter13_FINAL.pdf
Like VTG stated. Old news. But is it inaccurate? My impression was that somehow that matters.
Now it's not clear to me when the compilation of this segment of the IPCC report ended (the date), but it appears that VTG has an issue with Dr. Curry's posting of material that supports the reporting in WG1.
m.
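For anyone checking, the AR5 rate and the quoted total are mutually consistent:

```python
# Consistency check on the AR5 figures quoted above.
rate_mm_yr = 1.7
years = 2010 - 1901
print(f"{rate_mm_yr} mm/yr x {years} yr = {rate_mm_yr * years / 1000:.2f} m")
# ~0.19 m, matching the quoted total. The NOAA 1.7-1.8 mm/yr figure is
# the century-scale rate; 3.2 mm/yr is the shorter 1993-2010 rate.
```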
The alarmist trolls hang around here hoping to jump on something, anything with which to discredit Judith for straying from the consensus reservation. That is what this is about. The usual dishonorable crap.
Like vtg I also didn’t see how this was news. The slr figures, both sat and tidal gauge, have been consistent at those rates.
Which still begs the question “Where is the evidence for catastrophic sea level rise?”
Advice: If you don’t want to be labelled a “disinformer”, don’t disinform.
You understand that your inability to tell the difference between linking and endorsing greatly reduces your credibility, right?
Competent scientists know how to distinguish between the two.
This question came up again this week:
Is the Global Warming Hoax over yet?
Andrew
Nope.
Andy Revkin and the NYT are going after rich kids. Revkin’s $525 NYT course for high school students: http://www.nytedu.com/climate-change.html
To his credit Andy includes this as a major topic: “The wide ranging and conflicting positions on the seriousness of global warming.” But of course the rest assumes CAGW.
Hmmm. We know the earth is getting a lot greener. But the number of trees is supposed to be falling. Do the bearers of this bad news bother to explain what all the increased vegetation is, if not trees? Bushes perhaps? Grass? Weeds?
Are trees better than bushes? Is this discriminatory? Where can a dissed bush go to get justice?
This from the first listed article is simply too beautiful to ignore.
“The estimate of 3.04 trillion trees – an estimated 422 for every person – is about eight times higher than a previous estimate of 400 billion trees that was based on satellite imagery but less data from the ground.”
We underestimated the number of trees on the Earth by a factor of 8, but don't worry –
“The new findings leave abundant reason for concern – with people at the root of the problem.”
And
“If anything, the scale of these numbers just highlights the need to step up our efforts if we are going to begin to repair some of these effects on a global scale.”
If anything, the fact that we demonstrably don’t have a clue about the numbers of trees in the world, makes us more certain that those trees are disappearing because of globalclimatewarmingchange.
The OCEAN2k twitter link is interesting.
http://climateaudit.org/2015/09/04/the-ocean2k-hockey-stick/
McIntyre says it isn’t a hockey stick but somehow fails to notice it ends in 1900. What it actually shows of interest is very little of an MWP which is more a slight delay in the long-term cooling than an actual warming. This is a new reconstruction of the global sea-surface temperature for the last 2000 years.
It's a deflated hockey stick. And it doesn't "end in 1900." It goes right to 2000. Jim D, you really should read properly before opening your mouth: they're 200-year bins, the last of which is centered on 1900 and so ends in 2000.
Jim D, SM said he would post a 20-year bin analysis later, rather than the odd 200 in the paper. But it is apparent why the oddity, if you take the time to read the non-paywalled SI, and particularly SI figure 10: 25-year bins from 1850 to 2000, with multiple subcomparisons like tropical, extratropical, Mg/Ca, alkenones. Guess what? No hockey sticks. Oops. McIntyre was right in his speculation about why the paper took 4 years to publish and did not make it into AR5: it did not find SST warming in the 20th century. OOPS. What it does do is open a new line of proxy criticism of Karl and the underlying Huang fiddles.
We actually have much better observations of the rise rate in the last 100 years, and this 200-year averaging is probably the best method for washing that out. Anyone upset about the global-averaged representation of the MWP, or was everyone just expecting it at this stage due to its weak supporting evidence? Looks like there was a “pause” in the cooling around 1000 AD.
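The washing-out is easy to demonstrate with synthetic data: put a long slow cooling plus a single century of fast warming into 200-year bins and the warming is diluted into the final bin. A sketch with an invented series shape:

```python
import numpy as np

# Synthetic 2000-year series: ~0.5 C/millennium cooling, plus ~1 C of
# warming confined to 1901-2000. The shape is invented for illustration.
years = np.arange(1, 2001)
temps = -0.0005 * years + np.where(years > 1900, (years - 1900) * 0.01, 0.0)

# Average into ten 200-year bins, as the Ocean2k reconstruction does.
bins = (years - 1) // 200
print(np.round([temps[bins == b].mean() for b in range(10)], 2))
# The 1 C of modern warming is spread over the whole 1801-2000 bin and
# shows up as only ~0.25 C: a fast recent trend barely moves a
# 200-year bin mean, whichever side of the argument one takes.
```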
JimD, here is the 200-year average of the instrumental record.
Now where’s the hockey stick?
oops, need error bars
Also check out their FAQ 16-19 for their reasoning about why the recent warming doesn’t show up in this analysis, which is 20 times faster than this long-term cooling according to their estimates.
http://www.pages-igbp.org/ini/wg/ocean2k/nature-geoscience-2015-faqs#figure
The Tortoise wins.
JimD, any rationalization about why it doesn’t show up in the past 200 years would apply to any 200 year period. If they narrow their reconstruction to just the tropical oceans they would get something like this.
https://lh3.googleusercontent.com/-xdx0ooU_6sI/VKloOz_8fvI/AAAAAAAAMEo/w31fUFfvQ_c/s693-Ic42/Tropics%2525204000%252520years.png
Sure. We have some differential GPS tide gauge stuff. And since 1979, we have sat altimetry. Read essay Pseudoprecision, and study the footnotes. The newest bird, Jason 2, has a published spec of 3mm repeatable accuracy. What, you think it is easy to measure absolute sea level given waves from several hundred miles up? And the allowable Jason 2 random instrument drift is 1mm/ year. Orbital decay, and all the other real stuff like radiation damage, voltage fluctuation, …
ristvan:
Given that the location of the satellite in three dimensional space is only known within a few cm, before all of the other limitations on satellite measurements, it’s hard to believe they can adjust the gathered data to achieve legitimate accuracy of even 3mm in GMSL.
[these comments appear to be in the wrong sub-thread]
Thanks, JimD. Fascinating attempt to spin a very negative result.
But any proxy result that doesn’t match instrumental temperatures for the last century (a) can’t be relied upon for the past either, and (b) reduces the significance of similar proxies that do match instrumental temperatures for the last century.
Elsewhere Appell mentioned his list of 36 studies that show a Hockey Stick. Many of them don’t, but anyhow: he didn’t list dozens (or hundreds, I imagine) that do not show any good temperature match. Which means that the ones he chose don’t mean much either.
I see the RS is getting concerned with the little problems of Publish-or-Perish and is printing Publish-or-Perish "studies" about how best to preserve the plague of insensate literature generated by careerism.
Our future in their hands!
“New article by Willie Soon: Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century”
Scafetta's planetary harmonic formulas are seriously crackpot; that's not clever work to cite.
Hmmm…
Isn’t that pretty much what they said about Wegener?
“Isn’t that pretty much what they said about Wegener?”
The difference being that most folk initially ignored Wegener’s work, while Scafetta had initial widespread popularity, until it hit the rocks on WUWT.
ulric lyons: Scafetta’s planetary harmonic formula are seriously crackpot, that’s not clever work to cite.
That’s not true, it’s routine harmonic analysis. Where inferential error comes in is to claim that it unequivocally demonstrates the presence of a particular mechanism with the estimated periodicities. Other data and analyses are necessary to test whether such a persistent mechanism exists. Soon et al are examining evidence for such a mechanism. Any other mathematical model for such a mechanism should produce periodicities close to those estimated by Scafetta, if the mechanism exists and the mathematical model of it is accurate.
“it’s routine harmonic analysis.”
Nonsense, it is a harmonic postulate, and the main periods are wrong. The average periodicity of solar minima is around 108 years, not 115, and the dominant ocean cycle is closer to 69 years than 60 years.
Our brains are built for partisanship: [link]
“Our tendency toward partisanship is likely the result of evolution—forming groups is how prehistoric humans survived.”
No way, none whatsoever, not even remotely, how could you be so crass even to think that I am partisan.
Then again, psychological research has a dismal record of being reproducible.
Partisanship: is this one of the “animal” instincts one sees in lion prides? Or is innate partisanship a myth, as the tale of Romulus & Remus being suckled by a she-wolf and fed by a woodpecker would suggest? Would there ever have been a Rome if there had been innate partisanship on the part of the wolf and bird?
Time to look elsewhere.
our brains have the same origin as the climate
so let’s keep chuggin’ ’til we get the number of trees exactly right
Studies of the Gombe Chimpanzee War and many other violent conflicts among our closest relatives suggest that such partisanship predates the split (between human and chimp lineages).
While subsequent evolution, especially of language skills, has probably increased the sophistication of innate human partisanship, and (probably, IMO) broadened its flexibility as learned behavior, the story of that war is instructive: evidently there was originally one band, which (per Wiki) appears to have begun to fractionate as early as 1971, although the killing didn’t start until 1974.
The split may have taken place after the death of an older leader, when:
[…]
It would certainly make sense to me that such splits are very rare in the wild, and that the feeding station, established in the 1960’s, may well have catalyzed the process. But if the innate tendencies to partisanship hadn’t been there, it seems unlikely the “war” would have occurred.
My own fairly extensive readings in the primary (and secondary) literature suggest to me that similar processes take place on a regular basis in many/most chimpanzee bands, they just don’t usually reach that level of violence. This would provide strong support for a form of innate partisanship in the great apes, as typified by chimpanzees, bonobos, and humans. (Gorillas and Orang-Utans strike me as probably having a degenerate social structure compared to the general great-ape ancestor.)
As an exercise in examining the interactions which occur among public policy matters and scientific matters, let’s take a quick look at a hypothetical scenario where the HadCRUT4 central temperature trend between 1998 and 2028 runs at +0.12 degrees C per decade; i.e., at roughly half the rate predicted by an average of 90 IPCC AR5 climate models.
The temperature extrapolation which forms the basis for this scenario is illustrated on the following graphic:
http://i1301.photobucket.com/albums/ag108/Beta-Blocker/Beta-Blockers-HadCRUT4-Scenario-1998-2028_zpsdlntpgou.png~original
Here are two questions concerning the implications of this scenario, if it were to occur approximately as illustrated on the above graph:
(1) Are the IPCC AR5 climate models verified — for all practical purposes, as viewed by mainstream climate scientists — if the postulated scenario occurs approximately as illustrated?
(2) Does a contentious debate over what to do about climate change continue unabated into the middle of the 21st Century, if the postulated scenario occurs approximately as illustrated?
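The arithmetic behind the scenario is simple enough to check directly (the ~0.24 C/decade model-mean rate is implied by “roughly half”):

```python
# Rates as stated in the scenario: observed 0.12 C/decade, roughly half
# the ~0.24 C/decade average of the AR5 models (the 0.24 is implied).
obs_rate, model_rate = 0.12, 0.24      # C per decade
decades = (2028 - 1998) / 10
gap = (model_rate - obs_rate) * decades
print(f"by 2028 the model mean sits {gap:.2f} C above the extrapolated HadCRUT4 line")
```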
BB, you asked two questions.
Your first can be answered factually. The models would be considered falsified by the normal rules of science, since their predictions failed. Your postulated time period makes that fairly certain. Of course, climate science is a bit abnormal…
Your second question is answerable only by opinion. Mine is that it would, ceteris paribus. There are now too many vested interests.
But my expectation is that ceteris paribus will not hold, and that the whole CAGW house of cards will crumble rather suddenly. Arctic ice recovery, for example. Stadium wave and other projections of a continued period of stasis/cooling. Warmunist watermelons will experience failure in Paris, failure of predictions, failure of models, and will eventually move on to some other M.O. for their dysfunctional socio-economic agendas.
Meanwhile, real issues like contagious disease, refugees, increasingly expensive energy (enjoy oil at $40/bbl, because it will not last very long), and unsupportable national deficits will probably capture more political attention. I very much doubt Greeks are worried about climate change at the moment. Ditto China and India.
Someday, someone like Mackay will write CAGW up as another crowd delusion equivalent to Dutch tulip bulb mania.
Beta Blocker: Does a contentious debate over what to do about climate change continue unabated into the middle of the 21st Century, if the postulated scenario occurs approximately as illustrated?
My expectation is that such a development would substantially reduce whatever pressure there is to divest the world economy from fossil fuels. However, proponents and opponents would debate as contentiously as ever.
The ’emergent model’ paper is so flawed it is amazing it passed peer review. It is being touted (in places like Hockeyschtick) as showing radiative physics could be wrong. SkyDragon stuff. Places in the blogosphere automatically discredit themselves by asserting stupid things, like claiming Pauli’s exclusion principle applies to bosons. Hint: it only applies to fermions. Another hint: photons are bosons. Saw that ‘not even wrong’ stuff asserted, and concluded it is just more climate crackpottery. Abjectly pathetic ignorance of fundamental quantum physics. Hurts skeptical credibility.
The paper analyzes six ‘rocky’ solar system bodies, at least two of which have effectively no atmosphere (they form the vertical part of the figure’s fit). It pretends to use dimensional analysis to reduce 5 variables to 2. Hint: there are still 5. It then explicitly tries 20 different curve fits of the ‘reduced’ 2 (really 5) variables to the six objects. It gets a ‘fit’ of a double exponential (one of the twenty) to 5 of the 6 objects, then manufactures a reason to exclude the 6th. Now, there is a theorem about degrees of freedom and polynomial fits: any 5-parameter fit to five data points is ‘perfect’, and completely meaningless. Von Neumann’s dictum, and all that. Bad math, bad science. REALLY BAD. Yet published. Now touted by folks supposedly on the skeptical side who display ignorance of Pauli’s exclusion principle. That kind of help climate pushback does not need.
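The degrees-of-freedom point is easy to demonstrate with a toy example (this is generic curve-fitting, not the paper’s actual regression):

```python
# Five free parameters fit five arbitrary points exactly: a "perfect",
# and perfectly meaningless, fit (generic toy, not the paper's regression).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.random.default_rng(1).uniform(-1, 1, 5)    # pure noise
coeffs = np.polyfit(x, y, deg=4)                  # degree 4 -> 5 coefficients
print("max fit error:", np.abs(np.polyval(coeffs, x) - y).max())   # ~1e-14
```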
ristvan: The ’emergent model’ paper is so flawed it is amazing to have passed peer review. It is being touted (in places like Hockeyschtick) as showing radiative physics could be wrong.
It reads a little like a class exercise in a course on nonlinear dynamics: Take these data for this problem, do a lot of analyses that you have learned in this course, and see what you come up with. The kind of thing called “plug and chug”. Clearly only the talented and knowledgeable can do it at all, but to try to draw policy implications or anything other than modeling insights from the results is like building upon a sand castle.
It reminds me a little of pharmaceutical research, where the laboratory leaders say: Synthesize 400 variants of this compound and see whether any of them have good properties. Most fall out, but now and again comes along something like Prozac, Remacemide, or Prilosec.
More extreme BS from RISTVAN: “The ’emergent model’ paper is so flawed it is amazing to have passed peer review.”
Total BS: that peer-reviewed paper was rightly published in one of the top astrophysics journals, you know-nothing troll.
“It is being touted (in places like Hockeyschtick) as showing radiative physics could be wrong.”
Extreme BS, name one single such quotation in either the paper or my site.
Direct quote from my site on the Volokin paper:
An important new paper published in Advances in Space Research determines that the Earth surface temperature (as well as the surface temperatures of 5 other rocky planets in our solar system) can be very accurately determined (R2 = 0.9999! & tiny standard error σ=0.0078) solely on the basis of two variables:
1) atmospheric pressure at the surface, and
2) solar irradiance at the top of the atmosphere,
and without any consideration of any greenhouse gas concentrations or ‘radiative forcing’ from greenhouse gases whatsoever.
Thus, the paper adds to the works of at least 40 others (partial list below) who have falsified the Arrhenius radiative theory of catastrophic global warming from increased levels of CO2, and also thereby demonstrated that the Maxwell/Clausius/Carnot/Boltzmann/Feynman atmospheric mass/gravity/pressure greenhouse theory is instead the correct explanation of the 33C greenhouse effect on Earth, and which is independent of “radiative forcing” from greenhouse gases.
Using observed data from the planets Earth, Venus, the Moon, Mars, Titan, and Triton, the authors,
“apply the Dimensional Analysis (DA) methodology to a well-constrained data set of six celestial bodies representing highly diverse physical environments in the solar system, i.e. Venus, Earth, the Moon, Mars, Titan (a moon of Saturn), and Triton (a moon of Neptune). Twelve prospective relationships (models) suggested by DA are investigated via non-linear regression analyses involving dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent (state) variables. One non-linear regression model is found to statistically outperform the rest by a wide margin. Our analysis revealed that GMATs [Global Mean Atmospheric Temperatures] of rocky planets can accurately be predicted over a broad range of atmospheric conditions [0% to over 96% greenhouse gases] and radiative regimes only using two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure [a function of atmospheric mass & gravity]. The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unbeknown to science that deserves further investigation and possibly a theoretical interpretation.”
http://hockeyschtick.blogspot.com/2015/08/new-paper-confirms-gravito-thermal.html
WRT the Pauli Exclusion Principle, that DOES APPLY TO ELECTRONS, and when a BOSON/PHOTON would RAISE an ELECTRON to a HIGHER ENERGY LEVEL that is ALREADY completely SATURATED with 2 ELECTRONS of OPPOSITE SPIN IN THAT SAME QUANTUM STATE, THAT causes another PHOTON of the EXACT SAME QUANTUM ENERGY/FREQUENCY/WAVELENGTH to be EJECTED; thus the incoming PHOTON/BOSON is not thermalized by, and does not increase the temperature of, the WARMER/HIGHER FREQUENCY/ENERGY BLACKBODY. DUH.
Now, ristvan’s Von Neumann claim is also false. The surface T on all 6 planets (and an additional 2 for a total of 8 rocky planets now) is SOLELY DEPENDENT UPON 2 (TWO) VARIABLES ONLY: SOLAR INSOLATION & SURFACE PRESSURE.
How diffuse and dense can you be, ristvan, to NOT understand the difference between fitting with TWO variables and fitting with SIX?
#Anti-astrophysics troll
matthewrmarler | September 5, 2015 at 11:47 pm
“It reads a little like a class exercise in a course on nonlinear dynamics: Take these data for this problem, do a lot of analyses that you have learned in this course, and see what you come up with. The kind of thing called “plug and chug”. Clearly only the talented and knowledgeable can do it at all, but to try to draw policy implications or anything other than modeling insights from the results is like building upon a sand castle.”
Clearly you don’t understand that the model (which now holds true not only for 6 but for 8 planets, ALL planets for which we have adequate data) has an R2 = 0.9999! using only TWO (2) FORCING VARIABLES, I.E. PRESSURE AND SOLAR INSOLATION.
WHEREAS addition of GHG partial pressures causes that OCCAM’S RAZOR very simple 2-variable model to perform 20-40 times WORSE!
Explain THAT FACT, anti-astrophysics, anti-Volokin HS-trolls matthewrmarler, ristvan, et al.
Asthma. The answer, my friend, is blowin’ in the wind. From the article:
…
For researchers trying to untangle the roots of the current epidemic of asthma, one observation is especially intriguing: Children who grow up on dairy farms are much less likely than the average child to develop the respiratory disease. Now, a European team studying mice has homed in on a possible explanation: Bits of bacteria found in farm dust trigger an inflammatory response in the animals’ lungs that later protects them from asthma. An enzyme involved in this defense is sometimes disabled in people with asthma, suggesting that treatments inspired by this molecule could ward off the condition in people.
The study, published on page 1106, offers new support for the so-called hygiene hypothesis, a 26-year-old idea that posits that our modern zeal for cleanliness and widespread use of antibiotics have purged the environment of microorganisms that once taught a child’s developing immune system not to overreact to foreign substances. “This gives us a tantalizing molecular mechanism for understanding the epidemiological evidence,” says pediatric immunologist Stuart Turvey of the University of British Columbia in Vancouver, Canada, who was not involved with the new work. But others caution that the finding is probably far from the only explanation for why early exposure to microbes can make kids less allergy-prone.
…
http://news.sciencemag.org/europe/2015/09/dirty-farm-air-may-ward-asthma-children
jim 2
I saw the article and responded. Here is my response:
“The Hygiene Hypothesis has been around for more than a quarter of a century and the epidemiological evidence has been at times very unclear. Have the families/children left the farms for the cities because of perceived health problems while on the farm? Surveying farm communities does not account for the people who left those communities. Testing those on the farm is testing the “survivors”, a select population.
Too much farm exposure leads to asthma. Too little exposure to farms is speculated to also lead to asthma. Is there a “just right” amount of exposure to the farm? Goldilocks, please answer.
Ultimately the Greeks of 2500 years ago had it right: it was the Asthma(s). Not a single disorder but many. To believe the Hygiene Hypothesis has a broad application is stretching credulity, and more importantly, the facts.
There is a reason why allergy shots have a transient if any effect on asthma. An indiscriminate treatment applied to an asthma population which has many causes.”
The paradigm that asthma and the immune system are interrelated has been around for almost 100 years, basically a product of the 1920’s and 1930’s immune-enhancing treatments with horse serum for Pneumococcal pneumonia and other disorders. The discovery of the IgE immunoglobulin and its association with parasites spilled over into the asthma story, as some people with asthma had elevated levels of IgE in their bodies. From there the story has waxed and waned between: asthma is an allergic disorder, i.e., an eosinophilic disorder with environmental origins; a genetic disorder; and a cornucopia of etiologies for why people wheeze.
And, like the story in the 1950’s that the understanding of cancer and its cures was just around the corner, asthma and its allergic legacy have persisted, including the Hygiene Hypothesis.
The treatment of asthma has included immune-modulating drugs like corticosteroids, which have now been acknowledged to harm people, but which have probably been applied too broadly to the wheezing population in an effort to help.
To say: “we don’t know” is too simple an answer.
Absent Goldilocks, I’ll point out that humans, like most mammals, go through rapid immunological changes during their early post-natal life, and the value of exposure, at specific levels, will probably change rapidly during that development. At least at certain times.
Moreover, when it comes to environments probably not experienced much before the invention of agriculture, there’s no reason to suppose that the schedule of immunological changes is identical from one neonate to another. Indeed, from an evolutionary perspective, I can see why some randomness might be adaptive, at least for early ancestors with mouse-sized bodies. And for environments not experienced between then and now (agriculture), there would have been no adaptive pressures to synchronize the schedules. (Although for environments that were experienced, there might have been, and others might have become synchronized as spandrels.)
RiH008,
Supposing there has been an increase in asthma in humans, would one not expect to see a similar increase in other species which suffer from this ailment, presuming a mutual contributing factor? I spent a bit of time googling around, but don’t see anything which seems to be a reasonable evaluation of this question.
AK
“I’ll point out that humans, like most mammals, go through rapid immunological changes during their early post-natal life, and the value of exposure, at specific levels, will probably change rapidly during that development.”
No quarrel from me that the immune system in infancy and in children to age 5 years is a system in significant change. Look at the difficulty in developing vaccines for infants and young children to address Haemophilus influenzae type B as well as pneumococcal disease, etc. Tetanus needs to be combined with Diphtheria and Whooping Cough (Pertussis), otherwise the D & P injections are not effective.
Young systems are vulnerable even as the Premature Infant immune system is being explored and reported this week.
What I say is that the Hygiene Hypothesis is just that, a hypothesis which has garnered many adherents, yet the science for its validity is still lacking; and not for lack of effort in trying to establish such a paradigm.
As for Danny Thomas trying to make sense of the topic of Asthma and allergic disorders, well, partner, welcome to the club of seekers. You will find many “finders”, but they are mistaken as soon as their data is open for examination. This sure doesn’t answer the mother who asks: “what do you think is best for my child?”
Interesting discussion. I have asthma. Haven’t a clue why.
jim 2
At the current state of the art, you should examine your ancestry; whether your mother was smoking pot and eating bonbons; or whether your early childhood living next to the 6th Ave Elevated played a role.
Otherwise, the etiology of your asthma lies in a cavern whose darkest recesses contain mysteries both ancient and modern and so far, explored by few.
I can give you a formula for successful treatment, but you will never hear this from others in this lifetime. The past has been debunked; the present is the conventional wisdom; and the future, well, the future holds both promise and dread.
Likewise, Jim, but rye grass seeding and Spring storms
after north winds …
I believe it will turn out to be mostly genetic, personally.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3151648/
As noted in the article, hay fever frequently co-occurs with asthma. I have severe hay fever. The environmental factors are convenient levers for politically motivated flim flammers, but probably not meaningful beyond that.
Soon et al. is quite a useful review of the measurement problems in this area. There is a preprint available.
Do conservationists want more trees and fewer humans? If so, while they’re probably all Leftists, radical environmentalists and global warming alarmists at heart, the anti-modernity conservationists should swoon for more, not less, global warming: “Palm trees swayed on the green shores of Antarctica 50 million years ago while temperatures soared above 20C, a study has shown.”
http://www.dailymail.co.uk/sciencetech/article-2182505/Scientists-reveal-green-lush-past-Antarctica–warn-return.html
“Emergent model for predicting the average surface temperature of rocky planets with diverse atmospheres” [link]
That’s interesting in the abstract way of a lot of the dynamical modeling of planetary atmospheres, of the sort presented in the writings of Ghil and Dijkstra. It’s hard to see any immediate utility in the results of the work. That’s the sort of critique one might have written of Kepler’s work before Newton; perhaps this work will stimulate something really productive, and perhaps it will join the dozens of others published every month that have no long-term applicability.
Is the government-funded professional climate class merely trying to maintain the status quo?
That’s the way it reads.
Does global warming make summers less hot?
That seems to be what US GHCN records indicate. And it fits an observation that Manabe had: increased latent heat (more water vapor in the atmosphere) means less sensible heat is necessary to restore balance, because latent heat exchange increases.
http://realclimatescience.com/wp-content/uploads/2015/09/ScreenHunter_2822-Sep.-01-07.37.gif
http://realclimatescience.com/wp-content/uploads/2015/09/ScreenHunter_2824-Sep.-01-08.17.gif
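A toy version of that flux bookkeeping, with purely illustrative numbers (the fixed 100 W/m^2 total is an assumption for the sketch, not an observation):

```python
# Fixed total turbulent flux leaving the surface; as the latent share
# grows (more water vapor), the sensible share must shrink. The 100 W/m^2
# total and the latent values are illustrative assumptions only.
total_flux = 100.0                                # W/m^2
for latent in (60.0, 70.0, 80.0):
    sensible = total_flux - latent
    print(f"latent {latent:.0f} -> sensible {sensible:.0f} W/m^2, "
          f"Bowen ratio {sensible / latent:.2f}")
```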
I find this paper to be noteworthy:
“Ocean heat content variability and change in an ensemble of ocean reanalyses” http://link.springer.com/article/10.1007%2Fs00382-015-2801-0
There is much of interest in this paper; these are the things which first caught my eye.
“There is a marked reduction in the ensemble spread of OHC (ocean heat content) trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs (statistical ocean reanalyses) should be treated with caution when employed to understand past ocean warming trends—especially when considering the deeper ocean where there is little in the way of observational constraints.”
—
“For the ORAs (statistical ocean reanalyses) with data available back to 1970 ….
“For 300–700 m (Fig. 12), the agreement across ORAs (statistical ocean reanalyses) is substantially reduced, although there are still some areas (e.g. North Atlantic) where the signals appear to be common across models. It is also surprising that some signals in the Southern Ocean appear to be consistent across products. However, the number of ORAs available for this comparison is limited (N = 4) so we should treat such evidence of ‘agreement’ across products with caution.”
“For the 700 m-bottom layer (Fig. 13), the ensemble mean and spread across the ensemble is dominated by the very large signals in the GECCO product: this ORA (statistical ocean reanalysis) exhibits a strong warming in the Atlantic and Southern Ocean and a cooling in the Indo-Pacific. Given the absence of strong observational constraints for this depth range during this period, it seems likely that these signals are unrepresentative of the real ocean and probably related to adjustment of the OGCM (ocean general circulation model).”
Hence, the following conclusion by IPCC about ocean warming does not seem to have sufficient observational support.
“Ocean warming dominates the total energy change inventory, accounting for roughly 93% on average from 1971 to 2010 (high confidence). The upper ocean (0-700 m) accounts for about 64% of the total energy change inventory. Melting ice (including Arctic sea ice, ice sheets and glaciers) accounts for 3% of the total, and warming of the continents 3%. Warming of the atmosphere makes up the remaining 1%.”
Ref.: Contribution of Working Group I to the Fifth Assessment Report of the IPCC: The Physical Science Basis.
It seems to me that this pretty much pulls the rug out from under the ad hoc hypothesis introduced by Kevin Trenberth, and endorsed by the IPCC, to account for the lack of warming.
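For anyone who wants to see the ensemble statistic the quoted passages turn on, here is a toy version with invented trend numbers (N = 4 ORAs, as in the 300–700 m comparison; none of these values are from the paper):

```python
# Ensemble mean and spread of deep-ocean OHC trends across N = 4
# reanalyses, before and after Argo coverage. All numbers invented
# for illustration; none are from the paper.
import numpy as np

pre_argo = np.array([0.8, -0.3, 1.5, 0.1])     # hypothetical trends
post_argo = np.array([0.6, 0.4, 0.7, 0.5])
for label, trends in (("pre-Argo", pre_argo), ("post-Argo", post_argo)):
    print(f"{label:9}: mean {trends.mean():+.2f}, spread {trends.std(ddof=1):.2f}")
```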
Uh, no.
From the article:
…
Climate change is wreaking havoc on the Arctic faster than anywhere else on earth, melting permafrost, scrambling wildlife, releasing carbon into the atmosphere, and creating the perfect conditions for some of the most devastating forest fires parts of the region have ever seen. But the breadth and depth of this havoc is something that scientists still don’t completely understand.
With that in mind, NASA is kicking off a decade-long effort to figure out just how bad things in the northern US and Canada really are—and how much worse things might get in years to come.
Called the Arctic Boreal Vulnerability Experiment, or ABoVE for short, the large-scale study will combine on-the-ground field studies as well as data from remote sensors—such as satellites and two seasons of “intensive airborne surveys”—to improve how scientists analyze and model the effects of climate change on the region.
…
http://motherboard.vice.com/read/the-ten-year-mission-to-study-all-the-ways-the-arctic-is-doomed
From the article:
…
A Major Surge in Atmospheric Warming Is Probably Coming in the Next Five Years
…
http://motherboard.vice.com/read/we-may-see-a-supercharged-surge-in-warming?trk_source=recommended
Every time I see the AMO cited in one of these articles I cringe. The AMO can’t mask anything. It’s a meek little pretender ocean cycle.
The PDO, on the other hand, is the beast.
(On Drudge) From the article:
…
Red shows the September 2012 minimum extent. Green shows the current extent, which is likely the minimum for 2015. The Arctic has gained hundreds of miles of ice over the past three years, much of which is thick, multi-year ice.
Nobel Prize winning climate experts and journalists tell us that the Arctic is ice-free, because they are propagandists pushing an agenda, not actual scientists or journalists.
…
https://stevengoddard.wordpress.com/2015/09/09/arctic-has-gained-hundreds-of-miles-of-ice-the-last-three-years/