by Judith Curry
A few things that caught my eye this past week.
It’s Official: Hydropower Is Dirty Energy [link]
A large ensemble of model runs produced 30 alternate realities for N. American temp trends from 1963-2012 [link]
Modern ecosystems owe their existence to a previously undocumented period of global cooling [link]
Biologist comments on a startling new finding in climate change research: study measures methane release [link]
“Changes in solar quiet magnetic variations since the Maunder Minimum: a comparison of historical observations…” [link]
“Peat bogs in northern Alberta, Canada, reveal decades of declining atmospheric Pb contamination.” [link]
With solar storm in progress, regional impact forecasts set to begin [link]
The Pacific ‘blob’ caused an ‘unprecedented’ toxic algal bloom – and there’s more to come [link]
“Satellite based estimates underestimate the effect of CO2 fertilization on net primary productivity” [link]
New paper on sustained rise in methane, but from tropics/ag, not oil/gas or Arctic. [link] …
#oceanacidification : seafloor plankton communities decline [link]
Role of heat transport in global climate response to projected Arctic sea ice loss [link]
Impact of Climate Change & Aquatic Salinization on Mangrove Species & Poor Communities in Sundarbans [link]
Global Warming Is Real—But 13 Degrees? Not So Fast [link]
A new paleoclimate record from the data-sparse southern hemisphere’s midlatitudes: [link]
Global water vapor trend 1979-2014 [link]
1st measurements of GHGs from permafrost under fast-warming Arctic lakes: [link]
About science and scientists
Science in crisis: from the sugar scam to Brexit, our faith in experts is fading [link] …
Janet Napolitano on campus free speech [link]
Unsettled science, and more wickedness [link] …
Maths research: the abstract nonsense behind tomorrow’s breakthroughs [link]
Incentive malus: why bad science persists. Poor scientific methods may be hereditary.[link]
The natural selection of bad science [link]
Academic Research: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition [link] … …
The power paradox – excellent read on the surprising and sobering science of how we gain and lose influence [link] …
Statistical vitriol: [link] …
What’s the point of tenure, if we’re terrified to ask certain questions? [link]
Can young researchers thrive in life outside of academia [link]
Full presentation from Andrea Saltelli: Climate numbers and climate wars. A fatal attraction? [link]
Yes, the “combination of perverse incentives and decreased funding” produced the “publish or perish” atmosphere in academic institutions.
The easiest way to get rid of this is to provide all scientists with minimum baseline funding for research as part of their salary. If you qualify to be an academic on a salary, your pay should include money to do some research you are interested in, no matter what your peers think of it. It used to be this way. Departments provided resources, technicians, administrative support and bridging funding. Now all that is gone for most scientists, and you have to cover things like your telephone and your postage stamps from your grants. I have even seen incoming scientists told they have to find outside funding to finish the interior of the building that their office and lab will be in (installing drywall, flooring and plumbing for washrooms) or they won’t be hired. Meanwhile grant agencies actually boast openly about how small a percentage of scientists they fund, because of how careful they are to fund only the most excellent as judged by the committee members (who almost invariably get 100% funding themselves). We are doomed.
Administrative “overhead” charges on research grants bribe the university into hiring, promoting, and giving tenure to faculty members who succeed at the “publish or perish” game. Students are thus deprived of genuine scholars; their academic grades reflect
1. Acceptance of “consensus opinions,” not
2. Ability to advance human knowledge.
Absolutely, and you end up with ridiculous situations where 3 of 10 researchers in one department have the same half-million-dollar piece of equipment, with one using it maybe a couple of hours a week and the other two unable to use theirs: one can’t even get funding to hire a summer student, and the other has a PhD student but no money to buy supplies to run the equipment. The department also has one professor with 12 postdocs, 8 PhDs, 14 Masters students, 2 full-time technicians and 3 undergraduate projects, and no access to any of the three pieces of equipment even though he could use them. He’s spending 12 hours a day writing grants, not supervising staff and students, and no one will share their equipment with him. Since each one bought the equipment with their own grant money, there isn’t a darn thing the $400K/year Dean of Research can do about it except have the successful guy do a departmental seminar on how everyone else can improve their grantsmanship.
I understand that Prescott Bush, father of the elder ex-President, George Bush, designed the system of federal research grants after WWII that forced academic leaders to ally with political leaders that controlled research grant funding.
TWS, so you are proposing that all the research should be paid for by student tuition, rather than grants? And that no results should be required?
It has already been shown, for the Canadian NSERC grants, that it is cheaper to simply pay all scientists a minimum research wage higher than the average grant, by getting rid of the huge bureaucracy required to run a grants system. I suspect the same would apply for most other systems. See: https://www.ncbi.nlm.nih.gov/pubmed/19247851 This would encourage more innovation and collegiality, but it would mean getting rid of a lot of deans of research and government bureaucrats. We know you can’t have that.
Oh, and requiring approved results is what got us into this mess in the first place. Good research can take years to get real results. And if the research produces unpopular results, no one will count it as a result. I am talking about a system where scientists have enough to do the job we pay them for. Imagine if a policeman were hired and then told to get a grant for his uniform, his gun, his radio and his car, and had to pay overheads for the 911 service. That is the way it works for scientists today.
DW, I cannot speak for TWS, but federal research grants have effectively destroyed the integrity of academic institutions, just as the EU and UN have destroyed the integrity of national governments. Pointman pulled away the “fig leaf” from Angela’s EU today,
to expose the same trickery Drs. Carl von Weizsäcker and Hans Bethe used in 1935-36 to obscure the source of energy Einstein discovered in 1905 and Aston measured in every atom in 1922: E = mc^2
No matter the method of payment to academics, grants or paycheck, the source of the money is the government. By definition they are government employees, bureaucrats as it were.
Governmental politics rule government employees. If one is not hewing to the political meme of the moment, one is toast. Especially if one is a small fry.
Unless one gets into the upper technical or management sides, the pay isn’t all that great. Witness the 30-somethings leaving for greener pastures. With so many superfluous PhDs in the climate racket, what are their options? Back to momma’s basement?
So a number of people in climate science academe are in their thirties before they are forced out to make a real living. We could save all the millions (billions?) wasted on their education, institutions, instructors, etc. by leaving them home in their mom’s basement during all that time. All the other slackers do it.
Publish or perish actually means get results or lose your job. Makes sense to me. Nor is this new.
Requiring officially approved results is what got us into this mess in the first place. Good research can take years to get real results. And if the research produces unpopular results, no one will count it as a result. There have to be better measures of productivity. The current ones don’t work.
If it’s official that “Hydropower Is Dirty Energy,” then a government bureaucracy that condones laws giving tax credits to millionaires to use such legacy power to charge the batteries of a Tesla, so it can be driven in special reserved lanes on freeways paid for by a public that pays taxes on gas and fuel, must be beyond ludicrous.
I have long wondered what exactly these environmentalists expect us ordinary humans to do for power. I am not a fan of Glenn Beck, but I do enjoy dystopic sci-fi and speculative fiction. The book Agenda 21, which Beck coauthored (or more likely just put his name on), proposes a world where everyone eats cubes of algae, lives in tightly controlled communes in a one-room shared cell, spends hours each day doing manual labor on a treadmill to painstakingly charge individual battery cells for others to use, and spends every evening watching propaganda movies about how horrible humanity is. It does not seem so unlikely.
The program was outlined for all of us in a black & white movie.
Oh dear. My sustainable fresh water supply causes global catastrophe. https://en.wikipedia.org/wiki/Triadelphia_Reservoir My electrical power comes from the Calvert Cliffs Nuclear Power Plant. https://en.wikipedia.org/wiki/Calvert_Cliffs_Nuclear_Power_Plant I’m a one home climate disaster.
“I’m a one home climate disaster.” Yep. It appears we all are.
Regarding this quote from Science in crisis: “Scientific research deteriorates when it is entrusted to contract research organisations, working on a short leash held by commercial interests.”
It is even worse when the short leash is held by governments, which is the general rule in basic research.
This one on the other hand is probably correct: “Could the internet be to science what the printing press was to the church?”
But I think this science-is-broken stuff is itself an overblown fad.
I think you are right when you say ” science-is-broken stuff is itself an overblown fad” and that the internet is making science available to everyone. Some of the best scientists I have worked for/with have not earned a PhD in anything.
“It’s Official: Hydropower Is Dirty Energy [link]”
After recently spending a week motoring a houseboat on Lake Powell, created by damming the Colorado River with the Glen Canyon Dam, I see that the warmists would benefit from such a journey themselves.
Glen Canyon Dam: “The 710-foot (220 m) high dam was built by the U.S. Bureau of Reclamation (USBR) from 1956 to 1966 and forms Lake Powell, one of the largest man-made reservoirs in the U.S. with a capacity of 27 million acre feet (33 km3).” (Wiki)
The Glen Canyon Dam was built by: US Bureau of (Land) Reclamation. Its primary purpose is to provide water to agriculture. A much lesser amount of water goes to towns and cities, but that is not its primary purpose.
As for generating electricity, that is a permissive role: when lake levels are high enough, there is sufficient water to spin the turbines. Water flows of as much as 25,000 cf/s generate the most electric power, although the steady state recently, at current water levels, has been 12,000 cf/s.
One of the other notable features of the lake is recreational use. There is about $1 billion in infrastructure and boats for recreational purposes, employing thousands of permanent and part-time people. In this high desert region, employment is a good thing. The newest city in AZ, Page (9,000 souls), was constructed to build the dam and has remained because of the permanent employment due to the Glen Canyon Dam and Lake Powell.
When I first launched into the article, I thought the greenies were going to complain about the evaporation of the Lake with the water vapor causing climate change down wind. I guess not.
However, I do like their effort to alarm about greenhouse gas emissions estimates in the gigaton range. Nice work if you can get it.
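As a rough sanity check on the flow figures above, hydro output scales as P = eta * rho * g * Q * H. A minimal sketch; the head and turbine efficiency below are illustrative assumptions for a tall dam, not USBR figures:

```python
# Back-of-envelope hydropower: P = eta * rho * g * Q * H
# Head (head_m) and efficiency are assumed values for illustration only.
RHO = 1000.0  # kg/m^3, density of water
G = 9.81      # m/s^2, gravitational acceleration

def hydro_power_mw(flow_cfs, head_m, efficiency=0.85):
    """Electrical power in MW for a given flow (cubic feet/s) and head (m)."""
    q = flow_cfs * 0.0283168  # cubic feet/s -> m^3/s
    return efficiency * RHO * G * q * head_m / 1e6

# The two flow rates mentioned in the comment, with an assumed 150 m head:
for cfs in (12000, 25000):
    print(cfs, "cf/s ->", round(hydro_power_mw(cfs, head_m=150)), "MW")
```

Even with these guessed parameters, the point stands that the difference between 12,000 and 25,000 cf/s is roughly a factor of two in generation.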
True, true, the “greatest generation,” despite its great intentions, apparently gave birth to a generation of sick hypocrites.
Hot World Syndrome is a phenomenon where the global warming apocalyptic content of mass media imbues viewers with the notion that the world is a hotter and more intimidating place to live than it actually is, and prompts a desire for more protection than is warranted by any actual threat. Hot World Syndrome is one of the main conclusions of the anti-humanism movement of the United Nations. Additionally, murderous examples of failed socialism, as witnessed by large segments of Leftist-lib society from the safety and comfort of Western civilization, have created a global psychosis, causing people to turn on the morals, principles and ethics that would otherwise sustain their spirits and prevent them from succumbing to moral decline and mental helplessness. Individuals who do not rely on the mainstream media, and who understand the floccinaucinihilipilification of the cabinets and cabinets full of worthless global warming research, have a far more accurate view of the real world than those who do not; they are able to more accurately assess their vulnerability to present and future weather conditions, and all the myriad vagaries of life over which they have no control. The global warming realists do not fear the hand of man, tend to be nicer people with a life, and have a wider and healthier variety of beliefs, attitudes, behaviors and lifestyles. Towing a boat to the river with the family in the back of an SUV is not evil, no matter what the liberal fascists may wish to believe today.
The ability of anoxic, eutrophying, sediment to generate methane is well known.
OTOH, I suspect a profitable industry could be developed based on scooping up this sediment, processing it, and selling it as a soil additive.
Just the mixing of oxygen into the bottom waters alone would probably reduce the methane production to near zero.
Not just reservoirs. http://www.dailypress.com/news/science/dp-nws-chesapeake-bay-methane-20160719-story.html
RE: Global water vapor trend 1979-2014
Dr. Curry or anyone who might know. Were the trends found in the re-analysis the same as trends in the global climate models? (I’m guessing the global climate models are all over the map, but thought I would ask anyway.)
And also, does a marginal increase in water vapor result in a marginal increase in albedo from a marginal increase in cloud cover?
J2, the article is paywalled, so I cannot tell how important it is; it all depends on altitude. Some partial answers to your questions. AR4 WG1 discusses WV extensively. The CMIP3 models all produce roughly the Clausius-Clapeyron result at all altitudes: WV rises with temperature (relative humidity ~ constant). This is observationally true over oceans at the surface, and generally observationally true over land at the surface, with desert exceptions. Paltridge published a paper in 2009 using NCEP that also showed it true globally at the surface over time as temperature rose, and mostly true in the mid troposphere, but not true in the upper troposphere where the WV feedback arises: relative humidity declined. There is substantial evidence that the models have too much WV feedback; the absence of the modeled tropical tropospheric hot spot is exhibit A. Other observational evidence is covered in essay Humidity is still Wet.
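The Clausius-Clapeyron scaling mentioned above (water vapor rising with temperature at roughly constant relative humidity) can be sketched numerically. This sketch uses the common Magnus approximation; the coefficients are one standard choice, not taken from the paper under discussion:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation to saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# At constant relative humidity, specific humidity scales with the
# saturation vapor pressure: roughly 6-7% more water vapor per degree C.
for t in (15.0, 16.0):
    print(t, "C ->", round(saturation_vapor_pressure(t), 2), "hPa")

increase = saturation_vapor_pressure(16.0) / saturation_vapor_pressure(15.0) - 1.0
print(f"fractional increase per 1 C near 15 C: {increase:.3f}")
```

This is the surface-level scaling the models reproduce; as the comment notes, the contested question is what happens in the upper troposphere, where relative humidity need not stay constant.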
What about the increasing cloud part? Isn’t this Lindzen’s iris effect?
DW and J2, Lindzen’s adaptive infrared iris paper (BAMS 2001) depended on convection cells (T-storms) becoming bigger and more violent with increased temperature and humidity. In that case the rainout reduces anvil detrainment, so reduces cirrus, and so reduces the GHE. Judith and I did back-to-back posts on this a while ago, findable via the search tool. It does not address all clouds, and therefore only loosely addresses WV-cloud interactions, and does not speak to general cloud albedo at all. Cirrus has almost no albedo because ice is mostly transparent to incoming solar. Cirrus always warms because it is mostly opaque to outgoing infrared; Lindzen’s simple observation was that less cirrus warms less.
But the more general water-cloud-albedo negative feedback potential is still there, right?
Regional population collapse followed initial agriculture booms in mid-Holocene Europe
Yes, I think Malthus is wrongly derided by the skeptic-leaning intellectual. Ehrlich gets the timing and scale wrong, but Malthusian events are a real risk. Kind of an inverse of the axiom “past performance is not a guarantee of future success.” I think the SWA is one area where population may have outpaced likely resource availability in coming decades.
As Rud has pointed out in the past (and may be Ridley too), Syria’s population grew hugely in recent decades and the mid-2000s drought wasn’t really out of the ordinary.
I think one of the biggest risks created by global warming is that we have been really lucky with unusually good weather in the ’80s and ’90s, and CO2 fertilization and hydrological responses have buffered us from productivity shocks. I think we may have developed a false sense of security, and we have not prepared for the large regional disasters that are to come.
For the US, I worry that we are putting too many eggs in the natural gas basket. We are moving to dependency on it for baseload generation, peaking capacity, and heat. I worry that we do not have enough storage, peaking capacity and distribution infrastructure to weather a large regional cold-weather disaster that persists for maybe weeks. I have not researched this at all (my brother has done some for investment research purposes, and he’s pretty comfortable with the ability to produce gas and electricity when needed; if I understand his tweets correctly, he’s a fan of NRG and thinks it’s undervalued); it’s just a hunch of mine.
Perhaps Rud and Planning Engineer can address this.
aaron, you are right to worry. It’s the old “too many eggs in one basket” problem. Although it’s more of a price problem for the foreseeable future as it relates to gas supply.
When green NGOs and politicians design our electric power supply systems, what could possibly go wrong?
Natural selection of bad science. Very thought provoking paper. Key is to change natural selection fitness. From quantity to quality. No rewards without replication.
What fraction of the researchers would you divert from research into replication? What fraction of the 2 million papers per year would you have replicated? Who is to pay for all this replication effort, which contributes no new results?
DW, a lot of papers are unimportant. So start by replicating only those with lots of citations, or important by some other metric. Mann’s TAR hockey stick is an example. Lew and Cook’s most recent dreck is of the sort where no time need be wasted. Ditto the new paper claiming higher Arctic permafrost CO2 emissions in winter, which is obviously wrong from first principles and overlooks the NH seasonality of the Keeling curve.
Second, simply bin all the ‘important but underpowered’ stuff. (One method would be to require journals to scrutinize paper statistics, and asterisk all underpowered papers. That requires only a trained on-staff statistician, who either makes the call or finds someone who can. Misuse of the P stat is rampant.) See my essay Cause and Effect for an example of how easy this is, shredding Shakun’s CO2-leads-T nonsense with a very simple reanalysis of his proxies.
And, as to proportion of time, perhaps a third. Replication is a good way to train grad students. That is how the coding goof in the Harvard paper on deficits was uncovered.
If it costs (money or opportunity loss of new research) to save scientific credibility, so be it.
So you propose to cut the amount of research done by one third, thinking this will somehow save the credibility of science? I am against that, not only because of the huge loss of research, but because I doubt it would have any effect except to pay a lot of scientists to do basically nothing except repeat what has already been done.
As for citations, these take 2 to 5 years to accrue, so we would have millions of scientists spending one third of their research time (or one third of scientists spending all their time) trying to replicate what was done 3 to 6 years ago.
Important papers with lots of citations typically get checked by those who cite them because they are using the claimed results. In the climate arena they are scrutinized by hordes of critics. The flaws in the hockey stick were pointed out very quickly.
DW, cutting research that is half wrong by a third does not solve the half-wrong problem completely. But it would be a step in the right direction. The status quo cannot hold. I am imagining multiple partial solutions. What are yours? Or do you think there is not a junk paper problem?
You disagree with Ioannidis? In that case I can easily disabuse you of that misconception. A sampler: Shakun on the CO2 lag, Marcott misconduct on paleo, O’Leary misconduct on sudden SLR, Fabricius on ocean acidification and corals, PMEL on ocean acidification and corals, … and many others called out in other Blowing Smoke essays. Just a sampler.
ristvan, there is money in them thar studies!
A bureaucracy is graded on its ability to spend its budget. A big budget for climate studies automatically results in funding for many marginal studies.
A normal person cannot keep track of all the inanities. Don’t try! Just defund the bast*rds.
Since much of the published science apparently is dreck, it would make sense to use a quarter to a third of taxpayer money to try to duplicate published papers, particularly those that are hardly cited and those that are heavily cited. Actually, if the original studies had been properly designed, they would already have a confirmation experiment designed, or would have designed confirmation into the original experiments. A basic principle of experimental design includes either adequate replication, or plans for follow-on replication as a backstop.
Good experiments are difficult to design and often are costly by their nature. Drug testing is one example; almost all psycho-social experiments use so few tests, such poor subject screening, and so little replication that the results are almost assured of being meaningless or outright wrong.
The other source of minimally useful experimentation is granting PhDs for essentially doing science-fair-like replications of well-established results. This is especially common in the softer sciences such as biology, food science, psychology and similar areas.
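The “underpowered” complaint above can be made concrete with a quick simulation: a real half-standard-deviation effect, tested with small groups, is missed most of the time. A sketch where the sample sizes, effect size, and the |t| > 2 cutoff are all illustrative assumptions:

```python
import math
import random

random.seed(1)

def detect_rate(n, effect=0.5, trials=2000):
    """Fraction of simulated two-group studies (n per group) that detect
    a real mean difference of `effect` standard deviations at |t| > 2."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((v - ma) ** 2 for v in a) / (n - 1)
        vb = sum((v - mb) ** 2 for v in b) / (n - 1)
        t = (mb - ma) / math.sqrt((va + vb) / n)
        if abs(t) > 2:
            hits += 1
    return hits / trials

print("n=10 per group:", detect_rate(10))    # most true effects missed
print("n=100 per group:", detect_rate(100))  # much higher power
```

With ten subjects per group, the true effect is detected only a minority of the time; the rest of those studies report “no effect,” or worse, get published only when a noisy large estimate happens to cross the threshold.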
When the money dries up, the crap will dry up. Until then, it’s up to you to find the diamonds in the dreck. Good luck!
Without the internet, all you have is government propaganda. I actually should say NGO propaganda. They feed each other.
We are left with the salami example: A small slice, nobody notices. That small slice, however, is a huge subsidy to a rent seeker. With that huge subsidy the rent seeker can take a little off the top and influence the subsidy-provider (politician, bureaucrat, etc.).
Watch the communication and money flows to see who is doing who. It is instructive to see how the green NGOs (and their darling, Al Gore) interact with Democratic Attorneys General.
Senator Whitehouse? Ha, ha!
Huh, an interesting thought. Development of some kind of financial option. Perhaps a market could be designed to facilitate this. It could be financed by some portion of university endowments.
Oh yeah, Rud, would you please look at my comment to AK above (https://judithcurry.com/2016/10/01/week-in-review-science-edition-57/#comment-814991)? Am I crazy?
Re: Statistical vitriol. I read that and I wanted to stand up and cheer. I was really fortunate in my training in that my mentors insisted on proper training in statistical analysis, and we had a weekly journal club where we would take turns choosing and dissecting two papers, especially including their statistical methods. We had a statistician on staff to consult with. He used to give some of us these sideways looks and say that dreaded phrase, “You can’t do that.” I didn’t realize how unusual that was until I got out into the “real” world and discovered it is exactly as described in this blog.
Plus many. Proliferation of stats packages has led to a proliferation of stats ignorance.
Yes! How many times have I seen a student with a stats package run a dataset through the program and not get the expected result, to which the solution is: adjust parameters at random and keep rerunning the program until you get the right result. I got a nasty reputation for asking people, “And what is your justification for doing this?”, which would invariably be followed by a bewildered shrug. The packages have made statistics accessible to everyone, in a manner not unlike saying that merely being blind should not prevent anyone from driving a car.
Or, I have one set of proxy data but the stupid computer program won’t go because it says I don’t have enough data points so I’ll just enter the same proxy dataset twice.
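Entering the same dataset twice is not a harmless workaround: duplication mechanically inflates apparent significance, because the residual scatter stays the same while the degrees of freedom double. A minimal sketch with made-up numbers:

```python
import math

def t_stat(x, y):
    """Slope t-statistic for a simple least-squares regression of y on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    resid = syy - slope * sxy                 # residual sum of squares
    se = math.sqrt(resid / (n - 2) / sxx)     # standard error of the slope
    return slope / se

# Invented "proxy" series for illustration:
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 1.9, 3.2, 3.8, 3.6, 5.1, 4.9, 6.0]

t_once = t_stat(x, y)
t_twice = t_stat(x * 2, y * 2)  # the same dataset entered twice
print(t_once, t_twice)          # t_twice > t_once: duplication manufactures significance
```

The slope estimate is unchanged, but the t-statistic grows by roughly a factor of sqrt(2), so a borderline result suddenly looks solid.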
Are you saying that every scientist should be an expert in statistics? That is not feasible (like most of the complaints/proposals in the science-is-broken fiasco). As someone who does scientometrics, I have actually studied these arguments and thought about what is involved. Most of it is nonsense.
Every scientist should have a statistician on his team, or at least available for consultation. That shouldn’t be too burdensome, I would think.
We are saying every statistical finding should be checked by a statistician, not that every researcher should be one. Although the abysmal statistical knowledge in the peer literature says every Ph.D. outside the humanities ought to have more training than they do. Basic stuff goes missing: heteroscedasticity, autocorrelation, Bonferroni correction, degrees of freedom (a power issue), … Come on, Dessler 2010b found ‘significant positive cloud feedback’ with an r^2 of 0.02! Big writeup on the NASA climate website. That paper would have flunked my first undergrad stats class.
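The r^2 = 0.02 point is easy to demonstrate: with enough data points, a regression slope can be “statistically significant” while explaining almost nothing. A quick simulated sketch, where the effect size and sample size are arbitrary choices:

```python
import math
import random

random.seed(0)

# A weak linear relation buried in noise: the slope passes the usual
# significance test, yet r^2 says it explains ~2% of the variance.
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.15 * xi + random.gauss(0, 1) for xi in x]

mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

r2 = sxy * sxy / (sxx * syy)              # coefficient of determination
slope = sxy / sxx
resid = syy - slope * sxy                 # residual sum of squares
se = math.sqrt(resid / (n - 2) / sxx)     # standard error of the slope
t = slope / se
print(f"r^2 = {r2:.3f}, t = {t:.1f}")     # tiny r^2, yet |t| well above 2
```

“Significant” here only means the slope is probably not exactly zero; it says nothing about whether the relationship is strong enough to matter.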
You want to use stats (or computer models), you should have to prove you know what you are doing. The best way is to require everyone to publish all the raw data, all the scripts and all the input values as supplementary data so anyone else can recheck the findings, and for every journal to have a mechanism to shoot down crap. Watch how quickly people would start double-checking data with a good statistician. Instead we have ‘scientists’ (and I use the term loosely) hiding their raw data, suing people for slander when challenged, and claiming a script is somehow proprietary.
HERE IS PART ONE.
Here is what I have concluded. My explanation of how the climate may change conforms to the historical climatic data record, which has led me to this type of explanation. It does not try to make the historical climatic record conform to my explanation. It is in two parts.
HOW THE CLIMATE MAY CHANGE
Below are my thoughts about how the climatic system may work. It starts with interesting observations made by Don Easterbrook. I then reply and ask some intriguing questions at the end which I hope might generate some feedback responses. I then conclude with my own thoughts to the questions I pose.
From Don Easterbrook – Aside from the statistical analyses, there are very serious problems with the Milankovitch theory. For example: (1) as John Mercer pointed out decades ago, the synchronicity of glaciations in both hemispheres is ‘a fly in the Milankovitch soup’; (2) glaciations typically end very abruptly, not slowly; (3) the Dansgaard-Oeschger events are so abrupt that they could not possibly be caused by Milankovitch changes (this is why the YD is so significant); and (4) since the magnitude of the Younger Dryas changes was from full non-glacial to full glacial temperatures for 1000+ years and back to full non-glacial temperatures (20+ degrees in a century), it is clear that something other than Milankovitch cycles can cause full Pleistocene glaciations. Until we more clearly understand abrupt climate changes that are simultaneous in both hemispheres, we will not understand the cause of glaciations and climate changes.
I agree that the data gives rise to the questions/thoughts Don Easterbrook presents above. That data in turn leads me to believe, along with the questions I pose at the end of this article, that a climatic variable force which changes often, superimposed upon the climate trend, has to be at play in the changing climatic scheme of things. The most likely candidate for that climatic variable force is solar variability, because I can think of no other force that can change or reverse in a different trend often enough, and quickly enough, to account for the historical climatic record, and that can perhaps produce primary and secondary climatic effects. I feel these are a significant player in glacial/inter-glacial cycles and counter climatic trends, when taken into consideration with these factors: land/ocean arrangements; mean land elevation; mean magnetic field strength of the earth (magnetic excursions); the mean state of the climate (average global temperature gradient, equator to pole); the initial state of the earth’s climate (how close it is to the interglacial-glacial threshold condition / average global temperature); the state of random terrestrial events (a violent volcanic eruption, or a random atmospheric circulation/oceanic pattern that feeds upon itself, possibly) and extraterrestrial events (a supernova in the vicinity of earth, or a random impact); along with Milankovitch Cycles, and maybe a role for Lunar Effects.
What I think happens is this: land/ocean arrangements, mean land elevation, mean magnetic field strength of the earth, the mean state of the climate, the initial state of the climate, and Milankovitch Cycles keep the climate of the earth moving in a general trend toward either cooling or warming, on a very loose cyclic or semi-cyclic beat (1470 years or so), but this gets consistently interrupted by solar variability and its associated primary and secondary effects, and on occasion by random terrestrial/extraterrestrial events, which bring about at times counter trends in the climate within the overall trend. At other times, the factors I have mentioned that set the gradual background for the climate trend (land/ocean arrangements, mean land elevation, mean state of the climate, initial state of the climate, Milankovitch Cycles) drive the climate of the earth gradually into a cooler or warmer trend (unless interrupted by a random terrestrial or extraterrestrial event, in which case the climate would be driven to a different state much more rapidly, even if it was initially far from the glacial/inter-glacial threshold, or whatever general trend it may have been in) UNTIL it is near that inter-glacial/glacial threshold or climate intersection. At that point, any solar variability and its associated secondary effects, or other forcing no matter how SLIGHT, can be enough not only to promote a counter trend in the climate, but to cascade the climate into an abrupt climatic change.
The background for the abrupt climatic change is in the making all along, until the threshold glacial/inter-glacial intersection for the climate is reached. That then gives rise to the abrupt climatic changes that occur, which possibly feed upon themselves while the climate is around that glacial/inter-glacial threshold, resulting in dramatic, semi-cyclic, constant swings in the climate from glacial to inter-glacial while factors allow such an occurrence to take place. This was the case from 20,000 years ago to 10,000 years ago.
The climatic background factors (those previously mentioned) drive the climate gradually toward or away from the climate intersection or threshold of glacial versus interglacial. When the climate is at the intersection, the climate gets wild and abrupt; once away from that intersection, the climate is more stable.
Random terrestrial and extraterrestrial events could sometimes account for some of the dramatic swings in the climatic history of the earth (perhaps to the tune of 10%), while solar variability and its associated secondary effects are superimposed on the otherwise gradual climatic trend, producing counter-trends no matter what the initial state of the climate is. The further the climate is from the glacial/inter-glacial threshold, however, the less dramatic the overall climatic change should be, all other things being equal.
The climate is chaotic, random, and nonlinear. In addition, it is never in the same mean state or initial state, so a given forcing applied to the climatic system always produces a different outcome, although the semi-cyclic nature of the climate can still be discerned to a degree amid all the noise and the counter-trends within the main trend.
Why is it that whenever the climate changes it does not stray indefinitely from its mean in either a positive or negative direction? What ALWAYS brings the climate back toward its mean value? Why does the climate never keep going in one direction once it heads that way?
Along those lines, why is it that when the ice sheets expand, the higher-albedo/lower-temperature/more-ice-expansion positive feedback cycle does not keep going once it is set into motion? What causes it not only to stop but to reverse?
Conversely, why did the Paleocene–Eocene Thermal Maximum, once set into motion as a higher-CO2/higher-temperature positive feedback cycle, not feed upon itself? Again, it not only stopped but reversed.
My conclusion is that the climate system is always in a general, gradual, semi-cyclic trend toward a warmer or cooler climate. At times this brings the system toward thresholds that make it subject to dramatic change from the slightest additional force superimposed on the general trend; at other times the climate is subject to randomness from terrestrial or extraterrestrial events, which can set up a rapid counter-trend within the general slow-moving climatic trend.
Despite this, if enough time goes by (much time), the same factors that drive the general gradual warming or cooling trend will prevail, carrying the climate away from the glacial/inter-glacial threshold conditions they had once brought it toward, eventually ending periods of abrupt climatic change, or reversing over time dramatic changes caused by randomness, because the climate is always under a semi-cyclic, partly extraterrestrial beat that stops it from going in one direction for eternity.
NOTE 1 – Thermohaline circulation changes are, in my opinion, more likely when the climate is near the glacial/inter-glacial threshold, probably due to greater sources of fresh water input into the North Atlantic.
Along those lines, why is it that when the ice sheets expand, the higher-albedo/lower-temperature/more-ice-expansion positive feedback cycle does not keep going once it is set into motion? What causes it not only to stop but to reverse?
The correct answer is really simple. Occam would have spotted this right away. Look at the ice core data. When the oceans are warm and the polar oceans are thawed, ocean-effect snowfall rebuilds the ice on land: Antarctica, Greenland, and high mountain glaciers. The ice piles up, spreads out, and the increased ice extent makes the earth cold. After it gets cold, the polar oceans freeze and ocean-effect snowfall drops by a huge amount. The ice continues to spread and deplete as ice melts every year and there is not enough snowfall to replace what melts. The ice depletes, runs out of enough weight to push ice outward faster than melt rates, ice extent decreases, and the earth warms again. The polar oceans freeze and thaw at the same temperatures. That provides the thermostat that turns snowfall on and off to regulate ice volume, which regulates ice extent.
This ice cycle is easy to understand and is clearly supported by ice core data. Ice accumulation rates are highest in warmest times and lowest in coldest times.
interrupted by a random terrestrial or extraterrestrial event, in which case it would drive the climate to a different state much more rapidly even if the climate initially was far from the glacial/inter-glacial threshold, or whatever general trend it may have been in
Ice extent and water dumps. The earth warms rapidly as ice extent decreases, and cools rapidly as ice extent increases. Temperature drops suddenly when meltwater from glaciers breaks free and enters the oceans in huge surges of ice-cold water that had thawed but was blocked from entering the oceans until it overcame the land and ice dams holding it back. This cooling is associated with rapid sea level rises. It is all in the data.
Salvatore, I agree with you about a lot of this. The reason for the central trend is that global temperature can be shown to be a centrally biased random walk with no significant trend; see:
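A minimal numerical sketch of what a "centrally biased random walk" looks like: an AR(1) process whose steps are random noise plus a weak pull back toward the mean. The parameter values below are illustrative only, not fitted to any temperature record or to the commenter's analysis.

```python
import random

def centrally_biased_walk(n_steps=10000, pull=0.02, noise=0.1, seed=42):
    """Toy centrally biased random walk: x[t+1] = (1 - pull) * x[t] + e[t],
    where e[t] is Gaussian noise. The weak pull toward zero keeps the
    walk bounded while still letting it wander on short timescales."""
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(n_steps):
        x = (1.0 - pull) * x + rng.gauss(0.0, noise)
        path.append(x)
    return path

path = centrally_biased_walk()
# The walk meanders and shows trends over short stretches, but it keeps
# returning toward its mean instead of drifting off indefinitely.
print(max(abs(v) for v in path))
```

With pull set to zero this becomes a pure random walk and does drift without bound; any positive pull, however slight, produces the "always comes back toward its mean" behavior asked about above.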
Note also that 10Be flux rates indicate that DO events, the Younger Dryas and Termination I may be generated by different mechanisms. See
There is a plausible explanation of the sudden onset of Terminations and the YD which I am happy to discuss with anyone who may be interested.
So all along it’s been a math problem and not a physics one? Re: 40 earths.
“Creating an envelope of what can be considered natural also makes it possible to see when the signal of human-caused climate change has pushed an observation beyond the natural variability. The Large Ensemble can also clarify the climate change “signal” in the model. That’s because averaging together the 40 ensemble members can effectively cancel out the natural variability — a La Niña in one model run might cancel out an El Niño in another, for example — leaving behind only changes due to climate change.”
Color me a bit skeptical, but built into this discussion is the following:
1) Models (in this case 40 of them) are all inaccurate; otherwise, wouldn’t only one be needed?
2) If one uses 40 inaccurate resources, how does this substantiate ‘the envelope’ in which ‘natural climate’ is contained?
3) If one ‘averages’ only 39 of the inaccurate models, does that not make the results invalid?
4) El Ninos and La Ninas apparently have equal and opposite impacts?
Bothersome to me.
‘Natural’ is a word invented by exorcists, poets, and marketing executives.
There ain’t no envelope of what can be considered natural, unless one is created with consideration.
When creating an envelope of what can be considered natural, consider real data and throw the model output away.
Skeptics don’t usually doubt that natural internal variability can be large locally, so the response to this article that shows that a model confirms that view is surprising.
It’s not a question of the ‘natural variability’. If the verification is done via a set of inaccurate models, is it actually verified, or is it happenstance that the models managed to encompass the range?
What do you think being ‘skeptical’ means?
I was trying to figure out where the disappointment was. How else would you test the range of natural variability? There is no other way.
So what determined what was ‘outside’ the so-called natural variability? If 40 (or 39?) are inaccurate, why would one accept the bounds as in any way useful? They exist, but so what? The ‘ensemble’ is 97.5% inaccurate. Is anyone suggesting they know the pathway? Doubt it. I don’t see that they suggest they do.
I don’t see it as ‘disappointment’ at all. It’s not about the variability. Guess I wasn’t clear enough when I said that the first time.
It’s exactly about the magnitude of natural internal variability, and why you can’t predict the climate with any more certainty than the range shown. Think of it as demonstrating the internal chaotic contribution to trends, even without solar and volcanic variations superimposed. It is just the atmosphere, ocean, ice and land doing this. This is a variation on Lorenz with a more complex model.
Errr… surely you look at the range of natural variability in the instrumental records and add in the LIA and the MWP? Very much colder than today and somewhat warmer. If we were to look at the Minoan and Roman warm periods, the variability range increases further at the top end.
Take the period from 1963-2012 over North America, like they have, how do you know how much of that trend is natural variability? In this study, some of the ensemble members gave trends like the observed ones, while others were on either side. That’s a demonstration of the magnitude of natural internal variability.
According to the model programmers, Jim D.
Climate modeling is a joke. They had all the historical data when they developed the model. And they still couldn’t track trends outside of the late 20th Century.
Part of tracking trends more accurately is getting the forcing right. The GHG part is fairly well known in the last 60 years, but the aerosols, sun and volcanoes add some uncertainty of their own to the forcing over this period.
The “GHG part” has been known for over 100 years and modelers cannot get it right, Jim D. If it wasn’t for ridiculous volcanic eruption assumptions, models would have gone off the rails long ago.
Is forcing all there is, my friend? If it is, just keep on dancing; and bring out the booze.
What about clouds? You know, the low ones?
What about humidity at the water vapor emission layer? You know, the hot spot?
No, forcing is not all there is. If you read the article there is also natural internal variability, and that is also quantifiable from studies like this.
B.S., Jim D. Using modeled results to verify models?
Get out. Look around. They are finding human artifacts in the wake of glacial retreat. They are finding ancient treelines North of current lines. They are finding Viking graves in the current permafrost. Ice and sediment cores show warmer climes in the past.
And you want us to base important decisions on a model? I’ll just defer any decisions to people in the future who have a better handle on the issues, better technology and more money.
Yes, climate is always changing, not anything like this fast on a global scale, but it is always changing (see Milankovitch and paleoclimate).
Oh, really? So temperatures in the past have not changed as rapidly as those in the last 30 years? What about 1915 to 1945?
I didn’t say that, but this one is more sustained and just adds on top of what happened earlier. The rise rate in the last century is 20 times faster than the decline rate in the last millennium.
Trends? We don’t need no steenkin’ trends!
The same data shows this. You are free to choose how you want to hide the trend, but there it is.
Jim, please. The graph shows something you may not want shown: A linear relationship between GISTEMP and CO2.
Since T is a log2 function of CO2, they should not track.
Try again, this time with a longer timeframe.
Actually in a short period where we had only a third of a doubling, linear fits well, as you can see, but the CO2 rise rate was accelerating, so nothing is really linear in this period. It works out to 2.3 C per doubling.
Show me the longer timeframe, Jim!
Do you have CO2 in a longer timeframe? This timeframe accounts for 75% of the CO2 rise and 75% of the temperature rise since pre-industrial times.
But, except for an El Nino blip in 2014-2016, temperatures for the past 20 years or so have not tracked CO2. Thanks to India and China, assisted by the failure of Western CO2 reduction programs, CO2 has continued to rise. No climate problem!
You may notice that the average El Nino of 2016 was much warmer than the super El Nino of 1998. There is a trend in El Nino peaks and La Nina troughs, just as in the mean.
One: No two El Ninos are the same.
Two: The current El Nino started in 2014, along with the Blob. They have been driving world temperatures since. Super or not, this El Nino has had profound effects.
Three: World average temperatures plateaued at a slightly higher temperature beginning in the late 20th Century.
Four: Climate models only track over their parameterization periods. Other than that, their temperature trends vary wildly from actuals.
Five: Green NGOs lie.
Certainly this El Nino makes it hard to talk about the pause with a straight face.
The main trouble as I see it is that you don’t define ‘accurate’, Danny.
We might start with “all models are always wrong”. So are ‘observations’. It would be trivially easy for me to parlay Uncertainty Monsters into an endless hit-parade of claiming we don’t know nuffin’ about nuffin’. It’s easy to see why one might not find that state of affairs at all ‘disappointing’.
Thank you, thank you, Brandon. I agree, we actually “don’t know nuffin’ about nuffin’” as it relates to IPCC climate modeling. But let’s go spend a few trillion on nuffin’! Twits.
“The main trouble as I see it is that you don’t define ‘accurate’, Danny.”
Fair enough. A little surprised if I misused the word, but here it is: “(of information, measurements, statistics, etc.) correct in all details; exact.”
“We might start with ‘all models are always wrong’.” Sure, but are they useful? This ensemble is 40 runs.
“The Large Ensemble helps resolve this dilemma. Because each member is run using the same model, the differences between runs can be attributed to differences in natural variability alone. The Large Ensemble also offers context for comparing simulations in a multi-model ensemble. If the simulations appear to disagree about what the future may look like—but they still fit within the envelope of natural variability characterized by the Large Ensemble—that could be a clue that the models do not actually disagree on the fundamentals. Instead, they may just be representing different sequences of natural variability.”
Instead, they MAY just be representing different sequences of natural variability.
I’m not a modeler. But this leads me to ask … or?
The study’s assertion that “the differences between runs can be attributed to differences in natural variability alone” is a rather heroic assumption. There is no discussion of how a minuscule perturbation in initial temperatures could interact differentially between the natural math and the assumed man-made math within the model’s math. It’s the math, stupid.
““the differences between runs can be attributed to differences in natural variability alone” is a rather heroic assumption.”
Exactly my concerns.
Because they do not explain the basis of that assumption. A rather major oversight, in my opinion.
Even if they had how could it be correct? If the only perturbation to the model was temperature, this must mean that all other mechanisms (El Nino as a simple representation) must have been maintained as originally set. Can’t imagine there has been a standing Nino (or La Nina or a lack of either) for the entire time frame represented.
“We gave the temperature in the atmosphere the tiniest tickle in the model — you could never measure it — and the resulting diversity of climate projections is astounding,” Deser said. “It’s been really eye-opening for people.” Earth’s climate is not a set of algorithms within which only one perturbation occurs at a time.
I’m still skeptical.
This is indicative of their mindset: “We gave the temperature in the atmosphere the tiniest tickle in the model — you could never measure it — and the resulting diversity of climate projections is astounding,…”
They think the model is real, a real expression of the climate. There is no healthy skepticism! They think the results of their modeling (mental masturbation) are what happens in the real world.
They think that the perturbations of their model define the real envelope of natural climate variations, and present it as such. Their hubris is amazing!
Maybe Jim D, in his sublime understanding of everything climatey sciency can elucidate?
Even described in the link thusly: “The result, called the CESM Large Ensemble, is a staggering display of Earth climates that could have been along with a rich look at future climates that could potentially be.”
And all of that, if only the earth’s climate would have done what we told it to do, and will do it in the future the way we tell it to. Models rule!
If you perturb the initial conditions of models, the output helps you understand the model’s performance only. To draw inferences from that about what is happening in the real world, one needs to have established that the model is modeling the real world.
The second issue is that perturbations in initial conditions only start to show some of the uncertainty in the model results (and there really is little basis for claiming this to be the natural variability). Other examples include parameter uncertainty, etc. It is perhaps a lower bound on the uncertainty.
On the GHG vs. temp comparison: to even begin, you need to write down algebraically the relationship you are seeking to show. ORNL has time series for anthropogenic GHG emissions going back in time that you could use; however, the time element shouldn’t be central, apart from helping to deal with lags and autocorrelations. Primarily, the graph should show your postulated relationship between a function of AGHG emissions and a function of some measure of global temp.
This experiment is an extension of the one done by Lorenz showing how chaos grows. His model had three interdependent variables and two modes. The climate models have millions of each, but their perturbation growth has similar mathematical properties, and the model’s climate/weather modes, although complex, are like those seen in reality.
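For readers who have not seen Lorenz’s result, here is a minimal sketch: a toy forward-Euler integration of the Lorenz 1963 equations (not anything resembling a climate model’s numerics), showing how a round-off-sized perturbation, analogous to the Large Ensemble’s unmeasurably small temperature tickle, grows until the two runs are completely different.

```python
import math

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system (a crude toy
    integrator chosen for brevity, not for accuracy)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def sep(p, q):
    """Euclidean distance between two states."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(p, q)))

# Two runs whose initial states differ by a round-off-sized amount.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-12, 1.0, 1.0)
early_sep, max_sep = None, 0.0
for step in range(5000):              # 50 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_sep = max(max_sep, sep(a, b))
    if step == 99:
        early_sep = sep(a, b)         # still tiny after 1 time unit

# The gap grows roughly exponentially until it saturates at the size of
# the attractor: identical equations, indistinguishable starting points,
# completely different "weather" trajectories.
print(early_sep, max_sep)
```

This is the sense in which the 40 ensemble members, differing only at round-off error in 1920, can legitimately diverge into 40 distinct temperature histories.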
Regarding the graph, the CO2 growth is fairly smooth, as is the 30-year temperature, and they match well when scaled, as I do, to 100 ppm per degree, which is 2.4 C per doubling in the 300-400 ppm range.
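The scaling described above can be checked with two lines of arithmetic: if temperature responds logarithmically to CO2, ΔT = S · log2(C/C0), then 1 °C per 100 ppm across the 300–400 ppm range implies S ≈ 2.4 °C per doubling. This is only the commenter’s back-of-envelope scaling made explicit, not a fit to any actual dataset.

```python
import math

def sensitivity_per_doubling(delta_t, c_start, c_end):
    """Infer S from delta_t = S * log2(c_end / c_start): the implied
    warming per CO2 doubling, given an observed change over a range."""
    return delta_t / math.log2(c_end / c_start)

# 1 degree C per 100 ppm over the 300-400 ppm range:
s = sensitivity_per_doubling(1.0, 300.0, 400.0)
print(round(s, 1))  # prints 2.4
```

Note that the implied sensitivity depends on where in the concentration range the 100 ppm falls, which is why the same 1 °C/100 ppm rule of thumb yields slightly different per-doubling numbers over different intervals.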
Model runs are not “experiments.” They simply vary inputs to get the modeler’s assumptions about mathematical relationships. Get a grip.
Since, outside the parameterization zone models fail to track observations, the math doesn’t work, does it?
And did you scale the 1915 to 1945 temperature curve to the CO2 curve over that time period, Jim? Did you get the 100 ppm per degree? Cherry picking the time period? Then, again, we could play with aerosols.
Do you expect 1915 to 1945 not to be partly solar? There was a solar lull in 1910, like the one in 2010, while by 1940 the sun was at its strongest of the 20th century. But I guess you ruled that contribution out already, for some reason you won’t specify.
By the way, yes, 100 ppm per degree from the last 60 years of fairly precise CO2 data. Deny it or not.
Take the period from 1963-2012 over North America, like they have, how do you know how much of that trend is natural variability?
All of it is inside the bounds of the natural cycles of the past ten thousand years, so all of that trend is natural variability.
It doesn’t look natural on that time scale, and will look even less so in the future.
And a hockey stick up yours, too. 3 degrees C above MWP?
To anyone with an ounce of sense. To anyone believing in the magic of CO2, maybe not.
the model’s climate/weather modes, although complex are unlike anything seen in reality.
Clearly you have not seen comparisons of animations. Search around. They are easy to find.
charlie didn’t read it, or didn’t comprehend the article, because it says in 1920 the simulations only differed by a trillionth of a degree, which is the whole point.
So, in 1920, simulations that did not exist then all had the same wrong non-number.
Exactly. You got it.
Got milk? Same mental effort.
charlie and Danny, do you have this much trouble understanding the relevance of Lorenz’s model to the atmosphere too?
“do you have this much trouble understanding the relevance of Lorenz’s model to the atmosphere too?”
I’ll have to answer yes if the methodology is the same (I’m not familiar with it). In this work, they used slab oceans (does that exist?) as one example of a condition that was maintained while temperature was perturbed (at an “unmeasurable” level). If the parameters are not a reasonable representation of the climate, how can the results of 40 or 1000 runs define ‘natural variability’, except as a matter of good fortune? If you build a large enough structure, it also happens to be large enough.
Natural variability is beyond our ability to mathematically represent. By definition, natural variability is driven by forces we do not understand. AMO, PDO, ENSO, clouds, upper atmospheric humidity, you name it.
IPCC climate models are bunk. Use them and you are either a twit or a liar.
“Climate models are bunk”: I won’t go that far, but as BrandonRGates suggests (and I agree) they are all inaccurate.
This does not make them un-useful.
Danny, they are downright dangerous when used to set governmental climate policy.
“Danny, they are downright dangerous when used to set governmental climate policy.”
Of course the usefulness would have to be selective.
They are oftentimes used for propaganda.
The 40 members are fully coupled with ocean models, as in CMIP5. The true climate appears within the range of this ensemble. There’s only certain ways that the energy can be distributed within the atmosphere, ocean and land globally, and this ensemble captures those modes of variation. Does it capture every possibility? No. Does it sample them well enough? Yes, because reality is within their range. Forty samples should be enough to give a robust mean and variability about it.
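The claim that averaging the members cancels internal variability can be illustrated with a toy ensemble: each member is the same forced trend plus independent “weather” noise, and the noise in the ensemble mean shrinks roughly as 1/√N. This is a statistical sketch under idealized independence assumptions, not the CESM procedure itself; real members share a model and have autocorrelated variability.

```python
import random

def toy_ensemble(n_members=40, n_years=90, trend=0.02, noise=0.15, seed=1):
    """Each member = shared forced trend + independent Gaussian noise
    standing in for internal variability. Purely illustrative."""
    rng = random.Random(seed)
    return [[trend * year + rng.gauss(0.0, noise) for year in range(n_years)]
            for _ in range(n_members)]

members = toy_ensemble()
n_years = len(members[0])
ensemble_mean = [sum(m[y] for m in members) / len(members) for y in range(n_years)]

# Compare one member and the ensemble mean against the true forced trend:
true = [0.02 * y for y in range(n_years)]
member_err = max(abs(a - b) for a, b in zip(members[0], true))
mean_err = max(abs(a - b) for a, b in zip(ensemble_mean, true))
print(member_err, mean_err)  # averaging shrinks the noise markedly
```

With 40 independent members the residual noise in the mean is about 0.15/√40 ≈ 0.024, which is the sense in which "a La Niña in one run might cancel an El Niño in another", leaving mostly the forced signal.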
“Forty samples should be enough to give a robust mean and variability about it.”
If you build a large enough enclosure, the enclosure is large enough.
Did I misread this: ” For example, CESM/SOM now runs within the full CESM model, including the coupler, using active atmosphere, land, and ice components, and a slab ocean.”? Coulda sworn that says slab ocean.
Think I read this elsewhere and will continue to look for that source for you.
That seems to be for paleo. This is more accurate, because the articles link to it.
Yep. That was it. “The Large Ensemble Project includes a 40-member ensemble of fully-coupled CESM1 simulations for the period 1920-2100. Each member is subject to the same radiative forcing scenario (historical up to 2005 and RCP8.5 thereafter), but begins from a slightly different initial atmospheric state (created by randomly perturbing temperatures at the level of round-off error). The Large Ensemble Project also includes a set of multi-century control simulations with the atmosphere, slab-ocean, …”
I’m still plenty comfortable suggesting that if one builds a large enough enclosure, the enclosure is large enough. In part, I suspect, this is the reasoning behind using RCP8.5.
Also, if a slab ocean isn’t an inclusion (the above references control runs only), what states were chosen? Nino-neutral would make the most sense, but would miss the effects of both El Nino and La Nina in ‘natural variability’. Makes one wonder whether the large enough enclosure would therefore be ‘large enough’.
It is a coupled ocean, so it would have its own version of an ENSO cycle. This is probably what makes the most difference to the variability on sub-decadal scales.
If it is not a slab ocean (and I still cannot state so with supporting documentation, as I can’t find it for CESM1 specifically), then if a perturbation of temperature was done and the model held ‘its own version of an ENSO cycle’, and that version was opposite of the actual one, might the corresponding ‘natural variability’ therefore be misrepresented?
The link I provided said coupled ocean. ENSOs are part of the natural internal variability. They say that if the weather was only slightly different in 1920, those cycles would also be different and 1998 likely would not have been a peak, but other years would have been instead.
This link? http://www.cesm.ucar.edu/projects/community-projects/LENS/
It’s where I got the quote which says slab ocean but only in reference to the multi century controls. It’s silent on the CESM1.
Yes, this part.
“All simulations are performed with the nominal 1-degree latitude/longitude version of the Community Earth System Model version 1 (CESM1) with CAM5.2 as its atmospheric component. The Large Ensemble Project includes a 40-member ensemble of fully-coupled CESM1 simulations for the period 1920-2100”
CESM Version 1.2 documentation, “Ocean”: The data ocean component has two distinct modes of operation. It can run as a pure data model, reading ocean SSTs (normally climatological) from input datasets, interpolating in space and time, and then passing these to the coupler. Alternatively, docn can compute updated SSTs based on a slab ocean model where bottom ocean heat flux convergence and boundary layer depths are read in and used with the atmosphere/ocean and ice/ocean fluxes obtained from the coupler.
I don’t think we know how it was done.
“Fully-coupled” has only one meaning. A full ocean model coupled to the atmosphere model.
“A CESM component set is comprised of seven components: one component from each model (atm, lnd, rof, ocn, ice, glc, and wav) plus the coupler. Model components are written primarily in Fortran 90/95/2003.”
Ocean: “The data ocean component has two distinct modes of operation. It can run as a pure data model, reading ocean SSTs (normally climatological) from input datasets, interpolating in space and time, and then passing these to the coupler. Alternatively, docn can compute updated SSTs based on a slab ocean model where bottom ocean heat flux convergence and boundary layer depths are read in and used with the atmosphere/ocean and ice/ocean fluxes obtained from the coupler.”
What makes you certain it was POP?
The above documentation shows many more components than only ocean and atm. And within ocean it shows two modes for the data ocean, one of which is slab.
I’ve heard of CMIP, but not FCMIP.
The C in CMIP stands for coupled, as in a full ocean model. This is how all the IPCC groups had to run CMIP5 to qualify.
You can also look at Figure 1 in the paper.
I know the C represents coupled. That’s why I stated I’d never seen one described as (F as in fully) FCMIP.
Fig 1 (thanks for that link; I only had the paywalled version) does show POP, and that confirms it.
But as suggested before, it really doesn’t matter, in that if you construct a large enough enclosure, the enclosure is large enough.
They could choose to run POP (as they did according to Fig 1) or could have gone slab. Again, from the documentation it states one could use docn in lieu of pop and in doing so: “Alternatively, docn can compute updated SSTs based on a slab ocean model where bottom ocean heat flux convergence and boundary layer depths are read in and used with the atmosphere/ocean and ice/ocean fluxes obtained from the coupler.”
Guess we’ll leave it there. You have all the answers and I don’t.
There are no answers; only dogma.
The slab ocean would not have given any ENSO and has fewer degrees of freedom than the coupled model and the real climate. That would not have been a very useful experiment for capturing the natural variability of the coupled system.
“The slab ocean would not have given any ENSO and has less degrees of freedom than the coupled model and the real climate.”
This is why I was asking how it was performed. But even so, were the ENSOs modeled as they occurred (we have observations on which to base this for the sampled time period)?
I’d be curious whether the perturbations of temps (the only changes made) resulted in the actual (observation-based) climate response of ENSO.
Going back to Dr. C’s linked article: “That’s because averaging together the 40 ensemble members can effectively cancel out the natural variability — a La Niña in one model run might cancel out an El Niño in another, for example — leaving behind only changes due to climate change.” It states ‘can’ but did it in actuality? They did control runs. There are observations.
I don’t have a clue how to parse this out.
I think from the spread shown in the paper, the modeled yearly variability was similar to observed. Most of this spread is likely ocean-caused variability because the atmosphere alone can’t cause year-long perturbations.
That chart is labeled GMT anomaly. How does that address the ENSO? If we can’t verify the ENSO the temps should be suspect.
ENSOs are a large part of the natural internal chaotic variability that these wiggles show. What do you mean by “verify the ENSO”? Each run will have ENSO wiggles in different phases. Do you think they have to match the real ones year by year, or something?
” Do you think they have to match the real ones year by year, or something?”
Good and interesting question. The answer is yes and no. In order to make general characterizations of ‘the climate’ for the time period, it does not need to be specific. But to claim that this work ‘defines’ natural variability, it does. Would this year have been as warm as it has been had the timing of the most recent El Nino been different? The same applies to 1998.
There is no chance of matching the real ENSO because of the uncertainty of the initial conditions and the chaotic nature of the coupled system. They can barely even predict ENSOs a year ahead. Given that, any year would have a probability distribution of temperatures with a width near 0.5 C for annual temperatures, and El Ninos would put it near the top of that distribution, so 2016 would be like that, 0.25 C above the mean trend line, as it is.
“There is no chance of matching the real ENSO because of the uncertainty of the initial conditions and the chaotic nature of the coupled system. They can barely even predict ENSOs a year ahead.”
No disagreement. But they’re the ones touting that this ensemble ‘defines’ natural variability. If they can’t get this one segment correct, that pronouncement is suspect. If the timing of all the other ‘components’ of climate is off, how can they characterize it as having a handle on nature?
It defines the range of natural internal variability. I have never been sure of what else you took it to mean, but that was the meaning.
“It defines the range of natural internal variability.” I understand that that is what it proposes to do. Accepting that premise based on inaccurate modeling (and it’s inaccurate 40 times) is not something with which I’m comfortable.
Since you seem to have such comfort with the results, can you tell me the range and the level of probability? Recognizing that if the probability is not 100%, the range remains undefined.
The range, as I said, looks like about 0.5 C for the annual temperature, just visually. Also they showed maps for North America that show local natural variability in trends, and actual trends within the ensemble range. We know that there are forced parts to natural variability that were not counted, mainly from the sun and volcanoes, but this was focused on internal modes of variability for a given forcing.
“The range, as I said, looks like about 0.5 C for the annual temperature, just visually.”
Temperature is not climate; it’s only a part. The premise is ‘natural variability’, not natural variability of temperature.
Sure, they made all their data available. If you have other things in mind, it’s there too. Most people mean temperature, but maybe you want to look at drought frequencies and extremes and people are using this dataset to do that too. They advertise its availability for studies. The paper is about the availability of the dataset which is new.
” Most people mean temperature”
And the title: ” 40 Earths. NCAR’S LARGE ENSEMBLE REVEALS STAGGERING CLIMATE VARIABILITY”
I think maybe only you would see ‘temperature’ in that.
For me, when I think climate, temperature is a minor portion.
I looked at the article and that was mostly what they presented plus blocking frequencies.
Danny, if I were interested in any historical climate metric, I’d just look up the relevant data set. Over time, that data would logically provide an accurate representation of “climate variability” for my location of choice. Why do I need a model?
Sure. Data (observations) are a much better metric but outside the ranges for which we have the data the models can be a useful proxy. But as you stated before, I’d be hard pressed to base policy decisions on them especially since some have proven to be ineffective.
Interesting conversation and I’ve learned more and I thank you guys for that (as always). I fear I’ve been a bit of a blog hog once again and for that I apologize.
Let me get this straight. You are saying that a slightly perturbed ensemble of a climate model “…defines the range of natural internal variability.” of the whole climate system.
This is from a model that can’t do: AMO, PDO, ENSO, clouds, upper atmospheric humidity and a whole host of other physical properties of the climate?
And why do you think I should trust the modelers? Because they work for the government?
The model has those modes. What are you talking about?
Has anybody noticed that the post-2000 model outputs don’t track observations? Could it be that models don’t predict very well?
Forcing assumptions in CMIP5 may not have accounted for the solar lull and increased aerosols from China in the early 2000’s. But, as you see, it is still within the natural variability range anyway.
Post facto rationalizations for model failures.
“Natural variability range?” Sez who? What a fiction.
Says the paper.
Look, Jim, smear a model ensemble all you want, it is still not natural variability. Whenever an IPCC climate model gets outside its parameterization zone it goes wacko, forward or backward.
The POP ocean model in your table.
Thank you, Jim, for proving my point again by trying again and again to defend the indefensible. Describing a model ensemble output as “…reality is within their range.” is something a soothsayer would tell you.
Models are developed based on a modeler’s understanding of the physics and a particular climatic history, leavened with a bunch of parameterizations. If an ensemble of perturbed outputs can’t bracket reality, we’ve wasted millions! If the 40 run ensemble doesn’t, just up the number of runs.
It’s in their predictions that models primarily fail, but their hindcasts are also pretty terrible; outside the parameterization periods models do not track well with reality.
Blasting a wall with a shotgun would give you a comparable prediction of the future as would a typical IPCC climate model.
What do you think of models like the ECMWF one that Judith relies on as part of her side-business? We are getting useful five-day forecasts of Hurricane Matthew from these types of models that you just dismiss.
Wrongo, Jimbo! I said IPCC climate models are bunk. Weather models, especially ECMWF, are validated. Get a grip.
Same physics, same principles.
Different models, different time periods, etc. This is getting tedious, Jim D.
Same physics applies to different time periods.
Same physics but only differing time scale?
Great. So why is weather forecast and climate projected (after once being predicted)?
Is the problem with:
The difference is that predictability breaks down by ten days. Lorenz-style butterfly-effect chaos sets in. All you can project is consequences of changing forcing on the envelope of possible weather states, and it can shift that envelope by a few standard deviations.
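That predictability horizon is easy to demonstrate with a toy chaotic system. Below is a minimal sketch, not a climate model: the Lorenz-63 equations with a crude forward-Euler integrator, run twice from initial states differing by one part in 10^12 — the same flavor of perturbation the CESM ensemble used.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy illustration only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def run(state, n_steps):
    for _ in range(n_steps):
        state = lorenz63_step(state)
    return state

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-12, 0.0, 0.0])   # a "trillionth of a degree" style perturbation

# Early on, the two trajectories are effectively identical...
print(np.abs(run(a, 100) - run(b, 100)).max())
# ...but given enough steps, they decorrelate completely.
print(np.abs(run(a, 5000) - run(b, 5000)).max())
```

The early difference stays microscopic; eventually the two trajectories bear no relation to one another. That is why individual runs past the predictability horizon are projections of an envelope of states rather than predictions of a particular state.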
My mistake for not being more *precise* about my intended meaning of ‘define’. I take it as a given that no model is ever going to be ‘correct in all details; exact’ … by definition. Your answer is still useful to me because it indicates that you *may* have impossible expectations of what climate models will ever be able to tell us.
… they *could* be fundamentally wrong about the fundamentals, of course. The beauty of expecting absolute accuracy of a model is that its imperfect results will therefore always be suspect, and thus never useful.
Now for a moment of Zen: even a perfectly ‘exact’ model run could be ‘correct in all details’ by sheer coincidence.
I don’t think I’m quite that bad. :)
Having said that, do you apply a similar view when the ensemble product is praised with: “Scientists have so far relied on the CESM Large Ensemble to study everything from oxygen levels in the ocean to potential geoengineering scenarios to possible changes in the frequency of moisture-laden atmospheric rivers making landfall. In fact, so many researchers have found the Large Ensemble so useful that Kay and Deser were honored with the 2016 CESM Distinguished Achievement Award, which recognizes significant contributions to the climate modeling community.”
And the article goes further stating: ” What is the range of possible futures we might expect in the face of a changing climate? How much warmer will summers become? When will summer Arctic sea ice disappear? How will climate change affect ocean life?”
Continuing: “But the Large Ensemble is also an extremely valuable tool for understanding our past. This vast storehouse of data helps scientists evaluate observations and put them in context: How unusual is a particular heat wave? Is a recent change in rainfall patterns the result of global warming or could it be from solely natural causes?”
Concluding: “Armed with 40 different simulations, scientists can characterize the range of historic natural variability.” (So let’s see the forward run, see how much GW is A, and nail down the policies based on that!)
Re: CO2………”transformative tool”.
All while it ‘defines’ natural variability. This definition is a noble goal, but since we’re here, we’re pretty much all done, huh?
So is the product oversold or am I being overly skeptical?
Snake Oil: Cures everything!
Supplying the dictionary definition of ‘accuracy’ as your response leaves little other room for interpretation.
Way I’m reading it, you’re not being skeptical enough of your own assumptions. For evidence to substantiate my *opinion*, I refer to your appeal to suspicion.
“Way I’m reading it, you’re not being skeptical enough of your own assumptions.”
Are you confident that this model ensemble had ‘defined’ natural variability?
“appeal to suspicion”. Huh?
Jim D kindly provided the graphic I was wanting to see:
Timescale is important here and you didn’t specify one. Those results are perfectly useless for ruling out millennial-scale natural variability. In my *opinion* (which is *subjective*), the ensemble does a good job bounding decadal internal variability, and the ensemble mean is much better than anything published in AR5.
That gives me *some* *additional* confidence that they’re on the right track. *Absolute* confidence? Not a chance. Too many unknowns.
If we can’t verify the ENSO the temps should be suspect.
If they don’t get this one segment correct that pronouncement is suspect.
I’m still plenty comfortable suggesting that if one builds a large enough enclosure, the enclosure is large enough. In part, I suspect, this is the reasoning behind using RCP8.5.
I’m ‘loving’ the conditionals in the first two. It’s hard to fault bomb-proof logic. Note that a single flaw is enough to make any result ‘suspect’. Conveniently for your arguments, there will always be flaws in Teh Modulz, by definition.
I deem the final one where you impute motive to be particularly suspect. Here would be a better example of you lacking self-skepticism:
So if you’ll forgive, I’ll revise to ‘only one model RUN’ was accurate out of 40. But I’ll need to see the data verifying all the unmodified climate inputs were accurate within the run matching ‘natural variability’. Never happened.
Emphasis added. My turn to ask you how sure you are of such assertions, or whether, *perhaps* you *might* be overselling your suspicions.
By the way, the data are out there. What have you done to evaluate them?
Okay. I appreciate your ‘subjective’ opinion, but if the data are out there what have you done to evaluate?
Brandon, I’m not a modeling expert, I do appreciate this contribution to science, and have a few questions/comments.
First. Timescales were provided in the work to 2100 as Jim’s temperature chart shows. As I suggested to Jim, temperature is smallish portion of ‘global climate’. This product should show us the natural bounds of: Sea levels (up and down and regions/locals?), Ice extents glacial and sea, vegetation extents and status, and so on.
“If we can’t verify the ENSO the temps should be suspect.” I don’t say wrong, I say suspect. Based on your ‘subjective’ ‘opinion’, what will the global climate states be in 2100? It’s easy math as presented. We know the bounds of nature, so just add on based on RCP and voilà. Of course error bars are perfectly okay, but confidence levels must be 100%.
“If they don’t get this one segment correct that pronouncement is suspect.” The pronouncement being that we know the bounds of nature. Confidence levels 100%.
There should therefore be no further ‘projections’. Solid ‘predictions’ and ‘forecasts’ only.
Are you doing as Jim was and thinking this ensemble is about temperature only?
Erratum: …and the ensemble mean is much better than anything published in AR5.
I was reading the plot incorrectly, the black model run is the single realization which best matched observation … it’s not the ensemble mean. Just eyeballing the spaghetti, the ensemble mean looks comparable to the AR5 ensemble over the same interval.
Even so, 40 runs of the same model over historical data is much better than anything published in AR5, which still does improve my confidence. It would be nice if the other 37 models used in AR5 could devote as many runs to an ensemble.
How did I correctly suspect you would ‘answer’ by asking me the same question?
Thus far, all I’ve done is look at the plot Jim D provided, which is exactly what I wished to see. It’s what I would have produced if I’d gone to the trouble of downloading the data and plotting the GMST curves for each member of the ensemble against, say, HADCRUT4.
This is better. I’m no expert either, just an interested lay person.
Timescales *plural*, from inter-annual, to decadal to centennial. Being specific is important in this discussion, because — as I think about it — annual variability isn’t climate, it’s weather. +/- 0.25 C fluctuations from year to year *cannot* be due to CO2 forcing because it doesn’t vary enough on that timescale to be causing those wiggles. CO2 forcing only becomes relevant on multi-decadal timescales, and even then, natural/internal variability plus a slew of other anthropogenic effects are confounding factors.
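For what it’s worth, the interannual claim survives a back-of-envelope check. The 5.35·ln(C/C₀) expression is the standard Myhre et al. (1998) simplified CO2 forcing formula; the ~0.5 C per W/m² transient response used below is an assumed round number for illustration, not a model output.

```python
import math

def co2_forcing(c_new, c_ref):
    """Myhre et al. (1998) simplified expression for CO2 radiative forcing, W/m^2."""
    return 5.35 * math.log(c_new / c_ref)

# CO2 rises roughly 2 ppm per year near 400 ppm.
dF_per_year = co2_forcing(402.0, 400.0)
print(f"forcing change in one year: {dF_per_year:.3f} W/m^2")

# With an assumed transient response of ~0.5 C per W/m^2, the implied
# year-over-year forced warming is a couple hundredths of a degree --
# far smaller than the +/- 0.25 C interannual wiggles.
print(f"implied forced warming per year: {0.5 * dF_per_year:.3f} C")
```

So the year-to-year wiggles in any single run are internal variability, not CO2; the forcing signal only emerges over decades.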
Temperature is mainly what the press release talks about, but not the only thing.
I don’t disagree.
I reckon that simulating an entire planet is hard, especially when ‘observations’ of what it’s actually doing are also quite imperfect … or in blunter terms, *wrong*.
I maintain that your suspicions are suspect.
How in bloody Hades am I supposed to be able to answer that? *Probably* warmer most places is my most confident answer.
I’m not finding where it was said that centennial-scale natural variability was ‘known’.
I see no statements of 100% confidence in the results.
We’re not going to be able to predict every solar fluctuation or volcanic eruption on top of ‘exact’ future emissions scenarios.
These are impossible expectations, Danny. It’s Modulz and projections all the way down. It simply cannot be any other way.
As it stands now, the AR5 model ensembles remove most of the unforced inter-annual variability, so what you’re asking for here has essentially already been done. Within any given RCP, the inter-model spread is of the most interest to me because the *centennial* scale uncertainty of *forced* trends is literally the bazillion-dollar question.
First thing I did after reading the press release was browse to the data repository.
It’s beyond both the scope of my abilities and level of interest to churn through each of those outputs and smell check them against observation. I’ll likely do what I usually do: read the papers and cross check a few of the easy ones.
“How did I correctly suspect you would ‘answer’ by asking me the same question?” Hey, it’s only fair. But I guess it makes you ‘skeptical’ of me so what the heck. :)
I found this interesting: “Temperature is mainly what the press release talks about, but not the only thing.”
Title: “40 EARTHS: NCAR’S LARGE ENSEMBLE REVEALS STAGGERING CLIMATE VARIABILITY
Data set an instant hit with climate and Earth system researchers”
Temperature is mentioned 3 times total in the press release, and probably a couple dozen other ‘systems’ are discussed, if briefly. Plus, the works which have cited this project http://journals.ametsoc.org/doi/citedby/10.1175/BAMS-D-13-00255.1 list temperature 10 times (in word search) out of 100 peer-reviewed cites.
Interesting that you and Jim D both found focus on temperature predominating and I found ‘climate’.
“I’ll likely do what I usually do: read the papers and cross check a few of the easy ones.” That’s probably my effort too, but it’s still a building block for me. No one IMO can be an expert in all things climate. And then those like me come along and likely can’t be an expert in any.
I’ll leave the rest for now as it’s just more volleying and I’ve hogged this WIR already.
1) Models (in this case 40 of them) are all inaccurate otherwise wouldn’t only one be needed?
Color me Totally convinced that YOU don’t know what you are talking about.
It’s 40 RUNS of the same MODEL… not 40 different MODELS.
“To explore the possible impact of miniscule perturbations to the climate — and gain a fuller understanding of the range of climate variability that could occur — Deser and her colleague Jennifer Kay, an assistant professor at the University of Colorado Boulder and an NCAR visiting scientist, led a project to run the NCAR-based Community Earth System Model (CESM) 40 times from 1920 forward to 2100. With each simulation, the scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree, touching off a unique and chaotic chain of climate events.”
“led a project to run the NCAR-based Community Earth System Model (CESM) 40 times from 1920 forward to 2100. ”
That way you won’t ask
‘1) Models (in this case 40 of them) are all inaccurate otherwise wouldn’t only one be needed?”
stuck on stupid are we?
Why cite only 30 of 40 runs, then only for 1963 to 2012, not from 1920? Does 1963 to 2012 encompass the parameterization zone?
charlie didn’t read it, or didn’t comprehend the article, because it says in 1920 the simulations only differed by a trillionth of a degree, which is the whole point.
Geez, Jim. I must have a comprehension problem. Initializing in 1920 by varying a trillionth of a degree leads to model results meaning nothing until 1963. I wish I were as perceptive as you are.
What happened in all those intervening years? What happened to the 10 excluded model runs? Could my comprehension level envelope such abstractions? Not everybody is as aware as you are of arcane climate science politics.
It must be a conspiracy.
No, Jim. Just poor science, with an agenda.
On second thought, maybe your single model run 40 different times explains how El Nino and La Nina perfectly offset. Add one-trillionth of a degree, then rerun after removing it.
Your semantics argument is not helpful. But do you feel better?
“With each simulation, the scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree, touching off a unique and chaotic chain of climate events.”
Vs. “Initial condition ensembles involve the same model in terms of the same atmospheric physics parameters and forcings, but run from variety of different starting states. Because the climate system is chaotic, tiny changes in things such as temperatures, winds, and humidity in one place can lead to very different paths for the system as a whole. We can work around this by setting off several runs started with slightly different starting conditions, and then look at the evolution of the group as a whole. This is similar to what they do in weather forecasting.
Having an initial condition ensemble can help to identify natural variability in the system and deal with it.”
So if you’ll forgive, I’ll revise to ‘only one model RUN’ was accurate out of 40. But I’ll need to see the data verifying all the unmodified climate inputs were accurate within the run matching ‘natural variability’. Never happened.
‘1) Models (in this case 40 of them) are all inaccurate otherwise wouldn’t only one be needed?”
stuck on stupid are we?
That’s another quarter in the snark jar for you.
Yes, the paper clearly states 40 runs of the CESM model.
But the point of the question is even more apt – the results are a striking reminder of the UNpredictability of climate (there is a difference between inaccurate and unpredictable, but both apply here). That’s nothing new – instability in the solution to the underlying equations has been known since they were formulated.
Now, perhaps the global average temperature is more predictable than atmospheric circulation. RF at the top of the atmosphere is much less determined by unstable solutions. But the temperature (and circulation patterns, presumably the pressure-field changes indicated by the line contours) do effect changes in the RF.
Incidentally, the failure of the models is not so much the temperature change between EM and OBS, but the pressure field – EM says no change, obs say lots of change:
Your objection to slab oceans overlooks the utility of doing the multi-century control simulations, which set a baseline for unforced interannual variability … in Model World, mind. It would be nice if we could do 40 unforced control runs of the real system and 40 forced runs of the same, but we can’t. One realization of reality is all we get, unfortunately.
Here’s what is written about the slab ocean option in CESM:
Slab Ocean Model
A slab ocean model configuration allows the user to run a full atmosphere model on top a much simplified ocean model. The simplified ocean model is essentially a 0-dimension model running at every ocean point on the globe and meant to be an approximation of the well-mixed ocean mixed layer. The thermodynamic calculation should have a mixed-layer depth specified, and the temperature of the slab is calculated based on the depth and the surface energy fluxes. This configuration is useful for understanding the climate sensitivity of the whole coupled simulation, where, on the timescale of decades, the ocean mixed layer depth is the dominant player. It also useful for a simple analysis of the coupled system where only simple interactions with just the mixed layer ocean are of interest e.g. the Madden Julian Oscillation (i.e., situations in which the role of ocean dynamics is minimal).
My emphasis. Why a ‘much simplified’ ocean model? I *assume* that when one is interested in bounding decadal variability — as opposed to, say, millennial-scale full-depth ocean equilibrium — and one wishes to do a century-scale ensemble of 40 model runs, CPU time becomes a key constraint of the analysis.
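For anyone curious why a 0-dimensional slab is computationally cheap yet still informative about decadal-scale response, here is a minimal sketch of the kind of mixed-layer energy balance the CESM documentation describes. All constants below (mixed-layer depth, feedback parameter) are illustrative round numbers I chose, not CESM’s actual settings.

```python
# Minimal slab ("mixed layer") ocean: one well-mixed layer whose
# temperature responds to a net surface flux. Illustrative numbers only.
RHO = 1025.0   # seawater density, kg/m^3
CP = 3990.0    # specific heat of seawater, J/(kg K)
H = 50.0       # assumed mixed-layer depth, m
LAM = 1.2      # assumed climate feedback parameter, W/(m^2 K)

def slab_response(forcing, years, dt_days=1.0):
    """Integrate C * dT/dt = F - lambda*T with forward Euler; returns final T."""
    C = RHO * CP * H                  # heat capacity per m^2, J/(m^2 K)
    dt = dt_days * 86400.0
    n = int(years * 365 / dt_days)
    T = 0.0
    for _ in range(n):
        T += dt * (forcing - LAM * T) / C
    return T

# Under a step forcing of 3.7 W/m^2 (roughly CO2 doubling), the slab
# relaxes toward the equilibrium F/lambda with e-folding time C/lambda
# of only a few years -- which is why, on decadal timescales, the mixed
# layer dominates and the deep ocean can be neglected.
print(slab_response(3.7, years=30), "vs equilibrium", 3.7 / LAM)
```

With a 50 m layer the relaxation timescale works out to roughly five years, which is the sense in which “on the timescale of decades, the ocean mixed layer depth is the dominant player.”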
That conflates two concepts which I think ought not to be. I warrant that I agree more with Mosher on your lack of understanding (or appreciating, as the case *may* be) the experimental design than you and I agree on Teh Modulz being ‘inaccurate’.
Your suspicions are unassailable. I can’t “prove” that the reason one can shove an oil tanker sideways through the gap between RCP8.5 and RCP2.6 at 2100 is NOT to generate an ‘unfalsifiable’ prediction. At best, I can only express doubt in the plausibility of that being a motive, much less *the* motive.
If we’re at the point that ‘natural variability’ is now defined, I propose rerunning using all alternative RCPs, subtracting the natural variability from each. This would then tell us exactly where to set policy.
So we’re all done here, nothing more to see, and we can just move along.
Danny, they did something equivalent. They ran an 1850 forcing control, which can be compared with the forcing applied for current and future climates. This can be used to answer your question about sensitivity of temperature trends to forcing.
“Danny, they did something equivalent.” Not clear on equivalent to what? The varying RCP scenarios subtracted from natural variability to 2100? So we can evaluate where we are now scenario wise? We ‘know’ where will be in 2100 in all ‘climate systems’? We take the difference and know which cities to move and sea walls to build, etc.?
Work was done in 2015. How come that resolution isn’t in the media?
OK, Danny, then I don’t understand what you are asking for. What we have is a measure of natural internal variability. There would be an additional natural external forcing variability (solar, volcanoes) that would about double the internal one for decadal-averaged temperatures, so that will give you a total. Then you have the effect of anthropogenic forcing which is given by the gradient towards 2100. What else do you need from the model study?
I’m not sure how I’m not clear so I’ll try this way.
This product is touted as having bounded natural variability, correct?
RCP’s are various scenarios based on emissions.
All that need be done is to subtract the difference and we know which policies to implement.
External natural forcings (volcanoes/solar/etc.) are wild cards and unprojectable (current state of the art) so can be left out.
You keep referring to temperatures. That’s not what this product is sold as having bounded. It’s bounded ‘natural variability’ (land/ocean/atmosphere) which of course is a very broad area. But there it is.
I don’t ‘need’ anything from the study. My ‘skepticism’ is that it’s accomplished what it indicates it’s accomplished.
Brandon perceives that I’m not skeptical enough of my perceptions which may be valid. I’m human and have biases. My perception is this product is oversold. I can be proven wrong. Wouldn’t be the first time.
Show me the clouds, baby. Show me the clouds. Somewhere I read ‘clouds are hard’.
Maybe I am still not understanding you, but the purpose of this dataset is exactly to make it available for more applications. If you want to look at the future floods in the UK, or heatwaves in the central US, this gives you a way to estimate the changing frequency with not only one realization of the past (which we have without models), but with 40, and same for the future. As you seem to want to indicate, this may well still underestimate the probability of extreme events in the future because other uncertainties add to it, but it is a start.
Jim, do you actually read what you write? “If you want to look at the future floods in the UK, or heatwaves in the central US, this gives you a way to estimate the changing frequency with not only one realization of the past (which we have without models), but with 40, and same for the future.”
So, the future variation of the frequency of floods, heatwaves, etc. depends on the (miniscule) perturbations of the model as it is compared to the past single reality? And you call that reality only one realization (of your supposed 40!) of the past? Perturbations of models are now reality?
You are saying that the future frequencies (and magnitudes?) of floods, heatwaves, etc. can be determined today by looking at not only the model masturbation, but also the scatter pattern of the ejaculation.
When you can validate the models, get back to us thinking people.
This is validation. The model gets both the variability and the trend well.
Congratulations, Jim! You have validated the model with the model.
Observations are in there too, but that’s just a detail.
The observations belie the model. Just a detail?
They are not statistically distinguishable.
Well, take this from a statistician, Steve McIntyre:
Statistically a big difference, not?
As shown with the 40 members, natural variability covers the observed trend.
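The “does the observed trend fall within the ensemble spread” test being argued about here is simple to sketch. The numbers below are entirely synthetic — the forced trend, the AR(1) internal-variability parameters, and the stand-in ‘observed’ trend are all invented for illustration — so this shows only the mechanics of the comparison, not a verdict on CESM.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1963, 2013)      # the 50-year window discussed above
forced_trend = 0.015               # assumed forced warming, C/yr (illustrative)

def member_trend():
    """Least-squares trend of forced signal + red-noise internal variability."""
    noise = np.zeros(years.size)
    for i in range(1, years.size):  # AR(1) stand-in for 'internal variability'
        noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
    series = forced_trend * (years - years[0]) + noise
    return np.polyfit(years, series, 1)[0]

ensemble = np.array([member_trend() for _ in range(40)])

obs_trend = 0.012                  # stand-in 'observed' trend, C/yr
lo, hi = ensemble.min(), ensemble.max()
print(f"ensemble trend range: [{lo:.4f}, {hi:.4f}] C/yr")
print("obs inside ensemble range:", lo <= obs_trend <= hi)
```

Note the design choice the whole argument hinges on: with internal variability alone spreading the member trends, an ‘observed’ trend can sit inside the envelope without the model being demonstrably right about the forced part.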
Temperature (by itself) is NOT CLIMATE!
Temperature is where we have the longest global record, and it is the primary response to forcing changes. It drives the climate, even if it is not climate alone.
“Temperature is where we have the longest global record, and it is the primary response to forcing changes. It drives the climate, even if it is not climate alone.”
You’re obviously a Global warmer and not a ‘climate changer’. Guess I’ll just have to assume you voted ‘no’ when the name change came about.
“Armed with 40 different simulations, scientists can characterize the range of historic natural variability.”
Hmm. Wonder why they chose those words and not range of historic temperature?
No, variability is a better word. Temperature is only the primary metric, but locally rainfall and droughts change too, as a consequence of global temperature patterns changing.
Okay. Then realistically are you 100% confident that CESM1 ensemble project has captured 100% of all of climate’s ‘historic’ ‘natural variability’? (Just to be clear. That’d be all land, all ocean, all ice, and all atmosphere over all of history).
Or maybe you’re skeptical.
You, too, Danny. AMF.
No, I am skeptical because it may still underestimate the variability in the future, so you have to be cautious not to be too complacent about bracketing extremes going forwards with these projections. This only captures the range of internal variability of the model, and even then I think unmodeled sea-ice loss or rapidly melting glaciers could lead to some surprises. There may also be external factors. So it has to be taken as a lower limit of variability, perhaps a no-surprises scenario.
“No, I am skeptical because…”
Me too. It took all that to get here.
G’nite Jim. GTG.
I mentioned this 3 hours ago, but OK.
Danny, gotta love ya! All climate is local.
All efforts to downscale IPCC climate models to regional forecasts have been failures. Anybody telling you this or that weather pattern is consistent with AGW is lying to you. Anybody telling you that there is a more or less percentage of a particular weather pattern occurring under AGW is lying to you. There are no statistics backing up any of that crap.
One last time, Jim. CESM1 was developed knowing all the measured data prior to 2005. If its perturbed ensemble runs did not envelope the measured data, I’d fire its developers.
I’d still fire them because early 20th and 21st Century model runs did not track actuals. Additionally, their 1979 to 2015 trends were way too high.
Actual data define natural variation. Model variations are a fiction of their developers’ imagination. Quit trying to put lipstick on a pig.
If you look at other periods, you find the trend up to 2000 was too low. The trend is variable. The natural variability of trends is captured, and the physics of forcing accounts for the general trend because you only eliminate it by turning off the forcing change as in the 1850 experiment. There is no way to account for the trend with natural variability alone.
Jim, I weep. I would hope that they could pay for better logic than you exhibit.
Why do people keep insisting that model masturbation can tell us anything beyond tomorrow?
“As you seem to want to indicate, this may well still underestimate the probability of extreme events in the future because other uncertainties add to it, but it is a start.”
IMO ‘uncertainties’ go both ways. I don’t want to under or overestimate. I can only go by what has been projected to date and what has occurred. Hopefully this ensemble is an improvement. We’ll have to wait and see.
Danny, the 40-member ensemble is a collection of slightly perturbed computer runs of an un-validated, inaccurate model. Look at the differences between the little black and red wiggles in the early 20th and 21st Centuries of their graph.
They don’t track in hindcast and they don’t track in forecast. Don’t let them confuse you with the grey stuff, “ranges.” If one looks at the trends of decadal temperatures across any study period outside of late 20th Century, models are 2 to 3 times too high or too low.
“If one looks at the trends of decadal temperatures across any study period outside of late 20th Century, models are 2 to 3 times too high or too low.”
I’ve not checked the Bob Tisdale link yet (time-crunched this eve) to see if it does include the CESM1 model ensemble. I’m cautious about tossing the baby out with the bath water. It’s recognized that past projections have tended to run hot w/r/t temps. I’m most uncomfortable with the premise that this ensemble has the capability of bounding ‘climate’. But I learn stuff every day.
While you are at it, Danny, try Steve McIntyre’s statistical analysis of temperature trends from climate models (including CESM1):
And they knew the actual temperatures 1979 through 2005! They can’t even parameterize well.
I’m simply going to ignore arguments from you containing conditionals.
Not even remotely. The physical models are only the first step. After that are biology and economics, with their own set of Uncertainty Monsters.
I don’t consider strawmanning to be a robust form of skepticism, Danny.
Ignore that which you choose to ignore. I’m trying to address your comment.
Not sure how a ‘proposal’ morphs to a conditional, but okay.
“This would then tell us exactly where to set policy.”
“Not even remotely.” What? We’ve been working to set policy prior to the release of this work, and we now have the bounds of natural variability, which must lead to tighter boundaries and more focused policy. Question (not strawman): has the setting of policy been premature? If you don’t care for that question, should policy be revisited based on this product?
“So we’re all done here, nothing more to see, and we can just move along.” Was sarcasm, not strawmanning. I shoulda tagged it I guess.
“If we’re at the point that ‘natural variability’ is now defined, I propose rerunning using all alternative RCPs, subtracting the natural variability from each. This would then tell us exactly where to set policy.”
We are never at a point where natural variability is defined.
1. We don’t have a million separate earths to “run” over the same period of time, to estimate the scope of natural variability.
2. What we have AT HAND is the best modelling we can muster. Run those models multiple times without anthro forcing and you get a best estimate of this variability.
3. Will there be STANDARD objections to this? Yes: what about unicorns? What about the things we don’t model well? What about this or what about that? All of that CHEAP skepticism applies forever… FOREVER.
4. Science is not stopped by cheap skepticism; policy is not stopped by cheap skepticism. Folks push on.
5. Your proposals are silly, which is why no one will do them.
6. We will NEVER know where to set policy, ever. Models are not used to fine-tune policy. There is no fine-tuning policy.
7. The policy number that has been selected is thou shall not go beyond +2C from pre industrial. You lost that fight.
8. You don’t need a GCM to figure out how to stay below 2C. Never have needed one. Simple models are much better for this.
Mr. Mosher, when you step outside your personal weed patch you say a lot of funny stuff. Here is just one:
“7. The policy number that has been selected is thou shall not go beyond +2C from pre industrial. You lost that fight.”
Since our deep thinking leaders have acknowledged that current models show the earth would exceed the magical +2C with Paris ’15 commitments, even if the U.S. ceased ALL CO2 emissions, what would you propose as reasonable U.S. policy proscriptions?
“We are never at a point where natural variability is defined.”
1000 aren’t needed, Steven, only 40 are: “Armed with 40 different simulations, scientists can characterize the range of historic natural variability.”
That’s pretty clear. So if you have a problem with it, take it up with the authors.
I happen to agree with your argument, but what do I know? Based on that agreement YOU must be wrong because I’m stupid or so you say.
No argument that this ensemble is the best ‘AT HAND’ but that’s mighty faint praise.
Who asked for science to be stopped?
Musta been you, ’cause it wasn’t me.
Jim D suggests my proposals have ‘effectively’ been done. Take that up with him.
Policy is not fine tuned? No. Kyoto, Rio, and Paris were exactly the same. Asinine comment.
2C was never my fight. I directly challenge you to prove it was. (My preference would be zero. Why do you want 2?)
Not one of your numbered points, but I want to address it proactively. Being a jerk is not useful. You’re much better when you’re not one.
The word ‘if’ is sufficient to establish a conditional proposition. You’re actually quite free to supply your own definition of ‘natural variability’ and argue from that definition.
I thought the press release did a decent enough job of it: solar, volcanoes, and modes of internal variability like ENSO, which operate on scales of years to decades. The main argument is that a 40-member ensemble averages away the internal variability leaving behind ‘only’ (I would have written *mostly* only) forced natural and anthropogenic variability over longer time scales. Even in Model World, that’s a useful result, particularly because it’s so rare — so many runs is not a trivial proposition from a computational resources perspective.
Did I stutter? Again: the physical models only tell us, at best, what climate *might* be like in some unknown future for a given emissions scenario, which scenarios are in and of themselves contrived — ranging from worst-case nightmares at RCP8.5 to wishful thinking at RCP2.6. Even a ‘perfect’ AOGCM tells us nothing about how the economy will react … even supposing that the biological models get the ‘correct’ results from the impossibly ‘accurate’ physical simulations.
Thus, an impeccably ‘exact’ physical model all by its lonesome doesn’t even remotely begin to answer the relevant policy questions we might wish to ask of it.
Not for the first time, I’ll note that I think about Uncertainty Monsters as NOT my friend. When the argument is that Teh Modelz are unreliable, suck or are otherwise suspect I’m literally screaming at my computer: Then on what *rational* basis can we say that continuing to change the radiative properties of the atmosphere is ‘good’ policy?!?!!!11111
Makes little difference from my perspective. Underlying your whole argument is an *implied* assumption that unfettered GHG emissions are NOT *potentially* hazardous. Most of the ‘coulds’ and ‘what ifs’ you’re asking are appeals to ignorance on the part of climate modelers. Basically: “How can we make a decision when we don’t know nuffin’?”
It’s an old argument, a smelly, shuffling undead zombie of one, as rank as it was when it was first uttered. I’m finding that I have ever less patience or charity when I see it shambling around grousing at what I think of as perfectly good and appropriate scientific practice.
I don’t know, Danny. Since when did failure to reject the null hypothesis mean that the null is correct? That isn’t how I learnt it.
To your credit, you show more self-skepticism than Turbulent Eddy or Charlie Skeptic. Their diatribes are utterly foreign to how I think about risk assessment and mitigation. “We don’t know” means *WE* — all of us — DO NOT know. Thus, staying with the familiar conditions of recent, successful history *seems* the least risky option.
I’m still left wondering whence your *seeming* confidence that, e.g., cloud uncertainty will be the planet’s saving grace for our lack of caution. What source of information is it which tells you that Uncertainty Monsters and ‘inaccurate’ Modulz means everything will be just hunky dory?
There *might* be a hidden sarc tag or two lurking in my above prose. How many will be inversely proportional to the number of my correct suspicions.
“Underlying your whole argument is an *implied* assumption that unfettered GHG emissions are NOT *potentially* hazardous.”
That’s an assumption applied to me. I’m a lukewarmer with caveats.
My general impression of your comment is that, because I take issue with science stating “Armed with 40 different simulations, scientists can characterize the range of historic natural variability,” you think I’m anti-model. I’m not. I’m just not willing to accept the sales pitch without time and substantiation. Acceptance due to peer review solely doesn’t cut it. It has to be ‘melded’ over time. What can I say. I’m skeptical.
“What source of information is it which tells you that Uncertainty Monsters and ‘inaccurate’ Modulz means everything will be just hunky dory?”
Think you’re applying YOUR impressions onto me. Got a link that shows I said that?
I don’t care to play your hidden-sarc-tag game. I may have erred earlier but it was not purposeful. Maybe you’re just having a bad day.
Which doesn’t really tell me much other than you don’t outright reject the enhanced greenhouse effect, and for whatever reason put more stock in ‘non-alarmist’ ECS estimates toward the lower end of the published IPCC range.
The thought had occurred, but more what I think is that you’re anti-press release. When you start quoting the papers directly and demonstrating some better understanding of what’s in them, I *might* be more open to you lobbing bricks at the quality of the science.
Which substantiation you don’t specify. You simply assert that it isn’t there. “Never happened.” Hmm?
The clock ticks. What can I say. I’m antsy when:
1) I don’t know how much time we have and
2) I don’t know how much time we need.
We don’t need any stinking models to tell us what to do to slow down the warming. This whole ‘Teh Modulz aren’t good enough to set policy’ argument so popular in these parts is pure and unadulterated crap. The main reason I think models need to get much better is to hedge against the very real possibility that we’re simply not going to meaningfully cut emissions in time to avoid negative consequences. A ‘good’ model *might* be useful for planning adaptation measures.
But who knows. I certainly don’t. It’s not like humanity has ever been down this particular road.
No, of course not. Do you have a link where the IPCC says that the reason for the big gap between RCP8.5 and 2.6 at 2100 is to have a wide enough envelope so as to avoid falsification?
Maybe I just don’t like speculation and suspicion masquerading as skepticism. What can I say. I’m cynical.
I’m not on about ‘the models’, I’m on about this one’s claim to have captured (all of the) ‘historic’ natural variability. The link was to the press release, and the release came from guess who? NCAR. Whose product is CESM? NCAR! Sounds like you ought to call and chew ’em out.
While yer at it, give Mosher a holler, as he and you need to chat about ‘the models’. https://judithcurry.com/2016/10/01/week-in-review-science-edition-57/#comment-815148
“Maybe I just don’t like speculation and suspicion masquerading as skepticism. What can I say. I’m cynical.”
All of science, from my short couple of years looking more deeply at this subject, allows time for acceptance to settle in for works (peer reviewed or not) before they are melded into the ‘mainstream’.
I guess you are having a bad day. Maybe it’s the stress of the clock ticking. I went to using my phone. Eliminates that issue.
I’ll leave the balance for future (or not).
Well, “sigh”, this again.
40 model runs… The models don’t model the atmospheric warming trend or anything else correctly, and they have low spatial and temporal resolution.
Running 40 model runs tells you something about the variability of the model and its sensitivity to input conditions.
Claiming it tells you anything about the real world is just a claim.
Thank you, Mr. Mosher. Those weeds tell us there is no problem. IPCC models are bunk.
For what? Not having a PR department that writes ‘skeptic’-proof releases? Life’s too short to chase that horde of squirrels.
When I have something to say to Mosher, I say it. No external encouragement required.
We’ve only been studying climate by way of models in earnest since the early 1960s.
I must say I’m having some difficulty sorting out your position here. On the one hand you say that you’d prefer a temperature change of zero to a change of two degrees. And yet on the other, a frequent theme of your posts on this thread is needing more time for … whatever.
Call it one of my own prejudices … I’ve not known lukewarmers to be big on consistency. *Maybe* it has something to do with the position being based on taking a central spot somewhere between zero and a central estimate, i.e., it’s a position of positioning, justified by selective appeal to published literature and just asking questions.
And *maybe* if I keep speculating about what’s going on inside your head and tossing in any other red herring I can lay hands on, you’ll see what I’m on about.
“A large ensemble of model runs produced 30 alternate realities for N. American temp trends from 1963-2012”
This is a stunning article. Instead of seeing this as a hint at the limitations of climate models, scientists seem to regard it as the last word on natural variability.
Of the 40 member ensemble, we are shown the temperature estimates for only 30. And those were shown only for the period 1963 to 2012, dropping off the first quarter century.
I agree with krmmtoday; such extreme output variations from inconsequential input variations should scare the crap out of any thinking person. What are the impacts of all the assumptions made in programming the model’s math?
Parameterizations are a big bugaboo. By tuning the model to the end of the 20th century, modelers get what they want over that period. But for the first half of the 20th century and the beginning of the 21st, they get variances from observations. Playing with aerosols in the early record is a dangerous game. Smearing things over 1963 to 2012 (ignoring 1920 to 1962) in 30 of 40 runs doesn’t make up for the horrendous uncertainties in the model.
Tuning hides the fact that modelers are unaware of (many?) natural climate drivers, which are thus not included in this or any model. To use model outputs to claim they have identified the range of natural variation (an IPCC trick) is risible in the extreme.
When they can model clouds and the humidity at the water vapor emission level, they might begin to have something.
If 40 is good, is 51 better? (Yes, it’s ‘only’ weather, but the ranges are significant IMO). https://www.washingtonpost.com/news/capital-weather-gang/wp/2016/09/29/this-is-why-you-cant-put-all-of-your-trust-in-the-hurricane-cone-of-uncertainty/
If 40 is good is 51 better?
Right – what will happen is not the average of the 40. Is there a fallacy of the ensemble mean? If you take the mean of a musical ensemble, the result will not be music, or at least not music you’d want to listen to.
The actual events of the next century will not be the mean, but one of the possible solutions. And, even assuming the models contained no egregious errors, it might not be in model runs 1 through 40 but in model runs 41 through 80 or any of the infinite array of solutions.
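The ensemble-mean point is easy to demonstrate with a toy chaotic system. The sketch below is a hypothetical stand-in, using the Lorenz-63 equations rather than the CESM model or any GCM: five runs that differ only in the tenth decimal place of the initial condition, averaged together.

```python
# Hypothetical sketch (Lorenz-63, not the CESM Large Ensemble): run the
# same chaotic "model" from slightly perturbed initial conditions and
# compare the members with their ensemble mean.

def step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def x_series(x0, n=20000):
    """Time series of the x variable from initial condition (x0, 1, 1.05)."""
    state = (x0, 1.0, 1.05)
    out = []
    for _ in range(n):
        state = step(state)
        out.append(state[0])
    return out

def variance(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

# Five "alternate realities": identical equations, x0 nudged in the
# tenth decimal place.
members = [x_series(1.0 + i * 1e-10) for i in range(5)]
ens_mean = [sum(vals) / len(vals) for vals in zip(*members)]

# Once the members decorrelate, averaging cancels the chaotic wiggles,
# so the ensemble mean varies less than any individual member.
print(variance(ens_mean) < min(variance(m) for m in members))
```

The mean is smoother than every member, which is exactly why it is useful for isolating a forced signal — and also why it is not itself a trajectory the system could produce.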
Declining Pb. Good news. Shows that real air pollution problems can be tackled successfully. Eliminate tetraethyl lead in gasoline. Precipitators on coal flue stacks. (There is even research on recovering lead and zinc from fly ash.) Three stage filtration on lead smelters: baghouse, electrostatic precipitators, finally wet flue scrub.
Carbon pollution isn’t a real one.
The crap that plants need to live and fix photons and make carbohydrates so everything else can live is POISON!
Ah… Such wonderful Logic.
You know Vitamin A?
Is it good for you or a Poison?
So yes, CO2 can be BOTH beneficial and Harmful.
CO2 is a pharmakon.
SM, have you taken your Pb supplements? Explains much otherwise unforgivable.
What Rud, you deny that a substance can be good or benign in some quantities
and situations ( like water) and dangerous in different situations?
Ya, CO2 is a pollutant… it’s OK in small quantities… but dose baby dose.
So yes, CO2 can be BOTH beneficial and Harmful.
The difference between vitamin and poison is dosage.
Fortunately, given experiments, CO2 appears to be a vitamin, not a poison, for a large amount of increase to come.
RE: The power paradox – excellent read on the surprising and sobering science of how we gain and lose influence [link] …
This is an unbelievably simplistic view of power relations in a society. One could easily come away with the idea that it is the unconditional altruists who make a society work. If human social existence were only so simple, and so touchy-feely.
To the contrary, the strong reciprocators — the altruistic punishers — are the ones who play the major role in making society work.
As Peter Turchin notes in War and Peace and War:
For a far more nuanced and sophisticated analysis of social organization, there is the third section of this book:
Yet more alarmism…
Worldwide, we are facing a joint crisis in science and expertise.
For as often as we see someone pushing this chicken-little meme, we rarely (ever?) see even an attempt to present evidence substantiating the claim. What are the metrics used to establish this “crisis”? Is this anything other than a subjective determination? Is this merely the evolution of science? Is it merely a process of change and adaptation? Is it even reflective of a growing trend?
There is a ton of evidence showing no substantial trend in the public’s trust in “science,” even if there is some evidence of a small decline in a particular cross-section of the public, in some countries, of trust in particular scientific institutions – (evidence shown through polling data of what people say about their opinions of some scientific institutions, but not on the basis of whether people put less faith in the outcomes of science as they live their daily lives).
Where are the large-scale, carefully controlled longitudinal data to support these claims of societal trends? Why do people who are so science-oriented accept these claims even though they are made without substantiating evidence of that sort? Is it merely because they are willing to lower their standards to confirm biases?
How ironic. People who conflate fact with opinion hand-wringing about the state of “science.”
Joshua, read Ioannidis’ several peer-reviewed papers on this exact topic, focused on his specialty of medical/biological research, then get back. There are two explanatory possibilities for your post, neither complimentary.
I have read Ioannidis, and more.
That there are issues (say, problems with replication) neither establishes a trend (which requires longitudinal data), nor provides an objective evaluation of what comprises a “crisis.”
Try dealing with those questions.
If it don’t fit you got to diverge…
Extended warranty not included.
Science papers are a plenty, the good and the bad even more so.
However, that was not always the case. On January 7th (Orthodox Christmas) one of the world’s greatest inventors, Nikola Tesla, a naturalised American of Serbian origin, died in a New York hotel.
On the day of his death (or more likely the next day) a lorry-load of his papers disappeared from a Manhattan warehouse without a trace. Supposedly FBI agents were involved, but they said it was the Alien Property Custodian office, while the APC denied it had anything to do with it.
Apparently the documents were given a thorough examination by a group of FBI agents led by one John G. Trump, the uncle of Donald.
Trump, who was an M.I.T. Professor of Engineering, helped design X-ray machines for cancer patients and did radar research work during World War II. He concluded that Tesla’s “thoughts and efforts during at least the past 15 years were primarily of a speculative, philosophical, and somewhat promotional character,” but “did not include new, sound, workable principles or methods for realizing such results.”
Trump’s assessment didn’t count for much, since for a number of years afterwards there followed a somewhat farcical chase of possible espionage, Yugoslav- and Soviet-communist-related investigations, military intelligence, Tesla’s relatives, etc.
An FOI request a few days ago made some documents (held at the FBI, with numerous ‘redactions’) available online.
A large ensemble of model runs produced 30 alternate realities for N. American temp trends from 1963-2012
They describe what they did as changing the initial conditions slightly and getting different outcomes. So the question remains: how are these GCMs started? Quite a while back it was said that they don’t input the observed conditions as the initial conditions, and I decided that was true. In that case, the models could be spun up until they were in something like an equilibrium state. Steven Mosher had linked some chaos videos recently. With the Lorenz two-lobe butterfly, it didn’t matter what the initial conditions were; the PDFs were pretty similar. But that was a Lorenz model. So are the models in effect showing two-lobe behavior or multi-lobe behavior? If they worked like the Lorenz model does, they should.
Here a weather forecast is made:
I see two things: big storm and not big storm. The system was going to fall into one of two basins. Two lobes.
Using the Winter plot in the article we can ask, Warmer or not warmer Alaska? I see about 5 cases of not warmer. With this model small changes in initial conditions are said to cause the different outcomes. The Ensemble Mean (EM) says a warmer Alaska. But there are long periods of that not being correct in these runs. So with the Alaska EM an average is not as useful as a PDF. About 5 out of 30 Alaskas were cooler.
The next study to be done should change the initial conditions by greater amounts. Use a GMST initial conditions of plus 1 C and minus 1 C in a barbell distribution. If the results are a similar EM as in this one, the models converge without regard to initial conditions.
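A toy version of that proposed experiment can be sketched with the Lorenz-63 system. This is a hypothetical stand-in, not a GCM, with `mean_z` playing the role of a long-run climate statistic: perturb the initial condition by a tiny amount and by an order-one amount, discard a spin-up period, and compare long-run statistics.

```python
# Toy version of the perturbed-initial-conditions experiment (Lorenz-63
# stand-in, not a GCM): tiny vs. large perturbation, same "climate".

def step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def mean_z(x0, n=60000, spinup=5000):
    """Time-mean of z after discarding spin-up -- an attractor statistic."""
    state = (x0, 1.0, 1.05)
    total, count = 0.0, 0
    for i in range(n):
        state = step(state)
        if i >= spinup:
            total += state[2]
            count += 1
    return total / count

tiny = mean_z(1.0 + 1e-10)  # perturbation in the tenth decimal place
large = mean_z(2.0)         # perturbation of order one

# The two paths differ completely step by step, but both settle onto the
# same attractor, so the long-run statistic is nearly the same either way.
print(abs(tiny - large) < 3.0)
```

If GCMs behave like this toy system, the proposed barbell experiment would indeed converge to a similar ensemble mean regardless of the size of the initial perturbation.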
“So the question remains, how are these GCMs started? Quite awhile back it was said that they don’t input the observed conditions as the initial conditions and I decided that was true.”
For global historical runs (starting at 1850) there are not enough observations to initialize them. So they are spun up to equilibrium (drift less than 1%, I recall) and then they are let go.
For decadal runs (this last AR5) they started in the modern period, and so they might be able to initialize with obs.
The runs mentioned in the paper are REGIONAL (North America) and start in 1920. Not sure what they did for initializing, other than that their study was focused on small perturbations.
I’d suggest folks get the data. There are other data sets with 70 runs or so.
I recall Lucia looking at those to characterize weather noise or internal variability.
jeez spel check mosher
Mr. Mosher, nobody has the money nor the time to Wander in the Weeds the way you are funded. Anyway, model outputs depend on the modelers.
Okay. I am making progress. They used to be spun up and may still be. But a lot of that would still be estimates for grid cells 100 km on a side, so it’s likely they are still spun up. They then have an equilibrium tendency. This regional model is assumed to have identical inputs with the slightest, tiniest variances, while having an equilibrium tendency. The trait of spinning up would seem to be accompanied by an equilibrium tendency: if a run just flies off into an ice age, its usefulness goes down, and computer time costs go up with each failed run. I’ll also assume their regional GCM has the equilibrium tendency. So even with that we’ll get a PDF where something like 5 of 30 times we get a colder Alaska: on 5 out of 30 days we get colder Alaskan days, or in 5 out of 30 years Alaska is colder. It compares nicely to the Lorenz model where 5 out of 30 times we are on the cooling lobe.
“Deser agrees that it’s important to communicate to the public that, in the climate system, there will always be this “irreducible” uncertainty.”
“Scientific reductionism is the idea of reducing complex interactions and entities to the sum of their constituent parts, in order to make them easier to study.”
Say we have 10 decimal places, and that we could measure that, but there were 11th and 12th decimal places we could not quantify. Scientific reductionism fails and chaos takes over. All we know is that we are likely to get a PDF, but not one answer. If we assume we find no physical Lorenz butterfly lobes in nature, what do we still get? Distributions that could have been made by them.
“Mr. Mosher, nobody has the money nor the time to Wander in the Weeds the way you are funded. Anyway, model outputs depend on the modelers.”
Trivially True. yes, modeling outputs depend on modelers, and data, and computers
No body has ever doubted that.
Some skeptic you are.
“Trivially True.” as observed by Mr. Mosher. The story of my life.
Trivial? Truly! One floats atop the stew, somewhat like the large chunks (no septic-tank jokes here). While others are digging into the true meat of the matter, I, trivially no doubt, take a look at the larger picture, skimming the cream as it were. It’s amazing what one can see when one just looks (thanks, Yogi).
Mr. Mosher tells me that my observations are: “Trivially True [sic]. yes [sic], modeling outputs depend on modelers, and data, and computers [sic] No body [sic] has ever doubted that.” But many do not grasp the implications of those trivial facts.
Modelers get to pick the data, the mathematical algorithms, the computer coding, the parameterizations, etc. And they don’t tell you or me about their selections. They are black boxes to the uninitiated. They mean what the modelers say they mean.
Only the high priests (modelers) can fathom the intricacies of the oracles (models). When the oracles don’t track observations, the high priest, Gavin Schmidt, casts the runes, adjusts the outputs and reveals truth. Trivia rules.
Check the forecast time on your plots. Come back when you have a point about climate.
You have repeatedly stated on previous threads that I refused to read or watch the videos you posted links to which argue we don’t need a damage function. I have replied to each of the comments I’ve seen explaining that I did read and watch (part of) what you referred me to. However, as I explained the point was that we do not need a damage function because we can start from the assumption that 2 C warming is dangerous and go from there.
So, I’d ask you to stop misrepresenting that I did not respond to your comment. Please refer back to where you made your post and I responded (twice), and to the many times since then that you’ve misrepresented this.
The forecasts above are from this:
Probabilistic prediction of climate using multi-model ensembles: from basics to applications
T. N. Palmer was one of the authors. If we were looking at decadal or longer regional precipitation forecasts using ensembles I think we’d get similar looking outcomes to the figure’s forecasts. I was trying to highlight the similarities.
“You have repeatedly stated”
When you watch the whole thing.
If you agree that we don’t NEED a damage function, then I’m glad to see you agree with me. Go in peace.
The point is this.
In YOUR approach YOU need a damage function. That is why you repeatedly demanded one.
but YOU are not the only person thinking about the problem.
Other people think about the problem, and in their approaches one doesn’t need a damage function.
You might prefer a damage function approach, but there is no logical need for one.
Since the policy makers have decided on a 2C guard rail, the problem changes form.
You might not like that… Pfft
So, I’ll accept your apology
You seem to be really losing it. You can’t make a rational sensible response.
Clearly I said nothing of the sort. So you’ve resorted to the same sort of intellectual dishonesty as Stephen Segrest uses to dodge and weave and misrepresent others. Basically, a liar.
Exactly my point. The problem has changed from being tackled by rational, objective research and analysis, to one driven by beliefs and ideology. We are back to the pre-Enlightenment approach. That seems to be what you are advocating. i.e. we should just trust the ideologues.
Peter Lang said:
“We are back to the pre-Enlightenment approach. That seems to be what you are advocating. ”
Sort of. But the fact is science was always a religion, and a particularly brutal and nasty one at that.
It’s just that people are starting to realize this fact, now that it has successfully taken over and is showing its utter bankruptcy and inability to answer any questions that are interesting or relevant to society.
Didn’t you know the game was crooked? Yes, but it was the only game in town.
40 Earths: NCAR’s Large Ensemble reveals staggering climate variability
“The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change.”
As ENSO and the AMO are responses to solar variability, they are not modeling the real climate system in the first place. And there’s the Arctic warming conundrum again, it warms with increased -NAO, but increased forcing of the climate increases +NAO.
A large ensemble of model runs produced 30 alternate realities for N. American temp trends from 1963-2012 [link]
in other words, honest words, the climate models are much worse than useless.
So you use a result from the models to conclude that the models are useless?
…if the models are useless, though, then you can’t draw that conclusion, as you used the models to do it. It’s self-contradictory.
In any case, they’re trying to use the models to quantify the natural variability. Does that seem useless to you? Don’t we want to know the natural variability?
This assumes that small perturbations of initial conditions will tease out subtle variations in their coded math for their assumed natural climate drivers, leading to significant variance in their metrics of choice. The resulting mathematical envelope of variations then define natural variation of the metric.
During this, somehow, the math for anthropogenic drivers is unperturbed. Nice trick.
No, there are no variations in the coded math. The variations in the metrics are the result of using the same math, the same equations. Weather is chaotic, and greatly dependent upon the initial conditions, so the variations come from that. Using the same math.
No, they certainly don’t define the natural variation – the real-world laws of physics do that. But they are a piece of evidence we can use to try to ascertain the natural variation. Not without caveats, of course — part of the point of such work is to better understand both the similarities and differences between real-world natural variability and modeled natural variability.
But it’s basically nonsense to say that these results are useless. They help us understand both the real world and the models better.
Modern ecosystems owe their existence to a previously undocumented period of global cooling [link]
And if the cooling was indeed driven by a reduction in atmospheric CO2, it could explain a critical shift in global vegetation that occurred during the late Miocene: the transition from forests to grassland and savanna in the subtropical regions of North and South America, Asia and Africa. These ecosystems are still present today. In Africa, these are the habitats associated with the evolution of our early human ancestors.
If the reduction in CO2 could explain a critical shift in global vegetation that occurred during the late Miocene, then we had better not repeat the reduction of CO2; that could be totally stupid, and likely would be.
Full presentation from Andrea Saltelli: Climate numbers and climate wars. A fatal attraction? [link]
A list of criticisms moved to the realism of:
“[…] the point is that estimates based on these models are very sensitive to assumptions and are likely to lead to gross overestimation.”
The real point is that estimates based on these models are totally baseless, very sensitive to assumptions, and are likely to lead to gross overestimation and false estimation.
Correct!!! The question is: when will the CAGW alarmists start to question their beliefs by doing objective, unbiased research into the foundations of their beliefs?
29 links in the post and not a single one with empirical evidence to support the contention that GHG emissions are doing more harm than good for planet Earth.
Mosher will be very pleased he can make it through another week without having to face up to the facts that his beliefs in CAGW (or that GHG emissions are dangerous or net damaging) are unsupported.
Since the damages are almost ENTIRELY in the future, it’s hard to
have empirical evidence of the future. If you believe Tol there are even net benefits up to a point…
Simple: I predict that if Peter sticks his head in the toilet, his face will get wet.
I have no empirical evidence of his face being wet today, but I predict,
in the future,
based on what we know… we predict future wetness.
Peter, go test that.
If Peter chooses not to place his head in the toilet, what then, Confucius?
This is a really stupid comment by someone who thinks he’s God’s gift to modelling. We have no evidence of anything in the future. That doesn’t prevent you and other modelers using past experience and evidence from the past to make projections of the future. The models you love attempt it all the time.
If you believe we can’t project future damages of GHG emissions, what basis is there for you and the other Alarmists telling us GHG emissions will do more harm than good?
We have half a billion years of evidence that the planet is in a rare cold period – only the second time the planet has been this cold in the past half-billion years – and life thrived when warmer but not when colder. It is rare for there to be ice at the poles – only 25% of the past half billion years (ref. IPCC AR4 WG1 Chapter 6 Figure 6.1 https://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-6-1.html ).
There is no persuasive empirical evidence to suggest that 2C or 3C warming will do more harm than good. [As an aside it probably can’t happen anyway, given lower ECS, TCR, until the tectonic plates realign to block flow of water around Antarctica and allow free flow of ocean currents around the globe in the low latitudes.]
But, back to your argument that we don’t need a damage function to estimate the postulated damages of GHG emissions. Can you please explain in a few short, clear, well written paragraphs (in English), what you are advocating as an alternative way to estimate the damages of unmitigated GHG emissions, and the social cost of carbon.
Also, if your suggestion could work, please explain why you believe the climate economists are still using IAMs, and still using damage functions. Why is EPA, Stern and everyone else that estimates damages, net benefit of GHG abatement policies, and optional carbon prices still using IAMS which depend on damage functions?
Please try to answer these questions directly and clearly without dodging and weaving.
Also, could you please post the link to the video again so others here can look at it and get involved in this discussion?
Perhaps some day you will get to model something that works similar to this Earth’s atmosphere, rather than your religious fantasy.
Since the damages are almost ENTIRELY in the future, its hard to
have empirical evidence of the future.
If the claimed damages are entirely in the future, the claimed damages could be entirely mythical.
There isn’t going to be (net) an agricultural issue. The CSIRO 1982-2010 study showed 20+% more growth in all locations we weren’t actively killing off vegetation. The low growth areas in the CSIRO study are a map of deforestation.
The Antarctic is gaining ice. The “vulnerable” areas of Greenland and Antarctica are less than 10% of the total ice mass. It is a coin toss if increased melting will outpace increased snowfall.
The roughly 40,000 GT of carbon in the ocean, which is being increased by 1/5000th per year during the 30 years or so that we are going to be burning fossil fuels in quantity, is going to change less than 1/100th of the average variation in ocean alkalinity. Until a serious study determines how much carbon it would take to completely neutralize the ocean, it isn’t even clear there will ever be a problem.
I guess the solution is to hold off the discussion of global warming until warmunists can show some actual net damages. We have plenty of time: we are only adding 4 GT/yr to the atmosphere, and the 6 GT of carbon leaving the atmosphere each year indicates that atmospheric carbon has a short lifetime; we will get out of much of the danger faster than we got into it.
Lang scored a major goal, Mosh. You can’t know what the future damages will be and you also can’t know what the future benefits will be. Benefits appear to overwhelm any damages currently. Based on that, the future looks bright.
I’m by no means a modeler, Jim D, but even I know you can’t take the average of 40 models and get anything meaningful, not when they are all different.
It’s like taking the average score of NBA, NFL, MLB and NHL games and using it to predict the average score in any one league. The parameters differ, and the average across leagues tells you nothing about any one of them.
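The analogy can be made concrete with a quick sketch; the per-league scores below are made-up round numbers for illustration, not real statistics:

```python
# Illustrative typical points per game for four leagues (made-up numbers).
typical_scores = {"NBA": 110, "NFL": 23, "MLB": 4.5, "NHL": 3.0}

# Pooled average across all four leagues.
pooled_mean = sum(typical_scores.values()) / len(typical_scores)

# How far the pooled mean is from each individual league's typical score.
errors = {league: abs(score - pooled_mean)
          for league, score in typical_scores.items()}

print(f"pooled mean: {pooled_mean:.3f}")
for league, err in errors.items():
    print(f"{league}: typical {typical_scores[league]}, error {err:.1f}")
```

The pooled mean (about 35) is far from every league’s typical score, which is the commenter’s point: averaging structurally different systems predicts none of them.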
You can run an experiment to test your hypothesis.
You can stick your own head (or maybe bribe a grandkid into doing it) in the toilet and see if your face gets wet.
Personally, I’m of the opinion Peter might be well served by volunteering for the experiment. Perhaps someone can use the time to pull that stick out while he’s bent over the porcelain fixture.
29 links in the post and not a single one with empirical evidence to support the contention that GHG emissions are doing more harm than good for planet Earth.
That sums it up very well.
Following the logic of Lang’s arguments, Dr. Mario Molina should have never been awarded the Nobel Prize on his ozone depletion work. President Reagan should have never accepted Dr. Molina’s work in getting World consensus for the Montreal Protocol. Per Lang’s argument, the World should have listened to Fred Singer who argued (and still argues) that there is no valid empirical evidence of CFCs and ozone depletion. After all, everything Molina had done at the time of the Montreal Protocol was in a controlled lab environment and with modelling.
Lang continues to be a good example of the Dunning-Kruger effect on a broad range of topics (macro grid engineering economics, U.S. tax laws and energy markets, etc., etc.). Lang’s MO is to come up with incredible straw men and then bully/demand that any respondent refute his arguments. Denizens at CE should notice that true experts like Tol (and others) just walk away from Lang; it’s a waste of time.
Ah, the Nobel Prize. I believe Obama got one for breathing.
Under Capt Dallas’s argument, Nobel Prizes in the hard STEM sciences just don’t mean anything. Everything in this world has been ruined by Obama.
Segrest is finally getting it.
Stephen, you get a bigger kick out of overreacting than anyone I know. “My” logic is that an appeal to authority is just that. There is no infallible “scientific” authority.
Captain — No, under your “logical argument” there is no difference between the “process or content” of awarding Nobel Prizes in STEM (e.g., Molina in Chemistry) and Obama’s Peace Prize.
Under your argument, both STEM and non-STEM Nobel Prizes are just “political”.
By the way — what scientific theory IS absolutely infallible?
Jim2 has the best label/moniker on the CE Blog — meaning hard core binary, black-white, either-or. No grey areas allowed in Jim2’s World.
Stephen, “Captain — No, under your “logical argument” there is no difference between the “process or content” of awarding Nobel Prizes in STEM (e.g., Molina in Chemistry) and Obama’s Peace Prize.”
Damn, I never realized I was that deep :) What argument I thought I was making is that even Nobel Prizes go to idiots every now and again. Perhaps you are ready for some of that Nobel Prize winning electro shock therapy?
While Molina’s work is fascinating, it isn’t the only influence on stratospheric ozone. Volcanoes have a surprisingly large impact, and last I checked they are somewhat natural. The guy or gal that “figures” out all of the influences of “normal” volcanic activity on climate on long time scales should get a double Nobel Prize with oak leaf clusters.
I guess my saltiness triggered moderation :)
Let’s see, Segrest,
The ozone hole was found in a location different from what the science predicted; it still exists 25 years after the Accord, waxes and wanes from year to year, but remains relatively similar in size after those 25 years and has produced none of the catastrophic impacts attributed to it.
I’d say game awarded to Fred Singer.
Love it. Solving the problem by reducing CFCs means … Fred Singer was right about there not being a problem.
The only way to convince people like you that a C in front of AGW belongs there as more than just a form of “skeptical” ridicule is to actually let the C happen. And maybe not even then.
If AGW doesn’t do us, this brand of “logic” stands a good chance of finding some other way.
The piece by Romer https://paulromer.net/wp-content/uploads/2016/09/WP-Trouble.pdf linked to by Andrea Saltelli is well worth a read.
Re: “sustained rise in methane, but from tropics/ag, not oil/gas ”
The conventional theory (Callendar, Revelle, Hansen, Lacis) frames the AGW problem in terms of “extraneous carbon”, external to the surface-atmosphere carbon cycle and climate system, brought up by man (or woman) from deep under the ground, where such carbon had been sequestered from the surface-atmosphere system for millions of years. The essence of this theory is that the injection of “extraneous carbon” into the delicately balanced surface-atmosphere system, in the large quantities involved, is not natural but an unnatural and dangerous perturbation that will upset the climate system, with possibly catastrophic results.
Emissions of carbon from surface sources, such as forest fires, the use of wood fuel, the use of dried cow dung as fuel (in India), and the carbon exhaled from any aperture by animals and by man (or woman), represent the conversion of surface carbon from one form into another and therefore do not constitute the injection of extraneous, previously sequestered carbon into the surface-atmosphere system. This is why, for example, the use of wood pellets as fuel in power plants is “green”: the carbon in wood pellets is surface carbon, not extraneous carbon.
1. Carbon emissions in the form of methane due to enteric fermentation and rice cultivation are surface phenomena and therefore green.
2. Carbon emissions in the form of methane from hydroelectricity generation derive from vegetation and other carbon life forms that were flooded when the reservoir was created. This carbon is also surface carbon and therefore green.
I could cite more examples, but these two should suffice to point out that AGW theorists appear to have forgotten AGW theory and are now counting all carbon without regard to whether the “emission” is part of the surface system or a perturbation of it by previously sequestered carbon brought up by man (or woman) from deep under the ground in the form of fossil fuels.
This opinion about hydroelectric effects on climate is not from one of the AGW theorists or IPCC, who have never considered hydro as a climate-change influence and most likely never will. It appears to be the opinion of only the author of this piece (Wockner) as far as I can tell. It won’t catch on for the reasons you mention.
Well, in the early 2000s, methane(CH4) appeared to flat line.
In the last decade, methane has begun to increase again.
Meanwhile, CO2 forcing appears to be decelerating (the ten-year trends, which smooth out the noise, indicate a decline since 2007). Presumably the ten-year trends reflect secular forces, since the Paris accord is only now being agreed to and not yet implemented.
So the question is: should the focus be on greenhouse gases that are already decelerating (CO2, CFCs, et al.), or on the GHGs that are accelerating (CH4 and N2O)?
And does that imply the Burrito Tax?
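The deceleration claim can be checked directly, since CO2 forcing grows with the logarithm of concentration. A sketch using the standard simplified expression F = 5.35 ln(C/C0) from Myhre et al. (1998); the CO2 concentrations below are illustrative round numbers, not official annual means:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): F = 5.35 * ln(C / C0).
C0 = 278.0  # approximate preindustrial CO2, ppm

def co2_forcing(ppm):
    """Radiative forcing in W/m^2 relative to preindustrial."""
    return 5.35 * math.log(ppm / C0)

# Illustrative annual-mean CO2 concentrations (ppm), a decade apart.
c_1997, c_2007 = 363.0, 383.0
c_2006, c_2016 = 381.0, 404.0

# Decadal change in forcing (W/m^2 per decade) for the two windows.
trend_a = co2_forcing(c_2007) - co2_forcing(c_1997)
trend_b = co2_forcing(c_2016) - co2_forcing(c_2006)
print(f"1997-2007 forcing change: {trend_a:.3f} W/m^2")
print(f"2006-2016 forcing change: {trend_b:.3f} W/m^2")
```

Whether the decadal forcing trend is actually declining, as the commenter claims, depends on which concentration data are plugged in; the point of the sketch is only how the comparison is computed.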
Zona published a study about the effects of drainage in permafrost earlier this year in the journal Nature Geoscience. Additionally, she and fellow SDSU ecologist Walt Oechel, along with colleagues at several other institutions, published another study last year showing that the emission of methane, another greenhouse gas, is highest in the Arctic during the region’s cold season. That was surprising, as most scientists thought little if any greenhouse gases escaped the frozen soil during the cold season.
Sure enough, Kwon’s recent study shows a similar trend for carbon dioxide.
“Importantly, Kwon and colleagues show the increase is highest during the cold season, a notoriously under-studied part of the year in tundra ecosystems,” Zona wrote.
More data is needed to better understand the long-term implications of these findings on larger climate change patterns, Zona said. But that’s difficult, as funding for such studies is scarce.
Read more at: http://phys.org/news/2016-09-biologist-comments-startling-climate.html#jCp
Where have these people been? In cold times, CO2 is not reduced by green things that grow; in warm times, CO2 is reduced by green things that grow. Look at actual data for CO2 in the NH.
Cannot speak to the methane paper, but the CO2 paper is just awful. The seasonal nature of the Keeling curve has been evident since 1958. Increases in NH winter, decreases in NH summer.
“showing that the emission of methane, another greenhouse gas, is highest in the Arctic during the region’s cold season.
Sure enough, Kwon’s recent study shows a similar trend for carbon dioxide.”
That’s an obvious result that should have been expected given paleoclimatology. At the start of a glacial period, GHG levels remain elevated despite strongly dropping temperatures. The only reasonable explanation is that the biological store is returning part of what it gained during the interglacial. The same explanation likely holds for high northern latitudes: during the winter they return part of what was gained during the summer.
I am often surprised by the naivety of many climatologists publishing articles in good journals. Don’t they read the relevant literature?
The evidence says no.
The Snyder paper: “This result suggests that stabilization at today’s greenhouse gas levels may already commit Earth to an eventual total warming of 5 degrees Celsius (range 3 to 7 degrees Celsius, 95 per cent credible interval) over the next few millennia as ice sheets, vegetation and atmospheric dust continue to respond to global warming.”
Is this a joke? An estimate of non-feedback warming of 0.7 °C from preindustrial times gives a feedback factor of 7. Wouldn’t that result in a yo-yo kind of warming and cooling as the CO2 level goes up and down? And some stability in a couple of thousand years, with great loss of ice sheets and vegetation and air full of dust. Strange science.
A new climate theory: Dust in the atmosphere is warming the Earth.
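The arithmetic behind the feedback-factor objection is simple. Taking the commenter’s 0.7 °C no-feedback figure at face value against Snyder’s 5 °C central estimate, the implied system gain, and the corresponding feedback fraction in the standard gain = 1/(1 − f) notation, can be sketched as:

```python
# Implied feedback gain: total equilibrium warming / no-feedback warming.
no_feedback_warming = 0.7   # deg C, the commenter's estimate for today's GHG levels
total_warming = 5.0         # deg C, Snyder's central "Earth system sensitivity" estimate

gain = total_warming / no_feedback_warming

# In standard feedback notation, gain = 1 / (1 - f); solve for f.
f = 1.0 - 1.0 / gain

print(f"gain = {gain:.2f}, feedback fraction f = {f:.2f}")
```

A feedback fraction of about 0.86 sits close to the runaway threshold at f = 1, which is what the commenter finds implausible.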
Well, they start with the assumption that CO2 provides the lab level forcing in the real world, all things being equal.
All things are never equal.
The hot spot, the atmospheric-profile change, the 21st-century warming, Antarctic melting (it is gaining ice), etc., all predicted by global warming theory, are simply wrong.
About 90% of the ice sheets are inaccessible to melting. So they may be partially right (0.5°C), but the closed-loop feedback for real warming just doesn’t exist.
It isn’t as warm as it was millions of years ago when the ice sheets formed. The claim that the ice sheets will “unform” at modern, cooler temperatures is foolish and unfounded.
Carolyn W. Snyder, building a green career: “I am the Director of U.S. EPA’s Climate Protection Partnerships Division. The Division uses the power of partnerships to remove market barriers for energy efficiency and renewable energy and reduce greenhouse gas emissions, resulting in economic and environmental benefits. The Division’s programs include the EPA’s flagship partnership program, ENERGY STAR, which offers energy efficiency solutions across the residential, commercial, and industrial sectors. Other programs include the Green Power Partnership, the Combined Heat and Power Partnership, the Center for Corporate Climate Leadership, and the State and Local Climate Energy Program”.
Here’s a paper that claims positive cloud feedback to the AMO based on observations.
What? A positive feedback to a Unicorn? Say it ain’t so. Next thing you know they will be saying there are positive feedbacks to longer term “natural” variability. Oh, the humanity!
U ding dong. If you can name it….amo…it’s not a unicorn.
Next. Amo can’t cause anything.
Check your units.
How many Unicorns in an Acorn?
Mosher, “Next. Amo can’t cause anything.
Check your units.”
That Peter Principle thing is interesting. A shift in energy from the tropics to the poles can cause an increase in average global temperature without “causing” a change in total energy. Check your model.
It can also cause a change in total energy.
steven “It can also cause a change in total energy.”
Yep, just changing the period of the AMO “oscillation” can change the rate of heat uptake/loss. The “oscillation” still reverts to zero at some point, but there is a change in total energy.
All the stuff on science keeping its credibility! Dr. Curry, you are leading this. Congratulations. Science matters, and the reason witch doctors were not kings is that you should never bring only a magic knife to a gunfight.
Nothing like another models are about worthless discussion.
“In response to the applied historical and RCP8.5 external forcing from 1920 to 2100, the global surface temperature increases by approximately 5 K in all ensemble members. This consistent ∼5-K global warming signal in all ensemble members by year 2100 reflects the climate response to forcing and feedback processes as represented by this particular climate model. In contrast, internal variability causes a relatively modest 0.4-K spread in warming across ensemble members.”
It was a small spread: 5 K ± 0.2 K. With a sample size of 30, it might be reasonable to expect a confidence interval as large as plus or minus 10%; we get less than half that for seemingly all runs. This isn’t showing a lot of natural variability globally, and tiny changes in initial conditions still give well-constrained outcomes globally. Yet the banner image shows regional changes, and Deser is highlighting internal variability and irreducible uncertainty. Plus or minus 5% is not a lot.
The 30 plots show chaotic behavior on regional scales, but all global runs fell into the +5 K basin. Someone brought up 40 runs; some story used that number, but it’s wrong. They didn’t lose any of the runs to a glacial or to runaway warming. While it’s nice that they bring up chaos, did we see any at the global scale?
I see they used RCP8.5: overwhelming, smothering CO2. Say I have the best-draining soil; it cannot get saturated or drown my turf. I am trying to determine rainfall by measuring the turf’s growth, and no one has a rain gauge. I do this with automatic sprinklers on a set schedule, giving 3 inches a week. The turf grows well. I conclude natural rainfall has hardly anything to do with it: oh sure, it helped by about 5%, but it was mostly the sprinklers. I do this 30 times and take the spread of results to determine natural variability. My study does not pass peer review, and I can’t figure out why.
I am not one of those arguing that models are near worthless; I find them interesting. I am trying to learn to what extent they are chaotic, whether planned or not, and I am interested when they seem to output chaotic behavior.
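The signal-versus-spread comparison being discussed can be put in numbers. A sketch using synthetic values standing in for the 30 ensemble members (a ~5 K forced signal plus ~0.2 K of member-to-member internal variability, per the quoted paper; the draws themselves are invented):

```python
import random
import statistics

random.seed(0)

# Synthetic stand-in for 30 ensemble members: ~5 K forced warming
# plus ~0.2 K of internal variability per member.
members = [5.0 + random.gauss(0.0, 0.2) for _ in range(30)]

mean_warming = statistics.mean(members)
spread = statistics.stdev(members)

# Fraction of the simulated warming attributable to internal variability.
fraction = spread / mean_warming
print(f"mean {mean_warming:.2f} K, spread {spread:.2f} K, fraction {fraction:.1%}")
```

The internal spread comes out at a few percent of the forced signal, which is the commenter’s point: the global response is tightly constrained even while regional behavior is chaotic.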
Go with what’s been working is a *very* popular theme here. The oddest thing (but not really) is that Denizens rarely argue for keeping CO2 within historic bounds.
Interesting what you find interesting. I’m not aware that GCMs haven’t been chaotic since their first incarnations. As for intentional, I would hope that people intending to model a chaotic system would at the very least not be surprised when their model of same turned out to also exhibit chaotic behavior.
The key question as I see it is whether the real system is a more predictable sort of chaos. For the numerous commentators who think it isn’t — Turbulent Eddie is a perennially good example — I seriously question the wisdom of externally forcing it in favor of other options.
To me, this is a supreme irony of the “Teh Modulz suck” argument against CO2 mitigation.
The model is a system of three ordinary differential equations, now known as the Lorenz equations:
dx/dt = σ(y − x)
dy/dt = x(ρ − z) − y
dz/dt = xy − βz
This is my argument. The above equations do not appear in the GCMs. Assume the output of the GCMs are similar to the output of a Lorenz model. We get similar PDFs.
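Whether or not the Lorenz equations appear literally in any GCM, their signature behavior, tiny initial differences growing into divergent trajectories, is easy to reproduce. A minimal sketch using plain Euler integration with the classic parameter values (σ = 10, ρ = 28, β = 8/3):

```python
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz system."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, y0, z0, steps=20000):
    """Integrate from (x0, y0, z0) for steps * dt = 20 time units."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two runs differing by one part in a million in the initial x value.
a = trajectory(1.0, 1.0, 1.0)
b = trajectory(1.000001, 1.0, 1.0)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"final-state separation after 20 time units: {separation:.3f}")
```

The two trajectories stay bounded on the attractor, but their initial one-in-a-million difference grows by many orders of magnitude: sensitive dependence without unbounded behavior, which mirrors the point made about GCM ensembles.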
Near the surface of the Earth, the acceleration due to gravity is g = 9.81 m/s².
Say we did not know the above but had a model that used other means to get the same results: for instance, a table of distances from the start point per equal unit of time. We made our model by recording observations, and we are accountants, not mathematicians. Our model gives outputs without the accepted formula. If we ask why, it’s because the formula is part of nature.
So the question is, are the Lorenz equations inherent in nature?
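The accountants’ model can be sketched: record fall distances at equal time intervals and notice the constant second difference, which lets you extend the table without ever writing d = ½gt². The “observations” here are generated from that formula purely for illustration:

```python
# "Observations" of distance fallen (m) at one-second intervals, as an
# accountant might record them (generated from d = 0.5 * 9.81 * t**2 here
# purely for illustration).
observations = [(t, 0.5 * 9.81 * t**2) for t in range(1, 6)]

# Empirical model: the distance table has a constant second difference,
# so predictions come from extending the table, no physics formula needed.
distances = [d for _, d in observations]
first_diffs = [b - a for a, b in zip(distances, distances[1:])]
second_diff = first_diffs[1] - first_diffs[0]   # constant

# Predict the distance at t = 6 s from the table alone.
predicted_t6 = distances[-1] + first_diffs[-1] + second_diff
actual_t6 = 0.5 * 9.81 * 6**2
print(f"predicted {predicted_t6:.2f} m, formula gives {actual_t6:.2f} m")
```

The constant second difference the table-keeper discovers is 9.81 m per one-second interval, i.e., g; the formula was, as the commenter says, part of nature all along.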
The results of the models have to be chaotic, I think, as weather is chaotic. This is what that IPCC blurb meant when it said “climate is chaotic” – the specific, individual path of the weather through time is very dependent upon initial conditions.
But that doesn’t mean that the climate variations are unbounded. Scientifically, a system can be chaotic while demonstrating very small internal variation.
Which is why we’re always talking about 30-year statistics, and why the IPCC puts things in terms of ensembles and variation. It’s another way to think about climate which incorporates the chaotic variations.
“Will they (robots) care about rising temperatures? “They won’t give a fourpenny bleep about the temperature, because to them the change will be slow, and they can stand quite a big change without any fuss. They could accommodate infinitely greater change through climate change than we can, before things get tricky for them. It’s what the world can stand that is the important thing. They’re going to have a safe platform to live in, so they don’t want Gaia messed about too much.””
Can young researchers thrive in life outside of academia? [link]
Sort of. Debt austerity has virtually destroyed R&D in companies as they cancel such projects so they can buy back their stock for investors to cash in on.
And the academy offers up jobs paying $50k in communities where the median house price is $500k.
Bad time to be a scientist. Bad time to bother even going to college, I am beginning to think.
See this one-page summary on Energy in the Solar System:
It hasn’t been this negative since 2013.
So effing what?
So we are probably still in a negative PDO phase and the past 2 yrs were just a blip.
I got a screen that said: “Your comment cannot be posted.”
Just realize the PDO is not real. The Blob and El Nino ruled from about 2014 (2013?) through 2016; they drove PDO.
I’ll leave out comments about models and data manipulation.
The PDO drops into negative numbers.
The real question is should we trust IPCC climate models to the extent we will “fundamentally change” Western societies and economies to conform to Third World notions of social justice?
Dick around with the arcane all you want, but please recognize that you are supporting or opposing a collectivist model of our future.
The IPCC climate models indicate we will be blasting past the +2C barrier, even with estimated human CO2 production in line with current reduction targets by the ’15 Paris pledges. According to our oh-so-intelligent U.S. political leaders, halting all U.S. CO2 production would not affect the outcome.
So, all you climate blog gunslingers, what do you want to do about the hard facts? Argue more arcane factoids?
Another gimmick in the house.
JCH, I assume your “Another gimmick in the house.” refers to the U.S. political solutions that could not affect the outcome even if the CAGWers were right. Gimmicks are the hallmark of fearmongers seeking political or monetary gain.
A random thought: What is going to happen when the WMO suggested temperature anomaly base period is updated to the 30-year period 1991 to 2020?
Pardon if someone else has posted this, but this seemed extraordinarily alarmy:
“According to Ault’s research, if we continue producing greenhouse gases at our current rate, then there’s a 70 to 99 percent chance of a megadrought that would stretch across the West, from San Francisco to Boulder, Colorado to the Gulf of California. The scientists’ drought analysis was published today in the journal Science Advances.”
The fruition of the Green Dream: mankind to succumb for the benefit of other species such as the Polar Bear and Snail Darter. Evolution… men at work.
“… men at work.” on modelturbation.