by Judith Curry
A few things that caught my eye this past week (well actually, the past month).
Very good overview on the Madden Julian Oscillation and its implications for rainfall by Carl Schreck [link]
NOAA’s atmospheric river information page [link]
Assessing temperature pattern projections made in 1989 [link]
NASA study improves forecasts of summer Arctic sea ice [link]
Pacific Ocean Heat Content During the Past 10,000 Years [link] … Important research suggesting that the Medieval Warm Period was global.
Forget climate change: Human-started wildfires expand the fire niche across the United States [link]
New temperature record from China gives no hint of any recent warming that can be attributed to atmospheric CO2 [link]
Declining Arctic sea ice influences European weather—but isn’t a cause of colder winters [link]
Counterintuitive claim: Slower snowmelt in a warming world [link]
Review article: Abrupt climate changes of the Holocene [link]
A “chaotic solar system” is the root cause of #ClimateChange [link]
Climatologists say Labrador Sea could cool within a decade before end of this century, leading to unprecedented disruption [link]
The ‘bootstrap’ philosophy of nature: Physicists have found evidence of a mathematical structure that encompasses all quantum theories: [link]
Revolutionary Power Plant Captures All Its Carbon Emissions, At No Extra Cost [link]
A role for tropical forests in stabilizing atmospheric CO2: [link]
Heavy snowfall in Greenland [link]
Study shows China’s severe weather patterns changing drastically since 1960: substantial decrease of severe weather events [link]
“Scientists uncover huge 1.8 million square kilometers reservoir of melting carbon under Western United States” [link]
Don’t expect media focus when comprehensive analysis of sea-bed methane release points firmly away from alarm. [link] …
Scientists Solve Ocean ‘Carbon Sink’ Puzzle [link]
“Regional variations in the ocean response to tropical cyclones: Ocean mixing versus low cloud suppression” [link]
Now in NatureClimate – Snapshot: Extreme Arctic heat [link]
Powerful new tool from NOAA to indicate possible location of life-threatening storm surge. [link]…
Historical carbon dioxide emissions caused by land-use changes are possibly larger than assumed [link]
Social science and policy
Why Some of the Worst Attacks on Social Science Have Come From Liberals [link]
Meta-analysis finds abstracts in climate research papers to be “sensationalized” compared to text, like other fields [link]
Science curiosity trumps politically biased information search [link] …
New article published by Andrea Saltelli “What is wrong with evidence based policy, and how can it be improved?” [link]
Ideological science: BeeGate illustrates how science, activism and politics mix to produce unsound policy. [link]
Industry sponsorship and research outcome [link]
Thoughtful paper on conflation of science & values: The Biodiversity Conservation Paradox [link]
New study: Climate scientists engaging in advocacy have latitude to do so without harming scientific credibility: [link] …
Preparing for disruptions [link]
Long but fascinating read: An epidemic of unnecessary treatment [link]
Labels like “climate denier” undermine civil dialogue & increase destructive polarization around issues. [link]
Is the American elite really elite? [link]
How arguments about nuclear weapons shaped the climate debate [link]
Important & interesting: Motivated Responding in Studies of Factual Learning [link] …
How to Embrace Uncertainty in Participatory Climate Change Risk Management—A roadmap [link]
‘Alternative facts’: A psychiatrist’s guide to twisted relationships to truth [link]
Looks like warming has been a net good: Global economic impacts of climate variability and change during the 20th century [link]
About science and scientists
The lure of rationality: Why does the deficit model persist in science communication? [link]
Restoring Trust in Expertise Requires That Those Describing Themselves as “experts” Embrace Uncertainty [link]
Interesting essay on the history of technology and innovation [link]
History: Interview with Nikola Tesla [link]
Interview with Manabe [link]
A history of Joseph Fourier’s ‘political science’ [link]
Top U.S. scientific body not disclosing conflicts of interest on #GMOs. [link]
Where it is argued that “Galileo’s vast reputation, & the hyperbolic accolades that go with it, are not justified by the real history.” [link]
How policies designed to improve academia are actually messing it up [link] …
Treating science with the respect it deserves [link]
The History of Zero: How Ancient Mesopotamia Invented the Mathematical Concept of Nought and Ancient India Gave It Symbolic Form [link]
Researchers do a good job of estimating the size of errors in measurements but underestimate chance of large errors [link]
Certainty in complex scientific research an unachievable goal [link] …
Some provocative comments on communicating climate science & echo chambers [link] …
Hey there’s a new man-made hockey stick to add to the science debate.
Thousands of Man-Made Minerals—Another Argument for the Anthropocene.
http://www.scientificamerican.com/article/found-thousands-of-man-made-minerals-mdash-another-argument-for-the-anthropocene/
We have added hundreds of thousands of new chemical compounds to the biosphere in just a few hundred years. Seeing it on a graph makes Hansen’s hockey stick look positively anemic.
And the implication is exactly what?
This is the root of the problem. Just because things have been done does not mean they have significance. Just as just because it happens to be warming, does not mean we have done it.
Just more circumstantial blather. Show me the data, your intuition doesn’t work for me.
Ha ha, you took the bait. What does warming have to do with anything? Well, if I mix lots of chemicals and elements together in a closed system and change the temperature, I fully expect a catalytic reaction. Maybe benign or maybe dangerous, but at the most basic level it’s an uncontrolled experiment. Personally I ignore first order effects of climate change like measuring the average global temperature, polar ice and CO2 levels. I’m watching for second order effects that actually cause accelerated biosphere divergence. I suspect you will need to see personal economic damage before you start connecting the dots.
Totally open to ANY order effects. Still waiting…
Also Trump wants to defund NOAA
http://www.csmonitor.com/Environment/2017/0304/Budget-cuts-to-NOAA-threaten-climate-monitoring-satellite-program
and NASA
https://qz.com/919982/a-nasa-engineer-explains-why-trumps-plan-to-cut-the-space-agencys-climate-science-program-is-a-lot-harder-than-it-sounds/
satellite observation programs.
Thus spoke Trump: If thine eye offend thee pluck it out.
What’s the difference between being vindictive and malicious as long as it makes you happy?
Know the difference between budget cuts and defunding?
Thought not
Sorry, see comment above.
Maybe he just wants to defund the data parts of NOAA and the earth-pointing satellite parts of NASA. Bates must be pleased with himself sticking it to all his former NOAA colleagues in this very blog. Smith and Trump picked it up. NCEI is being hit hard. It seems blogs have consequences.
Pacific Ocean Heat Content During the Past 10,000 Years. Unfortunately, the link is paywalled. Never mind the past 10,000 years; what was the Pacific Ocean heat content as of January 1, 2017? Can we measure it to a 5% accuracy? 1% accuracy? 0.1% accuracy? Can we express the margin of error in favored terms of Hiroshima atomic bombs?
Rosenthal et al., 2013 It is not a new paper.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.732.130&rep=rep1&type=pdf
It is however a very important paper. It shows that the subsurface waters of the Indo-Pacific Warm Pool have been much warmer during previous warm periods, and especially during the Holocene Climatic Optimum.
It essentially destroys Marcott et al., 2013 by showing that their tropical reconstruction, which shows warming instead of cooling, is absurd.
Javier, thank you. I am not defending Marcott, or attacking Rosenthal; I merely think that both are unreliable. I just don’t believe that we can estimate the heat content of the Pacific Ocean with a precision … how would you define a precision of such a measurement? We simply don’t yet have a technology for such a task. And we did not have it 10,000 years ago.
Read the methods section. Proxies are proposed by researchers to represent past local climatic conditions. For example 18O isotopic abundance. If they give a coherent picture with what is known of past climate from other proxies they are accepted by other researchers. Then what they measure with a great degree of precision is the proxy.
When a proxy gives the same result as the calculations for the past orbital changes, as L04 does, it gives a lot of reassurance that the proxy is adequately capturing climatic changes of the past.
All we know about the climate of the past is based on proxies.
Here’s a good review of the Yair Rosenthal paper by Andrew Revkin of the NY Times Dotearth blog.
https://dotearth.blogs.nytimes.com/2013/10/31/10000-year-study-finds-oceans-warming-fast-but-from-a-cool-baseline/
Michael Mann was not pleased!
James,
Delia Oppo from Woods Hole, one of the authors of Rosenthal et al., 2013, has a very good reputation as a very careful researcher. She has another interesting article with Rosenthal and Linsley:
Oppo, D. W., Rosenthal, Y., & Linsley, B. K. (2009). 2,000-year-long temperature and hydrology reconstructions from the Indo-Pacific warm pool. Nature, 460(7259), 1113-1116.
http://users.clas.ufl.edu/rrusso/gly6932/Oppo_etal_Nature09.pdf
There is a score of paleoclimatologists whose research contradicts many of the propositions from the man-made-CO2-is-responsible-for-climate-change crowd. They just avoid being dragged into the controversy to prevent damage to their careers. Understandable. They are just waiting for the consensus hypothesis to stumble so they can come to the fore and say they knew it all along.
One can sign up for a free AAAS account to read the paper. As Javier mentions it is generally a very good effort.
I take issue with the following:
“The early Holocene warmth and subsequent IWT cooling in Indonesia is likely related to temperature variability in the higher-latitude source waters.”
The “warm pool”, even at intermediate depth, cannot be reasonably considered “sourced” at higher latitudes after traversing ~20,000 km of the tropical Pacific Ocean.
If any ocean water on earth can be considered tropical, the warm pool would be it.
At the level of ocean unity or equilibrium, likely on the order of 2k years or the “common” era, cold (or warm) inputs will factor in everywhere. To say that apparent density equivalence over the common era implies high latitude fresh water compensation is also a stretch, IMO.
“Holocene IWT cooling must have been largely compensated by freshening at the high-latitude source regions.”
Salinity is the purported engine of surface to intermediate level energy transport. Warming might be compensated by fresh water, cooling would be amplified.
My favorite. Pretty much says it all.
https://phys.org/news/2017-02-certainty-complex-scientific-unachievable-goal.html
“Climatologists say Labrador Sea could cool within a decade before end of this century”
You’ll find a slow AMOC drives a warm AMO.
Ouch…
“Bill Nye now thinks he is a psychologist
“In fact he’s not a scientist of any kind. He’s just an entertainer. And his account of cognitive dissonance is exactly ass-about. Cognitive dissonance arises when your prophecies fail. He is the one suffering from cognitive dissonance. When has a Greenie prophecy ever got anything right? He’s not even capable of Googling or he would not have made such a howler “
The article that Judith characterized as “Long but fascinating read: An epidemic of unnecessary treatment” is in fact long and fascinating and well worth the time. Worthwhile because it talks about the types of medicine and procedures we are likely to be prescribed: many will do nothing; some will make us worse; some will kill us; and all are costly.
When Evidence Says No, but Doctors Say Yes
https://www.theatlantic.com/health/archive/2017/02/when-evidence-says-no-but-doctors-say-yes/517368/?utm_source=atltw
Atenolol did not reduce heart attacks or deaths—patients on atenolol just had better blood-pressure numbers when they died.
With greater capability and effectiveness comes more risks.
Like an expensive, very fast car.
Intervention becomes seductive.
Just gotta see what that baby can do.
Passenger beware.
Not quite like a fast car. A fast car involves risks having to do with speed, etc., but the car is a system that’s been precisely engineered and all the consequences of increasing gas to the cylinder, for example, are well-known and reflected in the engineering of the entire system. Contrast that with a human body that’s incredibly complex and involves many unknowns. What we do know for certain, however, is that if you just eat your vegetables (and lots of ’em) and eat moderately, exercise, and don’t pump yourself up with alcohol and drugs, you’ll do very well.
UCAR Climate Variations in Earth History Figure 9 seems to indicate the climate models’ projections of temperature variations from tropics to poles are not consistent with the hard empirical evidence (i.e. rock-solid, unchanging, no-instrument-drift, geological record):
https://www.ucar.edu/communications/gcip/m10histclimvar/images/m10fig9.gif
Source: https://www.ucar.edu/communications/gcip/m10histclimvar/m10overview.html
This seems to be consistent with Scotese (2016) Figure 12 and 13:
https://html2-f.scribdassets.com/235ko6pvuo4jol5p/images/21-975a2df8d8.png
https://html2-f.scribdassets.com/235ko6pvuo4jol5p/images/22-20227bbb57.jpg
Source: 275277369_Some_Thoughts_on_Global_Climate_Change_The_Transition_for_Icehouse_to_Hothouse_Conditions
This seems to me to be very important. It may indicate the IAMs are overstating the damages and the benefits of GW, and mistakenly suggesting GW is harmful. For more background on this issue, see this comment: https://judithcurry.com/2017/01/29/the-threat-of-climate-change/#comment-836115
… and understating the benefits of GW …
Broken link for Scotese’s Figures 12 and 13: https://www.researchgate.net/publication/275277369_Some_Thoughts_on_Global_Climate_Change_The_Transition_for_Icehouse_to_Hothouse_Conditions
I came across a graph that seems to be definitive proof that CO2 can’t be responsible for any increase in temperature this century. Down-Welling IR has been decreasing according to this CERES data.
https://okulaer.files.wordpress.com/2015/11/dwlwir.png?w=640&h=252
Am I missing something?
Yes. That is all sky. Includes clouds. If you want to see the GHE by itself, you need to use clear sky only.
Sorry, I still don’t see how increasing temperatures can be attributed to CO2 with DWIR decreasing. If clouds or some other mechanism is overwhelming CO2’s increasing bandwidth, then that is what is controlling temperatures, not CO2. DWIR must increase in order for CO2 to be causing more heat retention.
gyan1,
Interesting observation, but you are reading too much into it. Under any simple construct of a heating model, the thing which determines whether something is heating or not is not the rate of change of flux, but the flux itself – specifically whether the net flux is positive or negative. All else being equal, and for small relative temperature changes, a LINEAR increase in forcing with time should asymptote to a CONSTANT net flux difference and a consequential LINEAR increase in temperature. Variations of forcing above (below) the linear should then give rise to an increase (decrease) in net flux. Such variations do not provide evidence of a lack of heating in the longer term. For downwelling LW in particular, if it were solely controlled by an exponential increase in CO2 concentration giving rise to a linear increase in CO2 forcing, we might expect it to rise to a constant value and stay there. But it is not solely controlled by CO2, nor is the forcing change strictly linear. Moreover, the downwelling LW which you show is not measured by CERES as such; it is a construct based on matching the more reliable estimates of TOA fluxes. So no definitive evidence, I’m afraid.
CO2 does not produce a linear increase in temperature. It is a Log function.
I understand that increasing CO2 bandwidth is retaining more heat.
It is being claimed that the current increase in temperature is from CO2. The net flux is declining. Increasing bandwidth from CO2 is not resulting in a net increase in DWIR. That means it can’t be responsible for an increase in temperatures. I don’t see anything in your reply that refutes this.
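The logarithmic dependence mentioned above is commonly written with the simplified expression from Myhre et al. (1998), F = 5.35·ln(C/C0) W/m². A minimal sketch of that standard relation (not from either commenter; just the textbook approximation):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2 yields 5.35 * ln(2) ~ 3.7 W/m^2, and each
# further doubling adds the same increment, not twice as much.
print(round(co2_forcing(560.0), 2))   # -> 3.71
```

This is why the forcing, and hence the first-order temperature response, grows with the logarithm of concentration rather than linearly with it.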
This seems interesting. Solid state battery with glass electrolyte. Claims 3x energy density and fast recharge time.
https://news.utexas.edu/2017/02/28/goodenough-introduces-new-battery-technology
I’m impressed that Professor Goodenough, the co-inventor of the lithium-ion battery, was the lead scientist; he must have been working on that glass solid-state design for years. At age 94 the guy isn’t in it for the money. Respect.
Dr. Braga (Portugal) was the European scientist who added the novel idea of silicone/sodium electrolyte solid structure.
UT @ Austin got the patent (worth billions if they can license it).
Impressive lab specs of 1200 cycles, sub zero operation, fast charging, cheap materials.
In addition to working in nuclear safety analysis I have also been a beekeeper since I was 15. I first became a beekeeper in about 1978. In 1984 I joined the service and during the next 7 years or so I did not have bees. Between 1983 and 1990 varroa and tracheal mites were introduced into the USA. They devastated US beekeepers both professional and hobbyist.
As a teenager in Michigan I would typically lose 1 or 2 hives out of six each winter. Now it is difficult to bring a hive through a Michigan winter. Professionals typically move their hives south and make divides and then come back to Michigan in the spring.
I suspected early on that CCD was another environmental scare story. For example, activists would make a big deal about 35% of US colonies being lost in a given winter, even though this is not that much of a deviation from the norm. It is also why I became skeptical of stories in the media. I can reasonably be considered very knowledgeable in two areas; nuclear power and beekeeping. When I read stories on either subject it seems obvious I should be skeptical of anything I hear from the media.
I think CCD was a propaganda campaign by the professional beekeeping industry. Since 2007 the price of honey has doubled, but as you correctly point out there has been no decline in commercial bee populations.
I suspect the same thing is true about bats and the White-nose Syndrome is just a cover up by the wind power industry.
The price of honey made a step change when the US instituted tariffs against imports in 2001. When I was a kid the wholesale price was about 65c/pound. In 1991 when I started back it was about 55c/pound. It more than doubled after the tariffs were instituted. I don’t really follow the wholesale market anymore, so I don’t know what drives it now.
I have seen some sensible people point out that beekeepers would be wise to not allow honey to go the way of maple syrup. In some applications it is already headed there.
Leave it to the republicans in 2001 to slap outrageous tariffs on free market honey.
Check out this chart,
http://www.honey.com/honey-industry/honey-industry-statistics/unit-honey-prices-by-month-retail/
The price now is 10 TIMES what you quoted!
It’s a scam!
I quoted wholesale not retail. Buy it in 55 gallon drums and you’ll get it a lot cheaper lol.
Doug
If you sell your honey locally you could display your green credentials with the slogan:
‘Buy Doug’s Local Honey- where the only air miles are flown by the bees….’
Tonyb
“Assessing temperature pattern projections made in 1989 [link]”
It would have been helpful to see a graph comparing predicted to observed values for 1989. There did not appear to be data points from which to graph my own. Using various colors to represent the data did not impress me. Maybe their illustrations represent what they say they represent, maybe not. As far as I can tell, there is no way for me to reproduce what is claimed.
Yes, I noticed the same thing. Colorful fluid dynamics and “quantitative” agreement is not a very strong claim.
Comparing the observed change with the model projections, one notes that the land areas warm faster than adjacent ocean areas in both the model and in the observations. …
Check.
The warming tends to be largest in high northern latitudes due mainly to the positive albedo feedback of snow and sea ice. …
Check.
In the model results, warming is a minimum in the northern North Atlantic: this is not so pronounced in the observations. In the model, this minimum is attributable not only to deep, convective mixing of heat but also to the weakening of the Atlantic meridional overturning circulation. …
Climatologists say Labrador Sea could cool within a decade before end of this century, leading to unprecedented disruption
In sharp contrast to most of the high northern latitudes, temperature change is small in the Southern Ocean in the model results.
The area of small temperature change is also seen in the observations, which confirms this surprising early model finding. …
Check.
In other words, the projections shown here were made before the observations confirmed them as being correct, striking at the heart of the argument that modellers tune their models to yield the correct climate change results. …
Check.
https://i.imgsafe.org/c565b39087.jpg
These are the spatial trends of GISS up to 1988. On these trends (up to the year the paper was written) the model must have been “tuned”, because tuning is necessary to match a model to the real world. Let’s check:
1. Comparing the observed change with the model projections, one notes that the land areas warm faster than adjacent ocean areas in both the model and in the observations. …
This is very simple physics, due to thermal inertia, and trivial. No need to check.
2. The warming tends to be largest in high northern latitudes due mainly to the positive albedo feedback of snow and sea ice. …
check (with trends to 1988) and also simple physics.
3. In the model results, warming is a minimum in the northern North Atlantic: this is not so pronounced in the observations. In the model, this minimum is attributable not only to deep, convective mixing of heat …
Also simple physics; the deep mixed layer in the SPG was known before 1988 … and clearly visible in the obs. up to that year.
4. In other words, the projections shown here were made before the observations confirmed them as being correct, striking at the heart of the argument that modellers tune their models to yield the correct climate change results. …
NO, as shown above.
And: the climate sensitivity given in this model description http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442(1991)0042.0.CO%3B2 is about 2 times too high. You should recalculate it before touting.
frankclimate, the model results also scale with CO2 the same way as observations, just like future projections. You would again say that this is obvious physics, but I think some skeptics here don’t think it is as obvious as you that warming scales with CO2. In those 25 years we have had 20% of a doubling, while the model results were for a doubling. They showed a scale factor of 5 fits well, so their model had the right sensitivity too. The obvious physics that you talk about extends to the sensitivity itself.
Jim D
You say: ” In those 25 years we have had 20% of a doubling, while the model results were for a doubling. They showed a scale factor of 5 fits well, so their model had the right sensitivity too.”
Not so. Per AR5, the change in forcing from 1961-90 to 1991-2015 averages was 1.1 W/m2, or 30% of that for a doubling of CO2. So the model GMST would have increased by about 0.68 K, based on its TCR (which is 2.3 K). By contrast, the GMST increase was about 0.43 K (per NOAAv4.0; or 0.42 K per HadCRUT4v5). So the model sensitivity was nearly 60% too high to match the observed warming – which implies a TCR of about 1.45 K, not 2.3 K. However, by choice I would not estimate TCR using periods that are close together and have natural variability and forcings rather poorly matched.
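As a rough arithmetic check of the numbers in this comment (a sketch only; 3.71 W/m² is assumed here as the forcing for a CO2 doubling, and the inputs are the values quoted above):

```python
F_2XCO2 = 3.71      # W/m^2, assumed forcing for a CO2 doubling
DF = 1.1            # W/m^2, AR5 forcing change between the two period means
TCR_MODEL = 2.3     # K, the model's transient climate response
DT_OBS = 0.43       # K, observed GMST change (NOAAv4.0)

frac = DF / F_2XCO2                            # ~0.30 of a doubling's forcing
dt_model = TCR_MODEL * frac                    # ~0.68 K expected from the model
tcr_implied = TCR_MODEL * DT_OBS / dt_model    # ~1.45 K implied by observations
print(round(frac, 2), round(dt_model, 2), round(tcr_implied, 2))
```

The three printed values reproduce the 30%, 0.68 K and 1.45 K figures in the comment.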
In 25 years, the CO2 level went from 350 ppm to 400 ppm, which you can work out is 20% of a doubling. You can bring in other forcings, which mostly cancel, but the CO2 part is dominant and scales, and their experiment was only a CO2 doubling with no other forcing change.
Jim D: of course the warming scales mostly with GHG, and one also has to consider the natural variability, as niclewis points out. So I think the paeans to the models in the paper are a little bit… premature? ;-)
http://www.annualreviews.org/doi/abs/10.1146/annurev-earth-060614-105156
No wonder “observational” has become a lukewarm buzzword.
Willard: “No wonder “observational” has become a lukewarm buzzword.”
I’m so sorry that observations are a buzzword for you. For me they are the foundation of all physics – as long as climate science is also physics for you. Or do you prefer a post-modern physics decoupled from observations? Perhaps then you are right in “some kind of climate research”…
Observations are up, so get ready for crickets.
> I’m so sorry that observations are a buzzword for you.
“Observational” is a buzzword, Frank. Observations are not words in physics.
Speaking of words, perhaps I should have emphasized These studies employ observations but still require an element of modeling to infer ECS, with the bold on the relevant part too?
Pray tell me more about modulz as fundaments of physics.
JCH, Just as I said, qualitative statements. Not convincing to me.
Someone had noticed the sharp recent fall in North Atlantic ocean temperatures, as reported in the Guardian, linked above,
https://www.theguardian.com/environment/2017/feb/24/drastic-cooling-north-atlantic-beyond-worst-fears-scientists-warn
Are we back to ’70s ice age alarmism already?
It’s amusing that with so many uber-inductive climate models, now they write computer models to study the computer models.
Climate “Science” on Trial; Clear-Cutting Forests to Save the Trees
https://co2islife.wordpress.com/2017/02/25/climate-science-on-trial-clear-cutting-forests-to-save-the-trees/
The latest thing in the news today is that we need to eat more salt, not less. This comes on top of a string of food-related research that oscillates between such things as coffee, wine, dairy, fat etc being good or bad for you.
Therefore I take the idea that we can know the temperature of pacific water back 10,000 years with a large pinch of salt, or whatever is today considered the appropriate thing to take a pinch of
Tonyb
Climate “Science” on Trial; How Does Ice Melt In Sub-Zero Temperatures?
https://co2islife.wordpress.com/2017/03/05/climate-science-on-trial-how-does-ice-melt-in-sub-zero-temperatures/
Dear Ms Curry, This is the most informative source on climate, science and related topics I am aware of. I am willing to pay a modest subscription fee to support your work. Do you have a funding mechanism in place? Regards. Chris Scanlon
Chris
I agree – this review alone contains several items that are each almost worth a whole post by themselves, such as:
Pacific Ocean Heat Content During the Past 10,000 Years [link] … Important research that Medieval warming Period was global.
Don’t expect media focus when comprehensive analysis of sea-bed methane release points firmly away from alarm. [link] …
New temperature record from China gives no hint of any recent warming that can be attributed to atmospheric CO2 [link]
Climatologists say Labrador Sea could cool within a decade before end of this century, leading to unprecedented disruption [link]
The ‘bootstrap’ philosophy of nature: Physicists have found evidence of a mathematical structure that encompasses all quantum theories: [link]
(“Links” here won’t work – please go above to main article)
It’s an embarrassment of riches!
I think there are plenty of folks who would pay to make sure that this blog continues at its current level of quality. There probably is not a lot of money in it for Prof. Curry but maybe she could hire someone to moderate and vet subjects for Week in Review. “Dues” could help defray the cost.
I’m concerned about the declining volume of content though I certainly understand the reasons why.
We need a blog or some vehicle that’s something other than an echo chamber. Personally, I learn the most from this blog because of commenters who challenge the skeptical view. On WUWT there are too many comments that come from the same place, and that blog becomes a bore.
Sometimes they write papers afterwards that embarrass the embarrassment of riches…
The only person who actually stays here and endures the completely moderation-free mountains of personal ridicule and insult is Jim D. This place is a vile cesspool. Climate skepticism is a cargo cult. Its members are physics spoon-benders.
Where’s the stadium wave? Will you get a prayer cloth with a donation?
Hi Chris, I am starting to get more requests like this. I am considering some options:
• tip jar (such as at climate audit) or subscription (like bishop hill)
• subscription for special reports
Before doing anything like this, i need to get to a point where I can spend more time writing content for the blog. This topic would be good for a discussion thread on the blog at some point
NASA study improves forecasts of summer Arctic sea ice [link]
I await the testing with new actual “forecasts”. This is reminiscent of the repeated improvements of GDP predictors which miraculously never improve actual GDP forecasts.
Paraphrasing Milton Friedman: “Some lessons are never learned”. He was referring specifically to economists who never seem to learn about regression to the mean. Here I refer to the necessity that all claims of improved forecasting need to be tested with new actual forecasts, not improved post-diction.
The jump-link to the NYT is a funny journey
Not only is the article oblivious to the numerous instances where single righteous persons stood alone against the shared ignorance of the majority, it also is seemingly unconscious of the fact that the title of the article — Why We Believe Obvious Untruths — could just as easily have been Why We Believe Obama, or AGW, or Castro Was Good for Cuba?
Revolutionary Power Plant Captures All Its Carbon Emissions, At No Extra Cost [link]
Removing CO2 from gas burning is analogous to creating a pill that removes orgasms from sex. Yes, clever technical stuff, which gratifies prohibitionist religious sentiment, but worse than pointless.
CO2 is good. No need to be pruriently hung-up about it.
Let nature take its course.
Hominids burn stuff, it’s OK, just part of Gaia.
Like all other animal and plant activities, ultimately it will benefit the environment. Make CO2, not war.
I thought the article was bizarre in claiming it had no CO2 emissions. It looks like the CO2 is merely piped away to some other location for disposal/use. No mention of how the CO2 is separated from the N2 etc.
No real description of the process. Gas turbines produce a lot of water vapor, plus small amounts of other stuff (NO, NO2). No mention of how the CO2 is separated and recycled. High-pressure pipelines would be enough to keep it supercritical at reasonable temperatures. Perhaps they’ve found a high-pressure membrane or molecular sieve. Water vapor would tend to pass through the appropriate material much faster than CO2, or vice versa, depending on the material.
Assessing temperature patterns projections made in 1989 [link]
According to Figure 1 and the text describing it, the predicted changes were about 5 times as great as the measured changes. Did I read that right? I am unable to cut and paste. The text is on p 163, upper rightmost column.
They are just comparing patterns of change. The timeframes were 75 years for the model and 25 years for the observations. This explains the magnitude difference. The interesting thing is that these are the original authors back in 1989 returning 25 years later to assess how their model did, and the patterns of change are as predicted.
Their experiment was a 1% increase in CO2 per year, typical of a doubling experiment where it doubles in 70 years. That is about twice the actual rate of growth in the last 25 years. The factor of 5 accounts for the difference in CO2 levels quite accurately.
Jim D: The factor of 5 accounts for the difference in CO2 levels quite accurately.
Thank you for your comments. They were spot on.
I think you mean that the factor of 5 accounts simultaneously for the difference in CO2 levels and the difference in time frames, if I understand you correctly.
Most of the pattern agreement is the water/land contrast. That’s something.
Yes, climate change appears to scale with CO2 amount regardless of whether it is increasing twice as fast or not.
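The arithmetic in this subthread is easy to check; a quick sketch, assuming observed CO2 growth of roughly 0.5% per year (my number for illustration, not from the thread):

```python
import math

# 1% per year compounding: how long to double CO2?
doubling_years = math.log(2) / math.log(1.01)
print(round(doubling_years, 1))  # 69.7 -- the "doubling in 70 years" experiment

# Model run: 75 years at 1%/yr. Observations: ~25 years at an assumed 0.5%/yr.
model_mult = 1.01 ** 75    # CO2 multiplier in the model experiment
obs_mult = 1.005 ** 25     # CO2 multiplier over the observed period (assumed)

# CO2 forcing scales with the log of concentration, so compare log ratios.
forcing_factor = math.log(model_mult) / math.log(obs_mult)
print(round(forcing_factor, 1))  # roughly 6: in the ballpark of the factor of 5
```

So the magnitude difference between predicted and measured patterns is broadly consistent with the shorter observation window and slower actual CO2 growth.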
Revolutionary power plant. Read the linked Forbes article which was not that informative, then did a bit of research on the Allam cycle. Is potentially real.
There are three tricks. 1. Oxyfuel (pure oxygen used for combustion rather than air) combustion. 2. Supercritical CO2 as the main working fluid for the turbine. 3. Heat exchanger separation of the water in the exhaust stream leaving pure CO2 to feed back as working fluid or for CCS. 2 and 3 are just engineering. For the 50 MW Exelon pilot plant now under construction, Toshiba is doing the turbine and Bechtel the heat exchanger and plumbing.
1 is the tricky trick. Pure oxygen is ordinarily produced by cryogenic fractional distillation. Very energy intensive; would make the scheme technically unviable. BUT in 2012 MIT developed ceramic membranes that separate oxygen via ion transport; no nitrogen reaches the ‘far side’. (MIT tech review article on the lab scale functioning setup). The process uses just atmospheric pressure so long as the oxygen on the far side is removed to establish a partial pressure gradient across the membrane (by combustion). The ceramic membranes function at the temperature of combustion ~1000C. So that is the real technology breakthrough. The CCS part is silly, but putting these power plants where tertiary CO2 oil recovery is desirable makes a lot of sense. Ship electricity, not CO2. Tertiary recovery CO2 now comes from amine process CO2 stripping of natural gas before it goes into major pipelines. Amine process is expensive and uses energy, and only produces CO2 where there is a natural gas field high in CO2.
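A minimal stoichiometry sketch (my illustration, not from the article) of why oxyfuel combustion makes the capture step easy: burning methane in pure oxygen leaves only CO2 and water in the exhaust, and condensing out the water yields a nearly pure CO2 stream with no nitrogen to separate.

```python
# Idealized oxyfuel combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O.
# Quantities in moles; complete combustion assumed.
def oxyfuel_exhaust(mol_ch4):
    """Exhaust composition (mol) for methane burned in pure oxygen."""
    return {"CO2": mol_ch4, "H2O": 2.0 * mol_ch4}  # no N2 in the exhaust

exhaust = oxyfuel_exhaust(1.0)

# Cooling the exhaust condenses out the water, leaving essentially pure CO2.
after_condenser = {k: v for k, v in exhaust.items() if k != "H2O"}
print(after_condenser)  # {'CO2': 1.0}

# Contrast with air combustion: air is ~78% N2 / 21% O2 by mole, so the same
# burn drags along ~7.4 mol of N2 that would then have to be separated.
mol_n2 = 2.0 * (0.78 / 0.21)
print(round(mol_n2, 1))  # 7.4
```

That nitrogen dilution is exactly what makes post-combustion capture from conventional plants expensive, and what the oxygen membrane sidesteps.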
Something to keep an eye on.
Rud
Thanks for providing this helpful additional information. As you say, definitely something to keep an eye on – potentially this could make CCS much more economic and hence attractive.
NL, I enjoy keeping up with the bleeding edge of technology, especially in energy, since it is a major concern highlighted in my ebooks. Been doing that now since 1976. Was unaware of this development until Judith’s link. But IF it scales commercially (lifetime is always an issue with any sieving membrane: think reverse osmosis desal, fuel cells…), then it is a potential gamechanger. Some big respected commercial names were impressed enough to give it a go. Always a good sign. A 50 MW plant isn’t ruinous if it fails, but it is still a real financial commitment.
That makes more sense. I thought they were using air for combustion.
ristvan: Something to keep an eye on.
I did as you did, and I agree that it is a story to follow.
CO2 and Jim D, I ask a favor. Take your politics outside here to some other parking lot where you can duke it out. This is mostly a science/science policy blog. See my immediately upthread comment on some interesting possible technology enabled by science for an example of what you could but don’t comment on. Knock it off, already.
Yep, Allam Cycle looks pretty promising.
And direct reduction of iron using all that cheap natgas could be fun also.
DRI is already commercial some places. Cheap nat gas is the key ingredient.
The pernicious notion that density-driven MOC is somehow “behind…the Gulf Stream,” instead of merely being a weak adjunct of the wind-driven surface circulation, is what makes all the speculation about “unprecedented disruption” by climate modelers dynamically implausible. And it’s totally ludicrous to pretend that any “warmth” is carried from the depths to the surface in a persistently stratified ocean. Such physically nonsensical conceptions cannot be taken seriously.
Something to smile about at last.
https://youtu.be/Dts7up5B-Y0
This was of course implicit in Poincare’s three body problem. It might more commonly be regarded as ergodic – returning to states over a long enough period – rather than visiting new state spaces in which we are all doomed. But at any rate chaotic destabilisation in billions of years is probably not high on anyone’s agenda.
One idea is that these chaotic orbits drive changes in the solar magneto – it is presumed to impose some order on chaotic solar turbulence.
I cannot believe that the interview with Tesla in 1899 was real. In that interview Tesla told that Einstein’s theory of special relativity was wrong.
But Einstein published his theory in 1905. In 1899 no one knew Einstein.
Was Tesla a prophet?
The “interview” is a play written by a Serbian poet.
non pay-walled version of study on
“Motivated Responding” vs. ‘Motivated Learning…
From the article:
=={ Like studies of stored cognitions, studies of learning may overstate bias if they do not account for motivated responding . }==
This touches on a big problem I have with a lot of the studies based on opinion polling. For example, the studies of the impact of “Climategate” where “libertarian” were more likely to “report” that they “learned” about the unreliability of climate science by virtue of leaked emails. In other words, they “reported” that “Climategate” reduced their concern that ACO2 might pose risks.
My feeling has long been that while that “reporting” may have been accurate in some cases, in other cases it may well have been that “skeptics” merely “reported” such “learning” because it fit with a preexisting orientation. W/o a pre-(“Climategate”) test/post (“Climategate”) -test analysis of their views, there is no way to determine if their “report” actually coincided with what they “learned.”
Of course, reinforcing my question is the observation that the congruence between “learning” and “reporting” ran in the other direction in reaction to “Climategate” with people who had a more left-leaning ideological predisposition. Not surprisingly, what they “reported” “learning” from “Climategate” was that we should be even more concerned about ACO2 emissions.
It’s getting late so I didn’t read the whole thing. But take the first example: changing the headings on a 2×2 matrix purporting to compare the biases resulting from headings about banning or not banning concealed carry and crime statistics. Individual subjects were apparently randomly shown one of the matrices, and how they judged what the “data” showed was compared with their political party.
My first question in a loaded situation like that would be “Where did the data come from?”. I think most people understand that statistical surveys of any kind often are highly variable, that includes opinion data, sample size, completeness of the data being shown, and all sorts of other variables that affect the “data”.
While people may have a bias towards data that is congenial to their politics, it may also reflect that they have learned that the data may be biased against their politics, and so they neither report nor learn what is shown but simply go with what they already believe is true.
I’d love scientists’ opinions of this straightforward intro-college-stats writeup of two key facets of climate change stats: rate of Arctic ice melting, and also how we infer the role of carbon dioxide in warming temps. http://ww2.amstat.org/publications/jse/v21n1/witt.pdf
Basically green garbage. No one claims that direct solar irradiance explained GW but the lesson says it is either that or CO2, which is nonsense. The surface temperature statistics are also junk but they are accepted as measurements.
Thank you for taking a look! Wondering about the Arctic sea ice part, if you had time to look at that …
Also, I think people did used to wonder whether solar irradiance was a main culprit?
The surface temperature statistics are also junk but they are accepted as measurements.
Abject nonsense.
JCH, the nonsense is the claim that these surface statistical models are accurate to a hundredth of a degree (or even a tenth). See uncertainties listed below in my modest proposal to reform NOAA.
Reforming NOAA research
In addition to budget cuts we need to refocus climate research. Here is my proposal for NOAA global and regional temperature estimates. The first step is a white paper elaborating on these needs in some detail.
A needed NOAA temperature research program
NOAA’s global and US temperature estimates have become highly controversial. The core issue is accuracy. These estimates are sensitive to a number of factors, but the magnitude of sensitivity for each factor is unknown. NOAA’s present practice of stating temperatures to a hundredth of a degree is clearly untenable, because it ignores these significant uncertainties.
Thus we need a focused research program to try to determine the accuracy range of these temperature estimates. Here is a brief outline of the factors to be explored. The goal is to attempt to estimate the uncertainty each contributes to the temperature estimates.
Research question: How much uncertainty does each of the following factors contribute to specific global and regional temperature estimates? Each can be explored independently.
1. The urban heat island effect (UHI). This is known to exist but its specific effect on the temperature recording stations at any given time and place is uncertain.
2. Local heat contamination or cooling of temperature readings. Extensive investigation has shown that this is a widespread problem. Its overall extent and effect is highly uncertain.
3. Other temperature recording station factors, to be identified and explored.
4. Adjustments to temperature data, to be identified and explored. There are numerous adjustments made to the raw temperature data. These need to be cataloged, then analyzed for uncertainty.
5. Homogenization, which assumes that temperature change is uniform over large areas, is a particularly troubling adjustment deserving of special attention.
6. The use of sea surface temperature (SST) proxies in global temperature estimates. Proxies always add significant uncertainty. In the global case the majority of the surface is oceanic.
7. The use of an availability sample rather than a random sample. It is a canon of statistical sampling theory that availability samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.
8. Area averaging. This is the basic method used in the surface temperature estimating model and it is a nonstandard statistical method, which creates its own uncertainties.
9. Interpolation or in-fill. Many of the area averaging grid cells do not have good temperature data, so interpolation is used to fill them in. This can be done in many different ways, which creates another major uncertainty.
10. Other factors, to be identified and explored.
To the extent that the uncertainty range contributed by each factor can be quantified, these ranges can then be combined and added into the statistical temperature model. How to do this is itself a research need.
Note that it is not a matter of adjusting the estimate, which is what is presently done. One cannot adjust away an uncertainty. The resulting temperature estimates will at best be in the form of a likely range, not a specific value as is now done.
Most of this research will also be applicable to the other surface temperature estimation models, such as HadCRU, GISS and BEST.
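One conventional way to do the combining step, under the standard assumption that the error sources are independent, is root-sum-square. A sketch with invented, purely illustrative component values (nothing here is a measured uncertainty):

```python
import math

# Hypothetical 1-sigma uncertainty contributions (deg C) for some of the
# factors listed above. The numbers are invented for illustration only.
components = {
    "UHI": 0.05,
    "local contamination": 0.04,
    "adjustments/homogenization": 0.06,
    "SST proxies": 0.08,
    "sampling/interpolation": 0.07,
}

# Independent error sources combine in quadrature (root-sum-square),
# which is smaller than the simple sum of the components.
combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(combined, 3))  # 0.138
```

Correlated error sources (e.g. a shared adjustment method) would need a full covariance treatment rather than simple quadrature, which is part of why this is itself a research question.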
Eliza, many scientists have argued that solar variability explains global warming, but not via direct irradiance, at least not for some time.
DW, Berkeley Earth addresses all of your points, It has already been done. If you think you know a better way than what they did, state it, and how much effect you think it would have.
Eliza, the Arctic ice math is okay as far as it goes (stopping in 2012 is misleading). That Arctic ice has decreased since the 1970s is well known. But the narrative is pure alarmism and only alarmists are cited. Not mentioned is the evidence that Arctic ice extent is cyclic, with previous low levels. Plus this is just a region, not a canary.
Jim D, if BEST has estimated these various ranges what is their total range estimate? Can you point me to it, or if not to the separate range estimates? I am especially curious about their estimate of the uncertainty due to the use of an availability (or convenience) sample, as I know of no way to estimate that.
Jim D, I seriously doubt that BEST has done what you claim, namely covered all of my uncertainties. They may have touched on 3 or 4, at best. But I do not claim to know a “better way.” What I am proposing is a NOAA research program to address these uncertainties.
Perhaps you have not looked at BEST because they do also give uncertainties. It was an effort that started from scratch like you are saying NOAA should. You have stated no valid criticism of it.
1. The urban heat island effect (UHI). This is known to exist but its specific effect on the temperature recording stations at any given time and place is uncertain.
A) This has been studied six ways from Sunday using multiple methods and multiple definitions of urban/rural; using adjusted and raw data; using only rural stations; using only pristine stations. And the answer is the same: the UHI effect is not large enough on a global scale to get out of the noise. OF COURSE it’s uncertain, but we can say the warming we see is not due to UHI.
B) I’m in the middle of studying it yet again with several new datasets. Still finding nothing.
2. Local heat contamination or cooling of temperature readings. Extensive investigation has shown that this is a widespread problem. Its overall extent and effect is highly uncertain.
A) Actually, both studies on microsite effects found NO effect.
B) What is missing is a field study of micrositing. There is no evidence (controlled tests) that micrositing is anything to be concerned about. Lots of speculation about pavement and air conditioners, but NO field study.
3. Other temperature recording station factors, to be identified and explored.
A) Unicorns
4. Adjustments to temperature data, to be identified and explored. There are numerous adjustments made to the raw temperature data. These need to be cataloged, then analyzed for uncertainty.
A) This has been studied to death.
B) go read the literature
5. Homogenization, which assumes that temperature change is uniform over large areas, is a particularly troubling adjustment deserving of special attention.
A) You misuse the term.
B) We empirically know that temperature change is uniform out to certain distances. Heck, ask tonyb about WHY you can use CET as a global proxy.
6. The use of sea surface temperature (SST) proxies in global temperature estimates. Proxies always add significant uncertainty. In the global case the majority of the surface is oceanic.
A) use any ocean measure you like. The answer is the same
B) SST is not used as a “proxy” in global indexes.
7. The use of an availability sample rather than a random sample. It is a canon of statistical sampling theory that availability samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.
A) 90% of the variance in temperature is captured by latitude and elevation.
B) You get the same answer whether you use all stations, or a random sample of stations.
C) Bone up on geostats.
8. Area averaging. This is the basic method used in the surface temperature estimating model and it is a nonstandard statistical method, which creates its own uncertainties.
A) There are three standard methods:
B) IDW, used by GISS and HadCRUT
C) Thin plate splines, used by CRU and forthcoming
D) Kriging, used by BE and CW
“Area averaging” is how we refer to it for folks like you who don’t understand the math.
9. Interpolation or in-fill. Many of the area averaging grid cells do not have good temperature data, so interpolation is used to fill them in. This can be done in many different ways, which creates another major uncertainty.
A) Wrong. You don’t understand interpolation.
B) We test many ways of interpolating; the answers don’t change much.
10. Other factors, to be identified and explored.
A) Unicorns
last q: “Plus this is just a region, not a canary.” I don’t really understand what this means — if Arctic ice were to be diminishing, wouldn’t that have (known or unknown) effects on weather elsewhere? And, also, isn’t it worth knowing what’s happening there in any case?
Jim D & Mosher, what is the total estimated uncertainty range and the components thereof, for the uncertainties that I have listed? What are the specific numbers? Hand waving is not an answer.
Mosher, your answers are silly so I guess you will not be applying for a research grant under the new program, if I get it.
Eliza, the Arctic piece starts off with the standard warmer’s canary analog, to the effect that Arctic warming is somehow a harbinger of global warming. It is not. It is just a region.
Eliza,
The first example on Arctic Ice is not bad. It would be better if it noted the limitations on what can and cannot be concluded from the statistical inference.
The second example is appalling from a number of perspectives. (a) You cannot infer causation from correlation. This is especially true for time series. Any two time series which are roughly monotonic will show a high Pearson correlation coefficient. If you were, for example, to cross-plot the nominal price of cabbages against temperature over the same time frame you might find a higher correlation coefficient than against CO2, but you could infer nothing from this. In some circumstances, you can infer something from the ABSENCE of correlation, when a model predicts that such correlation should exist. (b) The underlying physics do not, repeat not, predict a linear correlation between CO2 and temperature. The forcing associated with increasing CO2 varies with the log of the atmospheric concentration, but even a correlation against ln CO2 would not be correct here, because the relationship is defined via an integral equation and is generally non-linear. It is silly to the point of being irresponsible in a teaching module to set up an erroneous physics model for students. (c) The annual temperature data is auto-correlated and tests positive for a unit root. This requires sophisticated stats tools to avoid spurious correlation. To include a time series with a unit root as an example in a very low level stats module is once again irresponsible IMO.
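Point (a) is easy to demonstrate: any two series that both trend upward correlate strongly even when they are causally unrelated. A minimal sketch with two synthetic, independent trending series (all numbers invented):

```python
import random
random.seed(0)

n = 50
# Two unrelated upward-trending series (think temperature vs. the nominal
# price of cabbages): independent noise around different linear trends.
x = [0.02 * t + random.gauss(0, 0.1) for t in range(n)]
y = [1.5 * t + random.gauss(0, 3.0) for t in range(n)]

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = pearson(x, y)
# High correlation despite no causal link: both series merely share a trend.
print(round(r, 2))  # typically above 0.9 for series like these
```

This is exactly the spurious-correlation trap the stats module walks students into.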
I hope this helps.
Oh my goodness. I was thinking of dropping a note to the author of this module with some helpful pro bono advice.
http://www.fox.temple.edu/cms/wp-content/uploads/2014/06/Gary-Witt-Resume.pdf
http://www.debatepolitics.com/archives/36942-man-wrecked-wall-street-and-our-economy-gary-witt.html
Thank you for looking, too! Two parts of your explanation I did not get:, w/in this part: “but even a correlation against ln CO2 would not be correct here, because the relationship is defined via an integral equation and is generally non-linear.”
1) I don’t know what “a correlation against ln CO2” means, and
2) I don’t know what “via an integral equation and is generally non-linear” means.
Thanks in advance if you have time to answer.
Eliza
I don’t know the full extent of your interest in the Arctic. but to get a complete picture a review of scientific papers on historical warming periods in that region is necessary. Google scholarly articles on Arctic historical warming or any variant. Oscillations affecting the Arctic is also an interesting topic.
Climatereason, aka, Tony Brown has written posts here in the past on this subject. Very well researched and an enjoyable read.
Hi Eliza,
It would help me a bit to know whether you are trying to learn some statistics, trying to learn some climate science or trying to evaluate the module paper from a professional perspective…
But here goes:-
Whenever someone runs a simple linear “Y on X” regression, he/she is accepting implicitly the validity of certain assumptions. Specifically, he assumes (a) that the error terms are independent draws from a normal distribution and (b) that the Y values are not dependent on previous Y values (no autocorrelation). He hypothesises and tests that the variation in the Y values can be explained by a simple linear relationship between the Y and X values, leaving error terms which are normally distributed.
When dealing with regression between two time series, one or both of assumptions (a) and (b) are very often invalid, but they can and should be tested before proceeding to a simple Y on X regression, since this makes a big difference to any statistical inference drawn. (Google “autocorrelation in regression analysis” and “spurious correlation” for examples.) This doesn’t happen in the stats module. It is irresponsible in my mind for a stats module to illustrate how to make a dog’s dinner of what should be a simple example of a useful statistical application!
For the specific example, the physics does not say that temperature should vary linearly with atmospheric CO2 concentration. It says (instead) that the temperature should vary (in a complex way) with the change in the radiative FORCING associated with the change in atmospheric concentration. Numerous empirical numerical experiments have been carried out, which involve varying the well-mixed concentration of CO2 on the globe and then calculating the change in net flux at the top of the atmosphere. The summation (integration) of all of the net flux changes over the globe gives the total instantaneous change in net flux associated with the change in CO2 concentration (the “CO2 forcing”). This then allows the cross-plot of CO2 forcing against CO2 atmospheric concentration. These experiments indicate that the total CO2 forcing varies logarithmically with CO2 concentration. So we do not expect that temperature will vary linearly with CO2 concentration; we expect that it will vary in a more complex way with the logarithm of concentration i.e. with ln(CO2 concentration).
So why can’t we run a regression of ln(CO2 concentration) against temperature? Even under the simplest of assumptions – single body model with constant feedback term – the heating model (or energy balance model) says that the temperature series over time, T(t), takes the form:-
T(t) = g(t) * Integral of F(τ) * h(τ) dτ, where F(τ) in this instance is the variation of the CO2 forcing with time {i.e. varying with ln(CO2 concentration)} and g and h are parameterised exponential functions of time.
For more complicated heating models, the expression becomes an even more complicated convolution integral. This is what I meant when I used the term “via an integral equation”. It is just wrong in physics and mathematics to believe that this complexity can be reduced to the simple regression equation T = a*CO2 concentration + b, or even T = a*ln(CO2 concentration) + b. Outwith some very specific circumstances (specifically a linearly increasing forcing with time), the above integral equation(s) cannot be approximated or simplified to these simple linear regression models.
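A numerical sketch of the point: even in the simplest one-box energy balance, T(t) depends on the whole forcing history through a convolution with an exponential response, not on ln(CO2) at time t alone. The parameter values (sensitivity, time constant) below are illustrative, not fitted:

```python
import math

# One-box energy balance: C * dT/dt = F(t) - T/lambda_, with C = tau/lambda_.
# Its solution is a convolution of the forcing history with an exponential
# response. All parameter values are illustrative.
lambda_ = 0.8    # K per (W/m^2): equilibrium sensitivity parameter
tau = 8.0        # years: response time constant
c0 = 280.0       # ppm: reference CO2 concentration

def forcing(conc):
    """Canonical logarithmic CO2 forcing in W/m^2."""
    return 5.35 * math.log(conc / c0)

# CO2 rising 1% per year for 70 years.
years = range(70)
F = [forcing(c0 * 1.01 ** t) for t in years]

# Discrete convolution with the exponential impulse response (dt = 1 yr).
T = [
    (lambda_ / tau) * sum(F[s] * math.exp(-(t - s) / tau) for s in range(t + 1))
    for t in years
]

# The realized temperature lags the equilibrium response lambda_ * F(t):
# T at any time reflects the whole forcing history, not just ln(CO2) then.
print(round(T[-1], 2), "vs equilibrium", round(lambda_ * F[-1], 2))
```

Because the realized T always trails the instantaneous forcing, a straight regression of T on ln(CO2) mis-specifies the relationship except in special cases.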
Hello, kribaez and cerescokid,
I thank you both so much. Regarding, what am I trying to learn about: I’m trying to get, I guess, to the heart of the science, for my own edification, and so I can think clearly about climate science (and policies). So I think I need to understand basic statistics better — and your response, kribaez, is most helpful. Thanks again. And regarding the Arctic, I just think it’s a good place to start to learn about climate work, and I’m fascinated by polar regions!
Eliza
I applaud your interest in this issue. I think as you dig deeper with your own independent research you will find that it is a lot more complex and nuanced than is generally believed. That is why I started to follow it 8 years ago. Every time I read about a claim, I would dig out as much as I could and often times learned there were many more questions than answers. That applies to the polar regions, sea level rise, droughts, historical temperatures (historical defined as going back before Cher’s first album), tornadoes, etc. Often times the hysterical headlines are not supported when looking at the scientific papers. As of today, the unknowns outweigh the knowns significantly.
A little circumspection never hurts when dealing with this subject, thus my suggestion to learn about previous warm periods in the Arctic.
EPA chief Pruitt says CO2 is not a major factor in global warming.
http://www.cnbc.com/2017/03/09/epa-chief-scott-pruitt.html
Woohoo!
He might be right but he knows he can’t prove it. But I’m 100% certain he will not be held accountable if he’s wrong. He’s an evangelical christian and as we know all sins are forgiven and washed clean in the blood of christ.
Pruitt is himself a staunchly committed, conservative Christian. He is a deacon at First Baptist Church in Broken Arrow, Oklahoma, and on the board of trustees of the Southern Baptist Theological Seminary, part of the conservative Southern Baptist denomination.
Nor will the alarmists be held accountable for being wrong, which is much more likely. Religion has nothing to do with it. It is wonderful to see a skeptic running EPA. Note that the article cites an alarmist EPA website. That needs to go away or be rewritten.
Ridiculing Christians, or practitioners of any other group that believes in God, is a sign of profound ignorance. This makes you a bigot, not a rational critic.
I don’t know much about Scott Pruitt, but I’ve seen no sign that being a Christian (assuming that he is) makes him less qualified to be EPA Administrator. From what I can see, he’s not an ideologue and he’s made his reputation defending his native State from an overbearing EPA.
If you want to be critical of Trumpites, there’s plenty of ripe material. Attacking their religion is stupid.
As a fan of irony:
scraft1,
Civilization and technology march forward, but for some reason we can’t shake our reliance on mystical deities invented before the Bronze Age.
I think all religions are scams, hoaxes or conspiracies with the exception of Buddhism and Confucianism. I also believe most religions can serve a useful function in establishing a moral framework that can promote compassion and empathy which can suppress some less desirable features of human behavior like stealing, murder and lying. It’s not a black and white issue but should not influence the science of physics. Pruitt might make a good administrator for the GSA or the VA but is the wrong man to look after the environment.
I thought he said something more along the lines of “too soon to tell,” and also he didn’t really explain much else (not that this interview was a good forum for that).
I think he says both, allowing for error, which is sensible in science.
It may be too soon to tell for him, but every major scientific society, government and industry have recognized that emissions need to slow down, and that is why we have Paris. Even Exxon has statements on emissions that are far ahead of Pruitt. When a person even trails the fossil fuel industry on climate change, that is something.
http://corporate.exxonmobil.com/en/current-issues/climate-policy/climate-perspectives/our-position
“The risk of climate change is clear and the risk warrants action. Increasing carbon emissions in the atmosphere are having a warming effect. There is a broad scientific and policy consensus that action must be taken to further quantify and assess the risks.”
Jim D, this is a corporate political statement speaking to a party that is no longer in power. Ah Paris.
“every major scientific society, government and industry have recognized that emissions need to slow down”
“Recognized” in this instance means “pays lip service to – for the time being”.
You need to learn to understand the subtle nuances, Jimbo.
Once the USA kicks the whole nasty expensive industry-crippling scam into touch, the rush to leave will be something to behold.
The game’s over.
Live with it.
http://www.bp.com/en/global/corporate/bp-magazine/conversations/chief-economist-on-energy-outlook.html
See that little orange bit at the right hand end?
That’s the ‘Unreliables’.
http://www.bp.com/content/bp/en/global/corporate/bp-magazine/conversations/chief-economist-on-energy-outlook/jcr:content/article_dropzone/image.img.840.high.jpg/1442252599538.jpg
Mosher was kind enough to respond to my proposed NOAA research program on surface temperature statistics, unicorns and all. See his https://judithcurry.com/2017/03/04/week-in-review-science-edition-63/#comment-841372. There is far too much there to respond to so I thought to do it item by item, beginning with #7, as follows.
Wojick says: 7. The use of an availability sample rather than a random sample. It is a canon of statistical sampling theory that availability samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.
Mosher says: A) 90% of the variance in temperature is captured by latitude
and elevation.
B) You get the same answer whether you use all stations, or a random sample of stations.
C) Bone up on geostats.
Wojick responds: Your response has nothing to do with my point. (I note in passing that if there is any variance in the changes among the stations, which there certainly is, then getting the same result from random samples of these stations is statistically impossible. Random samples of stations should give you a distribution around the all station value.)
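The parenthetical point is easy to illustrate: random subsamples of stations give a distribution of means around the all-station value, with a spread set by the station-to-station variance and the sample size. A sketch with synthetic station trends (values invented for illustration):

```python
import random
random.seed(1)

# Synthetic per-station warming trends (deg C/decade) with genuine spread.
stations = [random.gauss(0.15, 0.10) for _ in range(1000)]
all_station_mean = sum(stations) / len(stations)

# Draw repeated random subsamples of 100 stations: their means scatter
# around the all-station mean instead of reproducing it exactly.
sub_means = []
for _ in range(200):
    sub = random.sample(stations, 100)
    sub_means.append(sum(sub) / len(sub))

spread = (sum((m - all_station_mean) ** 2 for m in sub_means) / len(sub_means)) ** 0.5
print(round(all_station_mean, 3), round(spread, 4))  # spread near 0.1/sqrt(100)
```

So "the same answer" from random subsamples can only mean "the same within sampling error", which is itself a quantity one would want stated.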
Here are some examples of what statistical science websites have to say about convenience samples such as NOAA and the other surface statistical models use, including BEST. In this case the population being sampled is the Earth’s actual temperature at all points, or the contiguous US, the spatial average of which is what is being estimated. (This is something I recently wrote elsewhere.)
Convenience sampling
Note that the collection of thermometer readings which are used are what is called in statistical science a “convenience sample” or an “availability sample.” This means that the data was not designed as a representative sample of the surface in question; rather it is just what was available.
Statistical science is very clear that a convenience sample does not provide an accurate estimate. Here are some examples from several statistical science websites:
A. “Research Methodology” says this:
“Disadvantages of Convenience Sampling
Highly vulnerable to selection bias and influences beyond the control of the researcher
High level of sampling error
Studies that use convenience sampling have little credibility due to reasons above”
http://research-methodology.net/sampling-in-primary-data-collection/convenience-sampling/
B. “Statistics.about.com” says this:
“Problems with Convenience Samples
As indicated by their name, convenience samples are definitely easy to obtain. There is virtually no difficulty in selecting members of the population for a convenience sample. However, there is a price to pay for this lack of effort: convenience samples are virtually worthless in statistics.”
https://www.thoughtco.com/what-is-a-convenience-sample-3126358
C. “Conveniencesampling.net” says this:
“Because of the flaws found in this form of sampling, scientists cannot draw concrete conclusions from their data.”
http://www.conveniencesampling.net/Convenience-Sampling-Pros-and-Cons.html
Summary: As you can see this is nothing like a normal average. The statistical model is very complex, with many alternative possible ways of taking each step. The data is sparse, often of a proxy nature and a convenience sample. This is actually a crude estimating technique not a statistical sampling method. In no case is it an accurate measurement of global or US temperature.