by Judith Curry
[O]ur results suggest that the recent record temperature years are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic [climate change] than in its absence. – Mann et al.
Recently published in Nature [link; full paper available]:
The Likelihood of Recent Record Warmth
M.E. Mann, S. Rahmstorf, B.A. Steinman, M. Tingley, and S.K. Miller
Abstract. 2014 was nominally the warmest year on record for both the globe and northern hemisphere based on historical records spanning the past one and a half centuries [1,2]. It was the latest in a recent run of record temperatures spanning the past decade and a half. Press accounts reported odds as low as one-in-650 million that the observed run of global temperature records would be expected to occur in the absence of human-caused global warming. Press reports notwithstanding, the question of how likely observed temperature records may have been both with and without human influence is interesting in its own right. Here we attempt to address that question using a semi-empirical approach that combines the latest (CMIP5 [3]) climate model simulations with observations of global and hemispheric mean temperature. We find that individual record years and the observed runs of record-setting temperatures were extremely unlikely to have occurred in the absence of human-caused climate change, though not nearly as unlikely as press reports have suggested. These same record temperatures were, by contrast, quite likely to have occurred in the presence of anthropogenic climate forcing.
The paper is getting some media attention, here are two articles showing the range:
JC comments
My comments were solicited for the Examiner article, here is what I sent the reporter:
The analysis of Mann et al. glosses over 3 major disputes in climate research:
i) Errors and uncertainty in the temperature record, and reconciling the surface temperature record (which shows some warming in the recent decades) against the global satellite record (which shows essentially no warming for the past 18 years).
ii) Climate models that are running significantly too hot. For the past decade, global average surface temperatures have been at the bottom of the envelope of climate model simulations. [link] Even the very warm year 2015 (anomalously warm owing to a very strong El Nino) is cooler than the multi-model ensemble mean prediction.
iii) How to separate out human-caused climate variability from natural climate variability remains a challenging and unsolved problem. Mann et al. use the method of Steinman et al. to infer the forced variability (e.g. CO2, solar, volcanoes), calculating the internal variability (e.g. from ocean circulations) as a residual. In effect, the multi-model ensemble used by Steinman et al. assumes that all of the recent warming is forced by CO2. I and my colleagues, led by Sergey Kravtsov, recently published a paper in Science [link; see also this blog post] arguing that the method of Steinman et al. is flawed, resulting in a substantial underestimate of the internal variability from large-scale, multi-decadal ocean oscillations.
Global temperatures have overall been increasing for more than 200 years. Human-caused CO2 emissions do not explain a significant amount of this warming prior to 1950. How to attribute the recent variations in global temperature remains an issue associated with substantial uncertainty. The IPCC assessment reports conclude ‘more than half’ of the warming since 1950 is caused by humans, with more than half implying >50% [who knows what this actually implies; see my disagreement with Gavin]. This assessment acknowledges uncertainties in climate models, which find that all of the warming since 1950 is caused by humans. The Mann et al. paper is assuming that all of the warming has been caused by humans, which given our current state of knowledge is an unwarranted assumption.
———–
An additional comment, too technical to send to the Examiner:
iv) the use of the multi-model ensemble in this way is simply inappropriate from a statistical perspective. See my previous post How should we interpret an ensemble of climate models? Excerpts:
Given the inadequacies of current climate models, how should we interpret the multi-model ensemble simulations of the 21st century climate used in the IPCC assessment reports? This ensemble-of-opportunity is comprised of models with generally similar structures but different parameter choices and calibration histories. McWilliams (2007) and Parker (2010) argue that current climate model ensembles are not designed to sample representational uncertainty in a thorough or strategic way.
Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: “[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify.”
Nic Lewis’ comments
I solicited comments from Nic Lewis on the paper, he sent some quick initial comments, the revised version is included below:
Hi Judy, It is a paper that would be of very little scientific value even if it were 100% correct. I have some specific comments:
1. They say “It is appropriate to define a stationary stochastic time series model for using parameters estimated from the residual series.” This is an unsupported assertion. On the contrary, this method of using residuals between the recorded and model-simulated temperature changes to estimate internal variability appears unsatisfactory; the observed record is too short to fully sample internal variability and there is only one instance of it. Moreover the model parameters and their forcing strengths have quite likely been tuned so that model simulations of the historical period provide a good match to observed temperature changes (e.g. by using strongly negative aerosol forcing changes in the third quarter of the 20th century to provide a better match to the ‘grand hiatus’). Doing so artificially reduces the residuals. Using estimates based on long period AOGCM unforced control runs, as routinely done in detection and attribution studies, is a much less unsatisfactory method albeit far from perfect.
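To make concrete the residual-based noise estimate being criticized in point 1, here is a minimal sketch (Python with numpy; my reconstruction of the general approach, not code from the paper or from Nic, and `obs` and `forced` are placeholder names):

```python
import numpy as np

def fit_ar1(resid):
    """Estimate AR(1) parameters (lag-1 autocorrelation and innovation std)
    from a residual series (observations minus model-simulated forced response)."""
    r = resid - resid.mean()
    rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)   # lag-1 regression coefficient
    sigma_e = np.std(r[1:] - rho * r[:-1], ddof=1)        # innovation standard deviation
    return rho, sigma_e

def simulate_ar1(n, rho, sigma_e, n_sims=10000, seed=0):
    """Simulate n_sims independent AR(1) noise series of length n."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma_e, size=(n_sims, n))
    x = np.zeros((n_sims, n))
    for t in range(1, n):
        x[:, t] = rho * x[:, t - 1] + eps[:, t]
    return x

# Hypothetical usage, with `obs` an observed annual GMST anomaly series and
# `forced` the model-simulated forced response on the same years (placeholder
# names, no data supplied here):
#   rho, sigma_e = fit_ar1(obs - forced)
#   noise = simulate_ar1(len(obs), rho, sigma_e)
# One would then count how often noise-only realizations reproduce a record
# like the observed one. This is the step Nic questions: the residual series
# is short, it is a single realization, and it will be artificially small if
# the models were tuned to track the observed record.
```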
2. The proper way of separating anthropogenically forced from naturally forced climate changes and from internal variability is to perform a multimodel detection and attribution analysis, using gridded data not just global or NH means. Two thorough recent studies that did so were used in the IPCC AR5 report to reach their anthropogenic attribution statements. This study does nothing of comparable sophistication and does not enable any stronger statements to be made than in AR5.
3. They say that a long range dependence (long memory) noise process is not supported: ‘Some researchers have argued that climate noise may exhibit first order non-stationary behaviour, i.e. so-called “long-range dependence”. Analyses of both modern and paleoclimate observations, however, support the conclusion that it is only the anthropogenic climate change signal that displays first-order non-stationarity behavior, with climatic noise best described by a stationary noise model. We have nonetheless considered the additional case of (3) ‘persistent’ red noise wherein the noise model is fit to the raw observational series’
Three problems there.
a) The “analyses” they cite is an editorial comment by Michael Mann.
b) Long-range dependency does NOT in general involve first-order non-stationarity behavior. A classical case of long-range dependence is a fractional difference model, which is not first-order non-stationary provided that the difference parameter is under 0.5. Such a model is considered a physically plausible simple one-adjustable-parameter characterisation of climate internal variability, just as is the first-order autoregressive (AR(1)), short range dependency, model that they use. (Hasselmann 1979; Vyushin and Kushner 2009). Imbers et al (2013) found that both models adequately fitted climate internal variability in GMST over the historical period, but that using the long-range dependency model the uncertainty ranges were larger.
c) Their ‘persistent’ red noise model does NOT have any long range dependence at all (it is an AR(1) model with a differently estimated autocorrelation parameter), so it provides little or no test of the effects of the true internal variability process having long range dependency.
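A small numerical illustration of points (b) and (c), in Python with numpy; this is my own sketch with arbitrarily chosen parameters, not anything taken from the paper or its references:

```python
import numpy as np

def ar1_noise(n, rho, rng):
    """Short-memory AR(1) ('red') noise; autocorrelation decays as rho**lag."""
    x = np.zeros(n)
    eps = rng.normal(size=n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

def frac_diff_noise(n, d, rng):
    """ARFIMA(0, d, 0) noise with 0 < d < 0.5: stationary, but with long-range
    dependence (autocorrelation decays as a power law rather than exponentially)."""
    psi = np.ones(n)
    for k in range(1, n):                  # MA(infinity) weights of (1 - B)**(-d)
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(size=2 * n)
    return np.convolve(eps, psi)[n:2 * n]  # discard the warm-up portion

def acf(x, lag):
    x = x - x.mean()
    return np.sum(x[lag:] * x[:-lag]) / np.sum(x * x)

rng = np.random.default_rng(1)
n = 20000
ar = ar1_noise(n, rho=0.5, rng=rng)
fd = frac_diff_noise(n, d=0.3, rng=rng)
for lag in (1, 10, 50):
    print(lag, round(acf(ar, lag), 3), round(acf(fd, lag), 3))
# At lag 1 the two series look broadly similar, but by lag 50 the AR(1)
# autocorrelation is essentially zero while the d = 0.3 series retains clear
# positive correlation: the long-range dependence that an AR(1) "persistent
# red noise" model cannot represent, however its parameter is estimated.
```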
4. Nothing in this study considers the probability of the high recent recorded temperatures having arisen in the case where there is an anthropogenic component but it is less strong than that simulated by the CMIP5 models, e.g. because they are too sensitive. Moreover, simple models involving lower sensitivity to greenhouse gas forcing, but also less aerosol cooling, than in most CMIP5 models can provide as good or better a match to the historical record as the CMIP5 multimodel mean – better if known natural multidecadal variability (AMO) is taken into account. These are really the key questions – is all the warming over the historical period anthropogenic; and even assuming it is, can it be accounted for by models that are less sensitive to increasing greenhouse gas concentrations? Few serious people these days argue that no part of the warming over the historical period has an anthropogenic cause, which is all that Mann’s method can seek to rule out.
5. I think that their extension of the CMIP5 Historical simulations from 2005 to 2014 is highly questionable. They say “We extend the CMIP5 series through 2014 using the estimates provided by ref. 13 (Supplementary Information).” That means, I think, that they reduce the CMIP5 model temperature trends after 2005 to account for supposedly lower actual than modelled forcing. In the SI it refers instead to ref. 12. Neither ref. 12 nor ref 13. appears to provide any such forcing estimates. Ref 14, which in their reference list bears the title of a different paper (the correct title is ‘Reconciling warming trends’) is highly likely wrong in its conclusion that forcings in CMIP5 models have been overestimated over the last decade or so. They only considered forcings that they thought were changed too positively in models. The issue was properly investigated in a more recent paper, Outen et al 2015, which considered all forcings and concluded that there was “no evidence that forcing errors play a significant role in explaining the hiatus” – they found a negligible difference in forcing when substituting recent observational estimates for those used in a CMIP5 model.
6. They use modelled SST (TOS) rather than 2m height air temperature (TAS) over the ocean. In principle this is appropriate when comparing with HadCRUT4 and old versions of GISTEMP, but not with the latest version of GISTEMP or with the new NOAA (Karl et al – MLOST?) record, as that adjusts SST to match near-surface air temperature on a decadal and multidecadal timescale.
Hope this helps. Nic
JC Conclusion
The Mann et al. paper certainly provides a punchy headline, and it is a challenge to communicate the problems with the paper to the public.
As I see it, this paper is a giant exercise in circular reasoning:
1. Assume that the global surface temperature estimates are accurate; ignore the differences with the satellite atmospheric temperatures
2. Assume that the CMIP5 multi-model ensemble can be used to accurately portray probabilities
3. Assume that the CMIP5 models adequately simulate internal variability
4. Assume that external forcing data is sufficiently certain
5. Assume that the climate models are correct in explaining essentially 100% of the recent warming from CO2
In order for Mann et al.’s analysis to work, you have to buy each of these 5 assumptions; each of these is questionable to varying degrees.
“…results suggest that the recent record temperature years are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic [climate change] than in its absence.”
except that there is no correlation between emissions and warming
https://www.youtube.com/watch?v=vUvLoE5v0yQ
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2662870
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2642639
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2654191
This paper describes the mental defects in those who doubt record warmth:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0147905
Who needs description when the comments above so vividly evidence them?
Dr. Munshi,
These papers are very interesting and to someone with a decent, but not exhaustive knowledge of statistics, the methodology seems pretty straight forward.
Your results indicate the lack of any significant statistical correlation between fossil fuel emissions and surface temperatures as well as between fossil fuel emissions and atmospheric CO2 content. You also show some interesting results re. the carbon budget, fossil fuel emissions and mass balance. All of these topics are more than germane to the climate change debate.
Have you published (or attempted to publish) these papers in any peer reviewed venues?
Unless I am missing something obvious, your work would be an excellent guest post here at CE. I encourage you and Dr. Curry to consider this.
Thank you for the heads-up.
Seems like a tempest in a teapot: Mann et al. do not reach any controversial conclusion — they say “These same record temperatures were, by contrast, quite likely to have occurred in the presence of anthropogenic climate forcing.”
Is there anyone who disputes that there is _some_ level of anthropogenic forcing?
Mann et al. are saying it is very likely that there is some anthropogenic component to temperature increase.
So?
Why did Nature publish this?
Is this controversial or important or newsworthy finding?
I thought Nature likes to publish “timely” and “important” work.
I’m not saying Mann et al.’s analysis is not flawed — it may be flawed but their conclusion is really no B.F.D.
(PS: There are a bunch of typos in Nic’s write-up that you wish to edit.)
*may wish to edit
:)
No, the implication (and reporting) is that the record temps are due to a great deal of anthro forcing, not just a vague “some.” This is a probabilistic variant of the long-standing argument that the models can only produce the warming by including a lot of anthro forcing.
DW, yes. It is another form of ‘pause rebuttal’. The warmunists will be in bigger trouble when Arctic ice rebounds, because that was an even clearer ‘polar amplification’ prediction. We just need a few more years on that one.
It produces a headline that the MSM can then use in doing its part to help with the indoctrination process of the mostly disinterested public that now mostly believes that climate change can be blamed on humans, but they don’t really care. The only ones who care are those truly caring politicians who want to help the masses by taking control of our energy sector and letting us all know what is best for us as we shiver in the cold or sweat in the heat. There really is not more to it than that.
Mr. Barnes has hit the nail (and little yimmy) on the head. The folks don’t care. They ain’t scared. What we need to do is change the politicians. Throw the bums out. Go Donald! New people in charge of EPA, NASA, NOAA, and DOJ next year. Green subsidies and funding of BS WARMEST YEAR EVAH! papers will cease and desist. Alarmist political hacks and minions will be hacking for uber. Go Donald!
Careful Donni, the Geritol has gone to your head. You need to be aware of the contraindications with Metamucil. Sleep tight, sport.
Could not agree more Don. One other conclusion that seems obvious to anyone except the warmunistas is that a “heavyweight” like Mann can fabricate anything and have it published in a “respectable” journal as long as the punch line blames humans for global warming, climate change/disruption, etc.
Yes Mr. Barnes, Dr. Stephen Schneider is gone but the Schneider Principle lives on. The climate science is exempt from the ethical strictures of the scientific method. They have to save the planet from the humans, by any means necessary. Deliberate confirmation bias and Alinskyite community organizing on a global scale serving a noble cause. What could possibly be wrong with that?
*uninterested
So instead of being stupendously wrong they are being stupendously frivolous? I like that!
(People, can we please have fewer links to Nature for a while? Can’t stand that silly rag. It’s like HuffPo for the slightly numerate.)
The “600 to 130,000 more likely” phrase would be the first clue that this lacks some scientific rigor and the piece is off into the propaganda direction. Aren’t these the same folks who would have us believe that 0.02±0.1°C is statistically significant?
More likely than what? How can one give a multiple of “more likely”?
Is the base likelihood 1.0?
And his range of multiplier is huge. That in itself give me great doubt the multiplier.
Given P(A) = 0.0001, and P(~A) = 0.9999, P(~A) is 10,000 times more likely. However, the statement is johncookish at best. May I ask what is it in Hiroshima bombs?
I know what a Hiroshima bomb is, and can relate to that as a metric. I don’t know what a “likely” is, or how to evaluate something that is 600 times more “likely”.
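For what it is worth, the “X times more likely” phrasing is just a ratio of the probabilities assigned to the same outcome under two competing hypotheses. A short sketch using the hypothetical numbers from the comment above (not numbers from the paper):

```python
p_without = 0.0001   # P(observed run of records | no anthropogenic forcing), hypothetical
p_with    = 0.9999   # P(observed run of records | anthropogenic forcing), hypothetical
print(p_with / p_without)   # about 9999, i.e. roughly "10,000 times more likely"
```

The huge 600-to-130,000 spread quoted from the paper presumably reflects how strongly that ratio depends on the choice of noise model and temperature series.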
In all likelihood these objections will generate more cartoons than op-eds.
Russell Seitz, it’s not pretty. It’s petty.
You are another website trying to pass itself off as the Watts Up With That blog, in an attempt to try and steal internet traffic.
Ken Rice, aka “And Then There’s Physics” also did that that with his “Wotts Up” blog. He hasn’t apologized yet, as far as I know, but did at least change the name. Whether Anthony Watts had to use a lawyer or a cattle prod, I don’t know.
You are just making yourself look bitter and malicious with such a blog title.
Don’t forget to copy your complaint to NBC’s Executive Vice President for Fair Usage and Cattle Prods.
It is scandalous that “What’s Up with That?” a perennial feature of the Really Famous television series Saturday Night Live should steal the thunder of a climate blog !
Do you think Kenan Thompson could get Al Gore to come back on for the fourth or fifth time to play Tony in a send-up sketch , or should he reach out to Tony to play Al?
Yeah, Creepy Ken and vvussell are all over the Bish like a bloody rash these days, carping, sniping, trolling and adding not a jot nor a tittle to any debate. Overpaid schoolkids with too much time on their hands. ATTP posts all the time during working hours. Like Mann, antisocial media seems to be where most of their “work” is done.
Pretty funny Russell
Got a laugh from me !!!
“…results suggest that the recent record temperature years are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic [climate change] than in its absence. – Mann et al.”
I’m offering odds of 600-to-one or 130,000-to-one that this will be proven accurate. I’ll let you know which odds will apply when your bets are in. I’m selling real sizzle here, not mere steak.
Also, I’ll let you know what constitutes a “record” if I have to…but I’d rather let the emotion exude from it, as from words like “unprecedented” and “ever”. Because we don’t want to get all precise and Vulcan about this. This is about flesh and blood, about hearts and feelings.
[Please note the use of “suggest” and “roughly”, in case of continued Vulcan carping. Or in case we want to change the odds back to 650 million-to-one.]
– And Then There’s Cloud
Indeed a 217-fold uncertainty in their confidence level.
Another way to say what Mann et al are saying: “it is, like, super-likely that humans have influenced temperature at some level.”
Woo Hoo.
Let’s publish.
May I place a bet against Mann’s favorite? $10 on the field.
bts.
10…600…130,000…650,000,000…
Whatever!
Speaking of the Mannster, does anyone know what the current status is in Mann vs. Steyn vs. Mann? Clearly, Steyn’s calling the D.C. justice system a clogged toilet is proving more than apt…There’s so much wrong with our country it’s hard to keep track of it all…
(aka pg)
It is a robust 10,000, give or take a factor of 130. Astronomically it can be considered good, admitted error only +- two orders of magnitude.
Now just what’s the purpose? We all agree there is an anthro effect.
Mosomoso
I am currently estimating CET for the 13th century and would appreciate your expertise on dealing with the possible margin of error of some 600 to 130000 per cent.
How do I express that in the error bars without it looking just a trifle silly and possibly even unscientific?
tonyb
Easy. Get published in Nature. But go the whole enchilada and make your upper margin 650 million.
De l’audace!
Tony, doncha’ know ‘audacity rules’. Case study
1: One – tree – Yamal … Case study 2: Splicing ter
hide -the -decline … Case study 3: Lewandowski,
‘and Cook, sich recursive fury ) … oh, ‘n et Al …
Oh, and get one of those push polls where you get nearly everyone agreeing with you. I think they call ’em surveys, or studies, or something like that. You ask questions like: “How important is it to allow for extensive margins of error when there is considerable uncertainty? Critical? Very important? Fairly important? Not important?” This translates to “Can I just make stuff up?” but you must never say that. The type of expert who responds most eagerly will be perfectly accustomed to True/False and Tick-box examinations, so you really shouldn’t have any probs if you word things right.
Really! Do I have to tell you everything about climate science?
“I am currently estimating CET for the 13th century and would appreciate your expertise on dealing with the possible margin of error of some 600 to 130000 per cent.”
I believe that the cool kids often use logarithms. How does 2.5 to 6.1 sound to you? (Is the base 10 log of percentage meaningful even in climate science?)
Don
Thanks for your help. That sounds suitably scientific for my forthcoming paper on CET.
Now, where’s the address for Nature…?
Tonyb
I believe Nature’s new offices are at the North Pole between Santa’s workshop (looks a lot like Auschwitz) and the palatial reindeer stable built with Rudolph’s royalties. If you send your submittal to Santa he’ll probably pass it along during the daily cocktail hour.
The Medieval Quiet Period
Ah, let em have it. What difference does it make?
Until they adjust out of existence the Minoan, Roman and Medieval Warm Periods, the proper odds are the same as those offered at post time for Secretariat winning the Belmont.
That was not a random walk and neither is this.
You’d really think that, with all this scary SLR, you’d at least be able to sail a boat up to Pevensey Castle, the former Roman fort where William the Conqueror came ashore in 1066. Even allowing for siltation of the harbour/lagoon etc, you’d think our modern climate disaster could manage a canoe approach where Romans could park an entire “classis” or fleet.
But no. It’s a non-random walk a mile inland. Even the bloody medievals could float there!
When someone is in love with themselves, as Mann is in love with his model progeny, rapture and speaking in an effusive manner are all part and parcel of his public pronouncements.
Mann is to be viewed in the manner in which he presents himself: an exuberant carnival barker, a source of amusement, not someone to be taken seriously.
“Here we attempt to address that question (human caused climate warming) using a semi-empirical approach that combines the latest (CMIP5) climate model simulations with observations of global and hemispheric mean temperature.”
Again; Climate models are not experiments, rather, such models are an attempt to assess one’s capricious ideas of which Mann appears to have many; i.e., like a bad dream, induced by stomach indigestion.
I only see modesty:
http://www.meteo.psu.edu/holocene/public_html/Mann/album/album.php
nickels
I like photo #5 of Mann with Bill Clinton best: birds of a feather and all that.
My reanalysis of planetary motions indicate it’s roughly 600 to 130,000 times more likely that the Earth is round than that it’s flat.
Writing up draft for Nature.
I’m confident they’ll publish it.
The difference between the way Reuters and the Examiner report the study is stunning.
The media have gone from reporting the story, to being the story.
I think what Mann essentially concludes is that if some process were causing the planet to warm, then a warming pattern would have a relatively high probability. But, if no process is causing the planet to warm, then a warming pattern would have a relatively low probability. But, you can’t go backwards without some initial assumptions about the probability that there’s some process warming the planet. Furthermore, this says nothing about what process(es) might be warming the planet.
“Press reports notwithstanding, the question of how likely observed temperature records may have been both with and without human influence is interesting in its own right.”
Really? Interesting in its own right?
With science not being able to say what temperatures “may have been…without” it ain’t science.
And Mann et al finds such a conundrum to only be “interesting”?
Why is Mann selling out Third World and developing countries who need more energy to improve the lives of the people there who have every right to aspire to live as well as any global warming government scientist and petty bureaucrat in the West?
William Briggs has a good, breezy piece on the statistical errors in the Mann paper, at
file:///Users/donaitkin/Desktop/The%20Four%20Errors%20in%20Mann%20et%20al’s%20“The%20Likelihood%20of%20Recent%20Record%20Warmth”%20%7C.webarchive
Yes. His conclusions are damning. This is a worthless paper.
Actually, this link is better:
http://wmbriggs.com
The Four Errors in Mann et al’s “The Likelihood of Recent Record Warmth” W.M. Briggs Jan 26, 2016. Extracts:
Man and turtle …
http://imgs.xkcd.com/comics/pixels.png
One can condense William Briggs’ excellent review down to one sentence:
Mann’s latest paper is a textbook example of mathturbation.
Where the data is “tortured” to fit the politics/model.
Dr. Curry wrote:
Five steps of circular reasoning:
1) Assume that the global surface temperature estimates are accurate; ignore the differences with the satellite atmospheric temperatures. THE DATA IS NOT ACCURATE OR FIT FOR PURPOSE….
2) Assume that the CMIP5 multi-model ensemble can be used to accurately portray probabilities. NEVER VERIFIED…..
3) Assume that the CMIP5 models adequately simulate internal variability. NEVER FULLY ADDRESSED…..
4) Assume that external forcing data is sufficiently certain. STILL JUST A HYPOTHESIS…..
5) Assume that the climate models are correct in explaining essentially 100% of the recent warming from CO2. HA HA, WHAT WARMING ????
Cr-p…., this is not even circular reasoning, it is Mobius strip reasoning….
Assume, assume, assume, assume, assume and then CONCLUDE that the chances of a couple years of supposedly abnormal warm weather in a row MUST be caused by humans…
What a crock…
How about; form a hypothesis, observe predicted results, modify hypothesis, observe again, modify hypothesis…..
Maybe throw in a little VERIFY along the way and there might be some progress…
Lies, Damn Lies, and statistics……
Cheers, KevinK.
Persistence/Hurst – Kolmogorov Dynamics
I do not see that Mann et al. have any real recognition of climate persistence aka Hurst Kolmogorov Dynamics, mentioning neither Hurst nor Kolmogorov nor the leading international expert/researcher D. Koutsoyiannis. See especially Fig. 9 in:
Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Dimitriadis, P., and D. Koutsoyiannis, Climacogram versus autocovariance and power spectrum in stochastic modelling for Markovian and Hurst–Kolmogorov processes, Stochastic Environmental Research & Risk Assessment, 29 (6), 1649–1669, doi:10.1007/s00477-015-1023-7, 2015.
Compare the mentions of “persistence” in Mann et al. Emphasizing persistence due to anthropogenic warming, but NOT natural climatic persistence. I do not see that they recognize that the standard deviation of natural climate persistence – Hurst Kolmogorov Dynamics is about twice that of Markov processes.
Perhaps an expert in HK Dynamics and statistics can address this further.
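One back-of-the-envelope way to see where a factor of about two can come from (my sketch, with assumed Hurst coefficients and averaging scales; not a result taken from the papers cited above):

```python
# Standard deviation of a k-year average, relative to the annual std sigma:
#   classical statistics (and, up to a constant, a Markov/AR(1) process at
#   long scales): roughly sigma * k**(-0.5)
#   Hurst-Kolmogorov scaling with Hurst coefficient H: sigma * k**(H - 1)
# so HK variability exceeds the classical value by a factor of k**(H - 0.5).
for H in (0.6, 0.7, 0.8, 0.9):       # assumed Hurst coefficients (illustrative)
    for k in (10, 30, 100):          # averaging scales in years
        print(H, k, round(k ** (H - 0.5), 2))
# For example H = 0.7 at a 30-year scale gives a factor of about 2, which is
# one way a "standard deviation about twice that of Markov processes" figure
# can arise; the exact number depends on H and on the scale considered.
```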
Satellite/Ground Temperature Divergence, Climate Persistence
See: Problematic Adjustments And Divergences (Now Includes June Data)
Mann et al. apparently do not recognize that the surface record adjustments very highly correlate with CO2 (R2=0.9867) rather than being random! I.e., the adjustments themselves are “anthropogenic warming” apart from any natural or CO2 driven warming.
Physicist Robert Brown at Duke (aka rgbatduke) addressed this and the consequent massive divergence between the satellite and surface temperature records.
Until these obvious human “anthropogenic warming” adjustments, divergence between surface and satellite records, and the divergence between climate models and “the pause” are reconciled, all appeals by Mann et al., etc. have no validity by reason of massive unacknowledged Type B error.
The persistent refusal of the Climate warming community to even acknowledge Type B errors let alone incorporate the international standards on treating uncertainties further amplifies the runaway lemming herd divergence away from the scientific method and reality.
See GUM: Guide to the Expression of Uncertainty in Measurement, BIPM
1998-2015 Temperature Divergence
Werner Brozek clearly shows the temperature “divergence”
Final 2015 Statistics (Now Includes December Data)
By contrast,
Until the serious Type B errors are quantified, most probably in the surface records, the surface trends and “records” are useless for public policy determination, and a major contributor to the errors in Mann et al.
“Here we attempt to address that question [observed temps w/ and w/out human influence] using a semi-empirical approach.”
… Is this what he means?
https://climateaudit.files.wordpress.com/2013/09/figure-1-4-models-vs-observations-annotated.png
It’s simple: Nature conforms to the models.
All the differences are from anthropogenic causes!
And yet it cools.
And Earth has been cooling for 8000 years since the Holocene Optimum.
It is now colder than during the Samarra, Naqada, Egyptian, Minoan, Caral, Macedon, Roman, Hittite, Pueblo/Anasazi, Arawak/Paredao, Mayan and Islamic periods! How inconvenient!
What is missing is for the skeptics to have their own evaluation of the probability. They have the Lewis and Curry paper from which to get some idea of the natural variation around the forced change that they assumed leads to the warming trend. Instead they now decide to criticize others for making the same assumption about the presence of a dominant background forcing. It’s unfortunate that they don’t follow through with their own numbers, but it is also understandable because their circular reasoning argument also works against the Lewis and Curry methodology.
You just don’t get it.
Probability means nothing, when what you really want to know is “how much”, as opposed to “how likely”
And the result is shaky at best, and if true, it proves nothing I disagree with.
L&C say they know how much and what forcing went into the warming in their selected periods, and so from that they can infer natural variabilities as a residual, and do the same thing to come up with a probability.
They are not opposed. What you want is a probability distribution for the magnitude of the effect.
Jim D, to help you here, basically fitting probability models to the historic data can’t tell you anything about attribution one way or the other.
On the other hand what L&C attempt is just to estimate the value of a variable from the historic record (with a fair swag of assumptions). That might help narrow down viable hypotheses for what is causing the climate to warm, but they would still need to be tested.
If L&C know the forced change, they also know the natural variability. All I am saying is to use that and do their own probability with it. Not much to ask.
Jim D; if L&C do know what the forced change is, and what the natural variability is, they are the only ones who believe that they know.
They published numbers, and the IPCC has too, but perhaps you are suggesting they made them up.
Jim D, do you understand the difference between attributing causality and estimating parameters?
If you need help understanding I’m happy to try and explain.
The values published for forcing and natural variability are derived from a model. That model may, or may not, be correct. More likely not.
So if L&C and yourself for that matter believe the numbers are true and correct, I can only feel sorry for you.
Well, if we use the pre-Mannian estimates of the MWP we are still in the “natural” range.
Mann didn’t melt the permafrost where the Vikings used to grow grain.
Mann didn’t disappear the stumps found under retreating glaciers.
https://www.deepdyve.com/lp/elsevier/medieval-climate-warming-and-aridity-as-indicated-by-multiproxy-CKMFTb0kx8
https://www.researchgate.net/publication/236135155_Medieval_climate_warming_recorded_by_radiocarbon_dated_alpine_tree-line_shift_on_the_Kola_PeninsulaRussia
The tree line in Siberia in the MWP was 100-140 meters above the current tree line. Mann didn’t move that either.
Jim D: It’s unfortunate that they don’t follow through with their own numbers, but it is also understandable because their circular reasoning argument also works against the Lewis and Curry methodology.
The circularity (or model dependence) makes it an essentially meaningless calculation, so there is no good reason to make the calculation.
Indeed, which is why fingerprints are used: the ocean heat content increase, satellite and surface detection of CO2 radiation effects, stratospheric cooling, land warming twice as fast as ocean. There are enough independent pieces of information that it is well known what the main reason for the warming is, and explains why L&C assumed it unquestioningly in their sensitivity study. Nothing else has all these fingerprints.
I wish the stratosphere was cooling. I’d be 20 years younger.
“I wish the stratosphere was cooling. I’d be 20 years younger.”
Ah, the good old days :) Now you have to squint pretty hard to see any.
Jim D | January 27, 2016 at 6:39 pm |
Indeed, which is why fingerprints are used:
This is an invalid comparison. Real fingerprints let you identify a suspect. Your fingerprints just tell you that someone was in the room.
The only real measurement of forcing is 0.2 W/m2 for 22 PPM or about 0.64 °C for a quasi-TCR measurement. That isn’t good. CAGW requires more. The IPCC thinks the TCR is 2.0.
That means about 2/3rds of the warming is unrelated to GHG.
The 21st century has the highest 15-year CO2 emissions growth of any time in history. Carbon emissions from 1986-2000 were 94 GT; from 2001-2015 they were 136 GT, which is 44.7% more.
The rise in CO2 for the 15 year periods was 23.48 (pre 2000) vs 31.31 (post 2000) PPM (33% more). Please note it is getting increasingly hard to raise the CO2 level with human emissions.
Anyway 1986-2000 had about 0.23 W/m2 forcing increase (using the measured value for forcing) and since 2000 the increase has been 0.28 W/m2.
This is a discrepancy with reported data trend that needs to be explained.
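To make the arithmetic in this comment explicit, a small sketch (Python; the CO2 concentrations are approximate Mauna Loa annual means filled in by me, and the linear per-ppm scaling is the commenter's assumption rather than a standard forcing formula):

```python
import math

# Commenter's scaling: a measured 0.2 W/m2 of extra CO2 forcing per 22 ppm,
# applied linearly to the CO2 rise in each 15-year period.
per_ppm = 0.2 / 22.0
for label, rise_ppm in (("1986-2000", 23.48), ("2001-2015", 31.31)):
    print(label, round(per_ppm * rise_ppm, 2), "W/m2 (linear scaling)")

# For contrast, the conventional logarithmic expression F = 5.35 * ln(C2/C1),
# using approximate Mauna Loa annual means (values filled in by me):
co2 = {1986: 347.0, 2000: 369.5, 2015: 401.0}
print(round(5.35 * math.log(co2[2000] / co2[1986]), 2), "W/m2 for 1986-2000")
print(round(5.35 * math.log(co2[2015] / co2[2000]), 2), "W/m2 for 2001-2015")
# The linear scaling gives roughly 0.21 and 0.28 W/m2; the logarithmic formula
# gives larger values (about 0.34 and 0.44 W/m2), which is part of why the
# commenter's "measured" figure and standard forcing estimates differ.
```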
Globally it looks more like this.
http://www.woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25
The scaling is 100 ppm per deg C, equivalent to 2.4 C per doubling which matches the last 60 years well, a period with 75% of the emissions and 75% of the temperature rise. Yes, this is a headscratcher for the skeptics because they can’t think of any reason why these two lines of data should be going along together like this, but physics says that is what is expected.
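For what it is worth, the arithmetic behind the “equivalent to 2.4 C per doubling” remark (my sketch; a linear ppm-per-degree overlay only maps onto a per-doubling figure approximately, because the logarithmic relation has to be evaluated at a particular CO2 level):

```python
import math

# The graph overlays temperature on CO2 scaled at 100 ppm per deg C.
# Expressed per doubling of CO2, that linear scaling corresponds to
#   dT per doubling = (C / 100 ppm-per-degC) * ln(2)
# which depends on where in the CO2 range it is evaluated:
for c_ppm in (310, 350, 400):            # roughly the 1950-2015 range
    print(c_ppm, round(c_ppm / 100.0 * math.log(2), 2), "degC per doubling")
# About 2.1 to 2.8, or roughly 2.4 near the middle of the range, which is
# where the quoted figure comes from. Note this describes the overlaid
# curves; it is not by itself an attribution calculation.
```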
Jim D, you also need to learn a bit more about what it means to say the “physics says that is what is expected”.
That needs a forecast/prediction/projection based on the physics with no knowledge of the system state during the forecast period and the actual observations over that period.
JimD You are referring to “multiple lines of evidence” a tried and true method in Geology. That’s only half the story.
Below is an actual fingerprint and it ain’t CO2
http://sciencenordic.com/sites/default/files/2014_08_24_142811_crevasse_and_dark_ice_at_qas_u_0.jpg
Below is a smoking gun
https://i.ytimg.com/vi/oDCGvAyv78Y/maxresdefault.jpg
Jim D: Indeed,
You are agreeing with me that the calculation is useless? After recommending that L & C do one of their own?
Horst Graben,
those pictures have a little downside: they don’t tell where they come from. The glacier terminus is self-evidently dirty. Where I was born, we have a mountain which is all continental glacier melt dirt.
See pictures here:
http://www.slate.com/blogs/future_tense/2014/09/16/jason_box_s_research_into_greenland_s_dark_snow_raises_more_concerns_about.html
Jason Box has taken good pictures, but they don’t tell the truth. They tell a story.
Look, at human scale the models vastly overestimate warming and temperature appears to explain the variation in CO2 but not the trend in its atmospheric accumulation. At Paleogene/Neogene (Tertiary) scale CO2 is the slave of temperature. At Phanerozoic scale, there is no discernable correlation.
We need to find the inflection points. Ain’t been done yet. Not looking good for any time soon.
Until such time, any appraisal of likelihood is sheer hubris.
Until such time as there is a viable explanation for why the planet plunges into oscillating glacial conditions every few hundred million years, and then recovers, any appraisal of likelihood is madness.
120,000 year cycles
I have speculated that the glaciation-to-glaciation period is determined by the time required to dissipate the heat collected in the oceans by transferring the top 100 or so meters of water to polar and land regions. The trigger must occur when the oceans reach a certain thermal load, snows are falling on the deposition areas, and return to the oceans is insufficient to stop the process. The deposition areas will now have surface temperatures ≤ 0 C. The process continues slowly (over 80,000+ years) until the transfer mechanism no longer operates: i.e., when the seas have reached a sufficiently low temp (~the same as the land). Oceans freezing is not necessary as they will be quite a bit more saline than now. Since there is a finite amount of ocean water, the time for all of this to occur generates ~constant glacial periods (all other parameters apparently only making minor fluctuations along the way). When the oceans have dumped their “excess” (deltaT~0C vs land) energy, transfer ceases, and the glaciers begin to slowly melt with water returning to the oceans. The “slow” melting process occurs over a few thousand years compared to the “very slow” ocean-to-land transfer process.
the question of how likely observed temperature records may have been both with and without human influence is interesting in its own right.
Human-caused climate change
How do humans cause climate change?
1. Energy production. The amount of energy produced in a year by human activity is minuscule, less than 1 second of sunshine a year.
2. Terra-forming. The amount done is minuscule and accidental, and is not currently considered relevant.
3. Aerosol production. Increases albedo in the air, reduces albedo if deposited on ice/snow. Pulled out of the hat to explain all the times observed readings do not match model estimates, so it has high “unnatural” variability.
4. CO2 increase. Known to have a GHG effect that, considered on its own, could raise global temperature.
There can be no climate change without CO2 increasing the temperature first.
All climate change due to human activity is predicated on the temperature increase expected with a human caused CO2 rise.
The fact that CO2 increases can be due to non human causes with great variability [possibly greater than human caused increases] is ignored.
The fact that feedbacks may virtually eliminate the effect of the CO2 rise, especially by extra cloud formation and albedo increase is ignored.
If the CO2 levels have gone up due to human input [1.]
and the temperatures have gone up [2.]
then there is unequivocal human caused climate change.
1. is probably right
2. has not occurred
There is no correlation between the CO2 rise and the temperature on satellite or earth surface measurements
It is impossible to give odds on the magnitude of human causation because it can only happen, or not.
If you assign to each of the 5 questionable points a probability of 0.5 of being true, the probability of all of them being true is 0.03125.
And that leaves out the time series issue. See e.g.
https://www.itia.ntua.gr/getfile/849/3/documents/2008EGU_HurstClimatePrSm.pdf
In general, when Mann publishes anything using statistics, it is important to remember that he has a long history of freshman-level statistics errors in his publications.
His bias is so great that all the conclusions of all his papers could have been predicted before he even began the research on them. Such a record invites a great deal of skepticism.
As I understand McIntyre, Mann went beyond freshman-level stats mistakes. He invented new ‘statistical’ methods without checking whether they had any validity – his de-centered PCA automatically produces hockey sticks out of red and pink noise.
Mann is clueless about climatic variation and continues to use entirely inappropriate statistics based on normal Gaussian noise and not 1/f^n type noise as is present in climate.
And the result is that the long-term variance increases (in theory toward infinity), so that if you take a short run, then over the long run you expect on average most results to be outside the 99.9% confidence limits.
So, it is “Normal to be abnormal” (at least if you apply noddy stats like Mann does).
Here’s a short intro to the subject: http://scottishsceptic.co.uk/2014/12/09/statistics-of-1f-noise-implications-for-climate-forecast/
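A crude numerical check of the flavour of this claim (my sketch; stationary long-memory noise stands in for “1/f” noise here, and how extreme the effect is depends heavily on the noise model chosen):

```python
import numpy as np

def frac_noise(n, d, rng):
    """Stationary long-memory (ARFIMA(0, d, 0)) noise, a simple stand-in for
    the 1/f-type behaviour discussed above."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(size=2 * n)
    return np.convolve(eps, psi)[n:2 * n]

def exceedance(series_maker, n_base=100, n_total=2000, n_sims=200, seed=2):
    """Fit naive 99.9% limits (mean +/- 3.29 std) to the first n_base points,
    then count how often the rest of the series falls outside them."""
    rng = np.random.default_rng(seed)
    frac = []
    for _ in range(n_sims):
        x = series_maker(n_total, rng)
        m, s = x[:n_base].mean(), x[:n_base].std(ddof=1)
        tail = x[n_base:]
        frac.append(np.mean(np.abs(tail - m) > 3.29 * s))
    return np.mean(frac)

print("white noise :", exceedance(lambda n, rng: rng.normal(size=n)))
print("long memory :", exceedance(lambda n, rng: frac_noise(n, 0.45, rng)))
# The white-noise case stays close to the nominal 0.1%; the long-memory case
# falls outside the naive limits far more often, which is the point being
# made: limits derived from a short run of persistent noise are much too tight.
```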
This issue of the extent of CO2 effect on global temperature will never be settled by scientists on their own because of the advocacy and current influence of the CAGW supporters. I see only one near term solution, that being a skeptic Republican such as Cruz being elected President and his party retaining control of Congress. Only a political decision to clean up GISS and NOAA surface temperature records by a new set of people and pressure on HAD/CRU to do the same with theirs will force a re-evaluation of climate models. The adjustments to GISS/NOAA/HADCRUT towards climate model output simply have to be stopped. Cruz could denounce Paris and stop supporting the IPCC financially until the remit is changed from proving catastrophic impact of CO2 to researching the original definition of Climate Change (i.e. not biased towards CO2 impact).
Also, I’m sick to death with these comments suggesting we have any idea whatsoever how much (if any) change is due to human causation. The reason is simple: we cannot know what is abnormal until we know what is abnormal.
And so if someone talks about a “century long” trend of human causation, the very minimum length of data we require to have any idea whatsoever of what is “normal” is two centuries – so that we can compare the third century with the “normal” seen in the previous two.
The usual rule of thumb is that we need around 10x as much data to determine “normality” as that we are comparing it with. This means that if we have 150 years of data – the longest trend where we have a reasonable confidence of normality is 15 years. So, even saying the trend 1970-2000 is “abnormal” would be going well beyond the normally accepted procedures. (And it’s BS as we see the same trend from 1910-1940).
If however we start using CET we have around 350 years. In which case we can then start saying whether periods of around 30-40 years are showing abnormal trends (and nothing in the last 116 years does show abnormal trends).
http://scottishsceptic.co.uk/2015/03/06/proof-recent-temperature-trends-are-not-abnormal/
Sorry – that is “we cannot know what is abnormal until we know what is normal”.
I just love the way Mann has the gall to whistle away the uncertainties in the temperature data, and then claim significant results where the minimum is a factor of 200+ smaller than the maximum. I’d simply never have the balls to publish anything so outrageous.
It’s funny now, but in the long term the damage to science continues to grow.
It also just shows how far Nature has fallen to publish this nonsense.
A further thought: by ignoring the temperature data uncertainties, Mann et al are admitting that they don’t know how to do the mathematics required if the uncertainties are retained.
So it’s a self-confessed Mickey Mouse paper. How the hell did it ever get published? (Rhetorical question, there.)
You set records at the top of a cycle. You could take setting records as proof of a long cycle with the same probability as proof of a trend.
This is all because you can’t tell the two apart in principle. A cycle though can’t be man-caused.
Both the long cycle and the trend have the same probability of producing the data in your window.
Could be unicorns.
Steven Mosher: Could be unicorns.
Parsimony would recommend that they be the same unicorns as produced the Medieval Warm Period, the Roman Warm Period, the Minoan Warm Period, and so on.
rhhardin: Your argument, which is essentially that climate is a bounded and perturbed random walk, goes both ways. Perhaps the black carbon and acid rain started the climb out of the LIA which would have otherwise persisted for another 300 years. Then on top of the albedo effects, CO2 kicked in, pulling us up even higher out of the LIA. That means that TCS could be higher than 2 and ECS at 5. Then the random walk takes us out of the LIA and we have a natural upcycle with an AGW kick.
rhhardin’s argument is not convincing, Steven, but your response is completely inappropriate, because the basic point he is making is more or less correct, even though it doesn’t have much support.
Setting of records is indicative of a secular trend, not of the magnitude of that trend. On average, the Earth’s climate has been warming since the last ice age. As a result, if you start measuring temperatures at some random time during that secular trend, you will tend to see a lot of “records” broken. The records only tell you that the trend is generally positive, not whether it is catastrophically so.
The repeated emphasis on record temperatures by people like Mann reveals a deep intellectual dishonesty. I would have thought you would know better, Steven.
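A toy simulation of the records-versus-trend point in this exchange, with entirely arbitrary numbers (Python/numpy; my construction, not anyone's published analysis): within a fixed observation window, a steady trend and the rising limb of a cycle much longer than the window both produce a comparable crop of “record” years once noise is added.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(150)                        # a 150-year observation window

def mean_recent_records(signal, n_sims=2000, noise_sd=0.15, last=20):
    """Average number of new record highs in the last `last` years of the
    window, over many noise realizations added to the underlying signal."""
    total = 0.0
    for _ in range(n_sims):
        x = signal + rng.normal(0.0, noise_sd, size=signal.size)
        running_max = np.maximum.accumulate(x)
        new_record = x[1:] > running_max[:-1]     # beats everything before it
        total += new_record[-last:].sum()
    return total / n_sims

trend = 0.005 * years                                    # steady 0.5 degC/century
cycle = np.sin(2 * np.pi * (years - 75) / 1000.0)        # rising limb of a slow cycle
print("trend :", mean_recent_records(trend))
print("cycle :", mean_recent_records(cycle))
# With these (arbitrary) numbers both signals produce a comparable number of
# recent record years; counting records does not by itself distinguish a
# steady trend from the rising limb of a cycle much longer than the window.
```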
Such a lightweight, disingenuous operator like Mann should just be ignored. Paying attention to him is more than he deserves and gives him too much respect.
Mann is a heavyweight in the climate science community. He heads a well funded Institute at Penn State, publishes extensively in high ranking journals and is highly cited by other scientists. These are the three standard measures of importance in science.
I was just panning the tactic once tried by them. I was just being facetious.
Although as far as importance to science that probably remains to be seen.
Not to mention a literal heavyweight
Professor Mann thinks the AMOC is weakening. The eventual consequence of that will be a negative phase of the AMO and a drag on GMST. He should be insanely popular around here. He’s a kooler.
Well of course he’s a kooler JCH. They all know that it’s possible the oceans go into other phases and the Sun may go dim for awhile. Mann wrote at least one paper on Solar, that I read, back in the early 2000s, and that paper that he co-authored had a lot of sun data. I just don’t know how they are going to keep the global warming hype up if it goes cold for awhile. The public has a very (very) short attention span. As far as him being popular around here, now who’s being facetious.
My biggest complaint with the mainstream is that they choose to ignore satellites. Why spend billions on satellites and just ignore them like a useless tool. they can measure all the ocean and land spots they want, mosh them all they want and still not cover near what satellites do. An obvious bias there.
They do not ignore satellites. When the chief scientist of RSS wants to know the temperature in his backyard, he looks at a thermometer. That’s not ignoring satellites; that’s just exceedingly good commonsense.
David Wojick | January 27, 2016 at 6:42 am |
Mann is a heavyweight in the climate science community. He heads a well funded Institute at Penn State, publishes extensively in high ranking journals and is highly cited by other scientists. These are the three standard measures of importance in science.
Yup, that is what is nice about the science field, you can be wrong and still be popular.
Can’t get away with that in engineering (unless you are selling to the government or jump ship a lot).
JCH, since I don’t have the paper to read I was just parroting what JC said in her conclusion;
“JC Conclusion
The Mann et al. paper certainly provides a punchy headline, and it is a challenge to communicate the problems with the paper to the public.
As I see it, this paper is a giant exercise in circular reasoning:
1. Assume that the global surface temperature estimates are accurate; ignore the differences with the satellite atmospheric temperatures”
I’m also referring to Gavin and his pals at Nasa putting out warmest year press releases and not a mention of RSS or UAH. That is not a case of going out to read the thermometer it’s a case of pretending you don’t have one and just guessing at the temperature.
Oh, sorry I see she did provide the link I somehow missed it. I’ll read it.
Dr Mann is such a heavyweight in the field that apparently his opinion pieces count as scientific fact. At least to the point where he can use them as a reference in a supposedly scientific research paper.
JCH,
“When the chief scientist of RSS wants to know the temperature in his backyard, he looks at a thermometer. ”
You left out the part about “When the chief scientist at NCAR wants to know the temperature in his backyard, he looks at his thermometer and then adjusts the number to a temperature more to his liking.”
Currently the satellites have nothing useful to say about the surface air temperature.
Timg56 – pointless, and, if you were right, which you are not, it would be easy for skeptics to demonstrate it in a paper. Instead they wave their white Italian flag… it’s too hard; we’ve got no science, so we’ll snipe from the cheap seats.
JCH,
“Currently the satellites have nothing useful to say about the surface air temperature.”
That’s just an arrogant dodge. Of course the satellites have nothing useful to say because they make Mann’s argument entirely useless.
JCH: They do not ignore satellites. When the chief scientist of RSS wants to know the temperature in his backyard, he looks at a thermometer.
So let’s be clear: you don’t know the difference between weather and climate. Thanks for making it obvious.
Regarding the five assumptions listed at the end, there is a sixth factor. In addition to natural forcing and internal variability, there is oscillation due to nonlinear negative feedback. The IPCC says that the climate is sufficiently chaotic that prediction is impossible. This implies that it will oscillate naturally under a constant forcing, such as constant solar input, simply because of nonlinear negative feedbacks. This feedback driven oscillation should be distinguished from internal variability, which may be due to other causes.
Supposed climate scientists making assumption 1: “Assume that the global surface temperature estimates are accurate; ignore the differences with the satellite atmospheric temperatures” continues to baffle me. How can they get away with this? In particular, the dolts at NASA/GISS who are supposed to be our satellite team. I have no idea why we continue to pay these people to keep their heads in the sand.
About a month or so ago I had an email exchange with Reto Ruedy who is listed on the GISS web site as the contact for those not satisfied with FAQ’s. I asked him about the apparent disregard of satellite temperatures by GISS. He sent me links to 2 papers circa 1995 and essentially blew smoke.
Can you repost exchange or link your old comments? I couldn’t find old comments.
I think it made the point that the NASA models were made to emulate the satellite data. Kind of funny that people want to ignore the satellite data.
aaron,
This is my email exchange with Reto Ruedy:
Does GISS use the atmospheric temperature data derived from satellite measurements to verify climate models. If so, how is this done and if not why not?
Mark Silbert
Dear Mr. Silbert,
Yes we do and have been doing so ever since those data became available and the computation of temperatures that correspond to the various satellite measurements have been a standard GISS model diagnostic ever since that time. http://pubs.giss.nasa.gov/abs/sh01000b.html shows the abstract and a link to one of the earlier papers about that topic.
Our paper http://pubs.giss.nasa.gov/docs/1995/1995_Hansen_etal_3.pdf discusses comparisons between Satellite data, surface temperature data, and model results.
Thank you for your interest,
Reto Ruedy
Dr Ruedy,
Thank you for your speedy response.
Can you provide me with something tangible I can reference or read that is more recent than 1995?
Regards,
Mark Silbert
Dr. Ruedy,
To be more specific in my question/request, could you provide me with information on:
How is satellite temperature data used in verification and development of GISS/NASA’s GCM?
Is there a systematic effort made to compare and understand differences between GISS surface temperatures and satellite temperatures?
Thank you and I look forward to your response.
Mark Silbert
Mr. Silbert,
As I tried to indicate in my previous email, those particular questions were asked and answered 20 years ago.
Reto Ruedy
Kind of feels like he avoided giving me a direct answer, but the implication I take from this exchange is that NASA/GISS basically don’t use the satellite temperature data for much if anything. As a US tax payer, it makes me wonder what we are really funding NASA for.
JCH,
And your point is?
My point is that NASA ignores their own satellite data every time they make a warmest year pronouncement. My question to Ruedy was why do they do this?
Yes we do and have been doing so ever since those data became available and the computation of temperatures that correspond to the various satellite measurements have been a standard GISS model diagnostic ever since that time. – Ruedy (bold mine)
Go to Scholar and enter the original paper from 1995. Once you locate the original paper, click on “cited by 23”, meaning that since publication in 1995, 23 papers have cited the original work.
Even ATTP doesn’t seem to have the heart to defend Mann’s witterings.
I’ve been busy. Give me a chance to look at it first.
We all wait with bated breath. What will his verdict be?? The search for truth is exciting. What will the little red book reveal?
The paper is just that: more punchy headlines.
“Global temperatures have overall been increasing for more than 200 years. Human-caused CO2 emissions do not explain a significant amount of this warming prior to 1950.”
But it is interesting that CO2 has also been increasing for more than 200 years. So are we sure that increases prior to 1950 were not human caused?
I am getting more and more dissatisfied with the mysterious “natural variability.” I can see natural variability in changes over short periods of time but not 200 years.
Aside from that, not really liking the Mann paper too much or defending it.
“I can see natural variability in changes over short periods of time but not 200 years.”
James, climate varies naturally on all timescales. Please read this:
http://judithcurry.com/2016/01/22/history-and-the-limits-of-of-the-climate-consensus/
Perhaps I didn’t phrase that correctly.
What I am referring to is natural variability arising from internal dynamics of the climate for which we cannot attribute a cause.
Of course, over 100,000-year time scales we can have big swings, usually attributed to Milankovitch cycles; we can attribute some (maybe most) of the cause to orbital changes.
We can also see that some of the changes in Holocene climate may be caused by orbital changes too.
But let’s take the last 1000 years.
We have the MWP, the LIA, and the last 200 years of warming. Are those changes arising simply from internal dynamics without external cause?
That is what I am saying is an unsatisfying explanation. Yet we are casting doubt on the last decades of warming on the basis of some 50% attribution to natural variability.
If you look at some of my comments on the post you linked to, I think a good part of the explanation of MWP, LIA and the last 200 years is human influence primarily through land use and agriculture. In other words, the natural internal variability may not be all that great.
@James Cross
“MWP, LIA, and the last 200 years of warming. Are those changes arising simply from internal dynamics without external cause?”
Since neither a natural nor human cause is known, my view is they could very well be due to internal dynamics. But I don’t think anyone can rule out a human cause at present. Much more effort needs to be put into understanding historical climate variability. Until this is done trying to attribute the cause of recent climate change, or project future AGW, is a waste of time.
Gareth
I can mostly agree with that.
Isn’t there some burden, however, to at least hypothesize what the internal dynamics are?
Let’s say, for example, we say it is ocean circulation.
I can see how ocean circulation might produce variability over several decades, since we know there is a historical pattern of actual circulation changes on this time frame that we can measure.
But to explain the MWP, the LIA, and the last 200 years that way seems far-fetched without at least some evidence of circulation changes on a centuries-long time frame.
That was what I meant when I said: I can see natural (internal) variability in changes over short periods of time but not 200 years.
I used to lean more to the solar influence but I’ve grown disenchanted with that, although it may be playing a minor part. I’ve never been convinced we can explain the LIA with volcanoes either, although they may have had a role. But, if you accept that, then how do you explain the MWP? Absence of volcanoes?
There may be something else I’m not thinking of, but if we try to explain the variability of the last thousand years and eliminate solar, volcanic, and orbital changes as major causes, then what are we left with? Human influence and the mysterious internal dynamics.
James,
Don’t forget the Roman Warm Period. It makes sense to me there would be chaotic fluctuations on all time scales, from hourly to thousands of years. But chaos is deterministic, and therefore the mechanisms may be understood. The stadium wave is one theory of how multi-decadal cycles can arise. I don’t know if it is right, but I would like to see it seriously tested against the AGW-only explanation favoured by the ‘climate consensus’. I expect the true explanation of recent climate change to be a combination of AGW and natural cycles. Knowing the relative contributions could be quite important. But we are not going to progress as long as natural variability is characterised as “noise”.
Gareth
Once again mostly agree.
But the Ruddiman hypothesis is that the human impact goes back thousands of years mostly to the beginnings of agriculture. So even the Roman warm period and climatic optimum could have partial human influences.
I think we might eventually explain most of the Holocene major climate variations with human impact (agriculture), volcanoes, and a little bit of solar and orbital variations.
But I couldn’t agree more when you write this:
“I expect the true explanation of recent climate change to be a combination of AGW and natural cycles. Knowing the relative contributions could be quite important. But we are not going to progress as long as natural variability is characterized as “noise”.”
And I certainly think most of mainstream climatologists are making too much of the cycle that ended early in this century before we entered into the pause. I have a feeling if we average out the late 20th century warming with the pause we will get a better idea of how much of it is human influence.
This paper makes basic errors of hypothesis testing methodology:
“In summary, our results suggest that the recent record temperature years are are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic [warming?] than in its absence.”
This implies calculation of a likelihood ratio (though Mann et al. do not use the term): P(evidence|hypothesis)/P(evidence|alternative). I think this is the correct (Bayesian) way to do hypothesis testing and attribution, so +1 for that.
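For readers unfamiliar with the jargon, here is a minimal sketch of a likelihood ratio for a single toy observation under two fully specified hypotheses; the anomaly value, means and spread are invented for illustration and bear no relation to the paper’s actual calculation.

```python
# Toy likelihood ratio: P(evidence | H1) / P(evidence | H0).
# All numbers are invented; this is not Mann et al.'s computation.
from scipy.stats import norm

obs = 0.9  # hypothetical observed annual anomaly (deg C)

# H1: forced warming -> anomalies centred near 0.8 C
# H0: unforced variability -> anomalies centred near 0.0 C
p_obs_h1 = norm.pdf(obs, loc=0.8, scale=0.15)
p_obs_h0 = norm.pdf(obs, loc=0.0, scale=0.15)

print(f"Likelihood ratio = {p_obs_h1 / p_obs_h0:.1f}")
```

The whole exercise obviously hinges on how the two sampling distributions are specified, which is where the question about the null hypothesis below really bites.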
Alarm bells ring with the words “than in its absence”. To calculate probabilities we need a well-defined alternative hypothesis. They use as the “hypothesis” the CMIP5 model ensemble, which is fair enough, but what is the “alternative”?
“In modeling I, we invoke several different alternative null hypotheses (see Methods). These include (1) AR(1) ‘red noise’ (the simplest defensible null hypothesis[23]) and (2) a more general linear stationary ARMA(p,q) noise process (which accommodates greater noise structure).”
My first thought was that ARMA might not be a bad way to do this, since it can model cyclic behaviour. But it seems from the notes to Table 1 that they have used p=1, which can’t (see e.g. http://robjhyndman.com/hyndsight/cyclicts).
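As a quick illustration of why p=1 matters (a sketch of the general statistical point, not a re-analysis of the paper; the coefficients are invented): an AR(1) autocorrelation function can only decay geometrically, whereas an AR(2) process with complex characteristic roots has a damped-oscillation ACF and so can mimic quasi-cyclic variability.

```python
# Theoretical autocorrelation functions: AR(1) vs AR(2) with complex roots.
# Coefficients are illustrative only.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# AR(1), x_t = 0.9 x_{t-1} + e_t: ACF decays geometrically, no oscillation possible.
ar1 = ArmaProcess(ar=[1, -0.9], ma=[1])

# AR(2), x_t = 1.5 x_{t-1} - 0.9 x_{t-2} + e_t: complex roots give a damped-cosine ACF.
ar2 = ArmaProcess(ar=[1, -1.5, 0.9], ma=[1])

print("AR(1) ACF:", np.round(ar1.acf(lags=12), 2))
print("AR(2) ACF:", np.round(ar2.acf(lags=12), 2))  # note the sign changes (pseudo-cycle)
```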
So we are back to the old trick of comparing AGW with a straw man. The realistic alternative hypothesis to AGW is of course natural long-period climate cycles, such as the stadium wave. This kind of hypothesis testing is only of value if you are going to compare with one or several realistic alternative hypotheses.
It seems to me the best rebuttal would be to repeat this kind of analysis using, say, the stadium wave as the alternative.
You know in advance that there’s no probability difference between a long cycle and a trend. It’s a mathematical fact about frequency uncertainty with a limited time window. No information can make it far outside your data window.
So the probabilities have to be the same.
@rhhardin
by long-period I mean decadal and multi-decadal. A centuries long cycle would appear as a linear trend within the measured temperature record, but we might see evidence of it from historical and archeological sources.
As I recall, the number of extrema per unit time is the square root of the ratio of the spectrum’s fourth moment to its second moment, with a pi thrown in somewhere.
For stationary gaussian random processes.
But that’s not quite records. It’s dominated by white noise.
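For reference, the result being recalled here seems to be, if I have it right, Rice’s formula for a zero-mean stationary Gaussian process with spectral moments m_k:

```latex
m_k = \int_0^{\infty} \omega^k S(\omega)\, d\omega , \qquad
\text{rate of zero upcrossings} = \frac{1}{2\pi}\sqrt{\frac{m_2}{m_0}} , \qquad
\text{rate of local maxima} = \frac{1}{2\pi}\sqrt{\frac{m_4}{m_2}} .
```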
Hi Judy – excellent discussion.
The CMIP5 runs have even more problems (when assessed in hindcast runs); e.g. see cites we report on in
http://pielkeclimatesci.files.wordpress.com/2013/05/b-18preface.pdf
http://pielkeclimatesci.files.wordpress.com/2014/05/rpt-86.pdf
That the media elects to highlight such studies says a lot about the lack of investigative journalism.
Roger Sr.
Here are some excerpts of CMIP3 and CMIP5 from
https://pielkeclimatesci.files.wordpress.com/2014/05/rpt-86.pdf
Fyfe et al. (2011) concluded that
“..for longer term decadal hindcasts a linear trend correction may be required if the model does not reproduce long-term trends. For this reason, we correct for systematic long-term trend biases.”
Xu and Yang (2012) find that without tuning from real world observations, the model predictions are in significant error. For example, they found that
“….the traditional dynamic downscaling (TDD) [i.e. without tuning] overestimates precipitation by 0.5-1.5 mm d-1…The 2-year return level of summer daily maximum temperature simulated by the TDD is underestimated by 2-6 C over the central United States-Canada region.”
The paper by van Oldenborgh et al. (2012) reports just limited predictive skill in two regions of the oceans on the decadal time period, but no regional skill elsewhere, when the authors conclude that:
“A 4-model 12-member ensemble of 10-yr hindcasts has been analysed for skill in SST, 2m temperature and precipitation. The main source of skill in temperature is the trend, which is primarily forced by greenhouse gases and aerosols. This trend contributes almost everywhere to the skill. Variation in the global mean temperature around the trend does not have any skill beyond the first year. However, regionally there appears to be skill beyond the trend in the two areas of well-known low-frequency variability: SST in parts of the North Atlantic and Pacific Oceans is predicted better than persistence. A comparison with the CMIP3 ensemble shows that the skill in the northern North Atlantic and eastern Pacific is most likely due to the initialisation, whereas the skill in the subtropical North Atlantic and western North Pacific are probably due to the forcing.”
Anagnostopoulos et al. (2010) report that
“.. local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than [even] at the local scale.”
Stephens et al. (2010) wrote
“models produce precipitation approximately twice as often as that observed and make rainfall far too lightly…The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system …little skill in precipitation [is] calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer-scale resolution has little foundation and relevance to the real Earth system.”
van Haren et al. (2012) concluded from their study with respect to climate model predictions of precipitation that
“An investigation of precipitation trends in two multi-model ensembles including both global and regional climate models shows that these models fail to reproduce the observed trends… A quantitative understanding of the causes of these trends is needed so that climate model based projections of future climate can be corrected for these precipitation trend biases.. To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones.”
Sun et al. (2012) found that
“.in global climate models, [t]he radiation sampling error due to infrequent radiation calculations is investigated …. It is found that.. errors are very large, exceeding 800 W m-2 at many non-radiation time steps due to ignoring the effects of clouds..”
There is an important summary of the limitations in multidecadal regional climate predictions in Kundzewicz and Stakhiv (2010) who succinctly conclude that
“Simply put, the current suite of climate models were not developed to provide the level of accuracy required for adaptation-type analysis.”
And I could go on, including the Stephens et al paper that you posted on in
http://judithcurry.com/2015/03/10/the-albedo-of-earth/
Roger Sr.
I’m surprised that model tuning hasn’t been automated. A Kalman filter will tune your model without human intervention to match any observations at all.
The model doesn’t have to be any good, either.
Predictive value is zero, but you can really match the past observations.
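A toy scalar version of what is being described (nothing to do with how GCMs are actually tuned; the “model”, noise variances and data are all invented): give a Kalman filter a generous process-noise setting and it will pull even a do-nothing persistence model onto any observation record, which says nothing about predictive skill.

```python
# Minimal scalar Kalman filter tracking synthetic "observations" with a bad model.
import numpy as np

np.random.seed(1)
obs = np.cumsum(np.random.normal(0.02, 0.1, 100))  # invented observation series

x_est, p_est = 0.0, 1.0   # state estimate and its variance
Q, R = 0.05, 0.01         # process and observation noise variances (the tuning knobs)

analysis = []
for y in obs:
    # Forecast step: the deliberately bad "model" just predicts persistence.
    x_pred, p_pred = x_est, p_est + Q
    # Update step: blend the forecast with the observation via the Kalman gain.
    K = p_pred / (p_pred + R)
    x_est = x_pred + K * (y - x_pred)
    p_est = (1 - K) * p_pred
    analysis.append(x_est)

misfit = np.sqrt(np.mean((np.array(analysis) - obs) ** 2))
print(f"RMS misfit to past observations: {misfit:.3f}")  # small, despite a useless model
```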
Is 600 to 130,000 times ‘more likely’ any sort of scientific measure?
Surely this is many times the margin of error, or probability, or usefulness, that would unleash a hail of invective from Mosh if it emanated from ‘my’ medieval monks.
Was 2015 the hottest everywhere in the world? Had any year in the modern era earned that accolade?
Tonyb
@climatereason
“Is 600 to 130,000 times ‘more likely’ any sort of scientific measure?”
Yes, it is a likelihood ratio. (Or weight-of-evidence, or Bayes factor)
Following Turing and Good, you can take logs in base 10 to get a measure of information, in units of Bans.
https://en.wikipedia.org/wiki/Ban_(unit)#Usage_as_a_unit_of_odds
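For concreteness, taking logs of the quoted range gives:

```latex
\log_{10} 600 \approx 2.8 \ \text{bans}, \qquad
\log_{10} 130{,}000 \approx 5.1 \ \text{bans}.
```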
But like any statistical measure, garbage in = garbage out. See my post above.
What’s the likelihood that earth would develop intelligent life? What’s the likelihood that I’d be reading Judith Curry’s blog today? What’s the likelihood that we understand what changes climate and can predict our earth’s future? What’s the likelihood that we can do anything about it? These are all more interesting questions than the one Mann purports to answer.
Great work on the Skeptic side pointing out the flaws in using ‘statistics’ of models, which are essentially just the ‘statistics’ of random numerical integration errors, which are highly correlated and ABSOLUTELY not understood in any meaningful way.
On another note, why 600? Why not go straight to 6,000,000 if we are going to be doing some numerology?
Like this:
Extreme Weather Tied to Over 600,000 Deaths Over 2 Decades
http://www.nytimes.com/2015/11/24/world/europe/extreme-weather-disasters-united-nations-paris.html?_r=0
Time for a Philo diversion:
“And he says that the world was made in six days, not because the Creator stood in need of a length of time (for it is natural that God should do everything at once, not merely by uttering a command, but by even thinking of it); but because the things created required arrangement; and number is akin to arrangement; and, of all numbers, six is, by the laws of nature, the most productive : for of all the numbers, from the unit upwards, it is the first perfect one, being made equal to its parts, and being made complete by them; the number three being half of it, and the number two a third of it, and the unit a sixth of it, and, so to say, it is formed so as to be both male and female, and is made up of the power of both natures; for in existing things the odd number is the male, and the even number is the female; accordingly, of odd numbers the first is the number three, and of even numbers the first is two, and the two numbers multiplied together make six. (14) It was fitting therefore, that the world, being the most perfect of created things, should be made according to the perfect number, namely, six : and, as it was to have in it the causes of both, which arise from combination, that it should be formed according to a mixed number, the first combination of odd and even numbers, since it was to embrace the character both of the male who sows the seed, and of the female who receives it. (15) And he allotted each of the six days to one of the portions of the whole, taking out the first day, which he does not even call the first day, that it may not be numbered with the others, but entitling it one, he names it rightly, perceiving in it, and ascribing to it the nature and appellation of the limit.”
Sneaky.
Natural variability as a sprint to the finish line in the North Atlantic marathon
http://www.vukcevic.talktalk.net/NAm.gif
(as the data accuracy improves the demarcation gets clearer)
…but not with the latest version of GISTEMP or with the new NOAA (Karl et al – MLOST?) record, as that adjusts SST to match near-surface air temperature on a decadal and multidecadal timescale.
That sounds very unlikely and I can’t find any mention of this process in any documentation of ERSSTv4, nor Karl et al. 2015. Perhaps the confusion here concerns the use of NMAT (Nighttime Marine Air Temperature) data in detecting biases due to differing SST measurement equipment? That’s not at all the same thing as adjusting the SST to be SAT.
GISTEMP interpolates land temperatures over the Arctic (though that’s nothing new) which avoids some of the bias issues of HadCRUT4, but most of the SST/SAT bias occurs in the tropics and mid-latitudes, where no such interpolation occurs over ocean areas.
The calculation is fairly straightforward: warming since 1880 or so can be modeled as the sum of a stationary process (at least approximately stationary, glossing over the volcanoes for the time being) and a monotonic process; previously, they estimated the two processes (how they did so was discussed here); if only the stationary process is “real”, then the probability that the recent year is the warmest since the MWP is very low; if both processes are real, then the probability that the recent year is the warmest since the MWP is high; if you can not conceive of another cause of the monotonic process, then you conclude that the record warmth since the MWP is “man made”.
The authors avoid saying “warmest since the MWP”.
Whether they have accurately estimated the stationary and monotonic processes is not addressed in the paper at any length; whether there is another cause of the monotonic warming is also not discussed in the paper, other than the fact that it’s “inconceivable” to them. Their result stands or falls with their previous results.
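A toy Monte Carlo of that description (emphatically not the paper’s method; the AR(1) coefficient, noise level, record length and trend size are all invented) shows how strongly the record-year probability hinges on whether the monotonic component is included:

```python
# Record probability for the final year: stationary AR(1) noise vs noise + trend.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_sims = 136, 5000
phi, sigma = 0.5, 0.1                      # illustrative AR(1) coefficient and noise

def ar1(n):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

trend = np.linspace(0.0, 0.9, n_years)     # hypothetical 0.9 C monotonic rise

p_noise = np.mean([ar1(n_years).argmax() == n_years - 1 for _ in range(n_sims)])
p_trend = np.mean([(ar1(n_years) + trend).argmax() == n_years - 1 for _ in range(n_sims)])

print(f"P(final year is the record | stationary noise only)   ~ {p_noise:.4f}")
print(f"P(final year is the record | noise + monotonic trend) ~ {p_trend:.4f}")
```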
It’s nice that the paper and the supporting information are available on line.
As I see it, this paper is a giant exercise in circular reasoning:
I agree, but: the model simulations can be compared to future data (including future CO2 and future volcanic action in the simulations) as a partial test of the set of assumptions. To date they have downplayed the importance and utility of model-data comparisons, but everyone else is free to note them.
Good comments by Prof Curry and Nic Lewis.
I think the arguments by the statistical “experts” are a magnificent effort to pick the fly-poo out of the pepper.
By eye and using mental arithmetic, without putting pen to paper or recourse to sophisticated stats (which I can also use to some extent), I can comment as follows:
In the past 60 years, we’ve had four relatively flat spots and four sharp upticks. Depending upon the very subjective decisions on what you pick as your end points, you can say that the climate has been warming at 0.01 deg. C. per year, or 0.06 deg. C. per year – a huge difference. Both are subject to wide error bands, which is to say you’d better not bet the farm on either one.
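The endpoint sensitivity is easy to demonstrate on a synthetic “staircase” series (the series and the windows below are invented, so the slopes will not match the 0.01 and 0.06 figures exactly):

```python
# Least-squares trends over different windows of the same synthetic series.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1956, 2016)
series = (0.02 * (years - years[0])                  # slow underlying rise
          + 0.15 * np.sin((years - years[0]) / 9.0)  # flat spells and upticks
          + rng.normal(0, 0.05, years.size))         # measurement-like noise

for start, end in [(1976, 1998), (1998, 2014), (1956, 2015)]:
    m = (years >= start) & (years <= end)
    slope = np.polyfit(years[m], series[m], 1)[0]
    print(f"{start}-{end}: {slope:+.3f} deg C per year")
```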
And, as I keep repeating, drowning in numbers doesn’t change the fact that there is not, and cannot ever be, a scientific proof that human activities are causing climate change. This is because the scientific method requires testing of any hypothesis (such as the hypothesis that there is such a thing as AGW), and we cannot make any such test; we don’t have the power or the centuries of time to do so. Certainly we MAY be responsible, but with heavy emphasis on the MAY BE.
And as I further keep repeating, we need to reinterpret AGW. It isn’t Anthropological Global Warming, it’s Anthropological Global Wrecking, which we emphatically and provably are doing.
F.
good to see that continuing research is being produced for a settled science
do these guys have perpetual contracts?
as a taxpayer, I don’t like continuing to pay for more proof of the already proved
California’s current snowpack is the deepest it has been in five years — a modest, yet encouraging milestone in a period of prolonged drought.
http://www.latimes.com/local/lanow/la-me-ln-california-snowpack-increases-20160127-story.html
ordvic:
Overall, the state snowpack is deeper than it was 5 years ago at this time of year, Northern CA is 123% of average. Only SoCal is slightly lower than average.
http://cdec.water.ca.gov/cdecapp/snowapp/sweq.action
But it was good you pointed it out. I follow it a couple of times a week.
Thanks for the info
If the understanding of climate is defined by the so-called 97% majority, then it follows that the climate is defined by the majority of climate models.
Have you any problems with that? :-)
I have only seen the site http://www.co2science.org/ that tracks investigations of models versus reality.
SF, you might try essays Models all the Way Down, Humidity is Still Wet, Cloudy Clouds, Hiding the Hiatus, Missing Heat, Sensitive Uncertainty, Northwest Passage, Last Cup of Coffee, and a few others in my ebook with the foreword from Judith. Not everything examining climate models versus observations (‘reality’) is blogged online.
No need to comment, other than to thank Judith, Nic, and RPSr for this thread. Another Mann paper ‘joke’. There is a whole thesis/book on cargo cult pseudoscience to be written just concerning Mann’s oeuvre, for some Ph.D history of science candidate.
[O]ur results suggest that the recent record temperature years are are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic than in its absence.
Technically, it is very likely that it would not be as warm as it is now without anthropogenic influence.
1. Is it an interglacial record? No.
2. Is it due to GHG… Not addressed.
He is claiming that recent warming after a mini-ice-age is higher than it would be without anthropogenic influence. I sort of agree.
But this “warming” doesn’t exceed previous natural warming.
Anthropogenic influence isn’t just GHG and reducing GHG will only fractionally impact Anthropogenic influence.
The GHG influence (0.2 = 22 PPM) is about 1 W.
GHG is about 1/3 of the warming (CGAGW is 1/3 of the warming as well, which is why the claimed warming is 4/3rds of the actual warming).
It sort of is what it is. We are warming the planet a little. It is unclear why more effort isn’t put into attribution for the other 2/3rds. We measured the CO2 warming, no need to put more money into CO2 science until we sort out the rest of it.
I have recently gone into this issue in some detail (paper in review). I show that an historical global average temperature time series (I used GISS) can be adequately described as a random walk plus white noise, i.e. as an ARIMA(0,1,1) process. The residuals are uncorrelated and satisfy the Ljung-Box test. There is no evidence whatever for any deterministic causality, anthropogenic or otherwise. Neither is it necessary to postulate a true random walk; a first order autoregressive process with a coefficient close to unity will do just as well.
Temperature observations are adequately described by this simple stochastic process; using a complex numerical model to prove otherwise simply demonstrates the inadequacy of the model.
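For anyone who wants to try that diagnostic themselves, a sketch along the lines described might look like the following; the file name and column names are hypothetical placeholders, and this is not the commenter’s actual code.

```python
# Fit ARIMA(0,1,1) to an annual global-mean anomaly series and Ljung-Box test the residuals.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical input: one row per year with an 'anomaly' column.
anom = pd.read_csv("gistemp_annual.csv", index_col="year")["anomaly"]

fit = ARIMA(anom, order=(0, 1, 1)).fit()
lb = acorr_ljungbox(fit.resid, lags=[10, 20])
print(fit.summary())
print(lb)  # large p-values -> residuals consistent with white noise
```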
Nature papers are usually behind a paywall; isn’t it odd that this one is freely available? A cynic might suggest that Nature’s editors are sympathetic to sensational, Warmist papers.
Pingback: Record Warmth | …and Then There's Physics
“How to separate out human-caused climate variability from natural climate variability remains a challenging and unsolved problem.”
Increased greenhouse gases should increase the positive North Atlantic and Arctic Oscillation, which won’t make for a warm AMO. Yet since the mid-1990s negative NAO/AO has actually increased. That increase in negative NAO/AO has driven the AMO into its warm mode, thereby warming the Arctic, causing drying of continental interiors, and driving a fall in the altitude of atmospheric water vapour. Lots of powerful negative feedbacks that are being misconstrued as forced surface warming. It’s not even internal variability; the NAO/AO-led negative feedbacks are due to a decline in solar forcing:
http://snag.gy/PrMAr.jpg
Hopefully journals like Nature that continue to publish questionable studies like this with so many problems will begin to be scrutinized and avoided by responsible scientists.
ATTP has put up a post on this on his blog. JCH has done some loving commentary. Another commentator there posted this.
“Michael says: January 28, 2016 at 5:47 am
The paper obviously has no scientific value, and is certainly subject to a range of uncertainties which can be attacked by those who wish to do so. But whether the real answer is 1 in 60,000, or 1 in 3,000, or 1 in 5,000,00 the general answer should be obvious to anyone with a scientific clue”
–
I replied,
To paraphrase Michael
” But whether the real answer is 1 in 3,000, or 1 in 60,000, or 1 in 5,000,00 the general answer should be obvious to anyone with a scientific clue,”
and hence Prof Mann who actually said
“roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic than in its absence”.
The problem is this paper is assigning a probability to a cause as opposed to an occurrence.
The occurrence is a run of years of rising temperatures.
We can assign a probability to this. We have past temperature records.
Easily. Ask Tamino or Lucia.
And the probability will be in a very narrow range, specific to the temperature records we use and the length of time we specify.
The answer might be 1 in 600 or it might be 1 in 130,000 times but the answer will be highly specific to whichever number is chosen.
That is what is required if a paper is to be of any scientific value.
Now when you try to specify the probability of a causation you are on much shakier ground.
You have to make specific assumptions about the causes of warming as well as assess the temperature records.
Which set of temperature records to use? What climate sensitivity to CO2 to use? Effects of aerosols? How much extra CO2 are humans producing? How big is natural variability? Are we fully aware of changes in heat from the sun, volcanic events, the degree of cloud cover, and the El Nino effect in this particular case?
If we assert we know all these things we can come up with a very specific figure again and get Michael’s general answer right as well.
If we have a high degree of doubt as to climate sensitivity and natural variability etc., then we get a blow-out, or spread, of probabilities like the one Professor Mann has given us.
“The paper obviously has no scientific value” said Michael.
I disagree.
The paper clearly shows exactly how little is known about the major issues affecting climate change by a major proponent of the effects of Climate Change.
I applaud Prof Mann’s willingness to show the large range of uncertainty that dogs Climate Science today.
To your points about scientific value, I suspect some future studies will indeed cite this work. But most likely they will be in the realm of social psychology.
How much more would you be willing to pay to be 600 to 130,000 times more likely to win the California lottery?
130,000 times more likely would put your odds at ~5/100ths of a percent chance of winning the jackpot!
When someone claims that “2014 was nominally the warmest year on record for both the globe and northern hemisphere based on historical records spanning the past one and a half centuries1,2.” ,
One doesn’t need to hold a PhD in climate (or any other) science to see this is utter nonsense, particularly when “warmest year(s)” are currently being calculated on the order of tenths or hundredths of a degree. Few, if any, “historic” records, including most of those prior to the 1970s, came even close to these “precise” increments of temperature recording.
This paper is pure propaganda (complete with self-referencing to bolster its own claim), not “science”. James Hansen used virtually the same claim several years ago, and this is just a current rendition of that. In other words, not even original!
A recent trend is to invent new forms of the Drake Equation: we estimate the probability of something we don’t know anything about from the probabilities of things we know even less about. The esteemed string theorist Joe Polchinski recently estimated the probability that we live in a “multiverse” at 94% (not 95%; his Bayesian formula yields 94%). I’ll apply an accounting approach – not to estimate, but to compute exactly – the number of alien civilizations:
ACt = ACy + N – E
where ACt is the number of alien civilizations today, ACy is the number of alien civilizations yesterday, N is the number of new civilizations that emerged yesterday, and E is the number of civilizations that went extinct yesterday.
Reblogged this on I Didn't Ask To Be a Blog.
In response to the likelihood of 13 out of the 15 warmest years occurring after 2000: this is a fact, it happened.
The reason is that the world has been slowly heating up anyway.
If you looked at records in 1990 you could say the same thing. If you looked at them in 1935 you could say the same thing. If you looked at them in 1880 you could say the same thing.
You get the drift. In a warming world there will always be multiple segments where 13 out of 15 years are the warmest up to that point on a rising line.
Whether there is anthropogenic CO2 or none at all.
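A crude way to check that intuition on a synthetic record (the trend and noise values are invented, and the count printed depends on them): walk forward through a noisy rising series and ask, at each vantage year, whether 13 of the 15 warmest years to date fall within the most recent 15.

```python
# How often does "13 of the 15 warmest years on record are recent" hold along a rising series?
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1850, 2016)
temps = 0.01 * (years - 1850) + rng.normal(0, 0.04, years.size)  # invented rise + noise

hits = []
for i in range(30, years.size):                 # need some history before checking
    so_far = temps[: i + 1]
    warmest15 = np.argsort(so_far)[-15:]        # indices of the 15 warmest years to date
    if np.sum(warmest15 >= i - 14) >= 13:       # at least 13 fall in the most recent 15 years
        hits.append(years[i])

print(f"{len(hits)} of {years.size - 30} vantage years satisfy the criterion")
```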
Misquoting Stan Lee.
“To link the two is a more extraordinary claim and requires extraordinary proof.”
The range of possibility quoted seems less than even ordinary proof.
Was 1997 warmer than 2015? Yes and no… methodology has changed. Adjustments have changed. Even now, 1900-1910 temperatures are revised downwards.
I think if you can only make the last years stand out by adjusting old results down, it means the warming is (or at least recently was) inside the margin of error. This is a very important thing to remember. Don’t let smoothed graphs with a trend fool you into believing the trend is robust.
We have a small CO2-related warming rate. All the CAGW comes from models that predict sustained and much faster warming in the future.
Wert, prove to us that there has been any “CO2 warming rate” at all, please.
“Climate models” are not just “running too hot” – they are predicting “wrong stuff.”
Such as rainfall patterns, wind distributions, humidity distributions,… in the lower and upper atmosphere.
Can’t people see that this “modeling” is evidence that GHG “forcing” is WRONG, at least as far as it is incorporated into GCMs?
Four Centuries of Spring Temperatures in Nepal by Craig D. Idso
http://www.cato.org/blog/four-centuries-spring-temperatures-nepal
Pingback: On the likelihood of recent record warmth – ClimateTheTruth.com
Judith > Global temperatures have overall been increasing for more than 200 years. Human caused CO2 emissions does not explain a significant amount of this warming prior to 1950.
What is the basis of this claim ?
Reblogged this on Climate Collections and commented:
Executive Summary:
The Mann et al. paper certainly provides a punchy headline, and it is a challenge to communicate the problems with the paper to the public.
As I [JC] see it, this paper is a giant exercise in circular reasoning:
1. Assume that the global surface temperature estimates are accurate; ignore the differences with the satellite atmospheric temperatures
2. Assume that the CMIP5 multi-model ensemble can be used to accurately portray probabilities
3. Assume that the CMIP5 models adequately simulate internal variability
4. Assume that external forcing data is sufficiently certain
5. Assume that the climate models are correct in explaining essentially 100% of the recent warming from CO2
In order for Mann et al.’s analysis to work, you have to buy each of these 5 assumptions; each of these is questionable to varying degrees.
Pingback: Climate Science, Climate Models, and a Rational Application of the Precautionary Principle | Taking Sides
Pingback: Weekly Climate and Energy News Roundup #214 | Watts Up With That?
You don’t need to make a complicated argument about ensemble averaging of independent non-stationary models to dispute this kind of nonsense. In 1991, the Chicago Bulls won the NBA Championship. Without looking up how many points Michael Jordan scored in the championship series games, I computed the likelihood that the Bulls would win the series if Jordan had scored an average of 80 points per game. As it turns out, the Bulls were significantly more likely to have won the title if Jordan scored more than 80 points per game than if he had averaged only 30 points per game. And since the Bulls won the title, it is much more likely that Jordan scored 80 points per game. Call this reanalysis. But it seems to work in climate circles.
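The statistical point behind the joke can be made explicit with a two-line Bayes calculation (all numbers invented): a high likelihood of the outcome given an extreme cause does not make the extreme cause probable once its prior is accounted for.

```python
# Posterior probability of the extreme "cause" after observing the outcome.
p_win_given_80 = 0.95   # Bulls very likely win if Jordan averages 80 points
p_win_given_30 = 0.60   # still fairly likely to win if he averages 30
prior_80, prior_30 = 0.001, 0.999   # averaging 80 a game is a priori wildly improbable

posterior_80 = (p_win_given_80 * prior_80) / (
    p_win_given_80 * prior_80 + p_win_given_30 * prior_30
)
print(f"P(averaged 80 | Bulls won) = {posterior_80:.4f}")  # still tiny (~0.0016)
```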