by Sergey Kravtsov, Marcia Wyatt, Judith Curry and Anastasios Tsonis
A discussion of two recent papers: Steinman et al. (2015) and Kravtsov et al. (2015)
Last February, a paper by Mann’s research group was published in Science; it was discussed in a previous CE post [link]:
Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures
Byron A. Steinman, Michael E. Mann, Sonya K. Miller
Abstract. The recent slowdown in global warming has brought into question the reliability of climate model projections of future temperature change and has led to a vigorous debate over whether this slowdown is the result of naturally occurring, internal variability or forcing external to Earth’s climate system. To address these issues, we applied a semi-empirical approach that combines climate observations and model simulations to estimate Atlantic- and Pacific-based internal multidecadal variability (termed “AMO” and “PMO,” respectively). Using this method, the AMO and PMO are found to explain a large proportion of internal variability in Northern Hemisphere mean temperatures. Competition between a modest positive peak in the AMO and a substantially negative-trending PMO are seen to produce a slowdown or “false pause” in warming of the past decade.
The paper is explained by:
- Michael Mann at RealClimate: Climate Oscillations and the Climate Faux Pause.
- The press release from Penn State: Ocean Oscillations caused false pause in global warming.
Led by Sergey Kravtsov, the stadium wave team was quick to respond with a rebuttal. Many months later, our rebuttal, along with Steinman et al.’s surrebuttal, have been published:
Comment on “Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures”
S. Kravtsov, M. G. Wyatt, J. A. Curry, and A. A. Tsonis
Science 11 December 2015: DOI: 10.1126/science.aab3570 [link]
Abstract. Steinman et al. argue that appropriately rescaled multimodel ensemble-mean time series provide an unbiased estimate of the forced climate response in individual model simulations. However, their procedure for demonstrating the validity of this assertion is flawed, and the residual intrinsic variability so defined is in fact dominated by the actual forced response of individual models.
Response to Comment on “Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures”
B. A. Steinman, L. M. Frankcombe, M. E. Mann, S. K. Miller, M. H. England
Science 11 December 2015: DOI: 10.1126/science.aac5208 [link]
Abstract. Kravtsov et al. claim that we incorrectly assess the statistical independence of simulated samples of internal climate variability and that we underestimate uncertainty in our calculations of observed internal variability. Their analysis is fundamentally flawed, owing to the use of model ensembles with too few realizations and the fact that no one model can adequately represent the forced signal.
Marcia Wyatt explains the Kravtsov et al. rebuttal
Climate varies on time scales from years to millennia. Recent attention has focused on decadal and multidecadal variability, notably “pauses” in warming. The latest one – a slowdown in warming since 1998 – has eluded explanation. One school of thought presumes external forcing (natural and anthropogenic) dominates the low-frequency signal; another casts internal variability as a strong contender. Numerous studies have addressed this conundrum, attempting to decompose climate into these dueling components (Trenberth and Shea (2006); Mann and Emanuel (2006); Kravtsov and Spannagle (2008); Mann et al. (2014); Kravtsov et al. (2014); Steinman et al. (2015); Kravtsov et al. (2015)). Despite many efforts, no universal answer has emerged.
Steinman et al. (2015) claim significant strides toward resolving the matter. They argue that they have identified an externally forced response and, as a consequence, can estimate the observed low-frequency intrinsic signal. Using multiple climate models, they generate an average of all climate simulations. The resulting time series – a multi-model ensemble-mean – is their defined forced signal. Steinman et al. aver that this forced signal is completely distinct from the intrinsic variability of all simulations within this suite of models. If this claim holds, then this model-estimated forced signal justifiably can be combined with observational data in semi-empirical analysis, which Steinman et al. use to expose the estimated intrinsic component within observed climate patterns.
But questions arise – about the modeled data and about procedure.
Output of climate-model simulations can provide insights into the observed climate response to external forcing. Steinman et al. use data from the collection of models of the fifth version of the Coupled Model Intercomparison Project (CMIP5). While a valuable resource, CMIP simulations have limitations. A major one is uncertainty surrounding model input: specifying external forcing and parameterizing unresolved physical processes is an inexact science; thus, in an attempt to accommodate the speculated ranges and combinations of these factors, modelers script individual models within the CMIP multi-model ensemble with different subsets of them. Hence, each model generates a statistically distinct forced response. Deciding which forced response might be the “right” one is the challenge. Steinman et al. maintain that they have overcome this hurdle with a single forced signal that they claim represents the entire collection of CMIP models. In light of the individual model distinctions aforementioned, and the observation that each individual model generates its own unique forced response, the assertion of a single forced response, suitable for all CMIP models, is striking.
Much of Steinman et al.’s argument rests on residual time series. This is the information left over from the time series of a climate simulation after the model-estimated forced response has been extracted from it. These time series can be used to check that the forced signal is unbiased. The reasoning is that if the model-estimated forced response is completely disentangled from each model’s intrinsic signal, and therefore unbiased, then the modeled intrinsic signals (the residuals) should be unbiased too; the residuals would be statistically independent of one another, i.e. not correlated. Steinman et al. claim to show the residuals are uncorrelated – a significant result, if true.
Steinman et al. generate the residuals by removing the forced signal from individual time series of all climate simulations, across all models and do so using two different methods – one involves differencing (subtraction) and the other, linear removal of a re-scaled forced signal via regional regression. Both operations yield similar results: Each leaves behind numerous residual time series, and each residual is taken to represent model-estimated intrinsic variability. Steinman et al. test correlation among the residuals: They invoke an indirect statistical test based on well-known properties of distribution of independent random numbers. The residuals are shown to be uncorrelated. This result suggests that the forced signal, indeed, is unbiased. Now the path is paved for Steinman et al. to use this signal to evaluate internal variability of observed regional climate patterns. Once they achieve this, they can use the identified intrinsic component to infer its potential relationship to the currently observed “pause” in surface warming. This all sounds promising. But is all as it seems?
The results seem counterintuitive, given fundamental differences among individual models. How could the forced signal truly be unbiased? This apparent puzzle motivated Kravtsov et al. (2015). They explore details of Steinman et al.’s methodology and design an alternate strategy to separate model-estimated signals.
In the big picture, Steinman et al. and Kravtsov et al. go about disentangling climate components similarly. Yet there are significant differences. Steinman et al. place emphasis on a multi-model ensemble. They consider simulated time series “in-bulk”, so to speak, with no distinctions made among individual models. Kravtsov et al., instead, focus on the individual models of the multi-model ensemble; they look at simulations from one model at a time. This difference in approach produces different data sets of residuals. Specifically, Steinman et al. use a multi-model ensemble-average as their forced signal, and remove this forced signal from individual climate simulations across all models of the multi-model ensemble. The result is one data set of residuals for this multi-model ensemble. On the other hand, Kravtsov et al. generate two data sets of residuals within each of the 18 individual-model ensembles: One data set [1] is derived from linear subtraction of a multi-model ensemble-mean from each simulation of a given model; repeated for 18 models. The second [2] is derived from subtraction of the single-model’s ensemble-mean from each simulation of a given model; repeated for 18 models.
This latter approach [2] – subtraction of the single-model’s ensemble-mean from a climate simulation – is a traditional method for decomposing simulated climate variability into its forced and intrinsic components, thereby generating a naturally unbiased signal. The reason for success in generating this naturally unbiased signal is traceable to a fundamental assumption: forcing subsets and physical parameterizations for a given single-model’s ensemble of simulations are identical; any differences in the modeled realizations are due to differences in the model’s intrinsic variability, most of these attributable to differences in initialization of each run, and therefore, differences that are uncorrelated. Thus, when the forced signal (single-model ensemble-mean) is subtracted (differenced) from each climate-simulation time series within a single-model ensemble, remaining residuals within this ensemble should be uncorrelated, or independent. And indeed, using a simple time series-correlation metric, Kravtsov et al. found this to be the case.
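To make the traditional decomposition concrete, here is a minimal synthetic sketch (invented numbers, not CMIP5 output or either group’s code): the forced signal is estimated as the single-model ensemble mean, and the leftover residuals are checked for cross-run correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_runs = 150, 5
t = np.arange(n_time)

forced_true = 0.012 * t                                    # the model's (unknown) forced response
runs = forced_true + rng.normal(0, 0.3, (n_runs, n_time))  # runs differ only in initialization noise

forced_est = runs.mean(axis=0)   # single-model ensemble mean = estimated forced signal
intrinsic = runs - forced_est    # residuals = model-estimated intrinsic variability

# Residuals within a single-model ensemble carry no shared forced remnant, so their
# pairwise correlations scatter near zero (slightly negative on average, since each
# residual uses the same n_runs-member mean).
corr = np.corrcoef(intrinsic)
print(corr[np.triu_indices(n_runs, k=1)].round(2))
```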
In contrast, when Kravtsov et al. linearly subtracted [1] the multi-model ensemble-mean (i.e. Steinman et al. forced signal) from the model simulations within a single-model ensemble, the resulting residual time series within the model’s ensemble were significantly correlated [3]. These residual time series of a single-model ensemble are not independent. They share a common signature. They contain remnants of the multi-model ensemble-mean! Implications are significant: If the multi-model ensemble-mean – i.e. the Steinman et al. forced signal – produces a biased forced response for single CMIP5 models, how could this choice of forced signal credibly provide the unbiased estimate of a forced response in observations?
All this brings us to a final curiosity: Steinman et al. demonstrated absence of bias in their forced signal. How did they do this, if indeed, the signal is biased? The answer lies in their procedure for ensuring statistical independence of model-estimated residual time series. Their procedure is flawed: due partly to their choice of forced signal – a multi-model ensemble-mean, and due partly to how the forced signal is removed from simulations. Recall, they remove the forced signal from individual simulations across all models, versus removing the forced signal from simulations within single-model ensembles.
Alluded to earlier in this essay was the procedure used by Steinman et al. to assess signal bias, or lack thereof. We revisit that procedure here. This simple method determines whether or not a number of individual residual time series share a common signature with one another. Bear in mind: if the forced response is completely separated from each simulation’s intrinsic component, the numerous resulting time series of residuals will be uncorrelated with one another, i.e. statistically independent. Hence, when they are averaged together into a multi-model ensemble-mean of residuals, any differences among them would largely cancel out. The result of this is that the dispersion (difference from mean) of the multi-model ensemble-mean of residuals would be much smaller than the dispersion of the individual residuals. The procedure used to capture this relationship compares the actual dispersion (the variance of the ensemble-mean residuals) to the theoretical dispersion (the average of the individual variances of the residuals divided by the number of simulations). If the residuals are independent (uncorrelated), the actual dispersion should be close to, and no larger than, this theoretical value; residuals that share a common signal would push the actual dispersion well above it. Steinman et al. applied this well-known method to their data set of residuals and found that their actual dispersion was, indeed, much smaller than their theoretical. It logically followed that their forced signal must be unbiased.
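In symbols (my shorthand for the test as just described, not a formula lifted from either paper), with M residual time series r_k(t):

$$ \sigma^2_{\mathrm{actual}} \;=\; \mathrm{Var}\!\left(\frac{1}{M}\sum_{k=1}^{M} r_k(t)\right), \qquad \sigma^2_{\mathrm{theory}} \;=\; \frac{1}{M}\left[\frac{1}{M}\sum_{k=1}^{M}\mathrm{Var}\big(r_k(t)\big)\right]. $$

Independence of the r_k implies the two quantities are comparable; a shared, correlated component among the residuals inflates the actual dispersion well above the theoretical one.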
But it turns out there is a hidden glitch in this procedure that makes this result an illusion: Defining the forced signal in terms of a multi-model ensemble-mean and extracting it from individual simulations in-bulk, across all models, imposes an algebraic constraint on the residuals such that the ensemble-mean of residuals happens always to be zero, by mathematical construction. Due to this algebraic constraint, the actual dispersion will always be smaller than the theoretical one, and therefore the residuals always will appear to be uncorrelated, whether they are or not.
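A minimal synthetic sketch of that constraint (an invented model ensemble, not CMIP5 data or either group’s code): removing the multi-model mean “in bulk” forces the ensemble-mean residual to zero at every time step, so the dispersion test is passed automatically, even though the residuals within any one model’s ensemble remain strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_models, runs_per_model = 150, 6, 5    # hypothetical ensemble layout
t = np.arange(n_time)

# Each hypothetical model has its own (biased) forced trend plus independent noise.
sims = np.array([0.01 * (1 + 0.3 * m) * t + rng.normal(0, 0.3, n_time)
                 for m in range(n_models) for _ in range(runs_per_model)])

# "Bulk" removal of the multi-model ensemble mean from every simulation.
residuals = sims - sims.mean(axis=0)

# (1) The ensemble mean of the residuals is identically zero by construction...
print(np.allclose(residuals.mean(axis=0), 0.0))            # True

# ...so the "actual" dispersion vanishes and the test is passed no matter what.
actual = residuals.mean(axis=0).var()
theory = residuals.var(axis=1).mean() / len(residuals)
print(round(actual, 12), round(theory, 4))                  # ~0 versus a finite number

# (2) Yet within any single model's ensemble these same residuals are strongly
# correlated, because they all retain that model's forced-response bias.
one_model = residuals[:runs_per_model]
corr = np.corrcoef(one_model)
print(round(corr[np.triu_indices(runs_per_model, k=1)].mean(), 2))  # well above zero
```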
The trick in bypassing this constraint is to limit focus to residual time series within an ensemble of a single model. This is what Kravtsov et al. did. Had Steinman et al. also done this – i.e. removed their multi-model ensemble-mean from simulations exclusively within an ensemble of a single model, and tested the resulting residual time series for statistical independence using actual versus theoretical dispersion, as per Kravtsov et al. – they, too, would have found the residuals to be correlated (not independent).
Hence, through these slightly different methodologies, Steinman et al. and Kravtsov et al. come to different conclusions. Steinman et al. argue they have identified an unbiased forced signal, and with it, have identified the component of observed climate due to intrinsic variability. In contrast, Kravtsov et al. conclude Steinman et al. have not identified a forced response that is unbiased; that procedural artifacts only gave the illusion of such; and that successful disentanglement of climate components remains elusive.
A final point: Intrinsic variability indeed may damp the presumed anthropogenic signature of secular-scale warming in the currently observed “pause”. Many have suggested as much, Steinman et al. among them. Yet what may seem plausible cannot be concluded without demonstration. Thus, Steinman et al.’s argument that they have assessed the role of internal variability in the current “pause” is unsupportable. Their chosen forced signal – a multi-model ensemble-mean – fails to provide convincing evidence.
Notes [notes]
References [references]
Acknowledgements: Feedback and editing suggestions from Sergey Kravtsov and Judith Curry streamlined the text of this essay; ensured its accuracy; and clarified its message. Their input is much appreciated.
Sergey Kravtsov responds to the surrebuttal
Steinman et al. claim that they avoided the issue of the algebraic constraint leading to the apparent cancellation of the “intrinsic” residuals in the multi-model ensemble mean by using N-1 models to define the forced signal of the Nth model. However, doing so is really no different from using the multi-model mean based on the entire ensemble, since exclusion of one simulation will not affect the multi-model ensemble mean in any appreciable way. In fact, it is easy to show that the ensemble-mean residual time series in this particular case would be approximately proportional to the estimated forced signal (the multi-model ensemble-mean time series), with a scaling factor involving 1/N, thus making its standard deviation much smaller than expected from the 1/sqrt(N) scaling due to the cancellation of statistically independent residuals. Hence, the multi-model ensemble mean of the “intrinsic” residuals so obtained would have a negligible variance, but this will have nothing to do with the actual independence of the residuals (see Reference/Note 5 in Kravtsov et al.). Indeed, Kravtsov et al. demonstrated that these residuals are definitely well correlated within individual model ensembles, hence not independent. Steinman et al., in their reply, acknowledge the correlation but still falsely claim independence.
Steinman et al. built the rest of their rebuttal on the fact that individual-model ensembles only have a few realizations, so the ensemble mean over these realizations, even if smoothed, will contain a portion of the actual intrinsic variability aliased into the estimated forced signal. Hence the estimated residual intrinsic variability will be weaker than in reality. This is a valid point, and it can be dealt with by choosing the cutoff period of the smoothing filter (5-yr in Kravtsov et al.) used to estimate the forced signal more objectively, for example by matching the level of the resulting residual intrinsic variability in the historical runs with that in the control runs of CMIP5 models (see the sketch below). However, this issue is only tangentially related to the implications and fundamental limitations of using the multi-model ensemble mean to estimate the forced signal outlined in Kravtsov et al. In particular, if one were to use the multi-model ensemble mean to define the intrinsic variability of any one model, this variability would have a much larger variance than the intrinsic variability of this model in the control run; this larger variance would be dominated by the individual forced-signal bias for this model.
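As an illustration only (synthetic series, a crude boxcar filter, and invented window lengths; this is not the Kravtsov et al. procedure), the cutoff-matching idea mentioned above might be set up along these lines:

```python
import numpy as np

rng = np.random.default_rng(2)
n_time, n_runs = 150, 3
t = np.arange(n_time)

forced = 0.012 * t
hist = forced + rng.normal(0, 0.3, (n_runs, n_time))   # small historical ensemble
control = rng.normal(0, 0.3, 10 * n_time)               # long unforced control run

def boxcar(x, w):
    """Crude boxcar low-pass; a stand-in for whatever filter is actually used."""
    return np.convolve(x, np.ones(w) / w, mode="same")

target = control.var()
for w in (5, 11, 21, 41):
    forced_est = boxcar(hist.mean(axis=0), w)
    resid_var = (hist - forced_est).var()
    print(w, round(resid_var, 3), round(target, 3))
# One would pick the window whose residual variance best matches the control-run variance.
```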
The term “semi-empirical analysis” says it all for me. How many ways can we model the models? Is this science?
Yes, the semi-empirical approach has uses throughout the history of science and today in many areas.
An example from the past
http://www.sjsu.edu/faculty/watkins/semiempirical.htm
I do not see a connection between your example and the case of climate models.
Simple. Look harder.
Steven Mosher,
If you read what you linked to, you might discover it doesn’t mean what you think it does.
“The differences between the actual binding energy and that calculated from the regression equations seems to be related to an internal shell structure of the nuclei.”
In other words, the model isn’t very accurate, and we don’t know why.
Read harder. You might find your missing clue. You might even discover something about effective communication using the English language.
Cheers.
Steve,
As you may recall, I actually am a physicist (Ph.D., Stanford), and, as it happens, I have recently been thinking about how much one could deduce about nuclear properties just from looking at the binding energy curve and fitting simple theoretical models. I think I can safely say that I really do understand what Watkins is doing. (One thing that does puzzle me is that he is not including the Fermi-level effect due to the Pauli exclusion principle applied separately to protons and neutrons. But, perhaps that is implicitly included in one of the terms he has already included. Anyway, I would like to have seen him address this explicitly.)
I am afraid I do not see the connection to the issues Judith and her colleagues are discussing.
To be sure, I am much more certain I understand Watkins’ essay than the papers Judith is discussing.
Perhaps you could be more explicit as to the connection you see here?
Dave Miller in Sacramento
“Perhaps you could be more explicit as to the connection you see here?”
Would being explicit make him seem more like a cool dude? If not, I wouldn’t hold your breath.
You could try a coowul phrase like “explicate harder.” But I don’t promise it will work.
How high a priority (hot) is this research? With scores of papers exploring causes of the pause, does this qualify as a “cutting edge” or frontier of climate science? I have posted abstracts to 53 papers, and have certainly missed some (or many).
http://fabiusmaximus.com/2014/01/17/climate-change-global-warming-62141/
Also, is p-r research converging on a few explanations for the hiatus (a pause or substantial slowing of global surface atmosphere warming)?
Editor, IMO this is core because it addresses a slightly different issue than ‘pause causes’. Stadium wave asserts the pause will continue into the 2030’s, at least in the northern hemisphere. OOPS.
So Steinman/Mann argued (1) stadium wave was wrong, and (2) there is only a faux pause, produced by wrongful use of the generally accepted PDO and AMO. Mann’s team proceeds in two steps (with how many cherry-picking prior iterations to get there unknown except to Mann and God). 1. Redefine AMO and PDO. 2. Show via bogus mathematical statistics that their redefinition was unbiased, therefore stadium wave wrong, therefore pause faux.
For the bogus math part, reread Marcia’s part of the post, or my much simpler, less academically acceptable but more understandable comment below. More really bogus fruitloopy stats from Mann and gang.
ristvan,
Thanks for the explanation!
The fact that Mann & Co. keep applying “unique” statistical approaches to produce desired results is not an accident, IMO.
It’s not widely known as Mannian statistics for nothing. I’ve now seen that term used in discussion in fields unrelated to climate science several times. I’m sure it’s an accolade Mann will resent much in his retirement.
Ed,
Isn’t it important partly because it comes from Mann, who has been, to put it politely, a rather prominent figure in the global-warming debate?
Dave
“No one model can adequately represent the forced signal”
Assuming it is there in the first place. What would it take to demonstrate the reality of it? Besides religious belief, I mean.
BV, you pose a highly nontrivial question. Let’s start with CO2 as a ‘greenhouse gas’ (yes, I know greenhouses work on convection, not radiation). Tyndall proved that in 1859, ditto water vapor. Labs can measure the effects by looking at IR scattering in long glass tubes of different GHG concentrations. And the logarithmic doubling consequences are between 1.1 and 1.2C depending on ‘grey earth’ assumptions. Theory and experiment agree on the CO2 forcing.
The whole issue is feedbacks, most importantly water vapor and clouds. Now there, we dunno. Essays Sensitive Uncertainty, Humidity is Still Wet, and Cloudy Clouds attempt to explain all this pictorially, no math, in terms almost anyone should be able to grasp. For a bit more complicated version with (gasp) one equation that is explained term by term over several pages of simple prose, see the climate chapter in my previous ebook.
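For readers who want the arithmetic behind the 1.1-1.2C-per-doubling (before feedbacks) figure cited above, here is a back-of-envelope check using the standard simplified forcing expression (5.35 ln(C/C0), Myhre et al. 1998) and a nominal Planck response of roughly 3.2-3.3 W m^-2 K^-1; these are textbook values, not numbers taken from this thread.

```python
import math

delta_F = 5.35 * math.log(2)           # ~3.7 W m^-2 of forcing per CO2 doubling
for planck in (3.2, 3.3):              # nominal no-feedback (Planck) response, W m^-2 K^-1
    print(round(delta_F / planck, 2))  # ~1.16 and ~1.12 C, matching the quoted range
```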
The whole issue is not “feedbacks” of an “effect” if it isn’t there in the first place. We have a system that is supposedly “warming” in the lower layers and “cooling” in the upper layers, and by some unexplained mechanism, the rate of heat transfer between the lower and the upper does not increase – but actually decreases!
Before I can get off the ground I need to know what this mechanism is.
ristvan,
With respect, Tyndall demonstrated that gases absorb electromagnetic radiation, of different wavelengths and in different amounts. As a result, the gases warm. When the source of the radiation is removed, the gases cool. The net result is that Tyndall’s very sensitive differential thermometer showed cooling due to a reduction of energy reaching it. More for CO2 or H2O, less for dry air, with CO2 removed. Eventually, his “pile” warmed to ambient. At no time did it show “warming” due to any supposed GHG effect. Tyndall in another chapter provides a diagram to show why this must be so.
So the theory that GHGs create warming can be true, in the same way that pouring water boiled in a microwave oven into a cold teapot heats the teapot. But saying that water heats anything above freezing is misleading at best.
No GHE. Four and a half billion years of cooling. Ephemeral warming and cooling here and there. Night, day, winter, summer. Albedo changes, orbital eccentricities, redistribution within the crust, mantle and core. Overall, cooling – remorseless, relentless, and unstoppable. Just like entropy, I suppose.
Cheers.
Mike, I can respect your opinion, but not as science. Denying that the GHG effect exists is what allowed Obummer to label us all as flat earthers in his SOTU.
It is a political battle. Find the most effective soundbites. 80-90 percent truth more than suffices. Your perspective places you squarely into the most narrowly defined 3% ‘deniers’ which as we saw from the Cruz Data/Dogma hearing is still a potent warmunist soundbite.
At least ponder my gentle rebuke. Essay Sensitive Uncertainty may be of some scientific assistance with technical details. Regards.
ristvan,
Thank you for your gentle rebuke. I’m pretty much apolitical, having been accused of being somewhat to the left of Karl Marx, and a bit to the right of Attila the Hun, at different times.
As a matter of choice, I suppose I’d rather see the billions wasted on climate research applied to something of more benefit to humanity. About as likely as the US adopting free speech, I guess.
Cheers.
Mike, I support your goals 110%. Its about the most effective tactics.
Brian, exactly!
Rud wins the internet
“Mike, I can respect your opinion, but not as science. Denying that the GHG effect exists is what allowed Obummer to label us all as flat earthers in his SOTU.
It is a political battle. Find the most effective soundbites. 80-90 percent truth more than suffices. Your perspective places you squarely into the most narrowly defined 3% ‘deniers’ which as we saw from the Cruz Data/Dogma hearing is still a potent warmunist soundbite.”
PRECISELY!!
Mosher, glad you understand the obvious. A relief, as you and your gang had not evidenced that previously, except for extrema.
We have been down that path before. Published. You know, footnote 26 to essay When Data Isn’t, and all that. Regards.
Mike Flynn
For quantitative details of CO2 absorption/radiation, see the full CO2 spectra and the use of Line By Line (LBL) radiation models, e.g. NIST’s summary graph.
Please take ristvan’s cautions seriously and do not use “science denier” language.
Tyndall measuring the opacity of CO2 is somehow a proof of the GHE? That’s a gigantic stretch. Tyndall also clung to the mistaken notion of luminiferous aether.
Many thanks to both Marcia and Sergey. Fortunately, the math behind the argument is not above my advanced econometrician paygrade. Maybe a very simple almost right precis would be helpful.
Steinman/Mann proceed from the CMIP5 multimodel ensemble mean (IIRC, 102 runs from 27+ different basic models). As RGBatDuke is fond of pointing out, this is fundamental nonsense. A mean is an estimate of central tendency (analogy: ‘the unbiased forcing estimate’) from a single population (jar of marbles, height of people, whatever). By means of parameterization, each model is NOT from the same population. The only commonality is parameterization to hindcast back to 1975 as well as possible. The mathematical flaw is that this guarantees the net ensemble residuals are uncorrelated and must net to zero, since the models themselves aren’t related. Big logical goof. Analyze apples. Analyze oranges. Analyze grapes. OK. Analyze fruit salad the same way, and you get meaningless fruitloopy statistics.
The only way to correctly run the Steinman/Mann test is climate model by climate model. They run ensembles of a model once its particular parameterization is fixed, so the outputs are samples of a single population. Then the ensemble mean estimates that model’s forcing, from which the residuals are either correlated (meaning bias error) or not (meaning no bias error). Done that way by Kravtsov et al., the Steinman/Mann redefinitions of AMO and PDO are shown to be biased nonsense, since the residuals are correlated. Steinman/Mann fails on basic ‘simple, fundamental’ stuff.
It appears Mann has learned nothing from McIntyre. Inventing short-centered PCA guaranteed using a bad methodology to find hockey sticks from red noise. I had assumed this was out of ignorance. Now, testing residuals from the nonsensical multimodel ensemble meaningless mean, I am not so sure it’s just more ignorance.
Just because you can calculate something does not mean the result has meaning. The real question is how did Steinman/Mann get through Science peer review with such a basic flaw, well known for years in the skeptical blogosphere? Probable answer, Marcia McNutt and her Science editorial, since stadium wave definitely shakes the foundations of the warmunist cathedral.
Nice job, stadium wave team.
Rud: Very useful summary. I still do not understand the physical meaning of averaging different models. If the average does not have physical meaning how can the residuals?
Bernie, you got it. There is nothing to understand. Neither the multimodel ensemble mean nor its residuals have any meaning whatsoever. First, it is of models, not physical observations. Warmunists confuse by calling model output data. NOPE. Second, it is fruitloopy stats. Violates the basic things mathematicians and statisticians are supposed to learn the very first semester. Blaise Pascal is turning over in his grave. And RGB (a Duke physicist) is going apoplectic. Hope he shows up here with a good comment.
Marcia Wyatt: “On the other hand, Kravtsov et al. generate two data sets of residuals within each of the 18 individual-model ensembles: One data set [1] is derived from linear subtraction of a multi-model ensemble-mean from each simulation of a given model; repeated for 18 models. The second [2] is derived from subtraction of the single-model’s ensemble-mean from each simulation of a given model; repeated for 18 models.”
Steinman et al. aren’t novices, Marcia. They tried it that way but didn’t get the right result.
Maybe I have too simple a mind, but when people start averaging multiple models and saying things like ““No one model can adequately represent the forced signal”, I’m thinking : BS, all you need is one model that gets it right. Averaging wrong models will never work.
When all the model output does not match the real data, the only thing you can do to improve the models is to throw them away or fix the reason they do not match. If they fix the models to match real data, the emergency to fix something goes away. They cannot and will not do that. If they throw the models away and use real data, that does fix the problem, but they cannot and will not do that.
“… hindcast back to 1975 …”
And that would be a hindcast of the real data? Or do they reproduce the adjusted, infilled, homogenized stuff?
By fixing the data, climate science can close the loop and achieve perpetual delusion.
ristvan: The real question is how did Steinman/Mann get through Science peer review with such a basic flaw, well known for years in the skeptical blogosphere?
I thought that publishing Steinman/Mann was a mistake, but journal editors can’t be correct all the time, given time constraints and work load. Given that Steinman/Mann got through peer review in the first place, I think it is a miracle that the Sergey Kravtsov, Marcia Wyatt, Judith Curry and Anastasios Tsonis letter was published. Perhaps what we have here is a technical discussion beyond what most of the statistical consultants of Science magazine wanted to “adjudicate”, and they felt that the full discussion was worthy of a wider audience.
I am grateful to Kravtsov et al for the time that they took to write a good and thorough letter, and to Marcia Wyatt for her essay here.
A better question is how did Mann get a PhD?
jim2: A better question is how did Mann get a PhD?
Under supervision.
In cases like this it seems an arbitration panel is called for. The question is from which field should this distinguished group be selected. Statisticians? Mathematicians? Logicians? A cross discipline body?
Is an impartial system to rule even possible?
From the logician’s, we have the following: In any system of logic strong enough to contain the axioms of arithmetic, there are any number of statements within the system that are both true and false within the system (or neither true nor false).
That aside, there are obviously any number of statistical methods to analyze the residuals.
Ah, the famous Kurt Gödel incompleteness theorems. One of the most magnificent math achievements ever. Right up there with Georg Cantor’s elegant proof that infinite sets come in different sizes.
ristvan,
At least I can atone for some past disagreement by agreeing about the brilliance of Georg Cantor. Alas, I believe there are some who refuse to accept the possibility of an infinity of infinities, from aleph null – “to infinity and beyond!”, as Buzz Lightyear would cry!
I am sure there are also many who routinely ignore Gödel’s incompleteness theorems. More fool them, if it’s ultimately important.
Cheers.
BV, concerning residuals: I will simplify to the OLS BLUE theorem for purposes of exposition. The best linear unbiased estimator for ordinary least squares regression. What Excel does when drawing a line through lots of data dots.
Now, there are a bunch of hidden math assumptions. And an easy way to check if they are satisfied. Calculate that OLS line, then subtract it from the data (like Mann’s gang did, in an analogous way). Then look at the remainders, the ‘residuals’. If all the enabling assumptions are satisfied, they should be random, summing to zero. Else you messed up somewhere. (This simple test does not say how or why.) The stadium wave team says in effect Mann’s gang messed up. Then explains how and why.
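A minimal sketch of that check with made-up data (not anyone’s climate series): fit the OLS line, subtract it, and see whether the residuals behave like structureless, mean-zero noise. Note that residuals from an OLS fit with an intercept sum to zero by construction; the real check is whether they show any remaining structure.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(100, dtype=float)
y = 0.5 * x + 2.0 + rng.normal(0, 1.0, x.size)   # data that actually satisfy the OLS assumptions

slope, intercept = np.polyfit(x, y, 1)            # the line Excel would draw
resid = y - (slope * x + intercept)

print(round(resid.mean(), 6))                     # ~0: guaranteed by the fit itself
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(lag1, 2))                             # near zero if the residuals are truly random;
                                                  # a large value flags a mis-specified model
```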
Gauss proved that the best fit of any set of data was a least square fit, and by the Stone-Weierstrass theorem, this means a polynomial fit.
This does not say which least square fit.
Gödel’s incompleteness theorem? Not the way I recall it. Statements are True, False, or unknowable, not both “True” and “False”, since that would imply an inconsistency in the axioms, and they would be revisited.
Perhaps you are thinking about consistency (i.e., that no two statements are both true and false). With any system as complex as number theory, you can’t prove the system is consistent (though it is possible to prove a system is inconsistent, of course).
“Undecidability” was a consequence of Gödel’s demonstration that the choice axiom and continuum hypothesis weren’t false in axiomatic set theory.
Cohen demonstrated (by modeling set theory) that they weren’t true either.
““Undecidability” was a consequence of Gödel’s demonstration that the choice axiom and continuum hypothesis weren’t false in axiomatic set theory”
A statement being both “True” and “False” (in the mathematical sense) is not the same as being neither “True” nor “False.” The former implies a contradiction in the axioms, and so the axioms are not consistent. Time to try again. The latter was resolved in set theory by adding an axiom.
And neither of these is the same as not being able to know whether something is “True” or “False.”
cerescokid,
“The question is from which field should this distinguished group be selected.”
That touches on something I don’t understand. Given the inherent broad multidisciplinary nature of climate science, the discussion seems to (if I have this right) draw on a relatively narrow range of disciplines.
As in ristvan’s comment, this is a question for a statistician. Were any consulted by the Science editors? As with Mann’s short-centered PCAs, the p-r debate seems to lack input from relevant experts.
Odd, given the stakes. A relevant parallel in public policy would be FDA reviews, which do involve statisticians (and other relevant specialists).
Follow-up note: To some extent my comment reflects a use of credential-ism in decision-making.
Of all of the participants listed in Professor Curry’s post, Sergey Kravtsov’s CV suggests that he has the deepest knowledge of advanced math. His thesis was “Dynamics of a barotropic monopole on a beta-plane” for his M.S. in Applied Mathematics and Physics.
http://uwm.edu/math/wp-content/uploads/sites/112/2014/11/Kravtsov_CV.pdf
Editor and Ceresco, this is more deep stuff. To paraphrase the latin, who watches the watchman? Much of ‘climate science’ is computational models involving fluid dynamics, or statistical analyses of noisy data (as here). Yet truly relevant experts are seldom if ever consulted. This leads in some part to the ‘pal review’ criticism. It is not only Briffa reviewing Mann reviewing Briffa. That would be easy to fix by degrees-of-relatedness reviewer rules. It is a much deeper ‘wicked problem’ symptom.
ristvan,
“this is more deep stuff. To paraphrase the latin, who watches the watchman?”
Yes, but solutions have been found in other public policy arenas. As we’re told, the fate of the world is at stake. Why are these vital discussions handled just like those evaluating the age of an arrowhead in a paleolithic cave?
The FDA has a highly developed methodology for review of studies of great public policy importance, from which climate science could learn a lot. Not perfect by any means, but better than the process we see in this post.
My guess (emphasis on guess) is that the leaders of the climate science field are quite happy owning their playing field, and will fight to keep outside experts out. Since so much of this is funded by the public, it is suitable for creating review machinery — perhaps under NOAA or the NSF.
The public policy debate informed by climate science has been running for 27 years (arbitrary start date with Hansen’s Senate testimony). It’s time to try something different.
My apologies for going off-topic on this technical thread.
Editor. Is not off topic. Is a simple matter of who watches the watchman?
Editor
Yes, my question was really broader than these papers. For the last 6 years I have read innumerable criticisms of papers that at first blush could seemingly be referred to an appropriate body of experts beyond the particulars of climate science. Fifty years ago I was convinced economists and Supreme Court Justices were impartial in their deliberations. Silly me. Having other climate scientists on a body to sort out these issues probably is a non-starter and would not accomplish much.
cerescokid,
“Having other climate scientists on a body to sort out these issues probably is a non-starter and would not accomplish much.”
The point is to have experts from other fields on the team. Different perspectives, different expertise = multi-disciplinary.
Of course ALL models have 100% error after two or three months anyway.
But….. government work.
This is part of that oddly backward debate where the “natural variability” people refuse to believe that the pause could be just natural variability countering the background forcing, and the “forcing” people are defending the importance of natural variability in the “pause”.
Jim D, you got that wrong. Both AR4 and AR5 attribute ‘most’ warming to anthropogenic causes. Too obvious to provide chapter and verse citations. Almost all of the pause explanations now credit natural variability in some fashion. (Steinman/Mann are an exception, going for now-debunked full pause denial.)
Your problem is, if natural variation exists now, how did it not exist from ~1975 to 2000? When even IPCC AR4 said it existed to warm from ~1920-1945? You, sir, have a massive logic fail to contend with. Massive.
The so-called pause is well within the range of previous natural variability quasi-oscillations, and so it can’t be ruled out that it is mostly natural variability. This was Steinman’s point.
What about 1975-2000, yimmy? A pause in natural variability?
A climate that is sensitive to natural variability could also be sensitive to radiative forcing, which means the cooling phase of natural variation gets erased and the warming phase enhanced, or what is otherwise known as our recent climatic reality. The up and down trends in the GMST pretty much line up with changes in the trend of the PDO… until 1985. And then a very significant thing did not happen; it did not get colder.
Don, there were several large El Ninos including 1998, and so there was a lot of natural variability in 1975-2000. The warming exceeded the models in the second half of that period. That was natural variability too. Not sure where your question was going.
“Don, there were several large El Ninos including 1998, and so there was a lot of natural variability in 1975-2000.”
Jim D,: What could possibly cause “a lot” of “natural” variability in “some” years and not for others? There is nothing to account for that.
If there is nothing to account for it, there is no distinction between “natural” and “unnatural” variability, because you can’t point to a relationship with no alternatives.
There was a warming variation from 1985-2000, and a cooling and equally large one, maybe up to 0.2 C, in 2000-2015. That’s how it goes, warming by up to 0.1-0.2 in one decade, followed by cooling up to 0.1-0.2 C. That’s how it will continue, but the background meanwhile is rising relentlessly 0.15-0.2 C per decade and only increasing, so after a while of seeing that, you know that part is the one to pay attention to long-term.
Jim D, how do you know this?!!??
Michael Mann wins, Valentine loses, decision called by Jim D
That’s what you get if you remove the forcing correctly. Steinman isn’t the only person to have done this. They all come to the same conclusion: linear detrending is bunk.
Jim – all I want for Christmas is your conviction and not this doubting mind that has plagued me since I was 11 years of age
but the background meanwhile is rising relentlessly 0.15-0.2 C per decade and only increasing,
That’s incorrect, of course.
1. OLS trends through 2014 have never reached 0.2C per decade, the rate promised by AR4.
2. OLS trends through 2014 peaked for the period 1974 through 2014 and have fallen in subsequent years to a minimum for the most recent 17-year period (17 years being a nominal minimum duration for significance).
To be sure, 17 years begins with the last big El Nino. (The 16-year trend indicates the impact of this is about 0.02C per decade.)
But this is what the data indicates:
1. Global warming is less than expected, and
2. The rate of global warming is decreasing, not increasing.
http://climatewatcher.webs.com/TrendsSince.png
We can say global warming is real, but it is wrong to exaggerate its extent or its rate of change.
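For anyone who wants to reproduce this type of calculation (the kind of thing behind the linked “TrendsSince” plot), here is a sketch of the computation only; the anomaly series below is invented, not the data behind that plot.

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1970, 2015)
anom = 0.016 * (years - years[0]) + rng.normal(0, 0.1, years.size)   # invented anomalies

# OLS trend (deg C per decade) from each candidate start year through 2014.
for start in (1974, 1979, 1998):
    mask = years >= start
    slope_per_year = np.polyfit(years[mask], anom[mask], 1)[0]
    print(start, round(10 * slope_per_year, 3), "C/decade")
```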
Exactly… they’re selling a natural variability that is incapable of cooling anything. The last real cooling trend in the GMST was 1942 to 1950, and that cooling trend was rudely interrupted by the postwar ACO2 haymaker… right to the chin. The last real cooling trend in the GMST ended around 1905, just as the age of the automobile was starting its engines.
As for the two papers, any scientific paper that gives the AMO credit for doing a single thing is suspect, so they’re both suspect.
JCH, I would keep a little modesty on making GMST analysis of a time series created just for local weather and fishing purposes before the invention of horseless carriages.
And I thought all that was happening was that some people were pointing out mistakes in another group’s paper.
No, it is about how to get the appropriate forcing to help see the natural variability on its own. The suggested newer method likely gives more similar results to Steinman’s than to the original Wyatt papers, so this is just a quibble about how best to improve over their linear detrending.
This has little or nothing to do with partialing out natural variability. The method won’t do that with any reliability. There aren’t even attempts to develop and test the various proposed relationships using normal experimental design.
And when we get to the “quibble” over the techniques, there is a difference between fitting too simple a model, and using inferences that violate the assumptions of the techniques.
Each of the models is wrong, but together they are gold?
Sounds like the same BS we get on temps. 90% of the data is wrong and they don’t know which or by how much, but they can adjust the bad data with all the other bad data to produce gold.
From Mann’s abstract –
“The recent slowdown in global warming . . . ” Phew! At least that science is settled! Now to find out why . . .
Newton’s law of cooling? The laws of thermodynamics? Quantum electrodynamics? Ordinary physics? Or maybe bizarre climatological delusional flights of fancy?
Take your pick.
Cheers.
Mike:
“Take your pick”
The answer is none of the above.
Climate change is far simpler than anyone on this blog is willing to admit.
All warming from 1972 – present has been due to the removal of anthropogenic SO2 emissions due to Clean Air efforts. There has been NO warming due to greenhouse gases.
Temperature projections from 1975-2015, based solely upon the amount of reduction in SO2 emissions, are accurate to within a tenth of a degree C., or less. (no room for any additional warming due to CO2).
The slowdown in warming was simply due to the rise in Eastern SO2 emissions roughly offsetting the decrease in Western emissions, thus slowing down the rate of warming.
The above facts need to be accepted and acted upon.
The Climate Sensitivity factor for the removal of SO2 aerosols is approx. .02 deg. C. of temp. rise for each net Megatonne of reduction in global emissions (currently around 2 Megatonnes per year, but this is expected to increase).
The more that we clean the air, the hotter it will get. This is an unfortunate reality.
(Regretfully, not my first attempt to inject some sanity into this blog)
This reality needs to be accepted and acted upon
Burl Henry,
I agree. I’d assumed that I’d included your obvious injection of sanity under “normal physics”. Something like putting up a sunshade of crap filled air, and then being surprised at getting warmer when you get rid of the sunshade.
Oh dear. Not as clear as I intended. No harm done, still no CO2 induced warming!
Cheers.
“Affirmative: beta represents an inexorable unidirectional change in global temperatures and alpha represents everything else and alpha cancels itself out — reduces to zero (n my model) –so, I’m positive I’ve found beta and I’m positive that beta is positive and I’m positive that beta is caused by human-CO2, tra-la.” ~Michael Mann
This posting loses sight of the reason for the first Steinman paper, which was to say that the forcing can be represented better than the linear detrending used by Wyatt et al. in their earlier papers. While Wyatt quibbles with how to get the forcing, they don’t admit that linear detrending was a worse approach, and they (presumably) don’t show how their own new method for the forcing changes their conclusions from their linear detrending assumption used previously to be more like Steinman’s. The forcing change over the last 60 years or so has been far from linear, and in fact more exponential in shape.
Jim D, you seem not to have read the upthread comments. The Steinman paper invented a new definition of PDO/AMO, then used grossly faulty statistical methods to justify that reinvention, all in order to refute the stadium wave.
At the risk of being moderated, crap piled on crap indicates a classical outhouse, such as on my farm until the 1950s. More commonly referred to as a sh1tter. Another Mann designed outhouse of statistics. Got it yet?
No, I traced it back to why Steinman wrote their paper. Previous papers by Wyatt et al. were based on a linear detrending representing the forcing, which by now everyone knows is not good. Steinman came up with a nonlinear forcing change to better get at the shape of the natural variability. It was a clear improvement over the linear approximation. Note that Wyatt did not defend their own linear approximation, but just added another wrinkle to the Steinman approach that probably more closely matches Steinman’s results on the forcing shape than a linear trend.
Jim D. You appear to want to take the Science kerfuffle offline into data weeds. I am on it. Please get my coordinates from Judith if you cannot otherwise (since I have been ‘hiding’ in plain sight for years).
It’s about linear detrending and whether that represents the forcing of the 20th century at all well. This is the main topic that Steinman was addressing, not the weeds (see this post’s title, for example). Wyatt et al. seem to have capitulated on that point by not defending it, but it was not explicitly raised, so I did.
Jim D: “Steinman came up with a nonlinear forcing change to better get at the shape of the natural variability. It was a clear improvement over the linear approximation.”
Actually we don’t know that given the methods used.
They tested sub-ensembles and the full ensemble. The Steinman paper is here.
http://www.meteo.psu.edu/holocene/public_html/Mann/articles/articles/SteinmanEtAlScience15.pdf
Jim D, I’ve read the paper thanks.
“They tested sub-ensembles and the full ensemble.”
How much do you know about experimental design?
Some. What’s your criticism? The use of subensembles is a method that gives a measure of the uncertainty.
What is your opinion of linear detrending to represent 20th century forced changes? Are you going to say linear detrending is better, or that this paper is an improvement?
Jim D
The problem is that there is no independent verification of the estimates of natural variation. So one cannot tell which technique gives a better estimate based on this experiment (but one can say that the Steinman attempt is flawed because it violates the assumptions of the statistics it uses in the way it draws inferences from the multiple models).
What the experiment tests is an artifact of GCMs. It derives a statistic called internal variability from the models (and observations) and describes its behaviour.
Unfortunately GCMs do not reproduce the observed temperatures in the regions of interest well. So any conclusion breaks down. To make any claims, Steinman, at a minimum, should have demonstrated the relationship between the models and observations.
I’d also note that there are experimental techniques that could also be used to do this better. The most obvious is to develop a set of relationships based on the period when external forcing was limited, and then test those out of sample (i.e. when external forcings build). However I suspect because of the limitations in GCMs’ ability to replicate the key phenomena, this might not add much to our understanding.
The whole problem is that the CAGW “problem” or “situation” exists only within the realm of statistical interpretation. Ordinary rules of evidence don’t apply: the suspect is guilty of shooting the victim only because we statistically ruled out others within a five block area, compensating for who would or would not be on the street at night, and who would or would not be likely to own a pistol of the correct kind.
The decarbonization of the planet – of the planet, for love of Mike! – is dependent on what one group of modelers say another group of modelers got wrong.
One set of maps, and the North Pole is to our right …. or to our left, depending on whose car we arrived in.
The Big Lie has succeeded only too well. It is an endorsement of common sense that, despite such luminaries as Obama, something like 43% of the American Joe Public still isn’t sure CAGW is real, and more than 90% don’t think it is a problem worth spending a nickel on – despite what Hansen says will be the fate of our grandchildren.
douglasproctor,
You wrote –
“The decarbonization of the planet – of the planet, for love of Mike! ”
And a mighty Voice from on high spake, and all those not named Mike trembled in terror, for the Voice did say “do not take the name of Mike in vain!”
But seeing as how this Mike agrees with you, feel free. The Petes might complain if you use “for Pete’s sake.”
Cheers.
douglasproctor
1. There is no CAGW problem at least in the way you mean. CAGW is the Cult of Anthropogenic Global Warming, a religious cult that has infiltrated the government and managed to scam some tax dollars from the government. All funding for CAGW should be terminated under the establishment clause and an attempt made to recover previous funding.
2. The people who want novelty power systems are members of a rich elite that can afford to pay for them. The subsidies for renewable energy should be stopped immediately. If the rich elites want renewable energy they can pay for it out of pocket instead of gouging the taxpaying poor.
3. There is nothing wrong with burning fossil fuel. Coal is black, shiny, kind of pretty, and I like the smell. Renewable energy is ugly and unreliable and should be banned (unless paid for by the rich elites).
4. There is nothing wrong with CO2; it is the basis of life on the planet. The planet was depleted of CO2 for most of mankind’s existence. We need more CO2 and should investigate soot-to-CO2 conversion. Since some people are hoarding their CO2, we should fund research into methods of unsequestering CO2 efficiently and completely.
http://www.c3headlines.com/global-warming-quotes-climate-change-quotes.html
Quote by David Foreman, co-founder of Earth First!: “My three main goals would be to reduce human population to about 100 million worldwide, destroy the industrial infrastructure and see wilderness, with it’s full complement of species, returning throughout the world.”
5. Looking at the quotes from global warmers it is pretty obvious they aren’t honest or trustworthy. Claiming that humanity should be reduced to a couple of 100 million in number and that our infrastructure should be destroyed makes me question their motivation and whether they really have our best interests at heart. Reducing CO2, making power limited and expensive, blighting the landscaping with expensive toys that kill animals and waste resources, are the sorts of things one would do to create future food and power crises.
It is not clear if the global warmers’ actions are malicious or misinformed.
It is not clear if the global warmers’ actions are malicious or misinformed.
It is clear that all of them are misinformed and it is clear that some of them are malicious.
PA Well Said!
Still waiting fer greenhouse,
still waiting fer evidence
of that positive forcing
they’re seeking here, in
model averages, they’re
seeking there in deepest
oceans, that demmed,
elusive heat is – where?
Mebbe radiated up, up
… and away.
H/T Baroness Orczy? +10
Yep, P.M.D, the elusive Pimpernel. Plus 10 – hey!
So many people know so much about the deep hydrosphere. Looking at their models and theories…it’s all so vivid…like they’d been there, almost.
With much of what’s under our feet and flippers unvisited and unknown, it’s just as well we’ve got clever cookies who can guess it all. Saves a trip and the money can go to…well, more highly educated guessing using more of that “best available knowledge”.
Break out the maypole and mead, and let’s have a passion play or two. Those grand old medieval days are back.
Let them eat gruel!
Yep. Medieval days would have been much more enjoyable. No complications from do-gooders, greenies and nanny governments telling us what is good for us!
And there weren’t so many pied-pipers to pay in those days, either.
Well, we’re heading back to the era of faith, authority and inspiration, those good old pre-Enlightenment days when scholars were scholars and you could avert the worst by paying a few bob to Tetzel to kick on to Pope Leo (they didn’t call the donations carbon credits then – but they ended up just like modern carbon credits). Back then you could know and believe stuff without having to look under every bloody rock.
I blame Magellan.
Michael Mann has a bigger fan base than I do.
It is therefore obvious to anyone that Michael Mann is correct and I am nowhere. This defines “truth” – it is consensus.
Brian G Valentine,
Of course Michael Mann is correct! How could a Nobel Prize winner be wrong about anything? He’s the goto man if your treemometer is giving incorrect readings, and needs splicing.
Cheers.
http://theresilientearth.com/files/images/albedo_models_v_sat.jpg
http://www.skepticalscience.com/images/Earthshine_2008.gif
It seems to me that the 90’s were somewhat the result of decreasing albedo and not increased forcing.
Explaining the pause is sort of backwards. It could be argued that the 90s trend exceeded the forcing and the 90s are what really need to be explained.
Abstract. The recent slowdown in global warming has brought into question the reliability of climate model projections of future temperature change and has led to a vigorous debate over whether this slowdown is the result of naturally occurring,
The models are pretty bad; if it took the pause to get them to question the model projections, perhaps they should go into another line of work.
A new Earthshine report has been written and is being reviewed now. It will show that the temperature during the pause does correlate with albedo.
Albedo stopped decreasing because snowfall has increased due to the warm thawed Polar Oceans.
This is very interesting. I greatly appreciate this blog post.
Whose post – PA’s? If so I agree!
If it is to do with the head post by Maria Wyatt I also agree, but the fine statistical print needs to be read with considerable care. I agree that statistics such as means can only be independent if the underlying population distributions are uncorrelated.
Apologies, Marcia, for the missing “c”
Is there any way to get climate science interested in stationary Gaussian random processes?
There’s so much to learn about what you can’t do with them, and that you therefore can’t do with more difficult random processes either.
Yaglom is a good text. “An Introduction to the Theory of Stationary Random Functions” (1962), which I see is still in print and pretty cheap.
They are already in use where they make sense.
Stationary Gaussian noise processes are used in climate science mostly where they don’t make sense, i.e., where the power spectrum shows oscillatory structure.
The problem is that a lot of “tools” that come out of stationary random processes don’t make sense under a lot of circumstances, which you’d know only by following where the tools came from. Not only because your process is not Gaussian or stationary, but because they don’t even make sense for Gaussian random processes under the circumstances.
Noisy graphs with a trend line drawn in are particular offenders. I mean, I respect the work that went into it under a labor theory of value, perhaps, but it’s idiotic analysis. You don’t even need the fact that the climate is complicated. The math alone kills you.
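A minimal sketch of that last point, with the noise model and all the numbers assumed purely for illustration (trendless AR(1) noise, 120 values, lag-one correlation 0.9): draw the usual least-squares trend line through autocorrelated noise and count how often the slope comes back as “significant”.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, phi, n_trials = 120, 0.9, 2000   # hypothetical series length and AR(1) coefficient

t = np.arange(n_years)
false_positives = 0
for _ in range(n_trials):
    # trendless AR(1) noise: each value is phi times the previous one plus a fresh shock
    x = np.zeros(n_years)
    shocks = rng.standard_normal(n_years)
    for i in range(1, n_years):
        x[i] = phi * x[i - 1] + shocks[i]
    # fit an ordinary least-squares trend line and test the slope against zero
    fit = stats.linregress(t, x)
    if fit.pvalue < 0.05:
        false_positives += 1

# far above the nominal 5%, because the test assumes independent residuals
print(f"'significant' trends found in trendless noise: {false_positives / n_trials:.0%}")
```

The particular numbers mean nothing; the point is only that the standard trend-line machinery assumes an independence the process does not have.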
What’s needed is an adult peer review for climate science.
The fact is that, with a lot happening outside your data window, almost anything you want to distinguish has the same probability of producing your data. Your noise is multiplied by 10^30 or so before being used to produce the most likely source, which is to say that it’s an incredibly ill-conditioned system. No information will make it from data to cause.
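To put a toy number on the ill-conditioning point (the matrix and noise level below are hypothetical, not any actual climate inverse problem): when the forward operator from cause to data is nearly singular, inverting it multiplies the data noise by roughly the condition number.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical forward operator from "cause" to "data", made nearly singular on purpose
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])
print("condition number:", np.linalg.cond(A))   # about 4e12

cause_true = np.array([1.0, 2.0])
data = A @ cause_true

# perturb the data by about one part in a billion, then invert for the most likely cause
data_noisy = data + 1e-9 * rng.standard_normal(2)
cause_recovered = np.linalg.solve(A, data_noisy)

print("true cause:      ", cause_true)
print("recovered cause: ", cause_recovered)   # wrong by orders of magnitude
```

Same story with 10^30 instead of 10^12: no information makes it from data to cause.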
What’s to isolate? The sixty year cycle is about sixty years. The thousand year cycle is about a thousand years. The thousand year cycle has entered the warm phase that lasts a few hundred years, like the Roman and Medieval Warm periods. These natural and necessary warm periods stay warm, near the upper temperature bound, while the warm, thawed Polar Oceans rebuild the ice on land, getting ready to plunge into the next cold period, similar to the Little Ice Age. During this warm period, every upturn of the sixty year cycle will be another opportunity for the alarmists to forecast runaway warming. Every downturn of the sixty year cycle will be another opportunity for the alarmists to forecast runaway cooling.
The pause is a combination of the phase of the sixty year cycle and the phase of the thousand year cycle. Nothing has changed that will stop the natural cycles of the past ten thousand years from going forward the same as before. When it gets warm, it thaws the polar oceans and the snowfall increases and enforces an upper bound. When it gets cold, it freezes the polar oceans and the snowfall decreases and enforces a lower bound.
The thermostats are set at the temperature that Polar Oceans Freeze and Thaw. It is warm, the Polar Oceans have thawed, again, and the snowfall has increased already, and the ice on land is being replenished, again.
You can tilt the orbit, you can change solar cycles, you can add CO2, but you cannot change the temperature that thaws Polar Oceans. That turns massive snowfall on and off. If they study CO2 and do not study the natural cycles, they waste their time and our tax money.
Earth temperature is regulated. Look at the data.
http://popesclimatetheory.com/page78.html
There are thermostats, one in the north and one in the south. Cooling is turned on when it is needed and it is turned off when it is not needed. Over the past fifty million years, the bounds changed as ocean circulation changed, but there are always cold and warm cycles that alternate. The volume and extent of ice on earth is always correlated with the temperature of earth. This is not a result of the temperature; this is how the temperature is regulated.
The Left’s problem, using statistics to explain away a pause in warming that cannot happen according to those same statistics, is like trying to sell a pink VW by saying it’s Porsche brown.
science speak is hard to follow for us lesser lights …
1.0 there is absolutely no pause
2.0 when we correct the data, the pause goes away
3.0 it is a ‘false pause’
Does the concept of a ‘false pause’ have implications for the possibility of time travel?
That’s because we don’t speak in absolutes… like your comments do.
1. There is an apparent slowing of warming in ONE METRIC.
2. That can be explained in several ways
3. Time will tell which explanation or combination of explanations has the most power.
Isn’t it easier to just adjust the data?
Steven Mosher
:)
being accused by you sir, of commenting in absolutes, is an honor
I stand accused by the best
no hard feelings, I read your comments closely and do my best to grok your POV
I’m a bit cranky meself
Some simple thoughts, even if it is a bit more complicated.
If the model residuals are uncorrelated, it means they cannot predict any weather at all, and cannot tell us about the weather in the future. It is coincidence or tuning that makes them follow past weather on average.
If the residuals are correlated, then it is rather amazing, because the weather is chaotic; it would mean the models overrule the chaotic nature of weather, and no real comparison has shown any skill.
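For what it is worth, here is a toy version of what correlated or uncorrelated residuals would even mean, with everything synthetic and the ensemble size, trend and noise level all hypothetical: subtract an ensemble-mean estimate of the forced signal from each run and look at the pairwise correlations of what is left over.

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_years = 5, 150                  # hypothetical ensemble size and record length
years = np.arange(n_years)

# synthetic "simulations": a common forced trend plus independent internal noise
forced = 0.01 * years                     # hypothetical forced warming per year index
runs = forced + 0.2 * rng.standard_normal((n_runs, n_years))

# estimate the forced signal as the ensemble mean; call what is left "internal variability"
ensemble_mean = runs.mean(axis=0)
residuals = runs - ensemble_mean

# pairwise correlations of the residual series: even with truly independent noise,
# subtracting a small-ensemble mean builds in a correlation of about -1/(n_runs - 1)
print(np.round(np.corrcoef(residuals), 2))
```

Which is only to say that “uncorrelated” is not something this kind of decomposition gives you for free.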
Comment submitted by Marcia Wyatt:
Maybe you could inject a comment to state that the goal of the stadium wave team was never to disentangle components. It was to highlight the multidecadal signal, and from that we documented a propagating signal. We have interpreted the propagation to be intrinsic, most likely, with the tempo tightly connected to variability in the North Atlantic, which in turn marches to a tempo that is likely both intrinsic and externally forced. In numerous studies that have cited our work, this important point has been misinterpreted and misrepresented, Mann et al. and Steinman et al. among them!
That’s the crux of the biscuit: there is a natural part and an anthropogenic part, and the problem is separating the two.
Thank you. That helps.
As long as the mechanism(s) driving multidecadal and longer variations remain unexplained, there is no solid basis for identifying “intrinsic components” of the climate signal, whose cross-spectral phase relationship in different regions is incompatible with any propagating “stadium wave.” All of this is purely phenomenological speculation, conducted with wholly inadequate signal analysis techniques.
This is just messing around with computer models, which are bound to be inaccurate purely because of the way computers are able to handle numbers and timescales.
How sad that science should have reduced down to this.
Pingback: On the likelihood of recent record warmth | Climate Etc.
Pingback: On the likelihood of recent record warmth – Enjeux énergies et environnement