The title of this post is taken from my AGU Fall Meeting poster presentation on the afternoon of Tuesday Dec. 4 (tomorrow).
You can view the poster from the comfort (?) of your terminal, where you can click on the View-ePoster tab.
For full transparency of the number crunching, the poster links to the
Excel spreadsheet from which the graphs were extracted, allowing any reader equipped with Excel (2000 or later) to audit all graphs directly and experiment with alternatives, some of which I’ve provided sliders for. (Microsoft’s free viewer will allow all this except the ability to experiment, but I would expect Excel 2000 to be pretty cheap these days. I had high hopes for OpenOffice’s CALC but it turned out to be a nonstarter for non-toy spreadsheets.) Please let me know of any glitches you encounter. Only macros signed by Microsoft are used, so Excel 2007 onwards should be happy, though I don’t believe 2000 knows how to read macro signatures and hence lives in a perpetual state of paranoia concerning viruses.
Global warming of some kind is clearly visible in HadCRUT3 (Figure 1,
resistor-color-coded red in the poster) for the three decades 1970-2000. However the three decades 1910-1940 show a similar rate of global warming. This can’t all be due to CO2 since the emissions data from the Carbon Dioxide Information Analysis Center (CDIAC) show that human-emitted CO2 wasn’t rising anywhere near as fast then as during 1970-2000. Both the population and its per-capita technology increased enormously in the intervening 60 years, with WW2 technology capable of obliterating cities with both conventional and nuclear weapons that WW1 generals could only dream of, and with WW3 postponed as sheer MAD-ness.
Figure 1
It would seem therefore that the rise in temperature since 1850 is a lot more complex than can be explained by our rising CO2 emissions.
My hypothesis is that there is less to HadCRUT3 than meets the eye. I
hypothesize the following.
1. I collect all the so-called multidecadal ocean oscillations into one
phenomenon I call a quasisawtooth, namely a sawtooth lacking its first
harmonic or fundamental. Sawtooth waves occur naturally as the result of a sudden perturbation away from equilibrium followed by a slow return to equilibrium. The several 100,000 year deglaciation cycles of the late Quaternary are one example; this might be another, albeit with its harmonics filtered differently due to whatever lies between the perturbations and our perception of them (speculated on at the right of the poster, where I forgot to point out that the effect is seen in the oceans because the crust beneath is only 20% the thickness under the continents and almost nothing along the mid-Atlantic ridge). I remove this from the data (Figure 1) by subtracting it, giving the orange curve in Figure 2 labeled DATA – SAW.
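For readers who want to see what such a wave looks like, here is a minimal sketch of a quasisawtooth: a sawtooth Fourier series with its fundamental dropped. The period and harmonic count below are illustrative placeholders, not the poster’s fitted values.

```python
import numpy as np

# Quasisawtooth: a sawtooth Fourier series with the fundamental (k = 1)
# removed. The period and number of harmonics here are illustrative only.
def quasisawtooth(t, period=150.0, n_harmonics=6):
    # A full sawtooth would be the sum over k >= 1 of sin(2*pi*k*t/period)/k;
    # dropping k = 1 leaves only the higher harmonics.
    return sum(np.sin(2 * np.pi * k * t / period) / k
               for k in range(2, n_harmonics + 1))

t = np.linspace(0, 300, 1000)   # two illustrative periods
y = quasisawtooth(t)
```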
Figure 2
2. I further hypothesize that all remaining natural fluctuations in modern climate have as their slowest component the 21-year Hale or magnetic cycle. I filter out this and all higher frequencies with a low-pass filter designed to aggressively block such frequencies. Its frequency response is given as the curve F3 in Figure 5.
Figure 5
F3 is constructed as the convolution of three box or moving-average filters of respective widths 21, 17, and 13 years. The first gives F1, which by itself takes out both the Hale cycle and the 11-year solar (TSI) cycle. The second then gives F2 by bearing down at the one-third point of the first side lobe, while the third pushes down on the two-thirds point to give F3. (This filter can be constructed at
woodfortrees.org as I noted in my previous post here early in 2012.)
The effect is to block essentially all frequencies with shorter periods than 22 years. At most 0.4% of any such frequency gets through. What remains is the green curve in Figure 2 labeled F3(DATA – SAW). This is clearly global warming, whatever the cause; the poster calls it Observed Global Warming, color-coded green.
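The construction is easy to check numerically. This sketch builds F3 exactly as described, as the convolution of 21-, 17-, and 13-year moving averages, and measures the worst-case leakage at periods shorter than 21 years; no climate data is involved, just filter arithmetic.

```python
import numpy as np

# F3 as described: the convolution of three boxcar (moving-average)
# filters of widths 21, 17, and 13 years.
def boxcar(n):
    return np.ones(n) / n

f3 = np.convolve(np.convolve(boxcar(21), boxcar(17)), boxcar(13))

# Frequency response magnitude via a zero-padded FFT (sample spacing = 1 year).
H = np.abs(np.fft.rfft(f3, 8192))
freqs = np.fft.rfftfreq(8192, d=1.0)     # cycles per year

# Worst-case gain at periods shorter than 21 years
leak = H[freqs > 1.0 / 21.0].max()
print(f"max stopband gain: {leak:.4f}")  # roughly 0.004, i.e. ~0.4%
```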
These two operations on HadCRUT3 take place on the left or experimental side of the poster, which deals with observation (in this case HadCRUT3) and its analysis (in this case by subtracting low frequencies and filtering high frequencies).
The right or theory side of the poster deals with the rationalization of observation, the half of science that goes beyond the mere reporting of
experience by hypothesizing explanations for it. Figure 3, color-coded blue, graphs the formula immediately below it, both before (in orange) and after (in blue) applying F3. (It can be seen that the filter makes essentially no difference except for a one-decade end-effect artifact where the curve is changing rapidly. In that context the (literally) side point is made that all odd decades since 1870 have trended positively while all even ones have trended more weakly and often
negatively, verifiable at Wood For Trees.)
Figure 3
The formula is based on known ideas due to Arrhenius in 1896 and Hofmann in 2009 (that the portion of atmospheric CO2 above the preindustrial level is growing exponentially), with the added twist that the oceanic heat sink delays the impact of radiative forcing variations on HadCRUT3 by 15 years, analogously to the overheating of a CPU being delayed by the addition of a heatsink with no fan, what I refer to as the Hansen delay. I call this the Arrhenius-Hofmann-Hansen or AHH law.
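The shape of such a law can be sketched in a few lines. All numeric constants below are made-up placeholders for illustration; the poster’s fitted values live in the linked spreadsheet and are not reproduced here.

```python
import numpy as np

# Sketch of the AHH idea with assumed parameter values (not the poster's fit).
CO2_PRE = 280.0         # assumed preindustrial CO2, ppmv
SENS = 2.0              # assumed warming per CO2 doubling, K
T0, TAU = 1790.0, 32.5  # assumed Hofmann onset year and doubling time, years
DELAY = 15.0            # the 15-year Hansen (ocean heat sink) delay

def co2(year):
    # Hofmann: the excess of CO2 over preindustrial grows exponentially
    return CO2_PRE + 2.0 ** ((year - T0) / TAU)

def ahh(year):
    # Arrhenius: warming is logarithmic in CO2;
    # Hansen: the response lags the forcing by DELAY years
    return SENS * np.log2(co2(year - DELAY) / CO2_PRE)

years = np.arange(1850, 2011)
warming = ahh(years)
```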
The rest of the poster, color-coded violet (purple, whatever), deals with the similarity to the curve in Figure 2: they are equal to within millikelvins. For the period to 1950 the standard deviation of their difference is half a millikelvin. After 1950 there are two bumps that need explaining; the best I could think of was brown-cloud pollution from uncontrolled western civilization emissions abating in the 1970s and then resuming with the industrialization of nonwestern civilization, but there may well be other explanations. Whatever the explanation however the main point is that multidecadal residual (MRES) is inconsequential in the context of global warming.
The hypothesis then is that multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming that can be accounted for 99.98% by the AHH law as measured by the R2 of its fit to observed global warming (and could be brought even closer to 1 with a good story for MRES).
Because filter F3 rises slowly on the left of its cutoff there is the worry that some multidecadal phenomenon was overlooked by sneaking into the Hale octave. Something like this seems to have happened on the high-frequency side of the SOL band, perhaps some ENSO noise from DEC (decadal band) getting into the TSI band. However the HALE band looks much cleaner than the TSI band, suggesting that nothing entered from the MUL (multidecadal) side. And since MRES as shown in Figure 10 is essentially flat by comparison with SAW and AGW, it would appear that those multidecadal variations not accounted for here are either too small to matter or have been inadvertently lumped in with (i.e. are inseparable from) one of SAW, AGW, or HALE.
Figure 10
With enough parameters one can make any two curves equal to within any desired precision. The judgement to be made here is whether the nine parameters used here have any chance of achieving a comparably accurate fit on random data in place of HadCRUT3. This was an objection raised to Mann’s methodology, and I would be interested to see if it applies here.
After the comments responding to this post, this might not be the poster I want any more, but meanwhile you go to the conference with the poster you have, not the poster you want. :)
In any event this poster is offered as a preliminary report on work still in progress. One reason for advertising it here at an early stage is that the denizens of Climate Etc. seem well motivated to poke holes in theories of global warming, which I view as a positive benefit of skepticism, as distinguished from flat denial. There seems to be a lot of the latter here too but I haven’t found it as useful.
JC comments: This is an invited guest post. I think that this is an intriguing analysis, but I have not looked at this in any detail. This is a technical post, please keep your comments on topic.
I am at the AGU this week, I will have a post on the meeting later this week.
Confucius say:
Beware results of frequency analysis if most of relevant information occurs at one end of a non-periodic time-series.
If we really understand it all, then even if the average global climate does not rise as fast as projected, and despite the fact it is godawful cold outside, CO2-producing humanity can still be seen to be the destroyer of the Earth because it should be even colder.
By analogy what Vaughan Pratt is trying so hard to tell us non-believers is, Look, look, the witch didn’t float so she must’a not been a witch. Next!
+100
-100
When you’re incapable of commenting on content, just spit in the writers’ general direction? Thanks for contributing and giving climate skeptics a good name.
Then, you do not find “non-believers” to be objectionable ad hom? Good.
Nice job Vaughan. I still think perfect matches in an imperfect world are scary though. Especially since it can be hard to separate cause and effect.
https://lh3.googleusercontent.com/-7yQZf4cRGG0/UL4YWvB6AnI/AAAAAAAAF3Q/M50BKvLQWzk/s720/TSI%252015%2520year%2520lag%2520with%2520sh.png
Then I am a bit of a thermo dinosaur playing with psych charts and such :)
One question (not the only one, but the most obvious) that comes to my mind is the influence of volcanic eruptions. Their effect is strong enough to affect the outcome after the removal of high-frequency phenomena, but lumping that effect in with the others in the sawtooth sounds more like overfitting than something justified by other means.
Up to a point, Lord Copper. Have you ever seen a global temperature response from a volcano? They don’t actually exist, nor are they present in the reconstructions.
The observation is made that cycles of less than 20 years are filtered out, so presumably volcanoes are treated as noise and suppressed.
I took Pekka’s point to be about sustained volcanism, which would need to be sustained for at least 25 years to get through the F3 filter. If seismic events at the crust-mantle boundary are responsible as I suggested, sustained volcanism might be so well correlated with this (whether positively or negatively does not matter) that they may be inseparable.
I had intended, but forgot or ran out of space, to make the point that one cannot choose between well correlated phenomena based solely on observation. Volcanism vs. my crust-mantle/mantle-core boundary explanations is such a situation. Unlike the latter, the former has been the subject of much discussion, and to save space I focused on new ideas at the expense of old. A longer writeup would need to do justice to both.
What I have in mind can be seen from Fig. 2 of the paper by Lean and Rind.
There we see that volcanic influence has led to cooling by up to 0.25 K, with a decay that has taken a few years. Averaging over periods up to 22 years still leaves a signal of a few tens of mK. As the effect is purely cooling, without a compensating warming phase, it is not averaged out but influences the analysis, and reaching mK-range accuracy in the presence of such random external influence seems to indicate overfitting.
Vaughan,
Taking into account what you say in some of your comments about your goals, my comment is not directly relevant to your work as stated.
Going beyond that, this detail, along with many others mentioned in other comments or left unmentioned, leads one to conclude that reaching the level of accuracy you have reached is probably not very significant. It may indicate that allowing as much freedom in the choice of model as you have used, with as many explicit parameters as you have, is likely to allow for the observed accuracy as long as the data is reasonably well behaved.
A separate issue is that one may compare the results that you obtain using your approach with those of Lean and Rind, who try to find out how much of the variability can be explained by solar and volcanic forcing and “explained” by ENSO. I put the latter “explained” in quotes because ENSO is not really an explanation but another index that tells about the same Earth system as the temperature data. Solar irradiation and volcanic activity are external to the system of atmosphere and oceans while ENSO is internal.
Pekka, please show me a thermometer record which shows anything like the temperature change shown in that figure as a result of volcanic aerosols.
In the sciences one tests a hypothesis against the data. If the hypothetical result is different from reality, start over.
You are completely unable to show a temperature record that shows the model line shape in Figure 2b, because it’s not true.
@Pekka: Going beyond that, this detail, along with many others mentioned in other comments or left unmentioned, leads one to conclude that reaching the level of accuracy you have reached is probably not very significant. It may indicate that allowing as much freedom in the choice of model as you have used, with as many explicit parameters as you have, is likely to allow for the observed accuracy as long as the data is reasonably well behaved.
That’s certainly a possibility that worries me, Pekka. However I would expect that the 9 parameters of my SAW+AGW model are not enough to reach the observed accuracy for more than 1% of other randomly constructed “reasonably well behaving data,” and likely much less.
One way to test this would be to perturb HadCRUT3 slightly by adding a dozen or so sine waves each with randomly chosen phase, small amplitude, and frequency below that of the Hale cycle (since F3 effectively kills all higher frequencies). Do this say a thousand times, and for each note the resulting R2. If 99% of the R2’s were significantly less than 99.98% then consider HadCRUT3 “special” with respect to this model, in the sense that it would seem not to contain the sort of noise being added in this test.
Would this kind of test satisfy you?
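A sketch of the perturbation half of that test follows (recomputing R² would reuse the spreadsheet’s fit, which is not reproduced here). The wave count, amplitude, and period range are illustrative placeholders.

```python
import numpy as np

# One random perturbation as described: a dozen sine waves with random
# phase, small amplitude, and periods longer than the Hale cycle.
# Amplitude and period range are illustrative placeholders.
rng = np.random.default_rng(0)
years = np.arange(1850.0, 2011.0)

def perturbation(n_waves=12, amp=0.02):
    total = np.zeros_like(years)
    for _ in range(n_waves):
        period = rng.uniform(22.0, 160.0)      # longer than the Hale cycle
        phase = rng.uniform(0.0, 2.0 * np.pi)  # random phase
        total += amp * np.sin(2.0 * np.pi * years / period + phase)
    return total

# Each of the thousand trial series would then be HadCRUT3 + perturbation()
```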
Vaughan,
The problem in testing the significance of this kind of finding is in estimating the “effective degrees of freedom” involved in the selection of the form of the model. I cannot know exactly what you have done, but probably you have looked at the data and pondered what kind of parameterization might work. You may have also tested several alternatives and finally picked the one that performed best. It’s impossible to evaluate well the role your selective process plays in the success.
Testing with perturbed data would require an equal amount of pondering and search for the best performing model to even approach comparability.
Literature on statistical technical analysis used by people who hope to make money in the stock market by such tools is quite revealing. A number of papers have been written on testing the predictive power of models based on history time series. Some of the papers list known caveats that lead to spurious positive results and methods have been proposed to get rid of such false positives. The problem is that such methods are likely to get rid also of some valid positives.
The most difficult problem is always the human role in putting in information that’s not counted among the free parameters fitted to the history. This problem applies to all simple fits to history data, be that by you, Scafetta, or anyone else. That applies also to the climate modelers who have developed their large models knowing at least qualitatively how the numerous choices that they have made and continue to make affect the outcome. For this reason they can never tell with reasonable precision the statistical significance of the agreement found in hindcasting. They’ll never know how much implicit tuning their models contain.
Pekka, I agree wholeheartedly with your comments. As I think you’ve gathered, my main goal is not to predict climate, or even explain it, but only to describe the multidecadal part of HadCRUT3. Explanations enter only as a motivation for descriptions that are at least consistent with the known physics and that are analytic by virtue of belonging to the class of functions containing the constant functions and closed under linear combination, exponentials, logs, and sines. The multidecadal part seems to lend itself to such simple descriptions.
There is much more to say about this, particularly as regards the role of F3 in reducing dimension, but after spending a day writing some 25 paragraphs about filtering technology and dimension reduction I realized I should instead organize it as a post and therefore put it to one side for the time being so I can respond more immediately to other comments.
Not to worry Pekka. Volcanic cooling stays up there, never descends into the troposphere. Such volcanic cooling as has been claimed is nothing more than misidentified La Nina cooling incidents whose timing accidentally followed an eruption. Best known is the claimed Pinatubo cooling that has nothing to do with Pinatubo. That is not surprising if you consider that the entire temperature curve is a concatenation of side by side El Nino peaks and La Nina valleys. After all his filtering Vaughan Pratt still could not get rid of them and says that “…multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming…” He does not understand the sawtooth and is wrong about warming. His analysis is brilliant but signifies nothing.
@Arrak: After all his filtering Vaughan Pratt still could not get rid of them
Since “them” (the southern oscillations) have a period in the vicinity of 7 years, I can assure you that the 21-year F3 filter got rid of them completely. Saying it didn’t is simply contradicting its frequency response with no justification.
Vaughan – five or six years is more likely from observation. I don’t doubt that you can make them invisible, but they have been a real feature of climate since the Isthmus of Panama rose from the sea. You need their precise locations to compare with dates of volcanic eruptions. Müller shows four independent temperature curves (NASA, Met Office, NOAA and Japanese) to demonstrate warming since 1880. There are errors, but far more striking is the precise correspondence of El Nino peaks among them going back to 1880. You could probably place Tambora on one of his longer term curves and get the lowdown on the year without a summer.
Vaughan Pratt
Although I have to admit that I am not really sure I understand all you are saying, I have some comments or questions.
As I see it you take a temperature record with strong multi-decadal warming and cooling cycles plus lots of annual ups and downs (sawtooths) and smooth it by filtering out various assumed short term or long term cycles and end up with a smooth curve that shows exponential warming, presumably from AGW.
This is essentially handling everything except GH forcing as background noise, which can be filtered out to end up with the real residual signal, as I understand it.
Before F3 smoothing you remove all the multidecadal ocean oscillations. This apparently eliminates the strong early 20th century warming cycle, 1910-1940 (which is statistically indistinguishable from the warming cycle of the late 20th century, 1970-2000). After F3 smoothing, the early 20th century warming cycle is completely gone. How realistic is this?
You indicate that you have built in a 15-year time lag, calling this the Hansen effect. Doesn’t Hansen posit a much longer time lag in his “pipeline” postulation?
You take out the TSI impact of the 11-year solar cycle. Does this smoothing also consider any other solar mechanisms (e.g. Svensmark), or does this even matter to the analysis? How is the unusually high level of 20th century solar activity handled?
Is the 1940-1970 MRES smoothing for increased aerosols a convenient fit or are there empirical data to support it? Same question goes for the 1970-1980 reduction in MRES and the increase after 1990.
A final question: was the intent of this study to end up with the underlying exponential warming curve or did that just happen after all the noise was filtered out?
Sorry if these questions are too simple – but I am just trying to understand what you have written
Max
This seems pretty straight-forward.
HadCRUT3 – CO2 – Hale = quasisawtooth
Then publish as: HadCRUT3 – Hale – Saw = CO2
In the glacial/interglacial sawtooth pattern, the amplitude of each tooth is nearly equal. Why are the 1880 & 2000 teeth in the quasisawtooth blunted versus the 1940 “tooth”?
This does have a similar look to the Foster and Rahmstorf statistical exercise. Perhaps there is merit in this approach. The fit seems too precise given the errors in the temp records though. It seems difficult to remove all the dynamical processes so very accurately without missing some bias in your assumptions.
The fit seems too precise given the errors in the temp records though.
If a million temperature measurements each have an uncertainty (however defined) of one degree, then a parameter inferred from them will have an uncertainty of 1/sqrt(1000000) = one millikelvin.
Bias is always a problem, but I didn’t get the sense that bias was your primary complaint.
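That 1/sqrt(N) arithmetic is easy to check with a simulation on synthetic numbers (the 15-degree baseline and unit noise below are arbitrary):

```python
import numpy as np

# Check the 1/sqrt(N) rule: a million measurements, each with 1-degree
# noise, pin down the mean to about a millikelvin. Synthetic data only.
rng = np.random.default_rng(42)
measurements = 15.0 + rng.normal(0.0, 1.0, size=1_000_000)
sem = measurements.std(ddof=1) / np.sqrt(measurements.size)
print(f"standard error of the mean: {sem * 1000:.2f} mK")
```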
“If a million temperature measurements each have an uncertainty (however defined) of one degree, then a parameter inferred from them will have an uncertainty of 1/sqrt(1000000) = one millikelvin.”
If – and only if – they are measuring the SAME THING. Measuring the fuel efficiency of 1 million different cars does not pin down the fuel efficiency of one particular car UNLESS they are all the same make and model, driven over the same route at the same speed, etc etc. If they are not the same thing, then the only inferences we can make relate to DIRECTLY UNMEASURABLE statistical qualities of the sample – the mean, standard deviation etc etc, such as “our fleet average fuel consumption is x MPG”, or – digging further – “On average, California drivers in our fleet consume 15% more fuel per mile than Texas drivers”. While these statements may be accurate, they do not tell you about the individual cars or drivers, only about the qualities of the whole, or sub-samples thereof. They can help you to predict what changes will have the largest impact on average, but they do not tell you the impact on the individual concerned (in this case, particular drivers and/or particular cars).
So the questions we must ask are:
Is a global average temperature useful to us?
If so, how? What does it tell us that other measurements do not?
Is a spatial average more useful than a temporal average for particular sites / areas / regions?
“Is a global average temperature useful to us?”
It tells you most of the earth is covered by oceans.
I think bias was the main question, though I appreciate the answers to Max A’s questions; they state my curiosities much more clearly. You have avoided his question regarding whether the fit came naturally or by design.
Having seen Vaughan’s development of this idea earlier on this forum, I think the first realization comes from seeing how CO2 has followed a simple functional form with time, and combining that function with T as a theoretical function of CO2, which gives T as a function of time. Fitting that to any kind of smoothed temperature trace shows only a few anomalous but temporary bumps. It is very persuasive that these are the main things going on at the century scale.
But it is very telling that much else is happening on decadal scales in more recent times.
@Neil Fisher: Is a global average temperature useful to us?
This is an important question. On less than a decadal scale I would say not: regional temperatures are more interesting to the region in question.
However two or three decades is enough time for regional phenomena to be assimilated into the global temperature. Hence regions can’t exhibit multidecadal phenomena because on that time scale they’ve become global.
So if you’re studying El Nino say then regional temperature is important. But if you’re studying a 75-year cycle, or a slow trend like global warming, then only global temperature matters.
That’s just my opinion, and I’m happy to be persuaded otherwise.
“This is an important question. On less than a decadal scale I would say not: regional temperatures are more interesting to the region in question.”
Ah – I gather from this reply that this is the reason for the 30 years minimum so often quoted as required for climate rather than weather. And while I certainly appreciate the logic (and yes, it does make sense!), I think that from what we can see from direct measurements (~150 years) and infer from proxies (up to millions of years), it is apparent there are pseudo-cycles (your saw) that extend across several orders of magnitude (perhaps best described with a fractal) – everything from day/night, through summer/winter, PDO/AMO, the approximately 80, 300 and 1500 year cycles, all the way through the 100k Milankovitch and perhaps beyond. It is difficult to believe (for me anyway) that we are aware of even the existence of some of these, let alone their magnitude and phase (where we are in terms of each cycle right now).
The point is: is the saw wave that you subtract to show the underlying trend merely bumps on the teeth of a yet longer pseudo-cycle that you have not considered (may not even be aware of)? Teeth on teeth, as it were (hence the fractal description earlier). It would certainly make an interesting study to determine the fractal dimension of your saw and then “zoom out” to the next level and re-apply your method – it seems to me that this may even provide a better fit to the available data (inc long term proxies) than the single level you have already calculated – certainly the change in the projections would be most interesting! It would certainly be difficult to calculate the exact phase relationships, and so there may be more uncertainty than we might hope (and than is displayed by your work so far), meaning that several projections (with the phase of the longer cycles differing) may match the available data, but this is quite intriguing to me. Alas, this is beyond my abilities and resources to investigate, but I hope you might be intrigued enough by the concept to investigate – if you do, I would very much appreciate another post here at Judith’s blog outlining the results!
Thanks for engaging with us here BTW – so many of your colleagues have been “burnt” by blog interactions in the past and I hope you will not be one of them. I believe that if you ignore the ad hom and other dross, you can pick up some useful “peer” review from interested laymen. I would like to think I fit that description, but I highly doubt it ;-)
Neil, I’m truly sorry I overlooked your second December 7 comment back when you made it. Thank you for your insight, as well as the intriguing idea that SAW might be just a part of a larger fractal.
However AGW makes even SAW hard to see, particularly after 1950, and I would expect it completely washes out anything much slower than SAW. With a reliable temperature record going back further than 1850 (and even 1850-1900 is a stretch according to many) one could do better.
The best candidates there would seem to be CET (Central England Temperature) for 1660-1900, the various ice core samples, and the International Tree Ring Data Bank maintained by NOAA.
Global warming would seem to have hit CET about a century earlier than HadCRUT3 because it’s so regional. While it has always reflected global temperature, Hubert Lamb’s premise, it samples only 0.01% of the Earth’s surface making it much more sensitive to industrialization in that region. The green curve here at Leif Svalgaard’s site starts out cleanly but is pretty ragged during 1900-1980, though it cleans back up after 1980, perhaps on account of more consistently applied emissions controls in that neck of the woods. (I’m using the Sun’s Hale cycle as a canary-in-the-mine because that portion of the climate spectrum seems to be unusually free of interfering natural signals compared to the rest.)
I believe that if you ignore the ad hom and other dross, you can pick up some useful “peer” review from interested laymen.
Yes, overall I’d say the response here has been a plus, at least in terms of my understanding of the issues raised.
The main downside of the “dross” is that it makes the thread much harder to follow. At some point I may try to address this by collecting the more salient criticisms, questions, and other contributions in one place.
Great questions, Max!
How realistic is this?
There are two parts to my analysis of HadCRUT3: describe, then explain.
Questions like yours about realism of a description can only concern explanations of it, not the description itself. As someone perfectly capable of doing the relevant arithmetic I stand strongly behind my description, but nowhere near as strongly behind my explanation. So to answer your question (which I take to be about explanation), not much until I get buy-in from others about whether my explanation (in terms of seismic events at the two mantle boundaries) is at all realistic.
Doesn’t Hansen posit a much longer time lag in his “pipeline” postulation?
Where? And how much longer?
You take out the TSI impact of the 11-year solar cycle. Does this smoothing also consider any other solar mechanisms (e.g. Svensmark), or does this even matter to the analysis?
The simple answer is that it doesn’t matter because the phenomenon Svensmark points to, namely the interaction between the galactic magnetic field and the Sun’s, operates on the same 21-year cycle that F3 removes.
However it’s an interesting question nonetheless. One grad student in hydrology asked me during the poster session this afternoon whether it would be ok for him to cite Svensmark’s paper in support of his analysis of cycles in Indian hydrology. I told him that the papers of Ney and Dickinson on the same subject in respectively 1959 and 1975 would serve that purpose much better, not only for priority but also because they did not have the axe to grind that Svensmark does.
How is the unusually high level of 20th century solar activity handled?
Numbers, please.
Is the 1940-1970 MRES smoothing for increased aerosols a convenient fit or are there empirical data to support it?
If you’re referring to the 1950-1980 “bump” in MRES, how is it “convenient?” I wish it would go away. Please play with the Excel spreadsheet so that you can see what I mean. To the question “who ordered that?” it wasn’t me.
Same question goes for the 1970-1980 reduction in MRES and the increase after 1990.
Same answer.
Was the intent of this study to end up with the underlying exponential warming curve or did that just happen after all the noise was filtered out?
Great question. My analysis was in two steps: describe, then explain.
The tendency in climate science has been to eyeball the data and proceed right away to the explanation. All along the “underlying exponential warming curve” was in the back of my mind, but it seemed to me intellectually dishonest to infer it from inadequately described data such as the 162 numbers in the raw HadCRUT3VGL time series, which was just a mess of numbers.
I addressed this concern by reducing 162 numbers to 9. Part of this was done by applying F3, which I estimate to reduce the dimension from 162 to 16.
By putting up with a poor R2 (well less than 1), one can typically lop off a few more dimensions.
In this case the dimensionality went from 16 to 9 with an R2 of 0.9998.
Whereas I only play a statistician on YouTube, MattStat/MatthewRMarler is a real statistician, so I would defer to him on the question of whether 16 -> 9 vs. 0.9998 was significant. What say you, Matt?
(I asked Persi Diaconis this question a couple of months ago and he inclined towards significance. Seems like an interesting question.)
Dr. Pratt, you write “The simple answer is that it doesn’t matter because the phenomenon Svensmark points to, namely the interaction between the galactic magnetic field and the Sun’s, operates on the same 21-year cycle that F3 removes.”
I have enormous difficulty with this claim, but my expertise is not sufficient to really dispute it. But I think it is wrong. As I understand things, Svensmark’s point relates to the strength of the sun’s magnetic field, which changes little over the Hale cycles. It is far more related to the sort of measurements Livingston and Penn are making on the magnetic strength of sunspots, which has been decreasing steadily ever since measurements started around 1998. I don’t think that this magnetic effect shows any sort of 22 year cycle. It is more likely associated with the 189 year cycle of planetary alignment.
But we really need someone like Leif Svalgaard to comment on this claim.
vrpratt: All along the “underlying exponential warming curve” was in the back of my mind
This was what I meant by finding the correct filter to match someone’s expectation. And, if in fact that is the correct function, then you found the best filter to reveal it. There is a symmetry: if you know the characteristics of the noise, you can design a filter that will reveal the signal; if you know the signal, you can filter away ad lib until you have revealed the signal. If both are in doubt, a clear result is ambiguous. There are different ways to say this: instead of testing a hypothesis, one may say that you have “rescued” the hypothesis. Or, the procedure itself has a type 1 error rate of 1.0, when the nominal value is 0.05.
Or, consider the hypothesis that there is a 1000 year period, and we are nearing or in the “next” peak: (a) you can assume it’s true and filter until you have it clearly confirmed; or (b) you can assume it’s false and filter until it’s removed (which you did by focusing on the short recent time series.)
To test whether you have found something reliable, keep your full model prediction: model = “smooth” + “rough”; compare to future data.
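The type-1-error point here is easy to demonstrate: aggressive low-pass filtering manufactures a smooth, slow-wandering curve even from pure noise, inviting over-interpretation as signal. A minimal sketch (the filter widths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.standard_normal(162)  # pure noise: no signal at all by construction

# Heavy low-pass: repeated box filtering (widths chosen arbitrarily)
smooth = noise.copy()
for w in (21, 17, 13):
    smooth = np.convolve(smooth, np.ones(w) / w, mode="same")

# The smoothed curve has far less variance but wanders slowly,
# and to the eye looks like a "trend" or "cycle" that isn't there.
print(noise.var(), smooth.var())
```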
MattStat, “one may say that you have “rescued” the hypothesis.”
I like that. Couldn’t you compare “rescue” attempts? Use the same rescue on solar and compare. When you get into smoothing you are throwing away lots of information, which calls for some reasonable standard method of validation.
https://lh3.googleusercontent.com/-7yQZf4cRGG0/UL4YWvB6AnI/AAAAAAAAF3Q/M50BKvLQWzk/s720/TSI%252015%2520year%2520lag%2520with%2520sh.png
I threw in solar TSI smoothed with an 11-year moving average and used a 15-year lag on it. With a little SAW I could nail solar pretty easily.
vrpratt: Whereas I only play a statistician on YouTube, MattStat/MatthewRMarler is a real statistician, so I would defer to him on the question of whether 16 –> 9 vs. 0.9997 was significant. What say you, Matt?
It’s really hard to tell.
If you would like your modeling result to be taken seriously as a guide to future planning (I don’t mean to presume to know your motives), then keep track of the squared prediction error, the cumulative sum of the squared prediction errors (CUSUM), and the square root of the mean squared prediction error (RMSE) over the next 20 years. That will provide better information for whether you have a significant result, by any definition of significant.
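A running tally of those quantities takes only a few lines; the predicted and observed anomalies below are invented purely for illustration:

```python
import numpy as np

def track_errors(predicted, observed):
    """Running squared errors, their cumulative sum (CUSUM), and the
    RMSE-to-date, updated as each new year of data arrives."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    sq = (observed - predicted) ** 2
    cusum = np.cumsum(sq)
    rmse = np.sqrt(cusum / np.arange(1, len(sq) + 1))
    return sq, cusum, rmse

# Hypothetical model anomalies vs. observations for five future years:
sq, cusum, rmse = track_errors([0.50, 0.52, 0.54, 0.56, 0.58],
                               [0.48, 0.55, 0.51, 0.60, 0.57])
print(rmse[-1])  # overall RMSE after five years
```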
I used to “lean” toward AGW, and a result like you got. Now I “lean” toward the 1000 year cycles and the idea that we are near a peak. The two ideas make sharply different forecasts for the next 20 years. I am 65, so I may not live long enough to decide which leaning was correct. But the test of the model is in the future data. If you see Dr. Diaconis soon, I would be interested in his responses to my comments. I can’t really believe they are worth his time, but who knows?
vrpratt and MattStat
re: 1000 year cycle
Loehle and Singer evaluated nine temperature reconstructions and found a climate cycle about 1500 years (or 1200) long that may correspond to the Pleistocene Dansgaard-Oeschger (DO) oscillations. See:
Craig Loehle and S. Fred Singer, Holocene temperature records show millennial-scale periodicity. Canadian Journal of Earth Sciences Vol. 47, pp. 1327-1336 (2010).
Bond event zero?
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Vance2012-AntarticaLawDomeicecoresaltcontent.jpg
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00003.1?journalCode=clim
vrpratt and MattStat
On AGW attribution, what is your NULL hypothesis?
Your filtering exercise shows an underlying accelerating upward trend. However, I also expect a natural underlying accelerating upward trend from the 1500 year cycle. e.g. there is
1) a mild global cooling from the Holocene Climatic Optimum
2) A millenial scale oscillation of ~ 1500 years per Loehle & Singer above
(i.e. an approximately linear rise from the Little Ice Age – or better
an accelerating natural warming since the LIA)
3) A 50-60 year multidecadal oscillation.
4) A 22 year Hale solar cycle (or conventionally the 11 year Schwabe cycle) (See WGR Alexander et al. (2007) Linkages between solar activity, climate predictability and water resource development).
How then do you distinguish anthropogenic contributions, from CO2 etc. from that Null Hypothesis?
5) AND account for socio-economic impacts on the temperature data.
(See Ross McKitrick’s recent papers)
Robustness
Have you evaluated the potential to hindcast/forecast from two portions of the data and then compare the projections against the third portion?
Have you any comments on the relative physical and statistical validity of your methods compared with those of Nicola Scafetta? cf
Testing an astronomically based decadal-scale empirical harmonic climate
model versus the IPCC (2007) general circulation climate models
or see the links above
Physicality
Consider the recent WUWT comments of physicist Robert Brown of Duke U
On certainty: Truth is the Daughter of Time
So what do your results mean?
Do I understand you to assume an exponential CO2 rise to cause an exponential temperature rise?
From Beer’s law etc., the warming contribution of CO2 is logarithmic in the concentration. Consequently wouldn’t the combined impact of exponential and logarithmic be an approximately linear warming contribution? e.g. as per Scafetta 2011 above?
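The arithmetic behind this can be checked directly: if concentration grows as C(t) = C0·e^(kt), then a logarithmic response S·log2(C/C0) equals S·k·t/ln 2, which is exactly linear in t. A sketch with illustrative, not fitted, numbers:

```python
import numpy as np

C0, k, S = 280.0, 0.006, 3.0   # ppm, 1/yr growth rate, K per doubling (all illustrative)
t = np.arange(0, 200, 10.0)    # years

C = C0 * np.exp(k * t)         # exponential concentration growth
dT = S * np.log2(C / C0)       # logarithmic (Beer's-law-style) temperature response

# Analytically, dT = S * k * t / ln(2): exactly linear in t
expected = S * k * t / np.log(2)
print(np.allclose(dT, expected))  # prints True
```

Note that the poster instead posits a *raised* exponential (natural background plus exponential emissions), whose logarithm is no longer linear.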
But it is very telling that much else is happening on decadal scales in more recent times.
@David L. Hagen: On AGW attribution, what is your NULL hypothesis?
That there is no essential difference between the AHH law as I formulated it and observed global warming as I defined it.
How then do you distinguish anthropogenic contributions, from CO2 etc. from that Null Hypothesis?
By its good fit to the data since the onset of serious industrialization. I would be very interested to see how well your null hypothesis fits the data over that period. My expectation would be that a good fit is not possible, but I enjoy being surprised in matters of science.
@David L. Hagen: Have you evaluated the potential to hindcast/forecast from two portions of the data and then compare the projections against the third portion?
That’s a rather low bar compared to simply deleting the last few decades. My analysis depends critically on the Keeling curve, so if you try to predict 2010 by deleting all data after 1980, you only have 22 years of Keeling curve to go on. If instead you delete a middle third while retaining the last third as you suggest then you’ve retained the critical part of the Keeling curve, which sounds like cheating.
What you really want to know is how well the model predicts when you don’t know any part of the future, not just HadCRUT3 but also the Keeling curve.
So I deleted everything after 1980 and did the fitting based only on data from 1850 to 1980. (So far only RobertInAz has shown any interest in auditing my work—he’s welcome to play with http://clim.stanford.edu/hadcrut3to1980.xls which does all this.) Here are the changes from fitting to HadCRUT3 to 2010.
The parameters are in three groups, the timing/amplitude of SAW, its shape, and the three CO2 parameters.
Timing/amplitude barely changed:
Period: 151 years no change
Trigger: 1924.46 –> 1924.47 (essentially no change)
Amplitude: 1.80 –> 1.81 (very small change)
The three shape parameters controlling the amplitudes of the 4th and 5th harmonics and their common phase shift also barely changed:
Amp4: 0.14 no change
Amp5: 0.47 –> 0.44 slight decrease
Phase Shift: .03 –> .032 slight increase
The biggest change was in estimating CO2 and its impact.
Natural CO2: 287.3 –> 281.1 (large decrease)
Sensitivity: 2.83 –> 2.43 (large decrease)
I blame these big shifts in those two parameters on our rather incomplete understanding of CO2 up to 1980, relative to what we have now, namely 1850-2010.
Hansen delay did not change:
Hansen delay: 15 years no change
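The truncate-at-1980-and-refit exercise can also be sketched outside Excel. The model below is a deliberately simplified stand-in, not the poster’s actual AHH fit: natural CO2 is held fixed, the emissions curve is a made-up exponential, the fit is linear least squares, and the “true” sensitivity of 2.8 merely echoes the poster’s fitted 2.83.

```python
import numpy as np

NATURAL = 287.0  # ppm, held fixed in this sketch (the poster fits it)

def co2_regressor(years):
    """log2 of total/natural CO2, with a purely hypothetical exponential
    emissions curve standing in for the CDIAC/Keeling data."""
    anthro = np.exp((years - 1850.0) / 40.0)
    return np.log2((NATURAL + anthro) / NATURAL)

def fit(years, obs):
    """Least-squares fit of obs = base + sensitivity * co2_regressor(years)."""
    X = np.column_stack([co2_regressor(years), np.ones_like(years)])
    (sensitivity, base), *_ = np.linalg.lstsq(X, obs, rcond=None)
    return sensitivity, base

rng = np.random.default_rng(1)
years = np.arange(1850, 2011, dtype=float)
obs = 2.8 * co2_regressor(years) + 0.05 * rng.standard_normal(len(years))

s_full, _ = fit(years, obs)
s_trunc, _ = fit(years[years <= 1980], obs[years <= 1980])
print(s_full, s_trunc)  # sensitivity fitted to 1850-2010 vs. to 1850-1980
```

Even in this toy setting the truncated fit is noticeably less certain, since most of the CO2-driven curvature lies after 1980, which is the mechanism behind the parameter shifts reported above.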
Is there a peer-review process for posters at AGU meetings?
no
Seems to me that this blog is doing a pretty good job already of peer-review.
25 comments from about a dozen individuals in just a couple of hours.
A process similar to this will be the future replacement for old-fashioned peer-review…and far better than Phil Jones’s gut feel:
‘ I have a feel for whether something is wrong – call it intuition. If analyses don’t seem right, look right or feel right, I say so’
aut viam inveniam aut faciam
Yeah, ain’t it great. And medical decisions will be made in the same way. Need a new liver? We better post on the blog and see what folks say. There is a difference between feedback and peer review.
I would not make too much of the lack of peer-review for the AGU general meeting. It’s been that way for years and those attending know that to be the case. I’ve viewed it as ‘well, good here is an open forum for members.’ It serves multiple purposes–good and not so good, depending on point of vision. Yawn.
‘point of vision’ –>> ‘point of view’ (oops, thinking of Persistence of Vision)
mwg must have practiced medicine in the recent past, because medical references seem to be popping up from time to time ;)
Peter
No medical practice. Medicine is interesting from a number of perspectives including uncertainty, science, rules-of-thumb, ethics, etc., and it has been a significant area of interest over the years for decision theory and artificial intelligence researchers in part because it has non-trivial elements of uncertainty, serious outcomes, extra-medical considerations, etc. The uncertainty and gravity surrounding some medical decisions and the sheer number of these decisions suggest that they might provide useful insights for some of our ‘one-chance’ environmental decisions. Just a personal bias operating here ;O)
mwg your take on the ubiquity of medical issues and their relevance to theories of general decision-making is an interesting and valid one. I have often thought, however, that the better practitioners were slower and more methodical in their approach to diagnosis and were better in patient relationships.
The very bright ones seemed more remote and bored with the whole thing and yet, to get into medical school in Australia one had to be pretty bright. There are, however, a few very hard workers who manage to get the pass levels required for Uni entrance and these ones seem generally to be better at their vocation.
@Latimer Alder: Seems to me that this blog is doing a pretty good job already of peer-review.
Lord Monckton took “peer review” quite literally.
Poster sessions are a means of gathering critical information on an idea you think is good. It might also have a hole a truck could be driven through that you missed in the blinding flash of creative genius that generated the poster session. Someone will wander up and stare at your work for a bit and then say, “have you considered …?” You will either then say “of course …” or stare back like a pole-axed steer. The obvious is often unaccounted for.
Yes. What a pity the authors didn’t take that approach before submitting the paper with this scheme to the journal:
http://judithcurry.com/2012/08/24/a-modest-proposal-for-sequestration-of-co2-in-the-antarctic
They got plenty of advice on this thread, but the paper has already been submitted and the Lead author did not accept most of the comments:
Presentation of an AGU general meeting 10-minute talk or a poster session is not viewed as anywhere near the equivalent of peer-reviewed journal publication. It serves other purposes.
Yes, though not at the journal level. It is easier to find AGU-FM posters that contradict each other than Nature articles.
Has the temperature data been considered in light of Dr. Ross McKitrick’s statistical analysis that brings doubt to its validity? I am not sure if the HadCRUT3 data has similar problems.
And…I am NOT a scientist so I suspect my question above might have already revealed as much ;-) I just wonder if the underlying data regarding warming is accurate.
Mike, in addition to data accuracy, there are quite a few filters and delays in the pipeline. These can do magic to get things aligned.
How do we validate the Arrhenius-Hofmann-Hansen’s Law? Any suggestions by the authors of the law anywhere in the science?
The historical record would show that one could not have a change in temperature without a change in atmospheric CO2.
There would never be a case where temperature fluctuated and CO2 didn’t or that CO2 fluctuated and temperature didn’t.
I refute it thus
http://i179.photobucket.com/albums/w318/DocMartyn/last400KYEPICADomeCCO2vsTemp.jpg
How do we validate the Arrhenius-Hofmann-Hansen’s Law?
“Control-Knob Law” is a lot easier to say.
“Control-Knob Law” is a lot easier to say.
Control is a redundancy.
LOL Doc !!!
These might be of interest to folks whose knowledge of calculus and statistics is not covered with rust, as mine is. “Harmonics” caught my eye; my recollection is that “harmonics” can cancel or amplify component signals.
Markonis, Y., and D. Koutsoyiannis. “Climatic Variability over Time Scales Spanning Nine Orders of Magnitude: Connecting Milankovitch Cycles with Hurst–Kolmogorov Dynamics.” Surveys in Geophysics (2012). doi:10.1007/s10712-012-9208-9
Related to paywalled paper Scafetta, 2012?
Scafetta, Nicola. “Multi-scale Harmonic Model for Solar and Climate Cyclical Variation Throughout the Holocene Based on Jupiter–Saturn Tidal Frequencies Plus the 11-year Solar Dynamo Cycle.” Journal of Atmospheric and Solar-Terrestrial Physics 80, no. 0 (May 2012): 296–311. doi:10.1016/j.jastp.2012.02.016
http://www.sciencedirect.com/science/article/pii/S1364682612000648
Pooh, Dixie
For preprints see Nicola Scafetta’s home page, e.g. Scafetta N., 2012, Multi-scale harmonic model for solar and climate cyclical variation . . . PDF. PS: Scafetta’s updated graph is at the page bottom.
For full presentation graphics see Scafetta’s Presentation at 2012 SORCE Science Meeting – Annapolis, MD, 18-19 Sept., 2012.
For his earlier technical paper & discussion see:
Scafetta N., 2012. Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. (Science and Public Policy Institute). Web: PDF
As reported by
Watts, Anthony, Y. Markonis, and Demetris Koutsoyiannis. “New Paper from Markonis and Koutsoyiannis Shows Orbital Forcings Signal in Proxy and Instrumental Records” Scientific. Watts Up With That?, November 4, 2012.
http://wattsupwiththat.com/2012/11/04/new-paper-from-markonis-and-koutsoyiannis-shows-orbital-forcings-signal-in-proxy-and-instrumental-records/
Markonis and Koutsoyiannis, 2012
http://www.itia.ntua.gr/en/docinfo/1297/
http://www.springerlink.com/index/10.1007/s10712-012-9208-9
David L. Hagen: Many thanks! :-)
Dr Pratt
A law no less, and to think Riemann only has a hypothesis to his name.
LAW
1) An empirical generalization; a statement of a biological principle that appears to be without exception at the time it is made, and has become consolidated by repeated successful testing; rule (Lincoln et al., 1990)
2) A theoretical principle deduced from particular facts, applicable to a defined group or class of phenomena, and expressible by a statement that a particular phenomenon always occurs if certain conditions be present (Oxford English Dictionary as quoted in Futuyma, 1979).
3) A set of observed regularities expressed in a concise verbal or mathematical statement. (Krimsley, 1995).
THEORY
1) The grandest synthesis of a large and important body of information about some related group of natural phenomena (Moore, 1984)
2) A body of knowledge and explanatory concepts that seek to increase our understanding (“explain”) a major phenomenon of nature (Moore, 1984).
3) A scientifically accepted general principle supported by a substantial body of evidence offered to provide an explanation of observed facts and as a basis for future discussion or investigation (Lincoln et al., 1990).
4) 1. The abstract principles of a science as distinguished from basic or applied science. 2. A reasonable explanation or assumption advanced to explain a natural phenomenon but lacking confirming proof (Steen, 1971). [NB: I don’t like this one but I include it to show you that even in “Science dictionaries” there is variation in definitions which leads to confusion].
5) A scheme or system of ideas or statements held as an explanation or account of a group of facts or phenomena; a hypothesis that has been confirmed or established by observation or experiment, and is propounded or accepted as accounting for the known facts; a statement of what are held to be the general laws, principles or causes of something known or observed. (Oxford English Dictionary, 1961; [emphasis added]).
6) An explanation for an observation or series of observations that is substantiated by a considerable body of evidence (Krimsley, 1995).
Forgot to add auxiliary hypothesis to my previous list. Given the nature of the blog topic I feel it would not be complete without it.
The introduction of an auxiliary hypothesis should always be regarded as an attempt to construct a new system; and this new system should then always be judged on the issue of whether it would, if adopted, constitute a real advance in our knowledge of the world. An example of an auxiliary hypothesis which is eminently acceptable in this sense is Pauli’s exclusion principle. An example of an unsatisfactory auxiliary hypothesis would be the contraction hypothesis of Fitzgerald and Lorentz, which had no falsifiable consequences but merely served to restore the agreement between theory and experiment.
“Auxiliary hypothesis” implies that there is a main hypothesis. I prefer to think of hypotheses competing on an even field, rather than in terms of a reigning champion and a challenger.
“…but meanwhile you go to the conference with the poster you have, not the poster you want… In any event this poster is offered as a preliminary report on work still in progress…”
Are we looking at the graph with the jaundiced eye of a skeptic, for the sake of the health and credibility of future scientific endeavors, or with a shopkeeper’s desire to fill shelf space with whatever the government will buy?
Neither. I’m looking at HadCRUT3 with an eye to describing it as simply as possible. I would be thrilled if anyone could offer me a simpler description.
It’s the utility of the description you should be worried about (and this is context specific), not the simplicity. Simplicity can be useful, but not always.
The simplest description is that it’s a temperature series.
I fear for the future of Western civilization. AGW prognosticating has become a smithy’s craft. From an age of technology, reason and hope we have turned a dark corner to the disinformation age where schoolteachers trade in their sheepskins for hammers to pound out the coffin nails to be used to bury science.
So if this is a work in progress, what are you shooting for? A milliKelvin?
Free from preconception and bias, what can we really know about the theory that humans cause global warming?
■We know that global warming is not proven science. Just what is the circumstantial evidence for global warming?
■We know that climate change is not unusual. It’s not even unusually rapid.
■We also know that the myth of a scientific consensus belies the actual fact of an ideologically-driven consensus supported by fraud and corruption.
■We know that the global warming alarmists have become further and further removed from the kind of rationalism that a dispassionate search for truth requires.
■We see the failure of academia and note its precipitous decline in a sense of truthfulness among AGW scientists in proportion to the reality-inspired cognitive dissonance of the confused Climatology belief system.
■We see global cooling. We see all of the other completely natural explanations for climate change that global warming alarmists ignore.
■We know now about all of the errors in historical land measurements, and how NASA is the next CRU; and, we know how more accurate evidence from satellite data does not show any dangerous global warming at all.
■We have learned that the atmospheric CO2 levels as measured at Mauna Loa are totally erroneous — the mere product of a cottage industry of fabricating data by a father and then his son.
■We all smelled the carcass of stinking fish in Copenhagen and the Leftist-lib agenda is all too clear to ignore the real truth about the global warming hoax.
Some circumstantial evidence is very strong, as when you find a trout in the milk. ~Henry David Thoreau
Wagathon, you’re a loon. Please remember not all of us are when you start tossing around “we.”
It should be implicit that when I use the royal “we,” I do not mean to include hypocrites. Is that better?
If by “better” you mean better at showing you’re a loon… yes. It is better. Otherwise, you’re saying anyone who disagrees with your stupid comments is a hypocrite, and that’s…
Loony.
For example, understanding that global warming is not a proven science and that there is no circumstantial evidence for global warming alarmism — which is why we see political charlatans like Al Gore showing debunked graphs like the ‘hockey stick’ to scare the folks — and understanding that climate change is the usual thing, not the unusual thing, and that the climate change we observe can be explained by natural causes, is the only thing that really separates we the people from superstitious and ignorant government-funded schoolteachers on the issue of global warming… that and the fact that global warming alarmists do not believe in the scientific method nor most of the principles upon which the country was founded.
I’d say that comment pretty well demonstrates it. Either people who disagree with you, including our hostess, are hypocrites and apparently ignorant, or you’re a loon.
Guess which seems more likely.
Don’t be embarrassed if you do not understand that the ‘we’ of science prefer the scientific method to guessing. Bob Carter points to what is missing–independent verification: “the essence of scientific methodology is the free sharing of data, and the unfettered and unprejudiced discussion of those data. Issuing statements of ‘consensus’ or ‘authority’ is antithetical to good science, and especially so in circumstances where the originating organizations have been established with political intent, have acted to restrict public debate or have a financial conflict of interest. Those familiar with the global warming issue will know that (IPCC) authority rules, despite it being well known that some IPCC practitioners of warming alarmism have flouted correct scientific procedures since the 1990s. And, anyway, a science truth is so not because the IPCC, the Royal Society or the Minister for Science asserts it to be so, but because it is based upon a hypothesis that has survived repeated testing by many independent scientists.”
What, successfully evading testing doesn’t count? How naively unpolitical of you!
I think this thread gives a further indication of just how desperate the warmists, including our hostess, are becoming. The more empirical data we collect, the more it gives a stronger and stronger indication that adding CO2 to the atmosphere has a negligible effect on global temperatures. This is, of course, heresy; it does not conform to the religion of CAGW. As Ronald Coase noted, “If you torture the data long enough, it will confess.” Unfortunately, torturing data is not an indictable offense in any country. Otherwise I would be delighted to make a citizen’s arrest, and turn Vaughan over to the authorities for prosecution.
I note our hostess claims that this is a technical post. I beg to differ. This is sheer propaganda.
Yes, the certainty (no significant effect) is growing and the desperation of warmists too.
Oh do come off it Jim, the use of epicycles to explain and predict complex phenomena has a long history. When they show the instruments of torture to the ‘denialists’ you will change your tune.
There is no need to torture data anymore, the drones will save us all.
The Cripwell:
I note our hostess claims that this is a technical post. I beg to differ. This is sheer propaganda.
And this is sheer denialism.
And this is nothing but a loser who has nothing better to do with his life than to go around the internet and regurgitate political slogans like ‘denialism’.
Andrew
Do you have to stop breathing when you type, I wonder?
Jim you are a hundred percent right on carbon dioxide having a negligible effect. I count it as zero because of Ferenc Miskolczi and because of the failed predictions from IPCC. Miskolczi elaborated his theory in a 2007 paper according to which the greenhouse gases collaborate by feedbacks to keep the IR transmittance of the atmosphere constant. He even calculated theoretically that the optical thickness of the atmosphere in the infrared should have a value of about 1.86. This corresponds to an IR transmittance of 15 percent. He was attacked in the blogosphere because his theory requires water vapor feedback to be negative, the exact opposite of IPCC. This was vital to these guys who needed the positive feedback to produce their outrageous warming predictions. The theory was up in the air until 2010 when Miskolczi was able to put it to an experimental test. What was required was a direct comparison between his theory and the greenhouse theory on infrared absorption by the atmosphere. His theory says that the IR transmittance of the atmosphere should not change when more CO2 is added to it. With the greenhouse theory the opposite is true – adding CO2 will lower the IR transmittance of the atmosphere. Using the NOAA database of weather balloon observations Miskolczi was able to demonstrate that the IR transmittance of the atmosphere did not change for 61 years while the carbon dioxide percentage increased by 21.6 percent. This is exactly what his theory had predicted and was a clear victory over IPCC. If so, it follows, we should be able to show in other ways whether the greenhouse effect works or not. Fortunately we can. In 2007 IPCC predicted from the greenhouse theory that global warming in the twenty-first century shall proceed at the rate of 0.2 degrees per decade. We are now in the second decade of this century and there is no sign whatsoever of this predicted warming.
The fate of scientific theories that make wrong predictions is to be consigned to the trash heap of history. The greenhouse theory has already made two wrong predictions: first, that adding carbon dioxide to air will reduce atmospheric IR transmittance (it didn’t); and second, that it will cause twenty-first century warming (it didn’t). That’s enough to earn it a place in that trash basket of history. Greenhouse warming theory, R.I.P.
This is the most recent of two decades’ worth of work trying to identify periodic filters and decay rates that can smooth the observed trend and get a relatively straightforward function of CO2 as a result. This is either the Holy Grail or else a carefully constructed flimsy imitation. That is: if this is the signal of CO2, you have constructed the best filters to reveal it; if it is not, you have constructed the best filters to reveal something conforming to someone’s expectations.
Whether you have found the signal of CO2 is as uncertain as with all the other phenomenological model fitting efforts.
The best test of models is how well they are matched by future data. What is your model for the data collected after the last of the data used in estimating model parameters? What is your model for the next 30 years, say possibly 3 models as Hansen did for 3 realistic CO2 scenarios?
What is your estimate of the transient climate effect, say a doubling of CO2 over a span of 70 years?
There is an apparent period of 1000 years or so, that produced the Minoan Warm Period, Roman Warm Period, Medieval Warm period, etc. That is, it is “apparent” to some. If you subtract out the best estimate of that periodic function, how much remains to be accounted for by CO2?
Remember to smile, wink and chuckle when you say “millikelvin accuracy” and “99.98%” aloud.
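The millennial-cycle alternative raised above can be made concrete: over a 161-year window, a 1000-year sinusoid near its peak is nearly indistinguishable from a slow rise, so a free least-squares fit of such a cycle absorbs almost all of a linear trend. A sketch:

```python
import numpy as np

# Over 160-odd years, a 1000-year sinusoid approaching its peak looks
# locally like a slow rise, so it can soak up most of any trend.
years = np.arange(1850, 2011, dtype=float)
trend = 0.005 * (years - 1850)   # 0.8 K of purely linear warming

# Least-squares fit of a 1000-year sinusoid (period fixed, amplitude,
# phase and offset free, via the sin/cos/constant basis):
w = 2 * np.pi / 1000.0
X = np.column_stack([np.sin(w * years), np.cos(w * years), np.ones_like(years)])
coef, *_ = np.linalg.lstsq(X, trend, rcond=None)
residual = trend - X @ coef

print(residual.var() / trend.var())  # fraction of variance the cycle fails to absorb
```

Which is the point: the short instrumental record alone cannot discriminate between the two hypotheses; only longer proxies or future data can.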
Just to add a couple of comments in the same vein.
First, why not subtract some CO2 series (possibly one with some basis in reality) from the temp series and then play around to explain the residue? Answer that question and you start to expose the problem with the reverse approach – methodologically they are much the same.
Second, why not hold out 50% of your temp time series (random selection of each data point perhaps), do your analysis on one half and check the fit to the other? You don’t even need to wait for the future to see if it works.
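The second suggestion is easy to try with a synthetic stand-in for the temperature series. One caveat worth flagging: for an autocorrelated series a random pointwise split is an optimistic test, since every held-out year has near neighbors in the training half; a blocked or strictly forward split is stricter.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(161, dtype=float)  # years since 1850
# Synthetic anomaly series: quadratic trend plus noise (stand-in for HadCRUT3)
y = 2e-5 * t ** 2 + 0.1 * rng.standard_normal(len(t))

# Random 50/50 split of individual data points, as suggested
mask = rng.random(len(t)) < 0.5
coeffs = np.polyfit(t[mask], y[mask], 2)   # fit one half
pred = np.polyval(coeffs, t[~mask])        # predict the other half

rmse = np.sqrt(np.mean((pred - y[~mask]) ** 2))
print(rmse)  # near the noise level (~0.1) if the fit generalizes
```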
On Dec 4 your post refers to Dec 4 as “tomorrow”. Is a puzzlement!
tangled web weaving is tricky business
I mailed my post to Judith on December 3 and she said she’d post it that evening. Due to time pressure she wound up posting it on the morning of Dec. 4 without editing it accordingly. Less puzzled now, King of Siam?
Thanks for clearing that up, Anna.
Yep. I’m even whistling a happy tune, that I picked up on the docks.
From 1850 to 1980 SO2 emissions rose from almost 0 to 140,000 gigagrams per year; they then dropped to about 110,000 gigagrams around 2000 before starting to rise again as China began to burn a lot more coal.
http://sunshinehours.wordpress.com/2012/09/14/are-we-cooling-the-planet-with-so2/
You forgot to account for a 7-Pinatubo rise of SO2 by 1980 and a 1-Pinatubo drop from 1980 to 2000.
Good point, one that I had the very fortunate opportunity to discuss with Pieter Tans from NOAA Boulder this afternoon when he dropped by my poster. (Perhaps I should call him Al since he and James Butler are the two al’s in my “Hofmann et al” in the poster.)
SO2 (which cools) and brown cloud pollution (which warms) are too well correlated to separate. The only question is which dominates. MRES suggests brown cloud dominates, which Pieter had no quarrel with. Had MRES gone down instead of up it would have supported SO2 dominating.
sunshine
The problem with the aerosol explanation for the mid-century cooling is that it raises the question:
If human aerosols (SO2) were responsible for the mid-century cooling, could it not be that their removal was largely responsible for the late century warming (rather than GHGs)?
Hans Erren has plotted this for the USA with the same line of reasoning:
http://members.casema.nl/errenwijlens/co2/usso2vst.gif
Max
“could it not be that their removal was largely responsible for the late century warming (rather than GHGs)?”
Of course.
If 1 Pinatubo of CO2 causes .5C warming, then removing 1 Pinatubo of SO2 should cause .5C of warming.
Ooops. “If 1 Pinatubo of SO2 ” (not CO2 which I typed inadvertently)
” the oceanic heat sink delays the impact of radiative forcing variations on HadCRUT3 by 15 years”
Does this mean the current temperature plateau has its root in what happened 15 years earlier?
Well the bottom of the Oceans is at 4 degrees, so the water there came from polar sources: melted ice/seawater or chilled winter high-salt brines.
The movement of cold water to the bottom is why the Oceans are cold. The Oceans are not cold at the bottom because they are at thermal ‘equilibrium’ with the sea bed.
Strange but true:
The average temperature of the ocean, top to bottom, is 3.9C.
The temperature below 300 meters is a fairly constant 3C all over.
The freezing temperature of seawater is -1.8C.
The maximum density of seawater occurs at -1.8C.
Matter at 4C has a radiative emittance of 335W/m2.
The power delivered to the earth from the sun is 1366W/m2 at top of atmosphere.
Projecting solar power onto a sphere reduces it by a factor of 4 to 341W/m2.
It could be just a coincidence that the average temperature of the ocean is almost precisely that of a spherical black body illuminated by a 1366W/m2 source. Then again maybe it isn’t just a coincidence.
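The coincidence claimed above can be checked with the Stefan-Boltzmann law. This is an illustrative sketch, not part of the original thread; it uses the standard value of the Stefan-Boltzmann constant and the figures quoted in the comment.

```python
# Check the "strange but true" numbers: mean absorbed solar flux vs the
# black-body emittance at the quoted mean ocean temperature of 3.9 C.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
TSI = 1366.0             # solar irradiance at top of atmosphere, W/m^2

# A sphere intercepts sunlight on a disc but radiates from its whole
# surface, so the mean flux is TSI divided by 4.
absorbed = TSI / 4.0                 # ~341.5 W/m^2

T_ocean = 273.15 + 3.9               # mean ocean temperature in kelvin
emitted = SIGMA * T_ocean**4         # black-body emittance, ~334 W/m^2

print(f"absorbed: {absorbed:.1f} W/m^2, emitted: {emitted:.1f} W/m^2")
```

The two numbers land within a few W/m2 of each other (about 334 vs 341.5), which is the near-coincidence the comment is pointing at.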
I agree with Mr. Pratt’s general analysis but not the extrapolation going forward. We are only now beginning to see some of the Earth system feedbacks begin to kick in from previous decades of CO2 emissions. An ice-free summer Arctic and melting permafrost are just two of the Earth system feedbacks that will alter the future shape of the curve. This quadratic rounding or leveling may occur later in the century or next century, but we’ve got several jumps upward to come in the decades ahead. This is just getting started.
Skeptical warmist
Baseless beliefs of certainty are for religious zealots and realclimate bloggers. Here we expect more.
As Niels Bohr pointed out decades before Yogi Berra, “Prediction is very difficult, especially about the future.” Please don’t view my extrapolations as predictions; there’s a difference. As extrapolations they are perfectly fine.
Ray Pierrehumbert told me this morning at AGU that the permafrost threat was greatly overblown. I have no opinion either way, so if you disagree with Ray please take it up with him, not me.
My comment is more about Earth system feedbacks. Hansen’s last few papers have made some excellent points about these, as well as the overall Earth system level of equilibrium. Once big things like permafrost, Greenland and Antarctica really start to change, the feedbacks fall heavily into the positive side of things. This rounding of your curve of course does not and cannot include these, but they could be significant.
Wondering how much Ray is an expert on permafrost, or up on the latest research there. Really, I’m wondering: I’ve got no idea what his expertise in this very specialized area is, for the latest research would seem to contradict his comments:
http://www.newscientist.com/article/dn22549-arctic-permafrost-is-melting-faster-than-predicted.html
Hi Dr. Pratt
There is no beef in the CO2; the beef is in the mid-Atlantic ridge.
http://www.vukcevic.talktalk.net/SST-NAP.htm
It is shame you left out the bit about the low frequencies and the ocean floor.
A shame indeed. But I only attributed ocean oscillations to that effect. Are you able to account for global warming the same way?
Northern Hemisphere yes, ENSO possibly; not to within a millikelvin, and not globally, but to a point and to a degree where it can be taken seriously, hopefully by those to whom the true cause matters more than personal conviction.
this doesn’t make any sense to me
it’s making more sense to me now, the order of graphs confused me
Well done, and with maybe two more parameters, Dr. Pratt might be able to reduce the residual well into the microkelvin range! And congratulations to the people who have measured and processed surface temperatures for over a century for their remarkable achievement.
Actually I’ve pretty much had it with these least-square fits to a single signal with only a few degrees of freedom “explaining” everything and more, with wildly diverging conclusions but always stated with confidence. Whether done by professional scientists or amateurs. In spirit, it comes pretty close to astrology in my opinion.
Three more parameters and he can draw an elephant. Oh goody.
About right. For an R2 of 99.98% I’d calculated seven, shall we split the difference? ;)
The occurrence of the records by decade [i.e., by decade by state]… makes it obvious that the 1930s were the most extreme decade and that since 1960, there have been more all-time cold records set than hot records in each decade.
However, there are only 50 states, and this is a number that isn’t large enough to give the best statistical results… [a better metric is the] year-by-year number of daily all-time record high temperatures from a set of 970 weather stations with at least 80 years of record… There are 365 opportunities in each year (366 in leap years) for each of the 970 stations to set a record high… Note the several years above 6000 events prior to 1940 and none above 5000 since 1954. The clear evidence is that extreme high temperatures are not increasing in frequency, but actually appear to be decreasing. The recent claims about thousands of new record high temperatures were based on stations whose length-of-record could begin as recently as 1981, thus missing the many heat waves of the 20th century.
John R. Christy, PhD, Alabama State Climatologist, The University of Alabama in Huntsville / Senate Environment and Public Works Committee,
1 August 2012 (One Page Summary)
(Data: NOAA/NCDC/USHCNv2)
FAIL
A Cornell statistics professor explains why you do not smooth time series.
And if you do, you never ever use the smoothed data as input to another analytic.
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. ~John von Neumann
There are three kinds of lies: lies, damned lies, and statistics. ~Mark Twain
Lessee, first you massage away inconvenient data (sawtooth). You pretend that there was a semblance of something that could be called global temperature sensing in the late 19th and early 20th century. You smooth the questionable, massaged data into a curve that looks like about 30 zillion (rough estimate) curves in nature. Then drop off the past 15 years of data. Then you fit it.
Are you f*cking kidding me?
Briggs, William. “Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.
“If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods. No matter what you will be too certain of your final results! Mann et al. first dramatically smoothed their series, then analyzed them separately. Regardless of whether their thesis is true—whether there really is a dramatic increase in temperature lately—it is guaranteed that they are now too certain of their conclusion.”
David
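Briggs's warning about smoothing inducing spurious signals is easy to demonstrate. The sketch below (not part of the original thread; the series length, boxcar width, and trial count are arbitrary choices) correlates pairs of completely independent white-noise series, raw and then boxcar-smoothed.

```python
import numpy as np

# Sketch of Briggs's point: smoothing two INDEPENDENT white-noise series
# makes them look correlated far more often than the raw data do.
rng = np.random.default_rng(42)
n, w, trials = 300, 30, 500        # series length, boxcar width, repetitions
kernel = np.ones(w) / w            # simple moving-average (boxcar) filter

raw_r, smooth_r = [], []
for _ in range(trials):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    raw_r.append(np.corrcoef(x, y)[0, 1])
    xs = np.convolve(x, kernel, mode="valid")   # smoothed copies
    ys = np.convolve(y, kernel, mode="valid")
    smooth_r.append(np.corrcoef(xs, ys)[0, 1])

print(f"spread of r, raw:      {np.std(raw_r):.3f}")
print(f"spread of r, smoothed: {np.std(smooth_r):.3f}")
```

The underlying series are independent in both cases, but the smoothed correlations scatter several times wider, so impressively "significant" correlations appear by chance — which is exactly the over-certainty Briggs describes.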
It’s an old saw, but worth repeating here.
Three statisticians go hunting. When they see a rabbit, the first one shoots, missing it by a foot on the left.
The second one shoots and misses it by a foot on the right.
The third one shouts: “We’ve hit it!”
Max
The mean weight of all statisticians in the world is 3 lbs.*
*Includes urn.
Being in love means never having to say you’re sorry. Being in statistics means never having to say you’re certain.
Bazinga.
Data sets are like people. Torture them enough and they’ll tell you whatever you want them to say.
Celebrating your birthday is good for your health. Statistics prove the more of them you celebrate the longer you are likely to live.
Shoe size is highly correlated with income and education level.
A statistician is a person who draws a mathematically precise line from an unwarranted assumption to a foregone conclusion.
The great majority of people have more than the average number of legs.
Scientists use statistics as a drunk uses a lamppost: for support rather than illumination.
Vaughan, I’m not sure where you’re going with this ? Does it, or will it have any predictive value ?
Of course it has no predictive value. He cut the fit off at 1995 because it fails at that point. It’s worthless.
David, it is a problem with the American education system: half of our scientists are below average.
According to a new model developed for AR5, 97% of climate scientists are above average.
And the other half are engineers.
(Again apologies for not getting to all comments promptly.)
@J Martin: Does it, or will it have any predictive value ?
Five years, perhaps not. But ten years, it strongly suggests that 2010-2020 (an odd decade since its 3rd digit is 1) will trend up, very likely strongly. Reasons:
(i) Every odd decade since 1870 has trended up more strongly than its predecessor. This despite the fact that there have been sustained downward trends, much stronger than in recent decades.
(ii) If SAW continues it will trend up.
(iii) SOL looks extremely likely to trend up.
(iv) AGW has been trending up for over a century.
So if 2010-2020 trends down it is hard to imagine stronger evidence than that against my hypotheses!
Looking further into the future is risky. Conceivably new technologies could price fossil fuel energy out of the market, in which case CO2 emissions might drop right off the Keeling curve. Or the permafrost might melt and dwarf the impact of increasing CO2.
But if neither of those things happen, and the Keeling curve stays on its predicted track, then I would predict continuation of the green curve in Figure 2, for two reasons.
1. Because the causal link between CO2 and temperature is well understood (pace those who insist otherwise).
2. Because even if we’d never heard of CO2 we’d still have this separation of multidecadal climate between Figures 1 and 2 into an oscillating component and a rising component, whose upward-curvature has continually been getting steeper and shows no sign of abruptly turning into a downward curvature. Those who claim 2000-2010 proves otherwise are ignoring SOL in Figure 11, which accounts for the pause in 2000-2010. One decade gives no information at all about multidecadal climate, which is the subject of this poster.
Vaughan Pratt | December 7, 2012 at 7:50 pm | Reply
“But ten years, it strongly suggests that 2010-2020 (an odd decade since its 3rd digit is 1) will trend up, very likely strongly. Reasons:”
Let me boil those reasons down for you: numerology.
Incredible.
Are you a fan of bible codes too?
I should add that anyone who can separate multidecadal climate, defined as F3(HadCRUT3), into the sum of an oscillating component and a concave-downwards trend (the opposite of what I called Observed Global Warming, which is concave-upwards) will have the immediate attention of a lot of people.
The whole idea that multidecadal climate is something that can be so precariously defined reveals an academic mindset in the extreme. Those of us who insist that geophysical processes need to be carefully observed and analyzed incisively, without precious preconceptions, can only smile at such hubris. I’ll say nothing more here.
Having read this twice, my faith in climate models, already negligible, managed to sink even further.
The purveyors of climate models have a product to sell and obviously seek to protect their own jobs/careers. I am certain you can model some of the factors affecting the Earth’s climate accurately, but they are dwarfed by the number of factors about which we have little understanding, or whose existence we have not yet even recognized. The modellers want you to think otherwise and that there is no GIGO or doubt in climate science.
Climate modellers can be relied on to be shrill in defense of their ‘beautiful creations’, using derision and sneers as their principal defense.
Like an increasing number of people, I deeply resent these models, which are responsible for the hugely wasteful and expensive economic decisions taken by our gullible ‘political elite’, who are stupid enough to be taken in by their highly dubious projections.
Watts et al. demonstrate that when humans alter the immediate landscape around the thermometer stations, there is a clear warming signal due simply to those alterations, especially at night. An even more worrisome result is that the adjustment procedure for one of the popular surface temperature datasets actually increases the temperature of the rural (i.e. best) stations to match and even exceed the more urbanized (i.e. poor) stations… the adjustment process took the spurious warming of the poorer stations and spread it throughout the entire set of stations and even magnified it.
~John Christy, EPW Testimony (1 August 2012)
Christy is a professed Christian. That’s an automatic fail in warmist circles.
Maybe conversion to heathen might be a great career move.
A couple things I did NOT know about Christy but I do now.
1) Christy was a lead author in IPCC 2001
2) His doctoral thesis advisor was Kevin Trenberth
Politicians are not stupid for embracing global warming hysteria. It represents a vast untapped tax base able to pay a generous salary and retirement benefits for a million bureaucrats. Even better, no one expects to see any results from the inestimable taxing and spending for the war on global warming for 50 years. Politicians are generally unaccountable to begin with, but no accountability for 50 years is a dream come true.
Peter: re “gullible ‘political elite’”. I don’t concur. Try fitting “money” and “power” as missing variables. All is revealed.
You are obviously correct in many instances, but there are many gullible politicians who have only lived their lives in the political, as opposed to the real, world. These people will believe whatever is trendy and/or what their spin doctors tell them they should believe.
Money and power are the obvious incentive for most politicians, but given the apparent chance “to save the world” as well, and in full view of the public, then that’s the icing on the cake. Then, of course, there is the subject of finding new ways to raise tax revenues.
Anyhow, the point is this: the general public is constantly being told it has “to save the world” by self-appointed elites (environmental and political) by digging deep into its own pockets. Why? Because of the ‘predictions’ of highly flawed and dubious climate models, most of which have a problem making accurate hindcasts.
Whatever your opinion of computer climate models, you have to recognise they are mostly produced by people interested only in the self-preservation of their own comfortable lifestyles. So, whatever results the paymaster wants, the paymaster gets. And the paymaster almost always wants more tax revenues, but he/she also wants you to feel good – hence “saving the world” – about paying them.
Peter Miller. Agreed on your points of altruism (save-the-world) and self-interest. (Strange bedfellows). Over on the post “Should scientists promote results over process?”, there is little choice except integrity of Process. Once the “boss” (UNFCCC) defines the objective, it is little wonder that the worker-bees find justification in their results. But that is not science.
[Article 2 of the UNFCCC charter (1992) sets the objective of stabilizing greenhouse gasses. (United Nations Framework Convention On Climate Change). http://unfccc.int/resource/docs/convkp/conveng.pdf ]
You might find this of interest (from 2008): “The Politics of ‘AGW'”. It reviews various schemes for political profit from AGW. Bottom line, the schemes are very similar to the “Turnover Tax” used in the former Soviet Union, and deliver control of ~70% of our energy supply to bureaucrats. Nirvana at last: control over the means of production.
http://solarcycle24com.proboards.com/index.cgi?board=globalwarming&action=display&thread=192
Mr. Peter Miller, I think the ‘public’ has been gullible, the ‘political elite’ are sociopaths and the question of our stupidity as a nation has not yet been fully addressed.
http://rt.com/usa/news/surveillance-spying-e-mail-citizens-178/
However, our emails remain in storage for their future use. At no cost to you, too?
JC comments:
I have not looked at this in any detail.
Very wise decision.
Vaughan Pratt
Up-thread I have asked you some questions to be able to understand what you’ve written better.
These are not “loaded questions”, so I would be thankful for a response.
The final question was also not intended to be “loaded” (although it might sound that way), but it is (for me) the most important, so I will repeat it:
Was the intent of this study to end up with the underlying exponential warming curve or did that just happen after all the noise was filtered out?
Thanks for a reply
Max Anacker
Sorry, Max, I was at AGU all day and just getting around now to answering the responses to my post, including your earlier comment. Let me know if you feel I didn’t do it justice.
There has been much discussion on this blog about uncertainty, but none in the comments above that I noticed in a quick read. Is it warranted to reproduce graphs like those shown without confidence limits/error bars?
hillrj: There has been much discussion on this blog about uncertainty, but none in the comments above that I noticed in a quick read
David Springer, Dixie Pooh and I have commented on the risk of error with this procedure. The lack of error bars does not come close to describing it. The procedure could produce very small calculated error bars and still produce a bogus result.
There is a primitive engineering rule of thumb that errors must add or multiply. If some procedure starts with HadCRUT3, with known errors, then any result calculated from it must have at least that range of error or more. If the procedure produces very small calculated errors (i.e. smaller than the input data) then it must be suspect on those grounds alone.
hillrj: If the procedure produces very small calculated errors (i.e. smaller than the input data) then it must be suspect on those grounds alone.
On that we agree.
I claimed an R2 of 99.98%, which might sound like it should be convertible into a fantastic error bar. This is easily refuted by fitting the top of a gaussian to the top of a sine wave, or vice versa (depending on which one you propose to extrapolate from).
They are virtually indistinguishable, with an extraordinarily high R2! Yet they evolve in very different directions.
This makes the point that a high R2 cannot be taken as an indication of certainty. No way, Jose!
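The Gaussian-versus-sine refutation above can be made concrete in a few lines. This sketch (not part of the original thread; the fit window and extrapolation point are arbitrary choices) fits the top of a Gaussian against the top of a cosine, then compares the two far outside the window.

```python
import numpy as np

# The R^2 trap: near its peak a Gaussian and a cosine are nearly identical
# (both ~ 1 - x^2/2), yet they evolve in very different directions.
x_fit = np.linspace(-0.5, 0.5, 101)    # narrow window around the shared peak
cosine = np.cos(x_fit)
gauss = np.exp(-x_fit**2 / 2)

ss_res = np.sum((cosine - gauss) ** 2)
ss_tot = np.sum((cosine - cosine.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 over the fit window: {r2:.4f}")   # extremely close to 1

# Extrapolate both curves well outside the fit window:
x_far = 3.0
print(f"cos(3) = {np.cos(x_far):+.3f}, gaussian(3) = {np.exp(-x_far**2/2):+.3f}")
```

Over the window the agreement gives an R^2 above 0.99, yet at x = 3 the cosine has gone strongly negative while the Gaussian has decayed toward zero: a near-perfect fit is no warranty on extrapolation.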
Yes. The curve you obtained from the dubious sources and methods closely matches a zillion other unrelated curves. It’s therefore meaningless. Can’t you figure out something productive to do in your dotage?
With a millikelvin in the title they may be hard to see :)
“Error bars ? To god-damned hell with error bars! We have no error bars. In fact, we don’t need error bars. I don’t have to show you any stinking error bars, you god-damned cabrón and chinga tu madre!”
Vaughan Pratt, I do hope that you stop by and respond to our comments. Mine are a little indirect, but in concordance with the more direct criticisms of David Springer and Dixie Pooh.
Sorry, I was at AGU (same excuse as Judith, more or less).
No apology necessary. You have dropped by.
The graph with a rising warming curve through time seems to correspond pretty well to the one Swanson and Tsonis found:
http://i52.tinypic.com/14cbgwh.png
Excellent point. All sorts of mechanisms could account for a nice smooth warming curve like Figure 2. Do you have a suggestion as to which one to prefer?
I prefer the manufactured curve invented by the producers of the HadCrut data set. There was nothing remotely like a global temperature sensing network in the late 19th and early 20th centuries. Prior to 1979 there was no means of obtaining a reliable global average temperature yet you insist on an accuracy that your initial data cannot come close to supporting. Garbage in, garbage out. Write that down.
I’ve downloaded the Excel spreadsheet.
I’m impressed by its beauty and clarity! :-)
Thanks, Gene. Usually I program in LISP, C, C++, and MATLAB; this was my first attempt at programming in Excel. Avoiding VBA (so that all macros would be signed by Microsoft) made it additionally challenging.
I appreciate all the effort that Dr Pratt has put into his theory and it is well presented.
But as a lay person, I just can’t believe in these cycles:
They just seem contrived. It’s as if someone started off with the premise that the fluctuations seen in annual temperature averages follow a cyclical pattern. Whenever the first model is contradicted, another one is superimposed on it to get the desired results.
This has nothing to do with warmists or skeptics, because there are many in both camps that use these cycles to “prove” their point.
The sheer complexity of the model involving all these cycles means that they can’t be validated without 1000s of years of data (which is not available).
For instance, if you were to carry out an opinion poll on an election, involving 2 or 3 main candidates (or parties), you would need to sample at least 1000 people just to get a meaningful result. (That is the number which is always aimed at in these surveys.) But these combined cycles are so much more complex that several thousand years of results would be needed to verify them.
To me, the temperature graph appears to have just taken a random walk.
Mark B
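The random-walk intuition in the comment above is easy to test. This sketch (not part of the original thread; the 0.1 K/yr step size is an illustrative assumption, not an estimate of real climate variability) generates many seeded 162-year random walks and counts how often they drift by amounts comparable to the observed record.

```python
import numpy as np

# Sketch of the commenter's point: random walks over 162 "years" routinely
# wander by amounts comparable to a ~0.8 K trend, in either direction.
rng = np.random.default_rng(1)
years, walks, step = 162, 2000, 0.1   # step size in K/yr is an assumption

paths = np.cumsum(step * rng.standard_normal((walks, years)), axis=1)
net = paths[:, -1] - paths[:, 0]      # net "warming" or "cooling" of each walk

print(f"std of net change over 162 yr: {net.std():.2f} K")
print(f"fraction drifting up by >0.8 K:   {(net > 0.8).mean():.0%}")
print(f"fraction drifting down by >0.8 K: {(net < -0.8).mean():.0%}")
```

On these assumptions a sizable fraction of pure-noise walks produce a net change exceeding 0.8 K in one direction or the other, which is why "it looks like a trend" is not by itself evidence of one; distinguishing a walk from a forced trend requires physics or statistics beyond eyeballing.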
Link to Beenstock statistical analysis of temperature record
http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf
Max
Mark B (number 2), all I’m trying to do is succinctly describe the last 162 years of global temperature data. Unless you have a more succinct description, namely one with fewer than 9 parameters, I don’t understand your complaint.
Vaughan, I think you are actually trying to describe the last 162 years in terms of cycles and harmonics. It seems to me that you are looking for a mathematical explanation for every tiny variation in temperature. In reality we have things happening (apparently at random) such as volcanoes erupting, snow happening to fall in places where it won’t melt as quickly (causing albedo effects), unusual weather patterns etc.
When I look at the original graph, I just see randomness. If I run a random walk on my spreadsheet, there will be many examples of a temperature graph like the one you have shown. And just as many showing the same range of fluctuations, but in a downward direction.
Furthermore, with all the apparently random events happening which could affect climate/weather, it seems unlikely, to me, that the temperature would slavishly follow preordained cycles.
As there is no reason to believe that such cycles would affect the climate in the short term, why believe in them? To verify your theory you would need longer than 162 years of data. So why make a guess? Wouldn’t it be better to say that we just don’t know, due to the lack of data?
I do appreciate that you have taken the time to reply to a non scientist. I realize that there is a lot that I don’t know. I am just calling things as I see them.
@Mark B (number 2): I think you are actually trying to describe the last 162 years in terms of cycles and harmonics. It seems to me that you are looking for a mathematical explanation for every tiny variation in temperature.
That would be a hopeless task using what we know today. In Figure 11 of my poster, which represents HadCRUT3 as a sum of three curves, the many tiny variations in temperature you’re thinking of are in SOL and DEC, neither of which I try to explain mathematically.
When I look at the original graph, I just see randomness.
Yes, certainly. My technique is to separate out the randomness and put it in SOL and DEC, leaving behind MUL (for multidecadal climate) as the only part of climate I want to describe mathematically. Do you see “just randomness” in MUL?
I was able to separate MUL into an oscillation and a curve with no inflexion points (a point where a car following the curve has its steering wheel straight ahead). The latter bends upwards, which is a very bad sign. To prove that things aren’t really that bad one would have to separate MUL into an equally convincing oscillation and a curve showing some sign of bending downwards in the near future. I doubt if this is possible.
If I run a random walk on my spreadsheet, there will be many examples of a temperature graph like the one you have shown. And just as many showing the same range of fluctuations, but in a downward direction.
The difference is that the downward curve won’t arise from any known theory. The green curve in Figure 2 is in outstanding agreement with the well-understood physics of radiative forcing by CO2.
If you extend the graph backwards 200 years using your constructions, what happens?
My suspicion is nothing good related to the best observations we have. And what does that say about any predictive power it may have?
If you run the same analysis from 1700 to present, do the results completely change?
All these numerical constructions that assume the big bang was at year ~1850 are a bit disconcerting. All these will yield an infinite increasing trend of some sort if you want them to. It’s too open to confirmation bias to be reliable, I give it about a 1000% chance that this analysis was tuned iteratively and significantly. Did the author work out his methods independent of the data? Not likely. Torture data until exponential drops out? Be honest with yourself.
Frequency analysis of temperature data just seems inappropriate if you ask me. It may help identify some cyclical signals buried in the noise, but using it as a tool to identify an overall trend is risky business. The typical first step in frequency analysis is to remove any underlying trends, else they corrupt the analysis significantly. Hanning / Hamming windows do this, etc.
Frequency analysis of temperature data just seems inappropriate if you ask me.
If you have a more appropriate analysis then you win. Go for it!
It’s worse.
HadCRUT and GISS are not temperature series. Everybody keeps forgetting that.
For some bizarre reason, when Hansen and Jones did their first series they decided to add together:
1. The air temperature over land.
2. The sea surface temperature.
Adding SST to surface air temperature doesn’t give you a temperature of anything; it is more properly understood as an index. Now, provided you take the index in the same way, you can get an indication of the state of the climate from the index, but you do not have a physical metric.
One could use MAT ( marine air temperature) instead of SST. That way you would have a measure of the atmosphere at a constant altitude. Interestingly, the database that contains SST also contains MAT, basically the same amount of data.
The other thing is that SST and SAT have different variances and different uncertainties, and they respond with different lags. So unless Vaughan does some work with synthetic data FIRST to prove that the methods he applies to this data actually work, I’d say the signal analysis is flawed from the start, since the “signal”, the temperature curves, are not really physical metrics.
While I have no doubt in his abilities, it’s clear to me that he didn’t do the basics: define a method; show that the method works to recover a signal using synthetic data (where the truth is known). Also, the failure to:
A) test results with respect to choice of dataset (HadCRUT versus others, other solar forcing datasets)
B) hold out data for verification.
makes his result an interesting start, but that’s about it.
Is that much worse?
For all practical purposes all temperature averages are just indexes. Temperature is an intensive variable, not extensive. The average temperature has no more influence on any particular matter than some other temperature index.
What we hope to have is an indicator that is
– strongly correlated with significant variables
– allows for accurate enough determination
– is not unnecessarily volatile
– allows for construction of as long a time series as possible.
It’s by no means obvious that any “more natural” average temperature would be better than some less natural one. There are obvious advantages in the possibility of calling the number “global average surface temperature” or something like that, but that’s not essential for its value for science.
It’s quite possible that some temperature index defined slightly differently from the present ones would be better by the criteria that I list above but I don’t know what the best index would be.
That’s an interesting defense. I think I’m going to agree with you. I would still like to see the effects on spectra of adding and averaging such different quantities.
We agree then, averaging temperature isn’t physical. So what do these averages have to do with physics?
Brian,
The problem is you have a system (the “climate”) that is multidimensional. It is useful to have a metric that indicates change in the system; this will of necessity be a lower-order metric, like temperature versus time.
Temperature, is selected for a variety of reasons, but there are probably better metrics like OHC or energy imbalance.
Don’t buy it, Steve. As Pekka stated, these averages are some sort of index. So is a phone book, so maybe we should just average the phone numbers at each thermometer location and use that as an index.
Brian,
There’s nothing basically wrong with indexes. Suitable indexes are as good as, and some probably better than, the true average temperature at some near-surface altitude like 2 m for following the warming.
The main point of my comment is that many different temperature-based indexes are essentially as useful, and that several existing time series have basically the required quality. Some of them, however, span too short a period to be as useful as those spanning a longer period.
Pekka
I understand what you are saying. I’m saying: if increasing temperature means “warming,” what does an increasing “index” mean? Both you and Steve, in my opinion, are thermodynamically minded people. The earth isn’t in thermodynamic equilibrium, so there is no single temperature for the whole earth. Just because you average a bunch of numbers doesn’t mean the average means anything at all. Try making sense out of averaging speed limits.
Tom Scharf: If you extend the graph backwards 200 years using your constructions, what happens?
Which graph? If AGW, that’s essentially perfectly flat before 1850, based on hindcasting preindustrial CO2 to a constant 287 ppmv. We know that’s not exactly true because 2250 years ago it was 284.7 ppmv according to the Vostok ice cores. The point however is that natural temperature fluctuations due to other causes, particularly the ocean oscillations, will dwarf those attributable to CO2 fluctuations, which is what AGW accounts for. Hence for all practical purposes AGW may as well be modeled as perfectly flat for the period 1650-1850. There are no known fluctuations in recent (last millennium) natural sources and sinks of CO2 that are remotely as large as the growth in CO2 of the last half century.
All these will yield an infinite increasing trend of some sort if you want them to.
Quite right, and I don’t claim that CO2 will continue to follow Hofmann’s raised-exponential law forever. In fact the doubling time for anthropogenic CO2 seems to be increasing a bit in the last couple of decades, confirming your point, though not enough for your point to save the world in, say, 2050.
I give it about a 1000% chance that this analysis was tuned iteratively and significantly.
Right again. Welcome to parameter estimation. Do you have methodological or philosophical objections to estimating parameters from data?
The typical first step in frequency analysis is to remove any underlying trends, else they corrupt the analysis significantly.
That would make sense when the only analysis tool you have is Fourier analysis, where every signal is expected to be a sum of sine waves and any trend present throws the analysis into chaos. But if you have Laplace transforms and/or wavelets in your toolkit the order becomes less important. In the case at hand the difference made by removing the AGW trend or the SAW oscillation first is in the least significant bit of a double precision floating point number, i.e. none at all.
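The order-independence claim follows from linearity of the filter; a minimal sketch (Python, using a plain moving average as a stand-in for the poster's F3 and synthetic series standing in for the data and SAW):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=200)             # stand-in for the temperature series
saw = np.sin(np.arange(200) / 7.0)      # stand-in for the SAW oscillation

def boxcar(x, w=21):
    """Simple moving average; any linear filter behaves the same way."""
    return np.convolve(x, np.ones(w) / w, mode='same')

# Filtering then subtracting equals subtracting then filtering:
diff = boxcar(data - saw) - (boxcar(data) - boxcar(saw))
print(np.abs(diff).max())               # ~1e-15: rounding noise only
```

So whether the SAW (or a trend) is removed before or after filtering changes the result only in the last bits of a double, as stated above.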
Hanning / Hamming windows do this, etc.
A Hamming window is just a brute force way of turning an infinite impulse response (IIR) filter into an FIR filter to obtain some temporal locality. When applied to Fourier analysis, which I’m guessing is what you have in mind, it’s rather a procrustean bed. Better to go the whole hog and use wavelet theory in its full generality. This is how my analysis looks at the situation.
Two fundamental issues severely damage the credibility of Pratt’s interpretation that a bona fide global warming trend has been revealed by his simple filters:
1. The naked assumption that HADCRUT3 represents an unbiased estimate of GST, as if that index was free of UHI effects on land and had fully adequate spatio-temporal coverage over the oceans from 1850 to present.
2. The manufacture of low-pass values throughout the entire time-interval of the raw data, as if properly applied boxcar filters did not necessarily truncate the output near both ends of the available time series.
Based upon results of more rigorous filtering, I also suspect something amiss in the construction and removal of the “quasi-sawtooth.”
This work is neither credible physical science nor proper signal analysis.
Compared to what? I eagerly await your more credible physical science and more proper signal analysis.
Well I await more credible physical science. In the meantime what properly can be said about ‘signals analysis’?
For starters, cross-spectrum analyses between long concurrent time-series of atmospheric CO2 concentrations and of total enthalpy metrics (not just hybrid temperature indices) on a global scale would provide an unequivocal indication of the coherence–or lack thereof–and phase relationship between the putative cause and the observed effect. On the very limited scales that such analyses have been performed, due to limitations of bona fide measurements, the cross-spectral results militate strongly against any very intimate physical relationship in any frequency range.
What you have done here is largely a mathematical exercise in multi-parametric curve fitting (reminiscent of von Neumann’s quip that with 5 parameters he could wiggle the ears on a mythical elephant), with but conjectural reference to unproven AHH theory.
With a bit more leisure now that AGU has ended, let me comment more carefully on:
@John S.: 1. The naked assumption that HADCRUT3 represents an unbiased estimate of GST, as if that index was free of UHI effects on land and had fully adequate spatio-temporal coverage over the oceans from 1850 to present.
I made no such assumption. Like many other commenters you’re reading things into my poster that simply aren’t there.
If it turns out that HadCRUT3 bears no relationship to actual global temperature then my work will have been merely a (possibly) interesting theoretical exercise. To the extent that pessimists such as yourself have exaggerated such biases, others may find my work relevant to global temperature. I claim nothing more than that it is relevant to HadCRUT3 itself, warts and all. I will have to leave it to others to quantify the biases etc., not having the requisite resources myself. As an example Berkeley’s BEST project to such an end is a massive undertaking.
2. The manufacture of low-pass values throughout the entire time-interval of the raw data, as if properly applied boxcar filters did not necessarily truncate the output near both ends of the available time series.
That’s a fair point. Please avert your eyes from the first and last decades of MRES (figure 6 of the poster) and consider only the rest. (I do so myself and should add a paragraph somewhere advising others to do so.)
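For readers wanting to see the end effect concretely, here is a minimal illustration of how a zero-padded boxcar distorts the ends of a series (the 21-point width matches the filters discussed here; the linear-trend input is illustrative):

```python
import numpy as np

x = np.arange(100, dtype=float)             # a pure linear trend, no noise
w = 21
sm = np.convolve(x, np.ones(w) / w, mode='same')

# Interior points reproduce the trend exactly...
print(np.allclose(sm[w//2:-(w//2)], x[w//2:-(w//2)]))   # True
# ...but the last point is dragged halfway toward zero by the implicit padding:
print(sm[-1], x[-1])                        # ~49.2 vs. 99.0
```

This is exactly why the first and last half-window of a boxcar-filtered series should be averted from, as conceded above.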
Based upon results of more rigorous filtering, I also suspect something amiss in the construction and removal of the “quasi-sawtooth.” This work is neither credible physical science nor proper signal analysis.
Computer programs today can generate better critiques than first-year grad students when asked to referee a paper. I’ll play safe and guess you’re not a first year grad student. Either way your paragraph there is not exactly what I’d call incisive.
How charming of you to guess that!
I like this. The backbone of this final curve is two assumptions. (1) that CO2 sensitivity is a log function with 2.83 C per doubling, and (2) that man’s contribution to CO2 is a growing exponential with a doubling time of 28.6 years that fits the Keeling curve. The fit gives credence to these assumptions. It is fortuitous that other factors are not distorting it. We know other GHGs are also increasing, for example, and aerosols and solar forcing are changing, but these seem to be absorbed in the low-frequency saw-tooth together with ocean variations.
The interesting extrapolation graph (hidden in the bottom left) shows 4 more degrees of warming and CO2 levels over 1000 ppm by 2100 if the manmade exponential use of carbon continues (which is pessimistic).
Jim D | December 4, 2012 at 10:14 pm said: ”I like this. The backbone of this final curve is two assumptions. (1) that CO2 sensitivity is a log function with 2.83 C per doubling”
Jimmy boy: ”doubling ZERO by two, or by 10, or by 100, is still ZERO!!! What you like, never had anything to do with the reality; because Vaughn Prat is your Tudor / Brainwasher,…
I agree with Jim D that naive extrapolation of carbon is pessimistic. Inspection of recent CDIAC data on carbon emissions shows that it’s been falling off since the mid-1970’s, relative to the high values it reached between 1870 and 1970. Some of this could be accounted for by the increasing cost of fossil fuel energy, some by the increasing awareness of its hazards, and there may be other factors.
Forecasting the future is not the same cakewalk as hindcasting the past.
This was my much more modest attempt:-
Top: Temperature anomaly and CO2 (Keeling, then estimated from calculated fossil fuel consumption).
Middle: Natural log of [CO2] vs. temperature anomaly. The best-fit slope gives a climate sensitivity of about 2 degrees for a doubling of CO2.
Bottom: What temperature looks like if we subtract the effect of [CO2].
http://i179.photobucket.com/albums/w318/DocMartyn/LNCO2vstemp.jpg
I am so not worried about 560 ppm Co2.
I am so not worried about 560 ppm Co2.
What, me worry? (TM)
Somehow life on Earth managed to survive much higher levels of CO2 than that. Perhaps by thriving on it, perfectly plausible.
Mass extinctions aren’t so dramatic when the climate changes gradually.
The problem comes when you give the biosphere only a century or less to adapt to some dramatic change.
Especially if the change is global. Mount St. Helens recovered within a decade, but the change was extremely regional, allowing species to move in to the changed area quickly.
When a Mount St. Helens type change hits the whole planet, where are the replacement species going to move in from?
That’s a recipe for reducing biodiversity far more effectively than if you give Earth’s species 10x more time to adapt.
This time it really is different.
Disclaimer 1: this impact might not happen for all sorts of reasons, including the intervention of God.
Disclaimer 2: I’m not wearing my scientist hat, just partying on with everyone else.
‘This time it really is different.’
Yes, it is so much more difficult to respond to a change in CO2, which will give rise to a 2 degree rise in ‘average’ global temperature from 1750 to 2050, compared with an ELE that happens in hours.
I understand bits of Scotland went from warmer than present to being underneath a glacier in less than a century.
Let me put these odds about the future in a perspective you can understand: the chances are that you will die of heart disease, cancer, lung disease or dementia. Each of them is going to make your last 6 months unpleasant. You will not witness any event that can be attributed to CO2 increases in the atmosphere in all your life.
@DocMartyn: You will not witness any event that can be attributed to CO2 increases in the atmosphere in all your life.
This point is readily conceded by those pointing to the increases in frequency and violence of storms: no single storm can be blamed on CO2.
It’s like the proverbial frog in the pot being boiled with the stove set on high: convection mixing hot water coming up from below with cold water at the top results in more frequent and more violent fluctuations in temperature than would be observed with the stove turned way down, but the frog cannot blame any single fluctuation on the stove being on high.
Jim D
I’d agree with you that “it is an interesting graph”, but even Vaughan agrees that the exponential curve is most likely pessimistic (i.e. there will be less than 4C warming by 2100).
Exponential atmospheric CO2 growth rate will most likely not increase beyond the recent ~0.5% per year, when population growth rate is expected to decrease to less than one-third of the recent past rate, even if per capita use of fossil fuels increases by 50% by 2100.
This would get us to 600 ppmv by 2100 (all other things being equal), around the same as IPCC “scenario + storyline” B1 or A1T, with warming by 2100 of 2C (rather than 4C as predicted using the exponential curve).
Max
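Max's 600 ppmv figure can be sanity-checked with a one-liner (a sketch; the 2012 starting concentration is my assumption, roughly the Mauna Loa annual mean, and the 0.5%/yr rate is the one quoted above):

```python
co2_2012 = 394.0                 # ppmv in 2012 (assumed starting point)
growth = 0.005                   # 0.5 % per year, the rate quoted above
co2_2100 = co2_2012 * (1.0 + growth) ** (2100 - 2012)
print(round(co2_2100))           # ~611 ppmv, close to Max's 600
```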
This 2 C would be the amount above the current or 2000 temperature. To hold it to 2 C above preindustrial, we would have to keep below 450 ppm. Some consider that 2 C mark as already a limit to be avoided, but we should reach 450 ppm by 2040 just assuming linear CO2 growth, which is the slowest growth estimate.
There has been very little CO2 impact on temperature but extrapolation is absurd. Climate is not linear and to continue to pretend that it is – is profoundly idiotic.
Vaughn Prat: ”It would seem therefore that rising climate since 1850 is a lot more complex than can be explained by our rising CO2 emissions”
”.rising climate” == climate doesn’t rise, you idiot! If you are ashamed to say: ” rising the phony GLOBAL temperature” – just admit it!!!
con #2: ”However the three decades 1910-1940 show a similar rate of global warming”
A#2: you and nobody else knows what was last year’s GLOBAL temp – you are driveling about 1910-1940…
con #3: ”the poster calls it Observed Global Warming, color-coded green”
A#3: was it ”observed” from the top of the hill, or, from your future jail cell?
Vaughn, instead of an EXTREME Warmist, you are starting to sound like a Fake Skeptic (an embedded Warmist in the Skeptics’ camp). What did they do to you; did they promise you more rip-off money, or are you starting to run with one leg on each side of a barbed wire fence… it will get even more painful!!! CRIME SHOULDN’T PAY!!! Those ”loaded comments” of yours will earn you another 10 years!
P.s. the term ”decadal” was invented because I was ridiculing how they can see 0.003-degree differences between years -> they invented ”decadal,” hoping that the nutters cannot realize that zero multiplied by 10, or by 100, is still zero. It’s only kicking and screaming on the way to the confession box / under oath!!!…
Any analysis using smoothed time series data over a relatively short time scale wouldn’t have much predictive value IMO and it would also be wise for Dr Pratt to state his assumptions about the use of HadCrut3.
Retro curve fitting is a piece of cake if you massage everything enough and that’s what has been happening with the GCM’s that are now being used to drive Govt climate policy.
I am sort of smiling here, maybe Vaughan Pratt is just testing us with this!
Not me, but if anyone wants to grade this as a test, feel free. By all means grade on a curve.
This is the wit from above. I think it is a double-edged question.
“With enough parameters one can make any two curves equal to within any desired precision. The judgement to be made here is whether the nine parameters used here have any chance of achieving a comparably accurate fit on random data in place of HadCRUT3. This was an objection raised to Mann’s methodology, and I would be interested to see if it applies here.”
Thank you for posting this here, Vaughan Pratt.
Could you fit random data to this? It seems difficult, with so many dependent variables. I don’t know perfectly well, that’s why I ask.
I appreciate that you are willing to post a best estimate publicly with the supporting documentation for all to see.
For 162 years with a 21-year filter, by Nyquist I figure 2*(162/21) = between 15 and 16 parameters. I used only 9 parameters and got an R2 of 99.98%. Go figure. (I asked Persi Diaconis this and he thought off the top of his head that I was ahead of the game, but a more careful calculation is in order.)
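The Nyquist bookkeeping in that reply, spelled out (a sketch; the record length and cutoff are the numbers quoted in the comment):

```python
years, cutoff = 162, 21            # record length and filter cutoff, in years
dof = 2 * years / cutoff           # ~2 samples per period at the cutoff frequency
print(dof)                         # ~15.4 degrees of freedom vs. 9 parameters used
```

The point of the comparison: a 21-year low-pass output of a 162-year record supports roughly 15 independent numbers, so a 9-parameter fit is not automatically overfitting, though it is not far below the ceiling either.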
Dr. P,
I think this is an important fact for folks to take note of. Personally I’d make more of it (be clearer) than you were in the poster (yes, space is limited), but it really switched around the way I was looking at the work.
Thank you, Professor Pratt, for posting this. I will attempt to provide some skeptical but I hope constructive comments. I apologise I have not had much time to study this, and have not been able to play with the spreadsheet. My perspective on this is that I trained as a physicist and have some experience of modeling two-phase fluid dynamics. It has always seemed to me futile to try to predict long term climate trends with GCMs when much of the physics such as temperature feedbacks is not well understood. The starting point should be simple physical models backed up by empirical evidence. So I am initially sympathetic to your approach.
1) Unless I am misunderstanding you are not using a physical model of multidecadal cycles. You are saying the temperature data can be fitted by an exponential (modeling AGW) plus a “sawtooth” (harmonics thereof, with 6 free parameters), representing multi-decadal effects plus periodic terms with period less than 22 years that are smoothed away as noise.
There are many ways to decompose a function. As a power series, as a Fourier series, etc. That does not necessarily reflect physical causes. It is not clear to me how well this decomposition into an exponential plus periodic terms is constrained by the data. Would it be possible to do it differently? For example your “sawtooth” is pretty much flat from 1990 to 2000, leaving all the warming to be accounted for by AGW. That accords with the IPCC view that “most” of the late 20th C warming is due to AGW. But one possibility much discussed on this blog is that a substantial part of that was “juiced” by multi-decadal cycles. I am not clear if you are saying you can show that is not the case, or if you have effectively assumed it isn’t by your choice of sawtooth parameters.
You do say the SAW and AGW parameters are obtained by least squares. You don’t describe the method, I guess that would be in the spreadsheet. I find it surprising, with a function like your “sawtooth”, if there is only one global minimum when varying all the parameters. I would expect you would need to use something like simulated annealing to find multiple local minima in the least-squares measure. Then we need to know how much better your fit is than the others. (Even if there is a global minimum we would still want to know how “flat” it is, i.e. how fast the least-squares measure changes around that minimum.)
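Gareth's simulated-annealing suggestion can be sketched with SciPy's basin-hopping optimizer on a toy sinusoid fit (everything here is illustrative, not the poster's actual model or parameters):

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.normal(size=t.size)

def sse(p):
    """Sum-of-squares misfit for an amplitude/frequency/phase sinusoid."""
    amp, freq, phase = p
    return float(np.sum((y - amp * np.sin(2 * np.pi * freq * t + phase)) ** 2))

# A deliberately poor starting frequency: plain least squares from here
# settles into a nearby local minimum.  Basin hopping repeatedly perturbs
# the solution and re-minimizes, giving it a chance to escape to a better one.
start = [1.0, 2.0, 0.0]
result = basinhopping(sse, x0=start, niter=100)
print(result.fun <= sse(start))    # the hopped solution is never worse
```

With a multi-harmonic sawtooth the misfit surface is similarly multimodal, which is why "which minimum did you find?" is a fair question.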
I don’t want to seem to accuse you of taking the largest possible exponential component and then choosing the sawtooth parameters to fit whatever is left, but what would it look like if someone did that? What are the physical arguments for the parameters you have chosen?
You say the “sawtooth” has a possible physical explanation as the effect of two seismic events at the core-mantle boundary. That’s a lot of heat. Could such events produce heat that is globally distributed or would that be localized around the event? And would they not be detectable in the historical record as earthquakes? I have not heard of this phenomenon and I don’t see a reference. Physical evidence of these events (other than their assumed heat signature) would certainly support your model. In the absence of such evidence, there may be many other ways to model multi-decadal cycles that would leave a substantially different AGW component behind.
2) The most obvious feature of your figures 2 and 3 is the divergence between AGW and F3(DATA-SAW) since about 2000. One of the biggest discussion points on AGW recently is the near 15-year flattening of the land surface temperature curve. You label this “End-effect artefact of filtering”. I have not followed how that artefact arises under your analysis but I will assume you are correct. Even so, if you are asking us to ignore the discrepancy since 2000, I think that still means you can’t model the last decade or so of data. For me at any rate, it is the discrepancy between prediction and reality of the last decade-and-a-bit that provides the strongest argument there is something wrong with the AGW “consensus”. You will not do much to close the gap between “warmists” and “skeptics” if you can’t address that.
3) Your AGW model is based on the Arrhenius logarithmic model of the effect of CO2 on temperature, and the exponential Hofmann model of CO2 emissions. The Hofmann model may have been reasonable until now, but is it likely to continue until 2100? Exponential growth never lasts very long. Population growth peaked in the late 1980s in absolute terms. With population growth slowing CO2 emissions will surely cease to be exponential, even if no special measures are taken to reduce them. That would reduce the estimate of 2100 temperature.
4) You use a constant value for the oceanic pipeline delay. The main “warmist” explanation for the recent hiatus in land surface warming is that the heat is going into the oceans. If that is true it implies the pipeline effect is complicated. The atmosphere/ocean coupling must be stronger than assumed. The delay may be reducing. Or it may be cyclical. In either case the value that best fits the period up to 2000 will not be representative of the next century. Increased warming of the oceans will again reduce the estimate of land surface warming by 2100.
3) What would happen if you included a linear term to model the “rebound” from the little ice age? (Or is this supposed to be included in your sawtooth term?) I guess the effect of a long-term linear term would be to reduce the amount of warming attributed to AGW.
4) I think your claim to 99.98% accuracy is going to cause you unnecessary grief. I do not think you are claiming to fit the actual data to that accuracy. It is the fit between the smoothed data and your AGW model you are talking about. But if you smooth data until it looks smooth and then fit a smooth curve to it you will get a pretty good fit. It is not as impressive as it sounds. As a physicist I would expect you to report the fit of your model to the actual data and maybe compare it with other models.
In conclusion, I don’t want to sound too negative. I think this kind of analysis is valuable, and I think it demonstrates there is an alternative to the GCMs. But unless your nine free parameters are very tightly constrained (and I have missed that) the reaction from skeptics is likely to be that it is an exercise in confirming the IPCC “consensus”. I would be interested to see a range of results based on a plausible range of the free parameters, constrained where possible by physical models of the underlying phenomena.
gareth | December 5, 2012 at 2:20 am said: ‘Thankyou Professor Pratt for posting this. I will attempt to provide some skeptical but I hope constructive comments”
gareth, .you and ”daily planet’ are capable of providing ”skeptical comments” as much as cane-toads can provide wool. For the 2 of you + Pratt, genuine ”skeptical comments” are a nightmare = you can’t provide one, you can only try to silence those. Now is obvious that: all 3 of you are sucking & sponging from the same suffering taxpayer .
Rubber-stamping / dignifying Vaughn’s LOADED post with crap; Vaughn should have already provided washing pegs, for people’s noses; before starting to read it, now you Gareth &daily planet are adding extra and string up Vaughn’s witchcraft doo-do. Is that what ”peer reviewed” means for the Warmist EXTREMIST, as the 3 of you?!?!?!
@geronimo: I have to ask the daft question are you saying that having filtered out those natural variations you have found a strong warming signal because the only warmer left is CO2?
A “warmer” in what way? For example if I turn a dial on my radio to tune in a different station, was it the mere rotation of the dial that did the job or is the dial merely a means of controlling a variable capacitor that does the real work?
CO2 is properly understood as a control knob in that sense. We know quantitatively, albeit roughly, from the work of Tyndall in the 1850s the extent to which CO2 itself, with no feedbacks, blocks the passage of thermal radiation; these days we infer this much more precisely from the HITRAN tables of spectral absorption/emission lines. We also know how much CO2 is in the atmosphere. Much less certain is how much assistance CO2 gets from feedbacks.
A closer analogy than a radio knob is vacuum assisted power steering on a car. The steering wheel exerts some force on the front wheels but greatly amplified by the power assist. HadCRUT3 is like an instrument for measuring the force applied by the power-assisted steering directly to the front wheels: you can’t infer just from that the force needed at the steering wheel, which is a mere fraction of what’s needed when the vacuum fails.
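The "fraction of the force" in the steering analogy can be put in rough numbers using the standard simplified CO2 forcing expression (these are textbook values, not the poster's; only the 2.83 figure comes from the poster):

```python
import math

# Myhre et al. simplified forcing: dF = 5.35 * ln(C/C0) W/m^2
dF_doubling = 5.35 * math.log(2.0)     # ~3.7 W/m^2 per CO2 doubling
planck_response = 3.2                  # W/m^2 per K, no-feedback restoring rate
no_feedback_dT = dF_doubling / planck_response
print(round(no_feedback_dT, 2))        # ~1.16 K per doubling, CO2 "unassisted"
# vs. the 2.83 K/doubling fitted in the poster; the gap is the "power assist"
```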
Without that sort of information about division of labor in observed global warming, all I can say is that there is an extremely good correlation between observed global warming and the warming predicted by the greenhouse hypothesis. That doesn’t prove the greenhouse hypothesis, it merely makes it a contender. There are other hypotheses, for example Ferenc Miskolczi’s hypothesis that water vapor does the work. If his hypothesis had a comparably straightforward account showing an excellent correlation between some global temperature dataset and another observable (his alternative to CO2 as the control knob) it would be a good contender, but so far Miskolczi has been unable to bring any clarity to his hypothesis. This in turn makes it hard to verify the soundness of its reasoning; at one point it incorporated the virial theorem that average potential energy is twice average kinetic energy, which however doesn’t apply at all to the atmosphere because the frequency of molecular collisions is many orders of magnitude too high for the theorem to hold.
@geronimo: if it [the odd-even-decade phenomenon] hasn’t got anything to do with AGW, what has it got to do with? And if you don’t know the answer to that question then there must be unknown unknowns lurking in the climate system.
If you look at the middle component of HadCRUT3 shown in Figure 11 of the poster, SOL, which is essentially all due to the influence of the Sun’s magnetic field on both the Sun and the Earth, you’ll see that it’s sufficiently synchronized with the decades since 1870 as to be able to take full credit for the phenomenon. Being roughly 21 years instead of exactly 20, it will drift out of phase on either side of 1870-2010 and in due course the phenomenon will resume with the opposite phase, with the even decades trending up more strongly than the odd ones, independently of whether the longer-term trend is up or down.
@geronimo: Forgive me if I’ve misunderstood what you were trying to prove, but thanks for sharing your data with the skeptics, it’s surely a good way of proving it correct if the “enemy” can’t find anything wrong with it.
I appreciate knowing that there are those on Judith’s blog willing to listen to both sides of the argument. This may not appear so to the casual observer because each side accepts their side’s arguments as reliable and the other’s not. This comes about from the natural tendency to judge logic not by whether the steps are sound but whether the conclusions fit the “known” facts. When there is agreement as to the facts this is a great way to debug an argument, but when there is not then you have to fall back on soundness of reasoning as a criterion for resolving the disagreement. I believe the latter could be done with greater care by both sides. (I was trained in both physics and logic, and although my original goal was a career as a physicist I ended up spending much of my research career on logic.)
Gareth, thanks very much for your detailed and thoughtful comments. Let me respond with your numbering.
1. Lot to respond to here. First, are there different decompositions? Excellent question. While I didn’t find any, mine is easily shot down by a better one, which I eagerly await.
For example your “sawtooth” is pretty much flat from 1990 to 2000, leaving all the warming to be accounted for by AGW
As Santer et al and others have pointed out, AGW is not a decadal phenomenon and as such can’t hope to account for the warming 1990-2000. This is due in part to this being an “odd decade,” namely one whose third digit is odd. You can verify at woodfortrees.org that every odd decade in HadCRUT3 since 1870 has trended up, and every even decade since then has trended down relative to the two odd decades on either side. Obviously this very interesting phenomenon has nothing to do with AGW.
Regarding the least-squares method I used, you correctly surmised that it found only a local minimum. Simulated annealing might find other local minima but with outrageous parameters. One of the main reasons for making my spreadsheet broadly available was to allow others to search for other minima. With such a minimum in hand one then can address the difficult question of whether its parameters are within reasonable limits.
Then we need to know how much better your fit is than the others.
That question is easy: to my knowledge it is way better, and if I am wrong about this then it will be easy to demonstrate this with a better model.
2. I have not heard of this phenomenon and I don’t see a reference.
Glad to hear it, yet more support for my sense that it’s an original hypothesis.
Physical evidence of these events (other than their assumed heat signature) would certainly support your model. In the absence of such evidence, there may be many other ways to model multi-decadal cycles that would leave a substantially different AGW component behind.
Fully agree.
The most obvious feature of your figures 2 and 3 is the divergence between AGW and F3(DATA-SAW) since about 2000.
That’s an apples-and-oranges comparison. You need to compare F3(AGW) and F3(DATA - SAW). F3 as currently constituted gives unpredictable results in the first and last decades.
3. (Will the exponential CO2 growth continue?) Excellent question. Quite likely not. It was a mistake for me to include the predictive slide at bottom left.
4. (your repeat of 3) The pipeline delay may well vary, though I guess that the ARGO data would tell us more about that.
5. The little ice age is out of scope for my analysis. I’m not sure what I have to contribute in that area.
6. (your repeat of 4). But if you smooth data until it looks smooth and then fit a smooth curve to it you will get a pretty good fit.
Does your “pretty good” extend to 99.98% R2? This question is of great interest to me.
Dr. Pratt, thanks for coming on this blog to share your ideas, I’ve only scanned the document because of lack of time, so I have to ask the daft question are you saying that having filtered out those natural variations you have found a strong warming signal because the only warmer left is CO2?
If you are you then go on to say:
“Obviously this very interesting phenomenon has nothing to do with AGW,” about the cooling decades. Which of course begs the question from the ignorant such as myself, if it hasn’t got anything to do with AGW, what has it got to do with? And if you don’t know the answer to that question then there must be unknown unknowns lurking in the climate system. And if they are causing decades of cooling why can’t they, or other unknown unknowns, be causing decades of warming? Surely until you have the answer to these questions you simply have a graph showing correlation of warming with increases of CO2 emissions, don’t you?
Forgive me if I’ve misunderstood what you were trying to prove, but thanks for sharing your data with the skeptics, it’s surely a good way of proving it correct if the “enemy” can’t find anything wrong with it.
Must go deadline to meet.
Dr. Pratt
On decadal trending:
During the last 100 years or so the solar cycle period was on average 10.54 years, while the Hale cycle is twice as long. This means that solar coronal mass ejections (CMEs) in the even-numbered solar cycles tend to hit Earth with a leading edge that is magnetized north. Such CMEs open a breach and load the magnetosphere with plasma, starting a geomagnetic storm.
Geomagnetic storms hit the Arctic, inducing strong currents that disturb the Earth’s field and feed back into the oceanic currents, releasing some of the heat stored during the previous cycle (one with less geomagnetic input):
http://www.vukcevic.talktalk.net/Spc.htm
Oops, I clicked on the wrong Reply button, geronimo, with the result that my reply appears a short distance (which may widen with further comments) above your question, sorry about that.
A few weeks ago, links from another website alerted me to Vaughan Pratt’s earlier, stimulating contributions on the theme of the current article.
An in-page search for (rebound from) “LIA” &/or “little ice age” highlighted:
1. gareth | December 5, 2012 at 2:20 am |
http://judithcurry.com/2012/12/04/multidecadal-climate-to-within-a-millikelvin/#comment-273951
2. Vaughan Pratt | December 5, 2012 at 5:11 am |
http://judithcurry.com/2012/12/04/multidecadal-climate-to-within-a-millikelvin/#comment-273999
Gareth is correct that Vaughan Pratt’s model can be improved with attention to hard constraints. Suggestion for sensible parties: Be aware of Earth Orientation Parameters.
Reading more comments in this discussion suggests that most contributors will have to make substantial, concerted effort to deeply appreciate & understand gareth’s lucidly aware & eminently sensible suggestion about hard constraints.
A simple example of how the thought process looks:
Munk (2002) 20th century sea level enigma
http://www.pnas.org/content/99/10/6550.full.pdf
Accessible background:
Chao (2006) Earth’s temporal oblateness
http://www.earth.sinica.edu.tw/~bfchao/publication/eng/2006-Earth%E2%80%99s%20oblateness%20and%20its%20temporal%20variations.pdf
Fully agree, Paul. –v
@gareth: I trained as a physicist and have some experience of modeling two-phase fluid dynamics.
Sorry for overlooking this in my previous reply, Gareth, this is an important point. I’ll try to incorporate your response into an overall picture of these comments. I also doubt if I did justice to all your points in my earlier reply to you due to pressure of time, this will take time to sort out. In any event thanks very much for your very insightful input!
Vaughan
@Vaughan
Sorry to have dropped out of the conversation for a while. I have fingers in several pies at the moment. If I had a spare finger I would download your spreadsheet and try out a few ideas. (BTW I have found OpenOffice Calc to be quite up to handling non-toy spreadsheets, and faster than Excel for very large ones, >50,000 rows. The macros are not always compatible.)
I had also missed that you started out as a physicist. So I hope you will understand what I mean if I say that from my perspective what you have done is an interesting piece of maths but not (yet) physics.
If your analysis is correct then you have separated two independent effects: the AGW signal and a new mechanism whereby heat is transferred from the mantle to the crust, driving multidecadal climate.
But until there is independent evidence that the “sawtooth” components exist as physical effects, what you have done, in effect, is take the smoothed temperature data, subtract your preferred AGW model, and show that what is left can be fitted pretty well by a 9-parameter model.
You asked if “pretty good” extends to a 99.98% R2. I don’t know how to calculate that theoretically. (It looks almost too good, as if you have enough free parameters to fit whatever is left over.) Personally I would model it. Produce some random data sets (with an underlying linear trend; there clearly is some real warming going on, or was until about 2000) until you get a few that look broadly like the observed data. If you can’t fit those at 99.98% R2 then you may be on to something.
As I said, I do have some sympathy with this approach, which suggests new physical effects and is in principle testable. Far better this than running a dozen GCMs with fundamentally different assumptions about climate feedbacks, plotting them on a graph and claiming that as a measure of the uncertainty in the behaviour of the real climate.
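Gareth’s suggested null experiment is easy to sketch. Below is a minimal Python version; the 9-parameter basis (cubic trend plus three multidecadal sine/cosine pairs), the periods, the smoothing window, and the noise levels are all my own illustrative stand-ins, not Pratt’s actual SAW/AGW decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 160  # roughly the length of the annual HadCRUT3 record

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_nine_param(t, y):
    # Hypothetical 9-parameter shape: cubic trend plus three
    # sine/cosine pairs at arbitrary multidecadal periods
    # (10 least-squares coefficients = intercept + 9 shape parameters).
    cols = [np.ones_like(t), t, t ** 2, t ** 3]
    for period in (75.0, 60.0, 50.0):
        cols.append(np.sin(2 * np.pi * t / period))
        cols.append(np.cos(2 * np.pi * t / period))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

best = 0.0
for trial in range(100):
    # Random walk plus an underlying linear trend, then smoothed:
    # gareth's "looks broadly like the observed data" null series.
    raw = 0.005 * np.arange(n) + np.cumsum(rng.normal(0, 0.05, n))
    y = np.convolve(raw, np.ones(21) / 21, mode="valid")  # crude 21-yr smoother
    t = np.arange(y.size, dtype=float)
    best = max(best, r_squared(y, fit_nine_param(t, y)))
print(f"best R^2 over 100 synthetic series: {best:.5f}")
```

If smooth random series routinely fit at R² near 0.9998, the millikelvin fit is uninformative; if they top out well below that, it carries more weight.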
Thanks, Gareth. Currently I view F3 as too low a frequency and 9 as too many parameters: while not overfitting, neither are they underfitting, which is where Fourier transforms stand.
My current plan is to double F3’s frequency (which will double the number of degrees of freedom in the image of F3 assuming the same SNR) while cutting back on parameters, maybe down to 6.
I may also try to improve F3 to get a faster rise from the cutoff frequency. This will further improve the number of degrees of freedom in the image of F3.
Underfitting, understood as having fewer tunable parameters in the model space than there are dimensions in the observation space, consistent with a good fit of the model to the data, ought to be a powerful argument for any hypothesis.
@gareth: (BTW I have found OpenOffice Calc to be quite up to handling non-toy spreadsheets, and faster than Excel for very large ones, >50,000 rows. The macros are not always compatible.)
Sure, 50,000 rows of trivial formulas, I can believe that. Anyone could implement a spreadsheet app that can do that much, and much faster than Excel if you skip Excel’s very time-consuming error-checking. Excel 2000 is way faster than Excel 2010 for exactly that reason.
Unfortunately that does not describe my spreadsheet, which has charts along with sliders to control parameters in real time. OpenOffice Calc cannot handle either charts or sliders.
Just this afternoon I installed Ubuntu 12.04 (“precise”) on a machine I just bought to try out my spreadsheet again with OpenOffice CALC (actually LibreOffice CALC, which is what comes with Ubuntu and which works a darn sight better than OpenOffice CALC speedwise, at least with my spreadsheet). My spreadsheet was still complete garbage: graphs not working, sliders inoperable, useless.
Maybe OpenOffice Calc can handle ten million rows of something trivial, but it is completely incapable of handling a spreadsheet containing anything to do with real-world problems like climate. Macros are not the problem, my spreadsheet contains no macros other than those needed for sliders to work: the sliders show up but don’t work.
It would be fantastic if OpenOffice could run this spreadsheet because then I wouldn’t be stuck having to use Windows to display it. Hopefully someday those working on LibreOffice will get it to work properly.
AGW was negative before 1972?
Yes. HadCRUT3 is a dataset of “anomalies,” meaning temperatures relative to some arbitrary baseline, in this case the 1961–1990 mean temperature. Since the mean of SAW is very close to zero, the mean of AGW can be expected to track that of HadCRUT3 when averaged over sufficiently long periods.
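For concreteness, here is how an anomaly series is formed from absolute temperatures. The data below are synthetic, invented purely to illustrate the arithmetic; HadCRUT3 itself is built from station and SST records.

```python
import numpy as np

# Synthetic absolute temperatures (deg C) for 1850-2010; the slope and
# noise level are made up purely to illustrate the arithmetic.
years = np.arange(1850, 2011)
rng = np.random.default_rng(1)
temps = 14.0 + 0.005 * (years - 1850) + rng.normal(0, 0.1, years.size)

# An "anomaly" is the absolute temperature minus the mean over a fixed
# baseline period; for HadCRUT3 that baseline is 1961-1990.
baseline = (years >= 1961) & (years <= 1990)
anomalies = temps - temps[baseline].mean()

# By construction the anomalies average to ~0 over the baseline window,
# so values before the baseline can be negative without anything
# physically being "below zero".
print(abs(anomalies[baseline].mean()) < 1e-9)  # prints True
```

This is why “AGW was negative before 1972” is an artifact of the baseline choice, not a physical claim.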
I just note the continued assertion that you can get any desired precision with enough averaging of data items, ignoring that the effect can only work on random error and cannot work on systematic error.
So, for example, the conversion from Stevenson Screens to MMTS was accompanied by the observation of a ‘cooling bias’ in the MMTS, which was ‘corrected’. Except now we find that the real error was that aging Stevenson Screens get warmer. So a slow increase (“warming”) over the life of the Stevenson Screens that is an error gets ‘locked in’ via the ‘removal of the cooling bias’ of swapping to the MMTS.
That is a SYSTEMATIC error that will never be removed by averaging. So you can never have milli-Kelvin, or IMHO even good partial-Kelvin, accuracy or precision in the historical trend data. And that is only ONE such systematic error. The dramatic increase in the use of airports (which are known to be warming) as a percentage of the data introduces another systematic error term. Aviation increased dramatically from post-WWII to now. Aviation thermometers record temperatures near the runway (where the wings are, and where density altitude determines whether you fly or crash…), so they want to know how hot it is over the asphalt / tarmac / concrete.
There’s more, but you get the picture. Systematic errors in equipment splices, location microclimate, instrument selection (thermometers moved over time to less volatile locations during a PDO / AMO 60-year cycle).
That gives a nice 30-to-40-year ramp-up, which then suddenly stops, unless continuously ‘corrected’ with added ‘adjustments’… When data from long-lived rural non-airport stations are examined, they do not show this warming trend. So what you have shown in your filtered data is the signal of economic growth, aviation growth, and urbanization, along with equipment changes and ‘corrections’ that go the wrong way.
But it makes a pretty graph, and the ‘milli-Kelvins’ is a nice touch… which makes it clear the concept of False Precision is being soundly ignored.
E M Smith said:
“I just note the continued assertion that you can get any desired precision with enough averaging of data items, ignoring that the effect can only work on random error and cannot work on systematic error.”
I have a similar argument with Mosh.
We have, as one example, loads of extremely dubious SST figures, many of which are likely to bear little relationship to reality the further back in time or the more geographically remote the sampling location. Yet somehow averaging all these figures together apparently enables us to know the temperatures of the world’s oceans back to 1850 to fractions of a degree.
Yet well-authenticated observations of weather/climate are dismissed as ‘anecdotal.’
Seems to me that it’s the ‘four legs good, two legs bad’ syndrome. Anything with a figure in it is considered highly accurate ‘data’, yet anything comprising just words is barely worth looking at.
tonyb
Tony, your historical approach to climate study is appreciated. However, many of your anecdotal examples, although authentic, are not suitable evidence for suggesting that any climatic abnormality had occurred. The opinions being expressed are of general interest only, as one person’s opinion could well be negated by someone else’s.
Hi, TonyB!
Up-thread is an assertion of exactly the kind I’m talking about: that any data can be made more precise and accurate with enough averaging.
Sad, really. I learned in High School that you can’t fix really bad data that way. (Though you can remove random errors that way… IFF they are truly random in distribution…)
What is becoming particularly interesting to watch is how the “warmest ever” years keep being touted even as the world gets snowy and cold.
Snow returning to Kilimanjaro. Snow in the UK. Cold and snow fairly strong in the S. Hemisphere this last winter (and the N. Hemisphere last year… this year a work in progress as it’s only fall…). So the simple fact of ‘snow on the ground’ will be hard to dismiss as ‘just words’…
Yet the disconnect exists. It’s the abnormal fascination with numbers. (Hey, I have it too… Math Geek with a math award / scholarship and Aspie tendencies… I’m just more aware of the risks it brings…)
Oddly, I can still hear my college stats professor talking about how the standard deviation of a mean can be made arbitrarily small with larger numbers of items, but that this does NOT improve the accuracy of the data nor of that mean, due to systematic error problems. I guess they don’t push that as much as they used to. IIRC, they used the example of measuring a door height. If you measure it 1000 times, you can get a very small variation in the mean, but if your tape measure is an inch short, you will never recover that inch in the averaging…
At any rate, the snows will demonstrate that the numbers are wrong. The move to “dirty weather” and “weather extremes” shows that global “warming” isn’t any more… Signs of desperation, really. Have to find a way to call “snow” a sign of “warming” and can’t… so changing the definition of the problem…
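The professor’s door example can be simulated directly. The specific numbers below (an 80-inch door, a tape reading one inch short, 0.25-inch random reading error) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
true_height = 80.0   # inches; the door's actual height
bias = -1.0          # tape measure reads one inch short (systematic error)
noise_sd = 0.25      # random reading error, inches

for n in (10, 1000, 100_000):
    readings = true_height + bias + rng.normal(0, noise_sd, n)
    sem = noise_sd / np.sqrt(n)  # standard error of the mean shrinks as 1/sqrt(n)
    print(f"n={n:>6}: mean={readings.mean():.4f}  (std. error ~ {sem:.4f})")

# The standard error shrinks toward zero, but every mean stays near 79.0,
# one inch below the true 80.0: averaging never recovers a systematic offset.
```

Averaging beats down the random term only; the bias term passes through untouched, which is the whole point of the door example.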
E M Smith said:
“Yet the disconnect exists. It’s the abnormal fascination with numbers. (Hey, I have it too… Math Geek with a math award / scholarship and Aspie tendencies… I’m just more aware of the risks it brings…)”
I assume this unhealthy obsession with dubious numbers is a result of computers needing numerical data in order to create models. If there is anything the modern world likes, it’s playing with computers and creating umpteen versions of models that enable you to prove anything you want.
There are numerous things I fear much more than so-called man-made climate change, and at the top of that list is the consequences of the Western World’s obsession with computers.
Whether it’s a Carrington event, rogue hackers or a concerted state-led attempt to wage cyber war
http://www.dailymail.co.uk/sciencetech/article-2241525/The-Boeing-blitzing-drone-cripple-nations-electronics.html
our civilisation is far more likely to be brought down by computers than by climate change.
tonyb
Peter Davies
Thanks for your kind words. Generally I try to combine anecdotal accounts with scientific studies. If one contradicts the other it would need several independent anecdotal accounts to counter a scientific study asserting something different.
How do you feel about the ‘anecdotal’ numerical data that is commonly used?
Tonyb
tony b
Your historical perspectives on climate are always interesting to me. They open up a whole new viewpoint, often exposing the human element as it interfaces with changes in climate over the centuries, and in some cases demonstrating how climate had an impact on human history.
What Peter Davies calls “anecdotal” evidence, others refer to as “historical” records.
Crop records from China or the Middle East, records of silver mines being covered by advancing snow and ice in the Alps, records of wine being grown in northern England – these are all examples of historical evidence.
Then there is actual physical evidence: carbon-dated remains of trees (or even signs of earlier civilization) recovered under receding glaciers high above today’s tree line, Viking farms buried in the Greenland permafrost, etc. These are hard data.
Some people (like Mosh) only believe numerical data that comes with the code.
But these data were not available before there were computers.
Much of the historical data on climate precedes the computer age, so we have a dilemma.
Paleo-climate data are dicey. Often these are simply subjective interpretations of unreliable proxy data from carefully selected time periods of our planet’s geological past, in some instances made to “prove” a preconceived hypothesis. We saw how poor this was just for the past 1300 years (Mann hockey stick fiasco). Another example is the “CO2 control knob” posit of Richard B. Alley, which is based on carefully selected paleo reconstructions.
These studies have another basic flaw: the analyses usually involve an “argument from ignorance” (i.e. “we can only explain this if we assume…”), where unknown factors are simply ignored and the assumption is that all factors can be accounted for. A fatal flaw.
Historical data are not questioned when it comes to “anecdotal (or reported) evidence” of battles of WWII, for example, and it is inconceivable to me that climate scientists give higher weighting to dicey paleo-climate studies than to historical documentation.
Guess it’s what the French call “une déformation professionnelle”.
Or is it “cherry picking” methodologies that can more easily be used to achieve the desired results?
(Just my skeptical mind at work, tony – forgive me.)
Max
Tony, if there are historical thermometer readings then it’s a different story. The reason for this is that, to my mind, thermometer readings are objective whereas opinions by individuals are subjective.
As Max rightly points out, anecdotal evidence is not as good as historical records or contemporary paleo evidence subjected to rigorous scientific processes, simply because in the latter two cases, the evidence is unbiased.
Max said
“Historical data are not questioned when it comes to “anecdotal (or reported) evidence” of battles of WWII, for example, and it is inconceivable to me that climate scientists give higher weighting to dicey paleo-climate studies than to historical documentation.”
At present the number crunchers and statisticians are in charge, and historical climatologists are taking a back seat as far as influence goes. They should work more closely together, but until the number crunchers stop using the word ‘anecdotal’ in a pejorative fashion that seems unlikely.
Trouble is, if the basic data leave something to be desired, the number crunching and the statistical element are on very shaky ground.
tonyb
Peter Davies
I believe you may have misunderstood what I wrote.
“Anecdotal data” is the term that climate scientists (IPCC, etc.) assign to what tony b calls “historical evidence”: old sea charts, notes by explorers, crop records, old chronicles of mines being covered by advancing ice and snow, etc.
Paleo climate studies are not very compelling IMO – sort of like “reading tea leaves”. “Rigorous processes” may help avoid a hockey-stick fiasco, but there are still two very basic problems: a) the proxy data themselves are often dicey, especially when the time scale is large, and b) the interpretation of the data is based on an “argument from ignorance” (i.e. “we can only explain this if we assume…”), where unknown factors are simply ignored and it is falsely assumed that we have the knowledge of all factors that could possibly have been involved; if these studies are used to provide evidence for a preconceived hypothesis, I think they are next to worthless.
I would say the first category provides more compelling evidence than the second, for the reasons stated – but “consensus” climate science today sees this differently.
The best data of all, of course, are provided by actual physical evidence: carbon-dated remains of trees or signs of past civilization under receding glaciers, farm houses buried in permafrost, etc. But this sort of evidence only exists for the recent past and, even then, is rare.
This is all just my opinion, of course.
Max
This long thread about bias in measurement is reading things into my poster that are simply not there. All I claimed was a millikelvin fit to “observed global warming” as defined. I agree that it is highly implausible that this gives us an understanding of actual global warming to that precision, only of observed global warming (OGW) as I’ve defined it in the poster. Your concerns about bias in the observations are entirely legitimate, which for all any of us know could be present in OGW to some degree or other, and are outside the scope of the results presented in the poster.
They are addressed however by others, for example Richard Muller’s BEST project at Berkeley, which claimed to be unable to find significant bias of the sort you refer to. Like my work, BEST’s supporting software (in MATLAB) can be downloaded so you can verify BEST’s claims for yourself.
Incidentally although I used MATLAB myself for this work, I translated it into VBA-free Excel because MATLAB is a lot more expensive and less widely available — your administrative assistant is much more likely to have Excel than MATLAB for example. This was my first exposure to writing a nontrivial spreadsheet, which is quite a different thing from ordinary programming, especially when you don’t allow yourself the luxury of resorting to Visual Basic. VBA in Excel has the downside of obscuring the clarity of a macro-free spreadsheet as well as opening a gateway to viral infections.
manacker
And as far as your constant disparagement of all paleoclimate studies goes, it is worthless. Actually, it is worse than that. It is actively distorting.
Your endlessly repeated dismissal of an entire field is nothing more than a self-serving misrepresentation.
Remember manacker, MBH98/99 ≠ the entire field of paleoclimate. As dear, dear David Springer would say: write that down.
Max
‘Now imagine that you have never seen the device and that it is hidden in a box in a dark room. You have no knowledge of the hand that occasionally sets things in motion, and you are trying to figure out the system’s behavior on the basis of some old 78-rpm recordings of the muffled sounds made by the device. Plus, the recordings are badly scratched, so some of what was recorded is lost or garbled beyond recognition. If you can imagine this, you have some appreciation of the difficulties of paleoclimate research and of predicting the results of abrupt changes in the climate system.’ http://www.nap.edu/openbook.php?record_id=10136&page=12
But of course the millennialist cult of AGW groupthink space cadets has a different dynamic.
‘Janis has documented eight symptoms of groupthink:
– Illusion of invulnerability –Creates excessive optimism that encourages taking extreme risks.
– Collective rationalization – Members discount warnings and do not reconsider their assumptions.
– Belief in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
– Stereotyped views of out-groups – Negative views of “enemy” make effective responses to conflict seem unnecessary.
– Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.
– Self-censorship – Doubts and deviations from the perceived group consensus are not expressed.
– Illusion of unanimity – The majority view and judgments are assumed to be unanimous.
– Self-appointed ‘mindguards’ – Members protect the group and the leader from information that is problematic or contradictory to the group’s cohesiveness, view, and/or decisions.
When the above symptoms exist in a group that is trying to make a decision, there is a reasonable chance that groupthink will happen, although it is not necessarily so. Groupthink occurs when groups are highly cohesive and when they are under considerable pressure to make a quality decision. When pressures for unanimity seem overwhelming, members are less motivated to realistically appraise the alternative courses of action available to them. These group pressures lead to carelessness and irrational thinking since groups experiencing groupthink fail to consider all alternatives and seek to maintain unanimity. Decisions shaped by groupthink have low probability of achieving successful outcomes.’ http://www.psysr.org/about/pubs_resources/groupthink%20overview.htm
It results in an inability to rationally assess the strength of evidence. In particular, blah blah doesn’t have any science background and simply imagines that his narrative, superficially couched in the objective idiom of science, is the indisputable truth. So sad, too bad.
Cheers
CH
Different screen name, same deranged crap.
Only loons and hucksters resort to sock-puppetry.
manacker,
I think you misunderstand my concerns about documentary “evidence”.
Think about the problems people have with uncalibrated sensors. Now ask yourself how you calibrate a person who writes an account.
In short, apply your skeptical skills across the board. Doubt everything with the same vigor.
One person’s symbology is another’s little-needed rationale for frothing at the mouth.
‘- Belief in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
– Stereotyped views of out-groups – Negative views of “enemy” make effective responses to conflict seem unnecessary.’
Delusional space cadets responding with rage and abuse? Seems par for the psychopathology of AGW groupthink.
EM Smith fails to mention that the NH late-spring/early-summer snow cover has been in decline for some time, and there is absolutely no way any return to glacial or even LIA conditions could occur until this trend starts to go the other way:
http://climate.rutgers.edu/snowcover/chart_anom.php?ui_set=1&ui_region=nhland&ui_month=6
E.M.Smith: That is a SYSTEMATIC error that will never be removed by averaging.
What, never? No never!
What, never? Well, hardly ever… (H.M.S. Pinafore)
An accurate estimate of a systematic error permits its subsequent removal.
Now if a 160-year dataset A is accumulated that is full of errors and false precision, and another dataset B is accumulated that is free of such errors, would you expect A or B to be more amenable to analysis to within a millikelvin?
And if B then how would you explain accomplishing this for A? And would you expect the analysis for B to be even more accurate?
So Arrhenius had a couple of wild guesses at what the warming from carbon dioxide would be, after misreading Fourier and without ever having established whether such a trace gas could have such great effects of raising global temperatures several degrees C, and it’s now a “law”?
Still the denial out of ignorance… Textbook Myrrh.
That Arrhenius’s numbers from over 100 years ago have needed correction does not gainsay that the principle was correct.
But please, do feel free to furnish us with your ‘alternative’ revised measurements of greenhouse gas absorption characteristics.
Carbon dioxide can’t trap heat, backradiation/blanketing doesn’t exist, and no proof has ever been given that it can do these things.
Which is why, as I said, Arrhenius based his figures on an imaginary concept of which he had no proof – that you keep ‘adjusting’ his figures based on nothing at all is on par with his silliness.
Carbon dioxide can’t trap heat
IOW, you deny the well-established absorption spectra of CO2.
“Carbon dioxide can’t trap heat”
Any references ?
Carbon dioxide has, for all practical purposes, zilch heat capacity; it releases any it gets instantly.
Unless you want to claim, as the analogy given by a warmist in Clouds and Magic, that carbon dioxide is a thick down blanket 100% of the atmosphere, you don’t have a hope of a snowball’s chance in hell of showing how it can trap heat.
You have no sense of scale.
And anyway, carbon dioxide is fully part of the Water Cycle, which CAGW/AGWs have excised from their calculations; every time it rains it clears the air of carbon dioxide, and all pure clean rain is carbonic acid. Gosh, even Arrhenius knew that..
He also got Fourier wrong, so he based whatever thinking he was having about all this on something Fourier said didn’t exist..
Fourier didn’t maintain this. Quoting Arrhenius: “Fourier maintained that the atmosphere acts like the glass of a hot-house, because it lets through the light rays of the sun but retains the dark rays from the ground.”
What Fourier really said:
“Misattribution, Misunderstanding, and Energy Creation
Arrhenius’ first error was to assume that greenhouses and hotboxes work as a radiation trap. Fourier explained quite clearly that such structures simply prevent the replenishment of the air inside, allowing it to reach much higher temperatures than are possible in circulating air (Fourier, 1824, translated by Burgess, 1837, p. 12; Fourier, 1827, p. 586). Yet, as we have seen in the previous quotation of Arrhenius, this fundamental misunderstanding of greenhouses is attributed by Arrhenius to Fourier.”
From Timothy Casey on http://greenhouse.geologist-1011.net/
So, what was Arrhenius ‘measuring’ anyway? This was a man who was exploring acids; his ideas have been superseded by better understanding, but, nevertheless, he knew that carbonic acid was water and carbon dioxide. His paper is all about carbonic acid.
So why do we have this typical AGWScienceFiction response to this?
http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf
“Arrhenius’s paper is the first to quantify the contribution of carbon dioxide to the greenhouse effect (Sections I-IV) and to speculate about whether variations in the atmospheric concentration of carbon dioxide have contributed to long-term variations in climate (Section V). Throughout this paper, Arrhenius refers to carbon dioxide as “carbonic acid” in accordance with the convention at the time he was writing.”
BS, Arrhenius knew what carbonic acid was, knew it wasn’t carbon dioxide. His ‘measurements’ were about CARBONIC ACID, not Carbon Dioxide.
If you’re going to claim Arrhenius as “known” as if proven, then you’re deceiving yourselves. You have never, not one of you, ever shown any analysis of Arrhenius’s work, never examined him to confirm his basic principles in his claims. You, generic, show absolutely no bent for science at all.
You continue to base your arguments on Arrhenius not only without making any attempt to show he is correct, but while knowing full well he got Fourier wrong. And if you didn’t know he got Fourier wrong, then what the heck are you doing in this discussion on the effects of carbon dioxide in the atmosphere?
You don’t have any basics in your claim about “greenhouse gases warming the Earth”..
That’s why all your fisics is gobbledegook.
Following on from the outright laughable lie you made about longwave from the sun being ‘excised’ from AGW theory, further evidence of your profound ignorance of the theory you purport to challenge is your obsession with backradiation, which, in order to support some predetermined conclusion, you presumably want to pretend is a vital factor in your trademark silly fisicsfiction.
It isn’t. A (CO2-)warmed atmosphere does not directly heat the earth as such; it slows down the rate at which the earth cools.
Memphis | December 5, 2012 at 5:37 am | Following on the outright laughable lie you made about longwave from the sun being being ‘excised’ from AGW theory,
Enough of this stupidity, this is the AGWSF Greenhouse Effect energy budget, that “shortwave heats the Earth and no longwave infrared from the Sun plays any part in heating the Earth’s land and water”, either because it doesn’t get through some unknown unexplained silly idea of an invisible barrier like the glass of a greenhouse, as per Arrhenius’s getting Fourier wrong, or, as Pekka gives, that the Sun produces very little longwave infrared.
Either reason for its not being in the Greenhouse Effect is irrelevant to the main point: that the direct heat from the Sun, which in real physics is thermal infrared, has been excised from the Greenhouse Effect’s energy budget.
Which proves none of you know anything about climate physics, you haven’t even noticed..
Utter, total, idiotic, stupidity – either way.
And I think you know this is the claim, “shortwave in, longwave out, and no direct heat from the Sun”, and you are being disingenuous, as you’ve shown elsewhere in your posts; on the other hand, that’s crediting you with logical thought. I’m not sure, but it could be you simply don’t know what you’re talking about.
@myrhh
Pekka “The Weasel” Pirila is correct about longwave infrared from the sun and he doesn’t need to weasel about it. Infrared begins at an arbitrary point just past the range of human vision and stretches all the way down to microwaves. Within those limits it is broken down into near, mid, and far ranges. Longwave infrared is another name for far infrared. The sun only emits near infrared in significant amounts. Far infrared is what the earth emits. It’s the difference between the radiative spectrum of blazing-hot 5000 C matter and barely-above-freezing 15 C matter.
By continually failing to recognize the differences and similarities between various frequency ranges along the electromagnetic spectrum you demonstrate either willful ignorance or a total failure of your grade-school physical science teacher. This is VERY basic stuff that is part of the curriculum of every child in the developed world. Yet you don’t know it. What are we who know better than you supposed to make of that?
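The distinction between the two spectra follows from Wien’s displacement law; here is a quick check, treating the Sun’s photosphere (~5778 K) and Earth’s surface (~288 K) as blackbodies (Springer’s “5000 C” is a rougher figure for the same point):

```python
# Wien's displacement law: lambda_max = b / T, with b ~ 2.898e-3 m*K.
B_WIEN = 2.898e-3  # m*K

def peak_wavelength_um(temp_k):
    """Wavelength of peak blackbody emission, in micrometres."""
    return B_WIEN / temp_k * 1e6

sun = peak_wavelength_um(5778)    # solar photosphere, ~5778 K
earth = peak_wavelength_um(288)   # Earth's surface, ~288 K (15 C)
print(f"Sun peaks near {sun:.2f} um (visible), Earth near {earth:.1f} um (far IR)")
# prints: Sun peaks near 0.50 um (visible), Earth near 10.1 um (far IR)
```

The solar peak sits in the visible, with its infrared tail almost entirely in the near IR; terrestrial emission peaks around 10 µm, squarely in the far (longwave) IR.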
By the way, Myrrh.
What’s the physical difference between a blue photon from the sun and a blue photon from a laser?
This is the umpteenth time I’ve asked you that question and you have yet to answer it.
David Springer | December 5, 2012 at 9:01 am | @myrhh
Pekka “The Weasel” Pirila is correct about longwave infrared from the sun and he doesn’t need to weasel about it. Infrared begins at an arbitrary point just past the range of human vision and stretches all the way down to microwaves. Within those limits it is broken down into near, mid, and far ranges. Longwave infrared is another name for far infrared. The sun only emits near infrared in significant amounts. Far infrared is what the earth emits. It’s the difference between the radiative spectrum of blazing-hot 5000 C matter and barely-above-freezing 15 C matter.
By continually failing to recognize the differences and similarities between various frequency ranges along the electromagnetic spectrum you demonstrate either willful ignorance or a total failure of your grade-school physical science teacher. This is VERY basic stuff that is part of the curriculum of every child in the developed world. Yet you don’t know it. What are we who know better than you supposed to make of that?
You are so brainwashed by the AGWScienceFiction memes that you have absolutely no idea what you are saying. The heat we feel direct from the Sun is longwave infrared, aka thermal infrared, aka radiant heat aka simply, heat – this is standard basic physics knowledge in traditional physics and thermodynamics. Thermal infrared is the Sun’s heat energy in transfer.
The power of heat: that’s what thermodynamics means. Heat is transferred in three ways: conduction, convection and radiation. The heat transferred to us direct from the Sun is thermal infrared. That’s why it’s called thermal, because near infrared is not thermal and because it is the heat of the Sun, not the light of the Sun. Thermal means ‘of heat’; it is the direct heat energy of the Sun, transferring the Sun’s great heat by radiation.
You have no heat from the Sun in your world, and visible light from the Sun cannot move molecules of matter into vibration, because visible light from the Sun is too small, so small, we can’t even feel it – it works on the electronic transition level, the level of electrons, it doesn’t impact matter on the molecular vibrational level, it takes the bigger more powerful heat energy from the Sun to move the molecules of matter into vibration – which is what it takes physically to heat up matter. Do you understand I am pointing out that there is a difference here?
That you have been brainwashed into believing such utter idiocy that visible light heats matter is one thing, that you haven’t any sense of scale and don’t even know how to tell hot from cold is another, that’s a disadvantage in someone interested in science. Which shows of course, you’re not interested in understanding the differences between things as your stupid reply to me confirmed, you repeat it again:
David Springer | December 5, 2012 at 9:13 am | By the way, Myrrh.
What’s the physical difference between a blue photon from the sun and a blue photon from a laser?
This is the umpteenth time I’ve asked you that question and you have yet to answer it.
This is the question you asked me in response to my challenge. My science challenge was – prove that visible light from the Sun can heat the land and water at the equator to the intensity these are heated which gives us our huge equator to poles winds and dramatic weather systems.
That is not an answer to my challenge. You are avoiding it. It doesn’t deserve any response from me, but your idiot repetition of it as if it proves you’re being clever is getting tedious, grow up. Answer my challenge, I have worded it as I have for a reason, see if you can work it out.
Anyway, as I said, you are all so brainwashed by these AGWSF sleights of hand that you actually believe that our blazing hot Star the Sun, millions of degrees C hot, doesn’t give off any heat!
You just don’t know how stupid you all look..
..to anyone with the basics of traditional physics.
I have done my best to avoid being so brutally honest in past postings, but it really is getting tiresome trying to get some of you to think.
The heat we feel from the Sun is the Sun’s heat in transfer, it is not the Sun’s light in transfer, we cannot feel near infrared or visible light.
You can’t ignore that I have given exactly the same information from a NASA page, you have to take that on board if you’re making any claim to being science minded. The NASA information CONTRADICTS YOU.
Are you even capable of understanding what that means?
It challenges your claim.
If you’re having any problem comprehending that last sentence, you’re really not up for this.
I’ll show again that NASA on a traditional page contradicts your AGWScienceFiction fake fisics claims:
Read it until you understand that it is contradicting you..
http://science.hq.nasa.gov/kids/imagers/ems/infrared.html
“Infrared light lies between the visible and microwave portions of the electromagnetic spectrum. Infrared light has a range of wavelengths, just like visible light has wavelengths that range from red light to violet. “Near infrared” light is closest in wavelength to visible light and “far infrared” is closer to the microwave region of the electromagnetic spectrum. The longer, far infrared wavelengths are about the size of a pin head and the shorter, near infrared ones are the size of cells, or are microscopic.”
Here it is again, do you understand that it is saying there is a distinct difference in size?
“Infrared light lies between the visible and microwave portions of the electromagnetic spectrum. Infrared light has a range of wavelengths, just like visible light has wavelengths that range from red light to violet. “Near infrared” light is closest in wavelength to visible light and “far infrared” is closer to the microwave region of the electromagnetic spectrum. The longer, far infrared wavelengths are about the size of a pin head and the shorter, near infrared ones are the size of cells, or are microscopic.”
Next:
“Far infrared waves are thermal. In other words, we experience this type of infrared radiation every day in the form of heat! The heat that we feel from sunlight, a fire, a radiator or a warm sidewalk is infrared. The temperature-sensitive nerve endings in our skin can detect the difference between inside body temperature and outside skin temperature”
Have you got that? Read it again and again however many times it takes until you understand what it is saying. It is saying what I am saying, that the HEAT we feel from the Sun is Thermal Infrared.
Here it is again: “Far infrared waves are thermal. In other words, we experience this type of infrared radiation every day in the form of heat! The heat that we feel from sunlight,
Next:
“Shorter, near infrared waves are not hot at all – in fact you cannot even feel them. These shorter wavelengths are the ones used by your TV’s remote control.”
Have you taken that on board? We cannot feel the much tinier wavelengths of LIGHT, near infrared is classed with light. Hence the same principle of reflection capture in near infrared cameras. Near infrared is classed as Reflective, not Thermal. These are distinct categories in real world physics because there is difference between them.
We cannot feel visible light, because it is not a thermal energy, it is not heat, if we feel heat we are feeling thermal infrared.
The heat we feel from the Sun, which we can feel heating us up, heating up the land and water around us, is thermal infrared.
This is what AGWSF’s Greenhouse Effect has excised from its energy budget.
So you have no heat from the Sun in your world – because you have substituted Visible light from the Sun, which cannot heat matter.
What does it take to get this through to you?
As I said, you cannot ignore this if you make any claims to be a scientist, or interested in science.
Either you are wrong or what this NASA page and I am saying is wrong. They can’t both be right.
Sort it out, don’t avoid it.
Enough of this stupidity, this is the AGWSF Greenhouse Effect energy budget, that “shortwave heats the Earth and no longwave infrared from the Sun plays any part in heating the Earth’s land and water”
Once again Myrrh repeats this obvious lie. Enough of this stupid dishonesty (even using quotes to give the false impression he is quoting some authoritative source!).
And, exactly as predicted, the militant ignoramus Myrrh again (10th time?) simply ducks the point that the specific wavelength that warms the earth is completely irrelevant to AGW theory.
Hoping we won’t notice this, he again quietly switches his argument – this time to this new notion that CO2 is unheatable. Which I suppose explains why it is always found to be at zero degrees absolute.
Another Myrrh ‘fact’ is that “every time it rains it clears the air of carbon dioxide”.
Needless to say, no empirical studies cited. And one does wonder why, if CO2 is anyway unheatable (see above), he bothers mentioning this. Throw in enough denials, maybe one will stick?
Memphis | December 5, 2012 at 3:49 pm |
Enough of this stupidity, this is the AGWSF Greenhouse Effect energy budget, that “shortwave heats the Earth and no longwave infrared from the Sun plays any part in heating the Earth’s land and water”
Once again Myrrh repeats this obvious lie. Enough of this stupid dishonesty (even using quotes to give the false impression he is quoting some authoritative source!).
You really are this dumb or just continuing the disingenuousness you’ve shown in other posts?
So I have to reply to every repetition in case some unsuspecting reader thinks I have been lying? You stink Memphis.
I have given authoritative sources. You are dishonest, others should be wary of discussing anything with you, because you think you’re being clever but you just show yourself to be deceitful in your posts.
Memphis | December 5, 2012 at 4:14 pm |
And, exactly as predicted, the militant ignoramus Myrrh again (10th time?) simply ducks the point that the specific wavelength that warms the earth is completely irrelevant to AGW theory.
I have already answered this, but in a nutshell for information:
There is no Greenhouse Effect.
That’s why AGWScienceFiction has taken out the direct heat from the Sun, which is the Sun’s heat in transfer, which is longwave infrared, which is thermal infrared.
See the NASA quote which gives the same traditional real world physics teaching I have been giving. This is the heat we actually, really, feel from the Sun. We cannot feel visible light, this is not a thermal energy and it cannot heat land and water because it is too small to heat matter and water is a transparent medium for it, water transmits visible light from the Sun through unchanged.
AGWScienceFiction has deliberately excised the real direct heat from the Sun, thermal infrared, so that it can pretend there is such a thing as its claimed “greenhouse gas warming”, it uses real world measurements of increases of downwelling longwave infrared but pretends the Sun doesn’t have anything to do with this, that it all comes from “greenhouse gases in the atmosphere backradiating/blanketing”.
This is a con.
There is no such thing as the Greenhouse Effect which is based on the claim that “greenhouse gases heat up the Earth 33°C from the minus 18°C it would be without them”.
Firstly to get this, AGWSF has excised the Water Cycle from the real world. Without water the Earth would be 67°C, think deserts. It is the Water Cycle which cools the Earth down to 15°C.
So, I’m not expecting you to see that, Memphis, but there might be someone reading who has the nous to follow what I’m saying.
The AGWSF has given that -18°C figure to its claim that this would be the temperature only without its “greenhouse gases”, but, in real world physics that is the temperature for the Earth without any atmosphere at all, that is, without any nitrogen or oxygen too.
This is a magician’s trick. It has fraudulently given the figure of one scenario to another entirely different one. The 33°C figure is what is fake here, it doesn’t exist. It’s an illusion. There is no direct connection between -18°C and 15°C accounted for by the “greenhouse gases water and carbon dioxide which take in radiated longwave”. Which is why no empirical science is ever produced to prove it exists.
So, firstly by taking out the Water Cycle and secondly by falsely claiming this is the figure only without its “greenhouse gases” when it is the figure for the Earth without any real gas atmosphere at all.
If you’re actually thinking about what I’ve just said, you’ll realise that the real world’s greenhouse gases are predominantly nitrogen and oxygen, they are the bulk of our atmosphere and act like a blanket delaying the escape of heat from the surface..
All of AGWSF is like this, it twists the real facts and terms in physics and changes meanings, plays with words, gives the property of one thing to another and takes laws out of context and so on, and this applies to all its basic claims. All its basic claims are fake fisics. All to promote that the AGW exists to the agenda of those who created this illusion.
For any reading this who do know the real physics involved, it can be very amusing to see what kind of world is depicted by these fake fisics claims. For example, it has substituted ideal gas for real gas, so it actually has no atmosphere at all – only empty space with hard dots of massless, volumeless, weightless molecules without attraction and not subject to gravity zipping around at great speeds under their own molecular momentum miles apart from each other..
.. so they have no sound in their world either..
It really is comic cartoon fisics of an impossible world.
So Memphis, you claim the Greenhouse Effect exists, but it doesn’t, it’s an ILLUSION created as I’ve been explaining, so it doesn’t matter what energy is heating the Earth to get it, the gods at the four corners farting hot air would do just as well..
Memphis’s question is redundant, there is no “AGW from a heated Earth”, the AGWSF Greenhouse Effect is science fraud.
Now, stop avoiding my direct science challenge.
A (CO2-)warmed atmosphere does not directly heat the earth as such, it slows down the rate at which the earth cools.
Exactly right. We must be in 2012, I don’t recall anyone saying this in 2010.
Myrrh : There is no Greenhouse Effect
We are all well aware you deny the well-established physics on the absorption spectra of greenhouse gases. We eagerly await your ‘alternative’ empirical data that contradicts it. The longer you avoid this challenge, the more certain we are that you are just making it all up.
A (CO2-)warmed atmosphere does not directly heat the earth as such, it slows down the rate at which the earth cools.
Vaughan >> Exactly right. We must be in 2012, I don’t recall anyone saying this in 2010.
Yes previously the cry was “back-radiation”.
@Memphis: Yes previously the cry was “back-radiation”.
And moreover cried so long and loudly that even climate skeptics knew that, as an explanation of the greenhouse effect, it was preferred over the analogy of CO2 as a warming blanket. On now-archived Wikipedia talk pages for the greenhouse effect article one can find would-be “owners” of that article slapping down those daring to draw the blanket analogy.
I believe the first post on this blog to protest the illogic of the back-radiation theory was my post sixteen months ago. As evidence that such an objection was a complete novelty at the time, although there were a few positive responses there were many more negative responses such as “The whole article is ridiculously unphysical,” “Dr. Curry, how much undergraduate physics would I have to *forget* in order to be invited to ‘blog for you?” “there is no rational basis to dispute [back-radiation],” “it’s a joke post, right?” “DLR is Settled Science, and nobody who says otherwise has any right to expect to be taken seriously. I find Vaughan Pratt’s approach terribly flawed, in that it systematically wastes the time of the best scientists, in servitude to folk too ignorant to inform themselves and too arrogant to admit their egregious ignorance.” and much more in that vein.
So I’m quite chuffed to see people nowadays abandoning the back-radiation explanation of the greenhouse effect in favor of what is essentially the blanket explanation: a thicker blanket is colder on the side from which it radiates to a colder environment (and warmer with a warmer environment—radiation from a 115 F desert will warm an unprotected 98 F human, which a blanket can reduce independently of the fact that your body generates heat).
Not only is it a simpler explanation but it’s easier to calculate with. How on earth do you calculate the increased warming resulting from increased back-radiation when the CO2 is both radiating and absorbing in all directions? Following the trajectory of photons as they seemingly bounce around in the atmosphere (being bosons they don’t actually bounce but are created and annihilated) is a very challenging book-keeping task to say the least!
Vaughan, an interesting parallel is seen in Pierrehumbert’s AGU talk on the history of AGW. After Arrhenius, who had the correct top-view of the energy balance there was a long period that Ray calls dark ages where the bottom-view prevailed including such names as Callendar and the top view didn’t really prevail again until Manabe in the 60’s.
Vaughan Pratt
Thanks very much for your response to my questions. How did the presentation go?
To my question of whether Hansen posits a much longer time lag than 15 years in his pipeline postulation, you answer with a question:
I was referring to the “pipeline” paper, which he co-authored in 2005, in which the authors postulate that half of the warming since 1880 was “still in the pipeline”.
http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf
The “last 15 years’ emissions” were only one-third of the total cumulated emissions, IOW “half of the warming” figures out to a longer time lag than 15 years, so I just wondered from where you got the “15 year” figure.
Thanks for answering my question on TSI vs. GCR/cloud impact in solar cycles (it’s the solar cycle, not the magnitude that counts).
To my question about how the unusually high level of 20th century solar activity had been considered in your analysis, you answered:
I was referring (for example) to Solanki et al. 2004
http://www.ncdc.noaa.gov/paleo/pubs/solanki2004/solanki2004.html
And (on a four hundred year basis) the Lean curve of solar irradiance 1611-2001
http://farm9.staticflickr.com/8202/8246230123_71547c34c5_b.jpg
You did not really answer my question on physical data supporting the MRES smoothing. I have checked the sources out there (cited by IPCC) and the data are very feeble – I had hoped you might have been able to cite a better source of information, but apparently this does not exist.
My last question related to whether or not you started with the “correct answer” and worked your way backward to arrive at it statistically, or whether or not you made an independent statistical analysis of the data and arrived at the “correct answer”. You confirmed:
So this answers my question.
Thanks again for taking the time to respond.
Max
@manacker: Thanks very much for your response to my questions. How did the presentation go?
Thanks for asking, Max. Way better than its precursor at last year’s meeting, in all respects – better results, cleaner production (my poster last year was a mess), much greater percentage of the four hours explaining the poster to interested parties, much higher quality of such including a couple of directors of relevant labs, and very positive reception of the novel items, in particular the huge importance of Hansen delay in estimating climate sensitivity from observation (as distinct from GCMs) and my seismic hypothesis for the sawtooth.
Embarrassingly I discussed the poster with Pieter Tans, one of the coauthors of the Hofmann et al result cited in the poster, for ten minutes without recognizing his name until he pointed it out. Ouch! But he didn’t complain, other than to point out that the exponential part of their law was weakening lately with the 1974 oil crisis and increasing prices for all fossil fuels, with which I fully agreed. I plan to redo the work using actual emissions data reconciled with the Keeling curve in place of the exponential. While I expect that to make very little difference to my numbers since they model only the past, I do agree with Tans that the exponential law is a pessimistic predictor of the future on account of this decline in the rate of rise of emissions—the problem there is how to estimate the likely future decline. The possible impact of methane from melting permafrost is another obstacle to a meaningful extrapolation of global warming to date.
@manacker: The “last 15 years’ emissions” were only one-third of the total cumulated emissions, IOW “half of the warming” figures out to a longer time lag than 15 years, so I just wondered from where you got the “15 year” figure.
Those are two different things. If you start up a CPU with a heatsink but no fan, after ten seconds the heatsink may have retained half of the total dissipation of the CPU but (with the appropriate circumstances) the CPU temperature at 10 seconds could nevertheless be what it would have been at 9 seconds without the heatsink. What I’m calling the “Hansen delay” is the 1 second in that situation, not the 10 seconds.
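Vaughan’s heatsink analogy can be made concrete with a first-order thermal model, dT/dt = (P − k·T)/C, where the heatsink simply raises the thermal mass C; this shifts the warming curve in time without changing where it ends up. A minimal sketch (every number here is invented, chosen only so the lag comes out near one second):

```python
import numpy as np

def temp(power, heat_cap, loss, t):
    """First-order warming from T(0) = 0: dT/dt = (power - loss*T)/heat_cap.
    Closed form: T(t) = (power/loss) * (1 - exp(-loss*t/heat_cap))."""
    return (power / loss) * (1.0 - np.exp(-loss * np.asarray(t) / heat_cap))

P, LOSS = 50.0, 1.0                    # invented: watts in, W/K out
t = np.arange(0.0, 20.0, 0.01)         # seconds
bare = temp(P, 10.0, LOSS, t)          # CPU alone
sunk_now = temp(P, 11.0, LOSS, 10.0)   # heatsink raises thermal mass; read at 10 s

# "Hansen delay": how long ago was the bare CPU as warm as the sunk one is now?
delay = 10.0 - t[np.searchsorted(bare, sunk_now)]   # comes out near 1 s here
```

The delay is about a second even though the heatsink has absorbed a substantial share of the total dissipation, which is exactly the distinction being drawn: the heat retained and the effective time lag are two different quantities.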
@manacker: I was referring (for example) to Solanki et al. 2004
http://www.ncdc.noaa.gov/paleo/pubs/solanki2004/solanki2004.html. And (on a four hundred year basis) the Lean curve of solar irradiance 1611-2001
That variation in the Sun’s output is a reasonable candidate for a portion of either SAW or AGW or both. All I’ve done is separate the multidecadal part into an oscillatory component and a rising component and modeled them accurately. As I say in the poster, “Whether SAW describes a single phenomenon or several is an excellent question.” Solar variation is a very reasonable candidate, especially when you consider that a more accurate fit to the Lean curve is obtained with a second-order curve, the last century of which would be concave downwards, opposite to AGW and therefore more likely to be associated with SAW.
To clarify my answer to your last part concerning what was in my mind, I fitted all 9 parameters jointly, taking the variance of MRES as what was to be minimized. The exponential is indeed part of the hypothesis being tested here as to whether there is any good fit of theory to data. The point of splitting up F3(DATA – (SAW + AGW)) as F3(DATA – SAW) – F3(AGW) was to allow the denominator in the formula for R2 to be visualized, namely by comparing “observed” and “predicted” global warming by eye, which shows visually that MRES regarded as the unexplained variance is a very tiny fraction of the total variance.
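The legitimacy of that split rests on F3 being a linear filter: any linear operation distributes over sums and differences. A minimal numpy sketch, with a single moving average standing in for the actual F3 and invented series standing in for DATA, SAW and AGW:

```python
import numpy as np

def lin_filter(x, w=21):
    """Centered moving average: a stand-in for any linear filter such as F3."""
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(0)
data = np.cumsum(rng.normal(size=300))       # placeholder "DATA"
saw = np.sin(np.linspace(0.0, 12.0, 300))    # placeholder "SAW"
agw = 0.5 * np.linspace(0.0, 1.0, 300) ** 2  # placeholder "AGW"

# Linearity: F3(DATA - (SAW + AGW)) == F3(DATA - SAW) - F3(AGW)
mres = lin_filter(data - (saw + agw))
split = lin_filter(data - saw) - lin_filter(agw)
assert np.allclose(mres, split)

# R2 read as: 1 - unexplained variance / total variance of the filtered data
r2 = 1.0 - mres.var() / lin_filter(data).var()
```

The two sides agree to floating-point precision, which is why the decomposition can be rearranged freely for visualization.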
Vaughan Pratt
Thanks for answering my questions.
Agree with your statement that the exponential function is overly pessimistic.
IPCC has two “scenarios + storylines” (B1 and A1T) which end up with CO2 at around 600 ppmv by 2100.
These assume a continuation of the past exponential growth rate of atmospheric CO2 of around 0.5% per year despite a dramatic decrease of the population growth rate to less than one-third of the past rate so, even if the world per capita fossil-fuel based energy use increases by 50%, these are most likely “upper limits” themselves.
On this basis, using the IPCC mean CS of 3.2C we would end up with warming of 2C above today’s temperature, all other things being equal.
This is about half of the warming which the exponential curve would predict.
Of course, one can argue about whether or not the IPCC mean CS of 3.2C is exaggerated and about “all other things being equal”, but that would be another argument.
Thanks again for your time.
Max
Dr Pratt,
This is a quote from your paper:
“With either dataset, the model forecasts a 4 C rise for 2100”
Does this mean a rise from the current temperature (2012) or from the vague “pre industrial” value. (If it is the latter, can you please say what the temperature is assumed to be at the moment, on the same scale?)
Furthermore, I don’t see any decimal point with zeroes after it, just “4C”. Does this mean that you cannot actually predict the future temperature to an accuracy of 0.001 degrees C (1 millikelvin)?
Also do you have a temperature change prediction for the next 10 years?
Mark B
Assume that Vaughan Pratt will answer your specific question regarding the 4C warming forecast to 2100.
But let’s assume for now this refers to the warming from today and do a quick “sanity check”.
We have 88 years to go, so that means an average decadal warming rate for the rest of this century of 0.45C per decade. This sounds pretty high to me (three times what it was during the late or early-20th century warming cycle). But maybe that’s what you get from an exponential curve.
But how realistic is this projected warming?
Let’s assume that other anthropogenic forcing beside CO2 (aerosols, other GHGs) will cancel each other out, as IPCC estimates was the case in the past.
Using the IPCC mean 2xCO2 climate sensitivity of 3.2C (and assuming there will be as much warming “in the pipeline” in 2100 as there is today), this means we would have to reach a CO2 level of 932 ppmv CO2 by 2100 to reach a warming of 4C (all other things being equal, of course).
This is unrealistic, since WEC 2010 estimates tell us there are just enough total optimistically inferred fossil fuels to reach around 1030 ppmv when they are all gone.
Let’s assume, on the other hand, that Dr. Pratt is referring to 4C warming since industrialization started (a pretty arbitrary figure, as you point out, but a concept that is often cited). On this basis, there has been ~0.8C warming to date, leaving 3.2C from today to year 2100.
Using the IPCC climate sensitivity of 3.2C, the CO2 level would need to double by 2100, from today’s 392 to 784 ppmv, to reach this warming (the high side IPCC “scenario and storyline” A2 is at this level, with estimated warming of 3.4C above the 1980-1999 average, or ~3.2C above today’s temperature).
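Both of Max’s cases follow from the standard logarithmic relation ΔT = S · log2(C/C0), which is the assumption implicit in quoting a sensitivity “per doubling” (a back-of-envelope sketch, not the IPCC’s actual calculation):

```python
from math import log2

def warming(c_final, c_start=392.0, sensitivity=3.2):
    """Equilibrium warming (deg C) for a CO2 rise, via dT = S * log2(C/C0)."""
    return sensitivity * log2(c_final / c_start)

print(round(warming(932.0), 2))  # ~4.0 C above today: the first case
print(round(warming(784.0), 2))  # 3.2 C for a doubling: the second case
```

932 ppmv gives very nearly 4C of warming above today, and 784 ppmv (one doubling) gives exactly the 3.2C sensitivity, confirming the arithmetic in both scenarios.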
So, on this basis, Dr. Pratt’s estimate would agree with the high side estimate of IPCC.
I’d question the realism of this “high side” estimate by IPCC, since it assumes that the exponential rate of increase in CO2 concentration will jump from the current rate of 0.5% per year to 0.74% per year, despite a projected major slowdown in human population growth rate.
But I guess that only shows that you can demonstrate anything with statistics.
Max
Furthermore, I don’t see any decimal point with zeroes after it, just “4C”, Does this mean that you cannot actually predict the future temperature to an accuracy of 0.001 degrees C (1 millikelvin)?
It depends on whether you’re predicting average temperature for one year such as 2097 or one decade such as the 2090’s or twenty years. SAW + AGW can be evaluated to ten decimal places at any given femtosecond in time. But that’s no good for a forecast because you have to add SOL and DEC from Figure 11. Double their joint standard deviation and you get a reasonable figure for the uncertainty of a prediction in any given year. For any given decade the uncertainty decreases, but I wouldn’t want to forecast to two decimal digits so far ahead.
But even one digit isn’t that reliable because of unknowns like those Max refers to.
Dr Pratt,
I am not sure what you mean by “SOL”. The only mention that you make of it is in this paragraph
“Because filter F3 rises slowly on the left of its cutoff there is the worry that some multidecadal phenomenon was overlooked by sneaking into the Hale octave. Something like this seems to have happened on the high-frequency side of the SOL band, perhaps some ENSO noise from DEC (decadal band) getting into the TSI band.”
Do you mean that SOL is a variable which has to be incorporated into the model retrospectively, once its values (over a given time period) become known?
I have to admit that I have jumped into all this theory a bit late on, and the terminology which you and the scientific community take for granted is like a foreign language to me.
Thanks again for replying to my previous posts.
SOL = HALE + TSI. HALE is obtained by filtering RES1 = HadCRUT3 – F3(HadCRUT3) with another filter F3′ at twice the frequency of F3 (so HALE = F3′(RES1)), and TSI is obtained as F3”(RES2) where RES2 = RES1 – HALE.
The windows for F3 are 21/17/13, those for F3′ are 11/9/7, and for F3” 7/5/3.
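Assuming each filter is a cascade of centered moving averages with the stated window widths (the poster’s actual kernels may differ in detail), the whole decomposition can be sketched in a few lines; a random placeholder series stands in for HadCRUT3. The point is that the three bands reassemble the series exactly, consistent with HadCRUT3 = MUL + SOL + DEC:

```python
import numpy as np

def cascade(x, windows):
    """Apply successive centered moving averages (a boxcar cascade)."""
    for w in windows:
        x = np.convolve(x, np.ones(w) / w, mode="same")
    return x

F3   = lambda x: cascade(x, (21, 17, 13))  # multidecadal low-pass
F3p  = lambda x: cascade(x, (11, 9, 7))    # F3'
F3pp = lambda x: cascade(x, (7, 5, 3))     # F3''

# Placeholder series in place of the real HadCRUT3 anomalies.
hadcrut = np.cumsum(np.random.default_rng(1).normal(size=160))

MUL  = F3(hadcrut)       # multidecadal component
RES1 = hadcrut - MUL
HALE = F3p(RES1)
RES2 = RES1 - HALE
TSI  = F3pp(RES2)
SOL  = HALE + TSI
DEC  = RES2 - TSI        # the decadal-and-faster remainder

assert np.allclose(hadcrut, MUL + SOL + DEC)  # the bands reassemble exactly
```

Because each band is defined by subtracting the previous filter’s output, the reassembly identity holds by construction regardless of which kernel is used.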
Vaughan Pratt
Above, Jim Cripwell asks a question regarding the magnitude of the solar influence, referring to the Svensmark GCR/cloud hypothesis being tested at CERN.
You explained to me that for removing the 11-year “solar cycle”, the amplitude of the cycle itself did not matter, and I can accept that, as far as the 11-year solar cycle is concerned.
But I think Jim’s question goes in a different direction, and it is the same question I asked, namely how your analysis had handled the unusually high level of 20th century solar activity.
You asked me for supporting data, so, at your request, I linked two sources for this information (Solanki and Lean).
There have been several other independent solar studies, which have concluded that around 50% of the past warming (instead of 7%, as assumed by IPCC) can be attributed to this unusually high level of solar activity, and I just wondered how you had handled this in your analysis.
Since a major portion of this solar forced warming is believed to have occurred during the early-20th century warming cycle (which is statistically indistinguishable from the late-20th century warming often cited by IPCC, which is believed to be “mostly” caused by GH forcing), it seems to me that the magnitude of the solar forcing is a component that should be included in your analysis.
[Of course, if you have assumed (as IPCC does) that the only solar forcing is that from direct solar irradiance alone, then that would answer my question.]
I hope I’ve explained this clearly enough.
Max
Thanks, Max, I have had some further thoughts on this subject. I don’t think that anyone understands just how the Svensmark effect works. I am not sure that Dr. Pratt has the necessary expertise and background to be able to justify the simple assumption that all of this effect can be accounted for in the Schwabe and Hale cycles. One wonders which solar physicist he consulted before he made this assumption. Did he contact Henrik Svensmark himself? I know my expertise is nothing like good enough to state with any clarity just where Dr. Pratt is wrong, but I am reasonably certain that he is not correct.
What Pratt did was remove the actual data from the HadCRUT-tortured temperature record to leave behind the smooth curve introduced by the model adjustments. It is well known that said adjustments artificially lower the early years in the record and raise later years.
Here is the effect of each adjustment:
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif
Note two adjustments called SHAP (station homogeneity adjustment procedure) and TOBS (time of observation bias) account for all the warming in the United States Historical Climatology Network data set. USHCN data is the gold standard in instrument records and the warming trend is not in the raw data. The warming trend only exists in the modeled data.
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif
Above is the final result of modeled adjustments to actual thermometer readings. Here is how NOAA describes this graph:
“The cumulative effect of all adjustments is approximately a one-half degree Fahrenheit warming in the annual time series over a 50-year period from the 1940’s until the last decade of the century.”
Take out the adjustment from Pratt’s figure 2 and there’s no residual remaining for CO2 to account for. CO2 warming is a sham and it’s being exposed by mother nature and our network of satellites which actually do have the capacity to measure the average temperature of the lower troposphere over the earth’s entire surface. There’s been little if any warming in that data in the past 15 years despite there being no respite in anthropogenic generation of CO2 during that time.
The jig is up. Pratt’s work is mental masturbation. It doesn’t reflect well on him.
@Jim Cripwell: I don’t think that anyone understands just how the Svensmark effect works.
First, this effect (influence on cloud formation by solar cycle modulation of cosmic rays) has been known for half a century. It was suggested in 1959 by Edward Ney and again in 1975 by Robert Dickinson. What is your basis for attributing it to Svensmark?
Second, there is nothing at all in my results to either confirm or deny the Ney effect. The Hale cycle is clearly present, but there is no way to tell from the HadCRUT3 data whether it results from the Ney effect or say from the Birkeland current which runs a 100,000 amp current through the ionosphere tracking the Hale cycle. Why not a hundred-thousand amp toaster warming the Earth periodically?
Third, whatever the cause of the Hale cycle as a component of HadCRUT3, it’s obviously been there for as long as the Sun has had a rotating magnetic field (which accounts for both the Ney effect and the Birkeland current), so why would it contribute to global warming right when humans suddenly pump an incredible amount of CO2 into the atmosphere?
@DS: What Pratt did was remove the actual data from the HadCRUT-tortured temperature record
David, nothing has been “removed” as you put it, though it may have been misplaced. HadCRUT3 = MUL + SOL + DEC, your quarrel should be with whether portions of one of these three wide frequency bands have crept into the wrong band.
Can you point to a specific component of HadCRUT3 that you feel my spectral analysis has put in the wrong band?
Dr. Pratt:
it’s obviously been there for as long as the Sun has had a rotating magnetic field (which accounts for both the Ney effect and the Birkeland current), so why would it contribute to global warming right when humans suddenly pump an incredible amount of CO2 into the atmosphere?
Dr.Pratt
Simple. Geomagnetic storms!
http://www.geomag.bgs.ac.uk/images/image022.jpg
Can you spot similarity?
Dr. P. You cause me grief, now your colleague ‘Svalgaard of Stanford’ will be after me again.
Dr. Pratt, you write “What is your basis for attributing it to Svensmark?”
The Wilson Cloud Chamber preceded the suggestion by Ney. Henrik Svensmark was the first to do an actual experiment, and collect empirical data that strongly suggests that GCRs contribute to cloud formation.
David.
If instead of USHCN you use unadjusted GHCN daily data or unadjusted FSOD data, you get the same result as using USHCN.
Further if you remove the US from CRUTEM the answer doesnt change much. it cant as the US is only 2% of the land surface.
And as you know, the rest of the world doesn’t use SHAP or TOBS.
And SHAP, as you know, adjusts cold sites that were at the 1000 ft ASL level when they are moved to lower altitudes. Now, if a site was at 1000 ft ASL and moved to 0 ft ASL (and warmed), would you suggest leaving it unadjusted?
As for TOBS: if you are making a measurement at 7 AM and you change the time of observation to midnight, do you believe (have any evidence) that this change in practice does not affect the results?
Your Nobel awaits you if you do.
Are you saying NASA lied about the effect of the corrections?
Everyone using surface station data applies TOBS and SHAP corrections to raw data at some point before the finished product, although SHAP may not be explicitly called that.
The fact remains that NOAA explicitly admitted that the adjustments produce the warming from the raw data. You must either accept that or claim that NOAA lied about the effect of the adjustments.
You seem to want to have your cake and eat it too, now both claiming that the adjustments are justified and that even without the adjustments the warming trend is still there. The latter claim makes NOAA out to be lying.
As to your claim that the U.S. is only 2% of the globe, big deal. It’s representative enough for this purpose. US land-only temperature doesn’t have a markedly different trend than global satellite data. In fact if that doesn’t hold true prior to the satellite era then you can kiss the instrumental record prior to 1950 goodbye because its coverage isn’t anywhere near global and is almost completely absent for undeveloped countries, remote regions, and over the ocean.
You can’t have your cake and eat it too.
Myrrh | December 5, 2012 at 6:46 am |
“Carbon dioxide has for all practical purposes, zilch heat capacity, it releases any it gets instantly.”
Really? So if I electrically heated a tank of CO2 to say 350F and opened the valve to release a jet of it you’d have no problem holding your hand in the jet?
David
More realistically, suppose you were to heat a tank of CO2 at 390 ppm to the earth’s average temperature of 15 degrees C and a tank of water vapour (fog?) to the same temperature and opened the valve for an hour. What would be the temperature of each tank at that time?
tonyb
1. What is the pressure in the tanks?
2. How big are the tanks?
3. What is the temperature outside the tanks?
Also, what are the tanks made of, how thick, etc, etc …
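Since the thread never pins down numbers, here is a minimal sketch comparing how much heat each gas gives up per kilogram as it cools. The constant-pressure specific heats are textbook room-temperature values, and the 15 K temperature drop is my own illustrative assumption, not a figure from the thread:

```python
# Heat released per kilogram as each gas cools by 15 K, using approximate
# constant-pressure specific heats near room temperature, in J/(g.K).
# Both the cp values and the 15 K drop are illustrative assumptions.
cp = {"CO2": 0.844, "dry air": 1.005, "water vapour": 1.996}
dT = 15.0  # kelvin

for gas, c in cp.items():
    q_kj_per_kg = c * dT  # J/(g.K) * K = J/g = kJ/kg
    print(f"{gas}: {q_kj_per_kg:.1f} kJ/kg")
```

Per unit mass, water vapour stores roughly twice what CO2 does, which is the comparison tonyb’s two-tank question turns on. Note that none of this bears on radiative absorption, which is a separate mechanism from heat capacity.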
How about if these tanks were to be dropped into your compound…
http://worldnews.nbcnews.com/_news/2012/12/05/15706380-syria-loads-chemical-weapons-into-bombs-military-awaits-assads-order?lite
you would pray it was CO2.
If the tank contained ordinary air but with CO2 removed you’d only roast your hand, but if anyone were so crazy as to add CO2 that could start a chain reaction that might lead to runaway global warming and very well could destroy every living thing on the planet.
Jim2
It’s David’s tank. :)
However I’m sure your tank would be just as good if you’d care to make the calculation using your own parameters.
tonyb
Not enough information to answer.
Typical idiocy I’ve come to expect from you in response.
No sense of scale, no sense of difference between hot and cold, no sense of context.
Myrrh
Presumably your last comment is directed at David Springer?
You posed an interesting comment about CO2 releasing its heat instantly. It would be good to have an answer based on real-world temperature conditions within David’s hypothetical tank.
tonyb
climatereason | December 5, 2012 at 2:47 pm |
Myrrh
Presumably your last comment is directed at David Springer?
Grin.. yes, sorry about that. I usually post to whom and to what I’m replying..
Re your: You posed an interesting comment about CO2 releasing its heat instantly. It would be good to have an answer based on real-world temperature conditions within David’s hypothetical tank.
Real-world physics such as heat capacity is avoided by AGWSF because it spoils its “carbon dioxide traps heat” meme; that’s why they’ve taken rain out of their Carbon Cycle, so they can pretend it accumulates, trapping more and more heat. So also the misdirection in the pretend experiments, which stop short of measuring how long flasks of carbon dioxide take to cool.
Besides the other skullduggery in them (for example, that a flask full of carbon dioxide against a flask of “air” is hardly the beginning of a rational experiment to prove anything), why not an appropriate volume of the trace carbon dioxide? And what’s in the “air”? A high volume of water with its great heat capacity, which will take longer to heat up?
A while ago, can’t recall off-hand which discussion here, I was told that carbon dioxide had a much greater heat capacity than oxygen and nitrogen.. I hadn’t heard that fake fisics meme before. On checking out the heat capacity figure I was given it turns out it was for carbon dioxide at gazillions degrees temps! They’ll use anything they can to confuse the unwary.
By the way, Myrrh,
Have you figured out yet that there’s no difference between a blue photon from the sun and a blue photon from a laser?
I’m still waiting for you to cowboy up and either describe the difference or admit there isn’t one.
David
Debating partners of Myrrh need to know his ground rules. Here is a starter pack:
– he doesn’t address points that disprove what he is saying
– his preference in such situations is a tactical switch; failing which, simple silence
So please don’t wait up expecting him to cowboy up any time soon. His overall style is self-important, arrogant ‘lectures’ on his fisics fiction, and the abovementioned switch-if-defeated tactic.
Memphis, I agree that Myrrh is selective about what he chooses to follow up. Rhetoric needs to be backed up with salient information at least, even if we cannot agree on whether they are “facts”.
Rapidly expanding a gas cools it as the thermal energy is converted to translational energy. Supersonic expansion of gases into vacuums is often used as a method to adiabatically cool the gas and reduce its internal temperature (i.e. low rotational and vibrational states). This technique allows for cleaner spectral analysis of molecules not achievable at RTP.
Though your example is not a supersonic expansion into a vacuum, I would expect the gas jet to be considerably cooler than the tank temperature.
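For scale, the ideal reversible-adiabatic case puts an upper bound on that cooling. A minimal sketch, where the 5 atm tank pressure and the heat-capacity ratio γ ≈ 1.29 for CO2 are my illustrative assumptions, not figures from the thread:

```python
# Ideal reversible adiabatic expansion: upper bound on cooling of a gas jet,
# T2 = T1 * (P2/P1)**((gamma - 1)/gamma).
t1 = (350 - 32) * 5 / 9 + 273.15   # 350 F in kelvin, ~450 K
p1, p2 = 5.0, 1.0                  # assumed tank and ambient pressures (atm)
gamma = 1.29                       # approximate heat-capacity ratio for CO2

t2 = t1 * (p2 / p1) ** ((gamma - 1) / gamma)
print(f"{t1:.0f} K -> {t2:.0f} K on expansion to 1 atm")
```

Even in this most favourable case the jet only falls from about 450 K to roughly 313 K (about 40 °C); real flow through a valve is closer to isenthalpic throttling, where the temperature drop is governed by the much smaller Joule-Thomson effect.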
You’d be wrong about the exiting gas being lower in temperature than the tanked gas. The gas inside the tank is expanding. The gas that has left the valve is done expanding and won’t drop in temperature further except as it is diluted by lower-temperature ambient air. Given I’m electrically heating the tank, I would be replacing the energy lost from expansion and the tank temperature would remain constant.
I’d guess you don’t have a shop with an air compressor to experience these effects first hand. I do.
Dear Myrrh would receive third-degree burns if the gas hadn’t diluted much with ambient air before contacting his skin. I’d take a jet of 350F CO2 over a jet of 212F steam any day of the week, though, and twice on Sundays. I have a hot-air plastic welder that delivers well-regulated air streams in the hundreds of degrees F with fairly constant regulation at whatever temperature you set it to. You can pass your hand quickly through it without getting burned. You can’t do that with steam. Steam is dangerous as the heat capacity is about 1000x that of an equal weight of dry air.
“The gas inside the tank is expanding.”
Well, assuming the tank volume is constant, the gas is not expanding but the pressure is increasing (which is not expansion). Expansion requires the volume to increase… so the gas is not expanding inside the tank as you heat it.
“The gas that has left the valve is done expanding”
No, that gas is expanding as it leaves the tank and doesn’t finish expanding until its pressure is equal to the surrounding gas it is expanding into.
I think you have to look at the Joule-Thomson inversion temperature of CO2 to determine if the gas would cool upon expansion starting at a temperature of 350 F, or 450 K. The inversion temperature of CO2 is 968 K at 1 atm, which is greater than 450 K, so CO2 should cool upon expansion due to the Joule-Thomson effect.
Here is an interesting paper related to carbon sequestering and injecting CO2 into depleted natural gas wells and the problem of freezing water during the process.
http://esd.lbl.gov/FILES/research/projects/tough/documentation/proceedings/2006-Oldenburg_CO2.pdf
Check it out
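That inversion-temperature argument can be sanity-checked with the crude van der Waals estimate T_inv ≈ 2a/(Rb). The estimate is known to run well above measured inversion curves (it gives roughly 2050 K here versus the 968 K quoted above), but a 450 K gas sits below either figure, so the sign of the conclusion, cooling, holds. A minimal sketch using textbook vdW constants for CO2:

```python
# Crude van der Waals estimate of the Joule-Thomson inversion temperature,
# T_inv ~ 2a/(R*b). Below T_inv a throttled gas cools on expansion.
R = 0.0831446          # gas constant, L.bar/(mol.K)
a, b = 3.640, 0.04267  # textbook vdW constants for CO2 (L^2.bar/mol^2, L/mol)

t_inv = 2 * a / (R * b)               # rough inversion temperature, K
t_gas = (350 - 32) * 5 / 9 + 273.15   # 350 F in kelvin, ~450 K
print(f"T_inv ~ {t_inv:.0f} K; gas at {t_gas:.0f} K cools on throttling: {t_gas < t_inv}")
```

The vdW model is only a rough guide to the real inversion curve; the point is that 450 K is comfortably inside the cooling region either way.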
Why start the analysis in 1850? What happens if you choose 2000 BCE as the starting point? That would include at least three of the 1000-yr oscillations.
The Earth has been in a cooling trend over the last 4000 years, and since the time of Jesus as well… The Earth has been in a cooling trend for 10000 years.
What 1ky oscillation?
Captain Kangaroo – aka Skippy – is ultimately a climate warrior on a blue horse called shibboleth, one who is quite immune from jibes from buffoons such as you. Symbology rather than sock puppetry – it signifies a hardening of attitude. It calls attention to the descent of this site into a battlefield in the climate war – not least attributable to your abusive and ignorant comments. Take it as read that scientific and civil discourse with climate activists is impossible, to be replaced entirely by tribal polemic.
‘Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.49 Nor is that to be seen as a defect. Of course choices between competing values are not made by relying upon scientific knowledge alone. What is wrong is to pretend that they are’ http://www.lse.ac.uk/collections/mackinderCentre/
It is the narrative that has no resemblance to the search for truth that distinguishes AGW groupthink space cadets such as yourself. The groupthink is not susceptible to rational discourse. Rationalisation follows distortion and bad faith. Your frequent resort to ad hom, distortion, rationalisation and misrepresentation – as well as the absurd and overweening confidence in your own narrative – places you firmly in the grasp of AGW groupthink.
This is what I actually quoted from the NAS about the uncertainty of paleoclimate research.
‘Now imagine that you have never seen the device and that it is hidden in a box in a dark room. You have no knowledge of the hand that occasionally sets things in motion, and you are trying to figure out the system’s behavior on the basis of some old 78-rpm recordings of the muffled sounds made by the device. Plus, the recordings are badly scratched, so some of what was recorded is lost or garbled beyond recognition. If you can imagine this, you have some appreciation of the difficulties of paleoclimate research and of predicting the results of abrupt changes in the climate system.’ http://www.nap.edu/openbook.php?record_id=10136&page=12
This is what I cited in respect of variability.
‘There is variability on all scales. Putting a name or a putative period to these things is irrelevant.
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=chylek09.gif
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=tsireconstruction.png
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Vance2012-AntarticaLawDomeicecoresaltcontent.jpg
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif‘
The references are all in the names or on the graphs themselves. But please – it would be absurd to suggest that climate wasn’t variable at all scales. Not that I would put that past an unqualified and unscientific bozo like you.
As for GIS and ‘the planet’ – the only use of GIS I have ever made relates to geographic information systems. GIS certainly doesn’t google as anything else.
There is variability on all scales. Putting a name or a putative period to these things is irrelevant.
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=chylek09.gif
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=tsireconstruction.png
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Vance2012-AntarticaLawDomeicecoresaltcontent.jpg
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif
Why are you resorting to sock-puppetry? Were you banned for being an arse?
What 1ka oscillation? Let’s have some references. Not the usual cobbled-together snippets clipped out of their proper context.
Nearly forgot – have you worked out the difference between the top of the GIS and the entire planet yet?
How was it you described yourself again? Oh yes (how could I forget):
Bit weak on paleoclimate though.
;-)
My comment is awaiting moderation? It is in the wrong place anyway. Too many links I take it.
GIS = Greenland Ice Sheet. As you would know if you were not somewhat vague about paleoclimate. It was the *region* that experienced extreme warming at the end of the YD. The *region* you persistently confuse with the entire planet. Or perhaps this is deliberate misrepresentation.
***
If there’s no 1ka oscillation (and there isn’t) why did you not point this out to Caz instead of having a pop at me? I ask because among the bizarre statements you make above, we find this:
Well, CH, you are the most unpleasant commenter I have ever encountered, which is also the key to your single redeeming feature: your comical lack of self-awareness.
As for all this nonsense about ‘symbology’ and ‘blue horses’ etc, what can I say? You deny sock puppetry in spoutingly grandiloquent terms that frankly raise the suspicion that you are losing your grip.
Oh, I remember – you have a reference that suggests that the Younger Dryas was an arctic phenomenon.
‘More than a decade ago, ice core records from Greenland revealed that the last glacial period was characterized by abrupt climate changes that recurred on millennial time scales. Since their discovery, there has been a large effort to determine whether these climate events were a global phenomenon or were just confined to the North Atlantic region and also to reveal the mechanisms that were responsible for them…
Finally, given the potential role for processes occurring in both low and high latitudes, we suggest that a global approach is necessary for understanding the problem of abrupt change. Coupled GCMs certainly offer this kind of perspective, but they have been used only in limited applications to this problem, primarily in studies of the climate response to freshwater forcing in the Atlantic. While this has been useful, there are other ways to perturb the climate (e.g., different initial conditions or forcing persistent changes in particular phenomena) that may help to reveal the global-scale coupled feedbacks that can cause the climate to change abruptly around the globe.’ MECHANISMS OF ABRUPT CLIMATE CHANGE OF THE LAST GLACIAL PERIOD
Amy C. Clement and Larry C. Peterson
Really it is just this AGW space cadet narrative of climate that I complained about. An inability to weigh evidence or to acknowledge uncertainty. The paleoclimate is especially an area of uncertainty – which is not what I said but what I quoted from the NAS.
‘Now imagine that you have never seen the device and that it is hidden in a box in a dark room. You have no knowledge of the hand that occasionally sets things in motion, and you are trying to figure out the system’s behavior on the basis of some old 78-rpm recordings of the muffled sounds made by the device. Plus, the recordings are badly scratched, so some of what was recorded is lost or garbled beyond recognition. If you can imagine this, you have some appreciation of the difficulties of paleoclimate research and of predicting the results of abrupt changes in the climate system.’ http://www.nap.edu/openbook.php?record_id=10136&page=13
I don’t know why you persist. It is quite clear that I said that there was variability at all scales. Please, if you think otherwise, just say so. It would be quite in keeping with your ill-informed AGW groupthink narrative – so not surprising.
But your ongoing frothing at the mouth has little meaning or impact for me. I am the most unpleasant commenter you have ever encountered? What a joke. My tone is reasoned and mild by comparison. All you do is spray your rants and insults quite widely and most indiscriminately at any number of denizens. Do you think it has escaped notice?
Do you really believe what you say? I suppose it is possible – but quite bizarre.
Chief Kangaroo
When are you going to admit that you have repeatedly mistaken the GIS ice core data showing abrupt temperature change at the end of the YD for an abrupt *global* temperature change of ~10C?
Or was that a deliberate misrepresentation? I’m trying to work out if you are genuinely ill-informed or genuinely dishonest.
Please clarify.
And please explain why you have changed screen name. Were you banned for being an arse?
You drop in with silly comments – frothing at the mouth, insulting and lying. Why? Who gives a rat’s arse.
Captain Kangaroo
Stop being evasive and answer the questions:
1/. When are you going to admit that you have repeatedly mistaken the GIS ice core data showing abrupt temperature change at the end of the YD for an abrupt *global* temperature change of ~10C? Or was that a deliberate misrepresentation? I’m trying to work out if you are genuinely ill-informed or genuinely dishonest. Please clarify.
2/. Please explain why you have changed screen name. Were you banned for being an arse?
Chief changes his screen name because he is Aussie insane. This is different from regular insane. It is the same affliction which caused a couple of Aussies to call the hospital where Kate was being treated while trying to impersonate the royals.
All these Aussie commenters from Chief, to Doug Cotton, to Myrrh, to Stefan the Denier, to Girma are all probably just pulling our collective leg. This is clown insanity and these guys are part of the Aussie insane clown posse.
WHT
That’s terribly unfair. Chief Kangaroo is a great man. He tells us so himself:
Now bend the knee. Show the proper respect.
BBD,
Yr: “Why are you resorting to sock-puppetry/”
You know, BBD, it is curious how the crushers on this blog get all worked up by “sock-puppetry.” I mean even the alternate “handles” used by the Chief and Latimer, which everyone knows belong to them, seem to trigger spoiled-brat-temper-tantrum, up-tight, nit-noid obsessed, totally weirdo, freak-out, over-wrought objections on your part suggestive of severe mental-health issues. Like I say, BBD, curious–especially since you and your fellow crusher hive-bozos don’t have the slightest objections to, say, the use of pseudonyms.
Just a theory, here, BBD, but let me run it by you and see what you think. So, BBD, I’m, like, thinkin’ you crushers are keepin’ some sort of a file of selected deniers’ comments and the “sock-puppet” business messes up your rigid, amateurish, DISCIPLINED, top-down-iron-fist-controlled, typical-greenshirt-inflexible-set-up, file system.
And instead of just whippin’ up some modifications to your little “surveillance” system so that it cross-links “sock-puppet” monikers, you, BBD, and your crusher hive-retards respond, as your alternative of choice, by spinning yourselves up into a series of little, fussbudget-geek, whiny-dork, prig-dude snit-fits that routinely lead to a blown, control-freak gasket or two before an astonished humanity and hope that solves the problem.
You know, BBD, I can hardly wait for you and the other crushers to become our power-and-control, whip-cracker, autocrat Philosopher-Kings-and Queens so that you can plunge us all into that misery-loves company, nit-picking, fault-finding, nag-bot hive-hell, you eco-weenie, “little-man” martinets call home.
mike
I’m really not supposed to talk about this, but since it’s just you and me in here I will confirm that you are correct. We keep files. Sock-puppetry means more bloody paperwork and we resent it.
And yes, you are on the list. Several of them in fact. When we take over, if you are not shot at once, you will be among the first shipped to the work farms, where you will learn to love the smell of tofu in the morning.
But I have said too much. Forget this. Carry on as normal while you can.
Several hectoring trivialities all at once?
Let me explain just once more – only because it is fun. It is simply the recognition that rational discourse with the millennialist cult of AGW groupthink space cadets is impossible. It is just one trivial skirmish after another.
‘Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values. Nor is that to be seen as a defect. Of course choices between competing values are not made by relying upon scientific knowledge alone. What is wrong is to pretend that they are.’
It is just a simplistic narrative that is defended with overweening moral and intellectual certitude as some grand and immutable truth . It is not about science. It is about the climate war. This site has descended into the abyss driven not least at all by the abusive and repulsive antics of blah blah and butch.
Variability is about randomness, the Webster says. It echoes what Vaughan says about sawtooth functions: ‘Sawtooth waves occur naturally as the result of a sudden perturbation away from equilibrium followed by a slow return to equilibrium. The several 100,000 year deglaciation cycles of the late Quaternary are one example; this might be another…’ Not so much randomness as such but perturbation – but we will let that slide.
My own view is that it is truly heroic to view the glacials/interglacials of the Quaternary as a statistically stationary system. These and other variabilities are best seen as chaotic shifts in a complex and dynamic system. But this does not imply that any state is possible – just those on the complex topology of the climate phase space.
So we have natural variability – but it is not about this. It is all about the millennialist groupthink memes of the hive-bozos. That they happen to be appallingly scientifically illiterate is probably to be expected. That they distort, lie and misrepresent is part of the psychopathology. So sad, too bad.
1/. When are you going to admit that you have repeatedly mistaken the GIS ice core data showing abrupt temperature change at the end of the YD for an abrupt *global* temperature change of ~10C? Or was that a deliberate misrepresentation? I’m trying to work out if you are genuinely ill-informed or genuinely dishonest. Please clarify.
Just to make it abundantly clear – the comment was on sensitivity and relates to the YD only in your own fervid imagination.
Here’s the comment.
‘Girma – I no more believe in a constant sensitivity than I believe in fairies at the end of the garden. Actually fairies are probably higher on the list of the feasible.
What we had was 10 degrees C warming in as little as a decade at times. Let’s see – that’s about a sensitivity of 296,000.’
I gave you a reference for regional and time-varying sensitivities. I no more believe in a global sensitivity than in a time-invariant sensitivity.
2/. Please explain why you have changed screen name. Were you banned for being an arse?
No it’s because you are an arse. I thought that much was abundantly clear even to a hive-bozo such as you.
BBd : 2/. Please explain why you have changed screen name. Were you banned for being an arse?
As the undisputed biggest arse that has ever been on this site – even Myrrh and Web pale in significance – and who has not been banned, what comments do you imagine might have earned a ban ?
Or was that question just more of your usual arse talk?
Chief Kangaroo
Caught out in ignorance and now resorting to *lies*. You’ve made too much of the ~10C shift at the end of the YD over the past few weeks to get away with this. It’s childish. Don’t underestimate your enemies. We businessmen understand this well.
Chief Kangaroo
You wriggle frantically on the hook but let’s keep the focus where it needs to be: on your refusal to acknowledge your errors.
I pointed you at a state-of-the-art study demonstrating that the global cooling associated with the YD was modest (Shakun & Carlson 2010). It’s misleading and mistaken to suggest that there was a 10C global climate shift at the end of the YD but you have been doing so for several weeks to my *certain knowledge*. This was simply the latest example:
One can readily see just how misleadingly mistaken you are in saying this. What you have to do now is admit your error.
Only children and nutters refuse to admit their mistakes when confronted with the evidence. So come on, out with it.
Here’s the comment.
‘Girma – I no more believe in a constant sensitivity than I believe in fairies at the end of the garden. Actually fairies are probably higher on the list of the feasible.
What we had was 10 degrees C warming in as little as a decade at times. Let’s see – that’s about a sensitivity of 296,000.’
That’s obviously hard science and you’re an idiot.
‘Large, abrupt climate changes have affected hemispheric to global regions repeatedly, as shown by numerous paleoclimate records (Broecker, 1995, 1997). Changes of up to 16°C and a factor of 2 in precipitation have occurred in some places in periods as short as decades to years (Alley and Clark, 1999; Lang et al., 1999). However, before the 1990s, the dominant view of past climate change emphasized the slow, gradual swings of the ice ages tied to features of the earth’s orbit over tens of millennia or the 100-million-year changes occurring with continental drift. But unequivocal geologic evidence pieced together over the last few decades shows that climate can change abruptly, and this has forced a reexamination of climate instability and feedback processes (NRC, 1998). Just as occasional floods punctuate the peace of river towns and occasional earthquakes shake usually quiet regions near active faults, abrupt changes punctuate the sweep of climate history.’
Not that I mentioned the YD – but:
‘The Younger Dryas is one of the most well-known examples of abrupt change. About 14,500 years ago, the Earth’s climate began to shift from a cold glacial world to a warmer interglacial state. Partway through this transition, temperatures in the Northern Hemisphere suddenly returned to near-glacial conditions (Figure 6). This near-glacial period is called the Younger Dryas, named after a flower (Dryas octopetala) that grows in cold conditions and became common in Europe during this time. The end of the Younger Dryas, about 11,500 years ago, was particularly abrupt. In Greenland, temperatures rose 10° C (18° F) in a decade (Figure 6; Cuffey and Clow, 1997).’
‘The Younger Dryas cold reversal event is one of the best known instances of an abrupt climate change on record. Theories about what caused the shift from global warming to a distinctly cooling period are varied. Debate continues amongst scientists as to whether the Younger Dryas was a regional or global occurrence. Evidence from the Northern Hemisphere strongly supports the existence of the event, but evidence from the Southern Hemisphere is less compelling. Concern about the impact of a similar abrupt climate change episode in today’s world has prompted further research to better understand the Younger Dryas. ‘
There is an ongoing debate about the YD – including asteroid impact as a cause. Yet you have one ‘state of the art’ paper. You are an absolute moron.
The latest news is that the nurse that took the phone call from the Aussie Larrikin pranksters has now committed suicide a few days after this incident.
I want to say that actions have consequences.
To all you Aussie tribal clowns that inhabit this comment area with your pranks and antics, which includes sockpuppetry, word salad, foo, FUD, and general spew, you have been pwned.
StefanTheDenier : shove it
Chief Hydro : pretentious prick
Girma : get a clue
Doug Cotton : get some help
Myrrh : buy a vowel, your schtick ain’t working
Peter Lang : whatever
Alexander Biggs : no one is interested
Tim Curtin : and you are who?
with due respect to Professor Pratt, who I realize is a native-born Australian, but somehow managed to escape this affectation.
Bit of a stretch here, webnutcolonoscope? But then frothing at the mouth and irrationality are what you do best.
My actions in pointing out your hick Minnesotan irrationality about climate randomness, peak oil, power rules for everything and the atmosphere heating the ocean has consequences? I hope so.
Chief Kangaroo
I don’t dispute that there have been large, abrupt climate shifts. Nor do Shakun & Carlson.
I don’t dispute this either. Nor do Shakun & Carlson. Their work provides insight into the extent and effects of the YD, regionally and globally.
My point has always been that the large, sometimes abrupt climate shifts over the last ~15ka were associated with deglaciation.
Where we seem to differ is over the likelihood of significant, global-scale cooling episodes during the C21st. I’m sceptical because I don’t see the mechanisms that might produce a surprise at the cool end of the scale. We are 11.5ka into the Holocene. The albedo-driven instability from a large NH ice sheet is gone. Huge fluxes of freshwater from proglacial lakes are gone. But increasing GHG forcing is ever with us.
***
Presumably we agree that the climate system is moderately sensitive to radiative perturbation. So do we agree that under a sustained and increasing forcing, the centennial trend will be up? And that as energy accumulates in the climate system, even modest cool climate shifts become ever-less *possible*?
‘GK: San Diego —– (SEAGULLS, SURF) a beautiful city with a Mexican flavor (LATIN DANCING, SS SPANISH), a city of perpetual summer most of the time. But for some San Diegans, it’s just too much. The freeways. (TRAFFIC) the helicopters going overhead day and night (HELICOPTERS)—the price of real estate (FN: HOW MUCH? A HUNDRED THOUSAND DOLLARS FOR THIS???). Why not try Minnesota? (LOON). A reasonably pleasant state on America’s other border, Minnesota offers an interesting variety of weather conditions. It’s quiet, especially at this time of year. (QUIET WIND, DISTANT OWL). In northern Minnesota, you can buy a 3-bedroom house for less than you’d pay for a garage in San Diego. The people are courteous.
Minnesota. It’s quiet. It’s cheap. It’s mannerly. And it’s interesting, in its own way (WIND) Maybe it’s the place for you after all.
(BIG CHORDS)
JEARLYN (TO “NATURAL WOMAN”):
Maybe it’s time
Maybe it’s time
You think about Minnesota……’ http://prairiehome.publicradio.org/programs/2011/02/26/scripts/minnesota.shtml
I was listening to White Top Mountaineers on the radio last night – absolutely gorgeous music. Such charm and such a rich tradition of banjo picking. Replete with self-deprecating humour. They had dinner with the family one evening. It was all chicken. Fried chicken, baked chicken, chicken casserole, chicken nuggets. They said it was a bit strange but ate dinner – and it was good.
After dinner sitting on the porch and pickin’ when a chicken came staggering around the corner of the house and fell over and lay on the ground.
‘Hey,’ Martha said, ‘what do you reckon’s wrong with that chicken?’
‘Well, I don’t rightly know, Martha. All we know is that they are dying faster than we can eat them.’
They like Australia too, and we like them, I found out last night. But the attitude is something that seems alien to the webster. He is all pompous self-aggrandising and absurd abuse. I guess the nexus of redneck USA is Minnesota and not the Appalachians.
I am all over this crazy Australian Larrikinism.
And of course the crazy Chief Hydro will adopt all sorts of sockpuppet names such as Capt. Kangaroo to try to pin his inadequacies on me.
That is called pure projection.
Given that there are 8+ commenters from Australia on this site with crackpot theories (and some others who are questionably unbalanced, such as Beth), and given that the USA has almost 14 times the population of Australia, then statistically you might imagine that there would be at least 100 American crackpots commenting here.
In fact there are only a handful of wacko American commenters, and they are really pitiful sad-sack characters such as Oliver, HAP, and Joe’s World. They can almost compete with Girma.
Hmmm …. maybe what we are seeing coming out of the Aussie commentary are Black Swans, or maybe these are the Dragon Kings that the Chief is always yapping about. Yes sir, these crackpots are Dragon Kings. They are Sky Dragon Kings!
The climate geniuses coming out of Australia are more numerous than one originally imagined. Who would have thunk it? Ahh, but remember the first Black Swan was discovered in Australia …
All hail the Australian Black Swan Sky Dragon Kings !
The redneck mouth from Minnesota. Is that a mixed metaphor? Try at least not to just say I’m not, you are. Have a bit of style about it. Study Mike, that’s the go. Redneck, hive-bozo, greenshirt creep out etc etc, but try to be original. Try not to repeat things endlessly either. Embellish, develop a patter, but be constantly creative. As it is – we have heard it all before and frankly it was tedious and stupid the first time.
You didn’t count yourself as one of the biggest space cadet wack jobs on the net. That’s gotta count for something.
And let me be very clear – the parable of the suiciding nurse above is just Minnesotan redneck insanity.
It was worth the wait – the pinnacle of Web’s intellectual achievements thus far:-
– the prank callers are from Oz
– some sceptics are from Oz
– therefore CAGW is true
Can she ever top this?
You should watch what you say. I have never used the term CAGW or catastrophic AGW on my blogs (since I started in 2004) or here. So that is a completely false premise.
I even have trepidation about looking at recent temperature time series because I know the theory and practice of noise at a fairly detailed level. And these are noisy time series, so I can laud Vaughan Pratt for applying interesting signal processing techniques to extract the signals from the noise.
Yet, by that same experience I can judge when the statistics of some process show an obvious trend. The fact that Australians are overly represented on this site (and residents or expatriates of the British Empire to a lesser degree) says that there are tribal and cultural influences at work here.
For Australians, it probably comes down to the mocking of authority and practicing mischief that a certain subculture is known for. That is the basis for the term Larrikinism.
For the Brits, it is the joy of argument and perhaps the fact that they are free from legal troubles for saying the wrong thing on a USA-based commenting site.
What you and other rethugs should read is Nate Silver’s recent book “The Signal and the Noise”. You will find that numbers and statistics have underlying meaning, and one can infer sociological meaning as well as physical meaning from the numbers as presented.
Yes, I am guilty of using a single anecdote with the nurse suicide, but that was meant to call attention to the circus of pranking mischief that the Aussie commenters have cultivated here by infiltrating their tribal influence.
Chief Kangaroo
When you get a minute.
Caz, they are not really oscillations. They are weakly damped decay responses due to the differences in ocean/atmosphere sensitivities to different forcings and feedbacks. 1470 +/- 500 years is roughly the Bond Event timing, but they are not very consistent. Some of the solar harmonics are pretty reliable though.
https://lh5.googleusercontent.com/-V3BcTzzsesU/UL-q6KsdAXI/AAAAAAAAF4w/HyaBLOtva7o/s835/ocean%2520atmosphere%2520lags.png
Bintanja and van de Wal have a paleo reconstruction that rocks as far as the 100-year interpolation goes. I made that plot by normalizing each series – dividing the surface temperature by its standard deviation (4.2) and the deep ocean temperature by its (0.82). The difference shows the lead/lag between the two.
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/bintanja2008/bintanja2008.xls
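The normalize-and-subtract comparison described above can be sketched in a few lines. This is a minimal illustration using synthetic series in place of the Bintanja reconstruction; the lag and amplitudes below are invented for the demo, not taken from the spreadsheet:

```python
import numpy as np

def lead_lag_residual(t_surf, t_do):
    """Divide each series by its own standard deviation, then subtract.

    A positive residual means the (normalized) surface series is running
    above the deep-ocean series at that time step, and vice versa.
    """
    return t_surf / t_surf.std() - t_do / t_do.std()

# Synthetic stand-in: the "deep ocean" lags the "surface" by 5 steps
# (500 years at the reconstruction's 100-year spacing) and is damped.
steps = np.arange(200)
t_surf = 4.2 * np.sin(2 * np.pi * steps / 50)          # larger swings
t_do = 0.82 * np.sin(2 * np.pi * (steps - 5) / 50)     # smaller, lagged

residual = lead_lag_residual(t_surf, t_do)
```

After division by the standard deviations both series swing with comparable magnitude, so the residual isolates the timing difference between them rather than the amplitude difference.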
So no 1ka oscillation then. Why not just tell caz that (s)he is mistaken?
Why, instead, do we get *reams of shite* from you and Captain Fruitcake Skippy? You supposed rigorous men of science are doing an extremely poor job of pointing out a simple error by a single commenter.
I wonder why that is?
BBD, “Why, instead, do we get *reams of shite* from you and Captain Fruitcake Skippy?” You perceive reams of *Shite* because you are as confused as Doug Cotton. I have been pretty consistent in referring to weakly damped decay responses, recurrent responses, internal heat imbalances and transfer rates, though I am forced to refer to some “oscillations” because some have that in their name, PDO, AMO, AO, AAO, JO, QBO, NAO. I didn’t name them.
If you look at that chart, normalizing by dividing by standard deviation and then taking the difference is not like super advanced math. Tdo, the temperature of the deep ocean reconstructed by Bintanja, leads and lags Tsurf, the temperature of the surface, because the two respond differently to forcings and feedbacks. The differences in heat capacity and uptake/discharge cause the leads and lags. Pretty simple basic stuff. If you look at the standard deviation of Tdo of 0.82 and my online handle, you may have an epiphany. The standard deviation of Tsurf might also turn on the light bulb. Which has the most thermal mass?
BBD, My well thought out comment must have gone to spam. So here is the short version: Pftttt!
BBD, I will attempt another detailed response, but this too may go to spam.
I consistently refer to weakly damped decays or recurrent response patterns. Oscillations are forced on us by the names given for some of the shorter term patterns. They are more accurately called Pseudo-cycles or quasi-oscillations.
Some of the longer term patterns are still stuck with the “oscillation” because of convention, but they are still Pseudo-cycles, quasi-oscillations, weakly damped decays or just recurrent patterns of unknown origin.
The Bintanja and van de Wal reconstruction is one of the few that has consistent time intervals, 100 years in this case, which easily allows simple comparisons. Normalizing and subtracting Tdo(norm) from Tsurf(norm) shows the lead/lag relationship between the surface and deep-ocean responses to forcings and feedbacks. Tsurf(norm) minus Tdo(norm) provides a very basic indication of the shapes of the various responses due to the internal system lags. Some of the more pronounced pseudo-cyclic patterns are given names. The ~1000-year occurrence is not an oscillation, but likely a recurrent weakly damped decay pattern due to orbital forcing variations.
BTW, Tdo standard deviation is 0.82 for the past 500ka and Tsurf is 4.2. Why do ya think that is?
You may want to make a note of that.
They are not cycles, as captdallas says. My precise comment was that there is variability at all scales. Why do you insist on frothing at the mouth about irrelevant points? Who gives a rat’s arse.
vukcevic | December 5, 2012 at 5:57 am |
During the last 100 years or so, the solar cycle period was on average 10.54 years, while the Hale cycle is twice as long. This means that solar coronal mass ejections (CMEs) in the even-numbered solar cycles tend to hit Earth with a leading edge that is magnetized north. Such CMEs open a breach and load the magnetosphere with plasma, starting a geomagnetic storm.
Geomagnetic storms hit the Arctic, induce strong currents, disturbing the Earth’s field and feed back into the oceanic currents, releasing some of the stored heat during the previous cycle (with less geomagnetic input):
Sigh. That is not how it works. There is a weak 22-yr cycle in geomagnetic activity, but it goes from solar maximum to solar maximum, not in sync with the odd-even numbering. Explanation in section 9 of http://www.leif.org/research/suipr699.pdf
The bit about oceanic currents is also completely wrong.
Hi Doc
Nice to hear from you. I hope the Japan trip was a success.
Yes, the solar 22-year cycle is pretty weak, but the Earth’s magnetic field has a strongish 22-year component too.
Coincidence is highly unlikely. So, is the Earth’s 22-year magnetic ripple induced by the solar one, or do they have a common source?
p.s.
your trip inspired this little ‘gem’
http://www.vukcevic.talktalk.net/NoaaD.htm
most of the people attending this blog can reproduce it, and wonder at mother Earth’s capacity to surprise.
All the mumbo-jumbo in this poster can be summarized as follows. If you postulate that there was some earth internal process that lifted earth temperatures from 1910 to 1940, and remove the effect of this putative process from the data, you get a continually increasing temperature curve in the last century or so (except for the last decade). We may paraphrase this by saying if it wasn’t for the fact that the temperature did not rise with a continually increasing curve, it would have risen with a continually increasing curve. Just like in tennis: If I hadn’t double faulted, I would have gotten my serve in.
Yes, there is such a process from 1910 to 1940, but the same process also lifted temperatures from 1975 to 2005, by an even greater amount, as I show here:
http://www.vukcevic.talktalk.net/EarthNV.htm
4-5 prominent scientists (two of them climate scientists) have details of my calculations. They question the mechanism but not the result of the calculations.
Dr. Pratt should have treated both sections equally; then he would have found out that it is not exponential and has nothing to do with the CO2 formula.
@DR: In place of your summary I would summarize the “mumbo-jumbo” as
(i) representing HadCRUT3 as a sum MUL + SOL + DEC of low, medium, and high frequency components, with MUL as the component of primary interest defined as F3(HadCRUT3);
(ii) fitting a 9-parameter analytic function (namely AGW+SOL) to MUL with an R2 of 99.98%.
The latter should be construed merely as a hypothesis about multidecadal climate (namely that it can be modeled in this way) that is in the running with other hypotheses.
A random time series with 162 points when filtered with F3 can be expected to require at least 14 parameters to model it with an R2 of 99.98%. That this hypothesis can model F3(HadCRUT3) with only 9 parameters makes it an above-average hypothesis and therefore in the running to compete with other hypotheses about multidecadal climate. If anyone knows of a good alternative hypothesis that isn’t overfitted, i.e. that doesn’t use 14 parameters to model MUL (or whatever you prefer to define as multidecadal climate) I’d be very interested to see it.
Ordinary Fourier analysis would be an example of overfitting. Every dataset can be Fourier analyzed, and the result is only interesting when you can say something meaningful about the resulting frequency components. For a random 162-year time series filtered with F3, expect around 7 significant sinusoids specified with 15 parameters (one to specify the fundamental, the rest specifying amplitude and phase of each sinusoid, the frequencies are all determined by the fundamental).
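One way to make the parameter-counting argument concrete is to filter white noise and count how many Fourier components are needed to reach R² = 99.98%. The filter below is only a crude stand-in for the poster's F3 (three cascaded moving averages is an assumption, not Pratt's actual definition); the point is the component count for *a* low-pass-filtered random 162-point series:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth3(x, width=21):
    """Rough stand-in for F3: three cascaded boxcar (moving-average) passes."""
    kernel = np.ones(width) / width
    for _ in range(3):
        x = np.convolve(x, kernel, mode="same")
    return x

# A random 162-"year" series, low-pass filtered and centered
# (R^2 here is treated as fraction of variance explained).
series = smooth3(rng.standard_normal(162))
series = series - series.mean()

# Rank Fourier components by power; count how many are needed
# to account for 99.98% of the total power.
power = np.abs(np.fft.rfft(series)) ** 2
cumulative = np.cumsum(np.sort(power)[::-1]) / power.sum()
n_sinusoids = int(np.searchsorted(cumulative, 0.9998) + 1)

# Each sinusoid costs roughly two parameters (amplitude and phase),
# plus one for the fundamental period.
n_parameters = 2 * n_sinusoids + 1
```

Re-running with different seeds gives a feel for the typical count; a model that beats that count with fewer parameters is, in this sense, doing better than chance.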
Dr. V,
put this way, I’m more curious about your result than on first read through.
So, scratch me from the critic column and put me in the curious column, but not the convinced column
Put me in your column, David. I’m not convinced myself, though I’m certainly very curious. My analysis is nothing more than one way to decompose HadCRUT3 — it certainly doesn’t rule out the possibility of better decompositions. A better one would be great!
If only the real world was made up of physical processes that were limited to sine waves.
It isn’t. And even if we assume that there are X “significant” sources in a signal, each X has its own waveform morphology, usually not even cyclic, which is composed of a fundamental and Y different “significant” harmonics. And of course it is entirely possible the fundamental is of zero magnitude (and the signal comprised only of harmonics). Now throw in the fact that signals that are modulating via frequency and/or amplitude (i.e. all real signals) exhibit frequency side lobes. Even signals that are cyclic, but have distinct phase changes, exhibit a large amount of interference in lower frequency bands. And guess what? Real signals don’t line up with the center of each frequency bin, making signals of close frequency near impossible to discriminate. I could go on.
The point is that doing frequency analysis on mixed real world signals with a bunch of unknowns is simply scrambled eggs. In my experience if you can’t see the signal with your eye in the timeline with this type of analysis, you aren’t likely to find it using frequency analysis. FFT’s are useful to more precisely measure certain types of cyclical signal characteristics of signals you know are already there, less useful for finding them.
Is the return from a frequency bin a fundamental, harmonic, side lobe, phase distortion? With this type of signal, probably all of the above and more.
You.cannot.unscramble.this.with.an.FFT. It’s not a very useful tool for this type of data.
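The bin-alignment complaint above is easy to demonstrate: a sinusoid that sits exactly on an FFT bin center puts essentially all its power in one bin, while one halfway between bin centers leaks power across the whole spectrum. A minimal sketch (the 256-point length and the frequencies are arbitrary choices for the demo):

```python
import numpy as np

n = 256
t = np.arange(n)

def peak_power_fraction(cycles):
    """Fraction of total spectral power landing in the single largest bin."""
    x = np.sin(2 * np.pi * cycles * t / n)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p.max() / p.sum()

on_bin = peak_power_fraction(10.0)   # exactly on a bin center
off_bin = peak_power_fraction(10.5)  # halfway between two bins: leakage
```

With this toy signal, on_bin comes out essentially 1.0 while off_bin is far below it, because the off-center tone smears its power into neighbouring bins – the side-lobe/leakage effect described above.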
Vaughan – what you have done with HadCRUT is worthless. You are obviously a computer whiz and are having fun plying your trade. But the dataset you are working with isn’t what it seems to be. It has numerous errors in it, and that alone is enough to make your output GIGO. Furthermore, it is not worth including ill-defined climate cycles in your analysis. And those seismic events at the core-mantle boundary sound an awful lot like a deus ex machina to improve your curve fit.

Furthermore, you are still talking of sensitivity being 2.8 Celsius when it is exactly zero. That follows from Ferenc Miskolczi’s work, which showed that water vapor feedback is negative, not positive as the IPCC insists. They absolutely need that positive feedback to create those preposterous warming predictions of theirs. Miskolczi showed that, according to the NOAA database of radiosonde measurements of infrared transmittance of the atmosphere, the transmittance of the atmosphere remained constant for 61 years. At the same time the CO2 concentration of the air increased by 21.6 percent. His theory had predicted that IR transmittance should remain constant; the IPCC greenhouse theory required that it should go down. It did not go down, which gives a decisive victory to Miskolczi. Hence, you may consider the greenhouse theory dead.

Let me now explain how screwed up the data are that you worked with. There are three kinds of errors in them: historical errors, deliberate anthropogenic distortions, and unintentional anthropogenic errors. The most glaring historical error is the distorted World War II climate. The early twentieth century warming started in 1910 and stopped abruptly with World War II cooling. But the temperature curves show it as a heat wave, not cooling. HadCRUT has two peaks there, the last one showing a precipitous drop of temperature in 1944. That drop belongs in 1940. No one seems to know that the Finnish Winter War of 1939/40 was fought in minus forty Celsius.
Or that General Frost and not the Red Army saved Moscow from Hitler. The Germans could see the suburbs of Moscow, but their tanks were frozen in place, their soldiers were dying of cold in their summer uniforms, and their supplies could not move. Some heat wave. Apparently they all copied this fiction from each other.

Next let’s take unintentional anthropogenic errors. These are not unique to HadCRUT but are also found in the GISTEMP and NCDC temperature curves. They are sharp spikes that extend upward from the temperature curve. They may at first seem indistinguishable from noise. They are of various lengths, some extending up by as much as 0.2 to 0.3 degrees. So why do I call them anthropogenic? Very simple – they all occur in the first two months of a year. No natural process can do that. For the satellite era I have identified such anthropogenic spikes at the beginnings of the years 1980, 1981, 1983, 1990, 1993, 1995, 1998, 1999, 2002, 2007 and 2008, plus others I am not too sure about. I suggest you verify that by observation. I do not know when this started. It is pretty obvious that they are an unintended consequence of some kind of computer processing that these data have been subjected to. They are in the exact same places in the other two datasets above. This commonality of anthro spikes in theoretically independent datasets bespeaks a common origin. What kind of data processing was done, what its purpose was, or who authorized it, is a complete mystery. They are there, and you just might be the person who can write a program to detect them so they can be eliminated. Knowing that vital climate observations have been secretly computer processed requires that the purpose of such processing be made public and explained.

Finally, the most serious error is the anthropogenic distortion of the temperature rise since the eighties. What they have done is to give the temperature curve an upslope called the “late twentieth century warming.” It does not exist.
In the eighties and nineties global mean temperature was constant, and there was nothing but a series of ENSO oscillations until the super El Nino of 1998 appeared. The step warming it brought was the only real warming of the satellite era. In four years global temperature rose by a third of a degree and then stopped. Its cause was oceanic – warm water carried across the Pacific by the super El Nino. It stayed warm but there was no further warming after that. The warmth did have an influence on such things as animal migrations, but talk of continuing warming from Hansen & Co. is just rubbish.

There is a further interesting twist to this, namely that the latest HadCRUT3 release shows cooling for the twenty-first century while the conferees in Doha are still babbling about warming. HadCRUT3 of course has inflated the record by that phony late twentieth century warming, easily by a tenth of a degree or more. And so did GISTEMP, NCDC, and NOAA. But GISTEMP and NCDC have decided to become honest, and their August release shows a revised section in the eighties and nineties with constant global temperature, as it should be. I had been harping about this ever since I published “What Warming?”, where I showed that according to satellites, global temperature of the eighties and nineties had to be constant. But HadCRUT has not followed suit on this and still shows that phony late twentieth century warming. I could say, what do you expect from people who gave us the Climategate scandal?
Your claims are quite wrong as demonstrated by the following agreements.
http://www.woodfortrees.org/plot/gistemp/plot/hadcrut3gl
http://www.woodfortrees.org/plot/crutem4vgl/plot/best/from:1850
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/plot/uah/offset:0.25
Unfortunately the satellite data only begin in 1978. The particularly striking flat portion of MRES is from 1860 to 1950, which is strong support for my point that global warming can already be observed starting in 1860, as shown in Figure 2 (Observed Global Warming, or OGW), and follows a curve that is in remarkable agreement with what the greenhouse effect hypothesis would predict.
If you have an alternative widely accepted dataset that (a) covers the period 1860-1950 and (b) comes with an alternative description that does not entail as strong a rise over that period as HadCRUT3 does, I’d be happy to evaluate your claim that global warming is not happening based on your dataset and analysis. But keep it short: if it is 10x as complicated as my analysis or more, I’m afraid I won’t have time to evaluate it myself.
Vaughan
Through an evaluation of BEST and CET I demonstrated that global warming was apparent from the start of the instrumental era in 1660.
http://wattsupwiththat.com/2012/08/14/little-ice-age-thermometers-historic-variations-in-temperatures-part-3-best-confirms-extended-period-of-warming/
Both Giss and Hadley can be seen to be merely staging posts in the already long established warming trend and not the starting post.
tonyb
Tony,
I looked at your WUWT article and ran across the following.
BEST has been broadly level in recent years, which does not reflect the reasonable historic correlation between the ‘tendency’ of the two graphs as can be seen by following the trend lines since the start dates, albeit those of BEST seem at times to be exaggerated, perhaps reflecting Britain’s temperate climate.
Here’s annualized BEST over the most recent 20% of the whole BEST dataset, namely since 1970, at WoodForTrees. Where exactly are you claiming it starts to become “broadly level?”
Vaughan
You’ve taken that slightly out of context. The full quote was as follows;
“The crossover point of BEST and CET around 1976 –when BEST starts to rise steeply- may or may not therefore reflect that one record allows something for uhi whilst the other doesn’t.
CET has been in steep decline since around 2000.
http://www.metoffice.gov.uk/hadobs/hadcet/
BEST has been broadly level in recent years, which does not reflect the reasonable historic correlation between the ‘tendency’ of the two graphs as can be seen by following the trend lines since the start dates, albeit those of BEST seem at times to be exaggerated, perhaps reflecting Britain’s temperate climate.”
My point was that there is reasonable correlation between the two datasets (often surprisingly good at times). There was a crossover point between the two around 1976, and whilst BEST has been reasonably flat in recent years – the last decade – CET has shown a decline.
A measurement since 1970 as you have done was not my meaning of ‘recent years.’
We seem to have both datasets broadly agreeing with each other until recently. Perhaps BEST will follow the way of CET in the next year or two or perhaps something else is going on that has broken the link. If the latter we can examine the allowance the MET office make for Uhi which is not reflected in the BEST data.
tonyb
@climatereason: There was a crossover point between the two around 1976, and whilst BEST has been reasonably flat in recent years – the last decade – CET has shown a decline.
Thanks for clarifying “recent”, Tony. One decade, got it.
The plot at top left of these figures confirms that BEST has indeed been “reasonably flat” during the decade 2000-2010.
Unfortunately the three graphs immediately below confirm that BEST has been “reasonably flat” during the three preceding decades as well.
So I don’t understand the point of your observation that BEST has been flat during the most recent decade when exactly the same can be said of all three preceding decades.
(This is merely a way of visualizing graphically the point articulated statistically by Santer et al., and others even earlier, who say that one cannot see global warming in a single decade. It’s like watching paint for a minute to see whether it’s drying.)
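The Santer-style point can be illustrated with synthetic data: give a century-long series a steady trend plus interannual noise, and the decade-by-decade trends scatter widely while the full-century trend recovers the truth. The trend and noise magnitudes below are illustrative assumptions, not values fitted to any real record:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)

# Synthetic record: 0.007 C/yr steady warming plus 0.15 C interannual noise.
temps = 0.007 * (years - 1900) + rng.normal(0.0, 0.15, years.size)

def ols_slope(x, y):
    """Least-squares trend, in degrees C per year."""
    return np.polyfit(x, y, 1)[0]

century_trend = ols_slope(years, temps)
decade_trends = [ols_slope(years[i:i + 10], temps[i:i + 10])
                 for i in range(0, 100, 10)]
```

The ten decadal slopes scatter widely around the true 0.007 C/yr (and can land on either side of zero) even though the underlying trend never changes; only the long window pins it down.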
Tony, I’m not so sure I would rely on WoodForTrees’ representation of BEST.
I would check that he is grabbing the latest up-to-date data.
WRT comparisons with HadCRUT, we can show definitively that the HadCRU averaging approach leads to higher uncertainty and bias when tested using synthetic data where the truth is known. Also, one can show how in some cases adding more data to HadCRUT methods leads to worse performance.
WRT CET, I’d probably have to look more closely at their methodology for constructing the series, but I’d be surprised if it could outperform an approach known to be optimal. I’ll call that an open question.
So many people just accept CET because it suits them without doing a proper examination and testing of the methodology.
Here is a nice little factoid. are you aware there is no methodology paper for GISS or hadcrut or CET ( that I know of) that demonstrates the method they use does not introduce bias?
Folks might want to practice skepticism more consistently than they do.
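The kind of synthetic-data test described above can be sketched quickly: construct a field whose true global mean is known exactly, sample it the way a station network that oversamples northern latitudes would, and compare a naive station average against the truth. Everything here is an invented toy, not BEST's or anyone's actual benchmark:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy zonal-mean temperature field on a 1-degree latitude grid.
lats = np.linspace(-89.5, 89.5, 180)
field = 30.0 * np.cos(np.radians(lats))

# Ground truth: area-weighted (cos-latitude) global mean.
weights = np.cos(np.radians(lats))
true_mean = np.average(field, weights=weights)

# A "station network" that oversamples latitudes north of 30N,
# loosely mimicking real networks' Northern Hemisphere bias.
north = np.where(lats > 30)[0]
stations = np.concatenate([rng.choice(north, 60),
                           rng.choice(np.arange(180), 40)])
naive_mean = field[stations].mean()
bias = naive_mean - true_mean
```

Because the unweighted station average over-counts the cold high latitudes, it lands several degrees below the known truth; that gap is exactly the method bias a synthetic test with known truth exposes, with no real-world data needed.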
For something to be “known to be optimal,” shouldn’t it at least have been published? Sharing pre-publication work is good and all, but if no response is given to peer review, it’s hard to understand how the results could be “known” to be right, much less optimal.
Or is this another one of those cases where someone decides they “know” something, thus it is “known” to be true?
@Vaughn
… strong support for my point that global warming can already be observed starting in 1860 … and follows a curve that is in remarkable agreement with what the greenhouse effect hypothesis should predict.
Yet even the IPCC, in their unfaltering commitment to CAGW, claim only the post-1960 warming is down to greenhouse warming.
@Memphis: the IPCC, in their unfaltering commitment to CAGW, claim only the post-1960 warming is down to greenhouse warming.
Indeed. The odds of an outsider like me changing the IPCC’s collective opinion on that in the near term probably aren’t very high.
@Vaughan
To what do you attribute the IPCC’s failure to seize on your even more juicy and clear-cut brand of alarmism ? Their mandate would clearly predispose them to run with this idea, so why haven’t they ?
Vaughan said, somewhat tongue in cheek (I assume)
“Thanks for clarifying “recent”, Tony. One decade, got it.”
If your favourite football team had won their league within the last decade you would say that was ‘recent.’ If they had won it in 1970 you would say it was a pretty long time ago! I was, as you must realise, pointing to the ‘recent’ relatively flat period over the last decade or so noted by most datasets. CET has shown a decline.
Whilst interesting, it is far too short a period to start talking about trends. My money would be on an assumption that the warming we can observe – in fits and starts – over the last 350 years or so will resume, if only because it is such a long-term trend.
I’m not sure I would agree that looking at the previous thirty years also demonstrates flatness. If so we might as well disband the IPCC as their role has become pointless.
I would however totally agree with your final point that you can’t see global warming in a short period. I have graphed the entire extended CET period in ten-year and fifty-year blocks.
It is interesting to see how often the temperature of one decade is so different to the preceding or following decade that it could be called a climatic shift (I have tried without success to get a proper definition of ‘climatic shift’).
These sudden decadal changes tend to become ameliorated over the 50-year period, with shifts restricted to 0.25C, except the fifty-year period commencing around 1660, which is by far the greatest shift in the entire CET record. If you want to see the graphs just email me.
tonyb
Brandon. You shouldn’t try to read between lines, because what you think I’m talking about is not what I am talking about. The method I am talking about has been published many times and used many times.
If you think that kriging is not BLUE, please collect your Nobel after showing that.
Since CET is about 0.01% of the area of the Earth, with a far higher level of technology during 1600-1800 than over 95% of the rest of Earth’s surface, it should not be surprising that such a tiny region so dense with technology would see warming in the 17th century. This would not be due to CO2, however, but more likely brown-cloud pollution localized to that region.
Tiny areas like Central England are not at all representative of how the temperature of the globe has evolved in recent centuries.
Furthermore, if CO2 had the postulated effect, it wouldn’t be physically plausible that AGW started before ~1960 – the quantity emitted was insignificant compared with the emissions after ~1960.
http://www.nature.com/nclimate/journal/v2/n1/images_article/nclimate1332-f1.jpg
Mosh said to me
“WRT CET, I’d probably have to look more closely at their methodology for constructing the series, but I’d be surprised if it could outperform an approach known to be optimal. I’ll call that an open question. So many people just accept CET because it suits them without doing a proper examination and testing of the methodology.
Here is a nice little factoid. are you aware there is no methodology paper for GISS or hadcrut or CET ( that I know of) that demonstrates the method they use does not introduce bias?”
Not sure of your exact definition of ‘bias’ in this context, but the original Manley paper and Parker’s follow-up were highly sceptical and discounted much information as possibly having a bias. I have linked to them before.
The original GISS/Lebedeff document is another matter. As far as I could see, Hansen used many of the global data sets that Callendar had used in his 1938 paper on CO2. These were called into question at the time by various Met organisations, one of which called Callendar ‘an amateur’ (which may have been professional jealousy, of course).
tonyb
Over the period covered by both records, CET and global temperature indices show very similar overall trends and variations.
Vaughan said
“Tiny areas like Central England are not at all representative of how the temperature of the globe has evolved in recent centuries.”
Sorry Vaughan, that isn’t correct. In my study of reconstructions by Lamb and Mann carried here;
http://judithcurry.com/2011/12/01/the-long-slow-thaw/
I researched many climate scientists (ancient and modern) who saw a very clear link between CET and global temperature. For the sake of brevity in an already long article I discarded another ten or so references demonstrating this link.
I don’t want to claim for a moment that the link is infallible, but it’s often pretty close, as can be seen in the graphs I posted earlier.
I side with Hubert Lamb in this correlation, who said (in a slightly different context) that with this old data ‘we can see the tendency but not the precision.’
tonyb
Steven Mosher:
And you shouldn’t make comments that are so vague as to be useless. But since you often do, we have to “read between lines” as best we can.
It’s true, I assumed you were referring to something someone has actually done, not a hypothetical kriging implementation that hasn’t actually been implemented. I suppose that was a mistake.
But that does nothing to redeem what you said. You said kriging is “known to be optimal.” For that to be true, there would have to be no approach that could work better than kriging. Nobody has ever done anything to show that is true. Kriging is useful. It is better than a number of other approaches. It is not known to be optimal.
Anyway, I’ll try not to read between the lines in your comment from now on. Instead, I’ll just stick to what you explicitly say, such as nonsensical things like:
If something is optimal, then by definition nothing can outperform it. Like you, I’d be surprised if something that couldn’t happen happened.
By the way, I stand by my interpretation of Steven Mosher’s comment. He said a particular approach is “known to be optimal.” For something to be known to be true, it’d have to have been examined. As far as I know, nobody has ever examined kriging and decided it is “known to be optimal” for constructing global land temperature fields.
Now then, if BEST believes it has done so, Mosher’s comment would make sense, as would mine. If BEST has not done so, then Mosher’s comment wouldn’t make sense. Naturally, a response which assumes coherency from Mosher would be wrong if Mosher was incoherent.
So unless there is some body of work I am unaware of that shows kriging is the optimal approach for this problem, not merely a useful approach, my interpretation was the only one that makes sense.
I don’t think my ability to read between lines should be called into question when I come up with the only interpretation that makes sense.
Regarding optimality of kriging, see here for a brief treatment of its properties. Note in particular Cressie’s 1993 caveats.
Sorry about the formatting…damned WP. Lucia does a much better job.
[My apologies. I hope this is easier on the eyes:]
A couple of comments on the Steven Mosher, Vaughan Pratt, Brandon Shollenberger sequence…
It would seem that a concise definition of BLUE embedded in the context of kriging and other interpolation methods might help some folks reading along. From the start of Chapter 12 on ordinary kriging of Isaaks and Srivastava (An Introduction to Applied Geostatistics):
“Estimate” here refers to point estimation. (Universal kriging is also a BLUE.)
Vaughan Pratt calls attention to some limitations (Cressie 1993), a couple of which are worth noting and expounding on here. First, “no properties are guaranteed, when the wrong variogram [or correlation function — mwg] is used. However typically still a ‘good’ interpolation is achieved.” Though error estimation was not done (more accurately, not reported or discussed) in the BEST study, kriging by its nature does a lot of error estimation. Indeed it was initially puzzling to me why those particular capabilities were not exploited by the BEST team. In a nutshell, I have concluded that perhaps this is because anisotropy is not considered in the correlation function (variogram). [Or, given the amount of data being used, the crush of schedule may have led to prioritizing activities in the first cut.] However, that is neither here nor there, because only the (point) estimates are used in this round of BEST, and as noted above the interpolation done by the kriging would probably be OK. This does seem to impose some reasonable limits on using the term ‘optimal’ in regard to the current BEST calculations, but in fairness I think the improvement in ‘global temperature(?)’ methodology that is accomplished far outweighs that nit for the time being. Only one Pinocchio is assessed.
There are a couple more points particular to BEST to make before leaving the topic of the variogram (ugh, correlation). For a number of physical reasons, e.g. geography, it is hard to believe that the use of an isotropic model (correlation/variogram) will ultimately remain viable. The range of the correlation is on the order of several hundred to a thousand kilometers. Consider the dimensions and orientation of mountain chains throughout the world: the Appalachians, Urals and Rockies are roughly on the order of a few hundred kilometers wide and thousand(s) of kilometers long; ridge-valley structures on the order of tens of kilometers also occur. These features both have orientation and occur at scales much less than the correlation range, and at scales comparable to useful grid sizes. Care is needed. Also, because the orientation of geographical features varies from region to region, it would seem that the anisotropy is location dependent, which is a good reason to ignore it on the first cut.
The second point in regard to the correlation/variogram is that the data are clustered around urban areas. While kriging handles some effects of clustering in its weighting scheme, it is dependent upon the variogram/correlation, and that can be very sensitive to clustered data. Also, I wonder what sort of sample support** issues may be hidden in the data; this would impact all of the statistical treatments, well beyond kriging. I would expect that when kriging gains more traction in the global temperature game these topics will be examined thoroughly. This is a lot of work; nobody should tell themselves otherwise.
Finally, Cressie, as do Ed and Mo, points out that there might be better nonlinear or biased methods, e.g., indicator kriging to generate a temperature pdf. But what is ‘better’? Perhaps ‘appropriate’?
A note to the unwary: correlation functions and variograms are related but are not the same. BEST uses correlation functions. (To make matters worse, the term variogram usually refers to an entity called the semi-variogram.) Don’t sweat it here. This is a comment only.
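For readers following the BLUE discussion above, here is a toy sketch of the ordinary-kriging system in Python. This is my own illustration, not BEST’s implementation: the isotropic exponential covariance, its range, and the sample values are all made-up numbers.

```python
# Minimal 1-D ordinary kriging sketch (illustrative only; BEST uses its own
# correlation-function formulation, not this toy exponential model).
import numpy as np

def ordinary_krige(xs, zs, x0, range_km=800.0, sill=1.0):
    """BLUE point estimate at x0 from samples (xs, zs) via ordinary kriging.

    Uses an isotropic exponential covariance C(h) = sill * exp(-|h|/range_km),
    the kind of single-range model questioned in the comment above.
    Returns the estimate, the kriging variance, and the weights.
    """
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    cov = lambda h: sill * np.exp(-np.abs(h) / range_km)
    # Kriging system: sample-to-sample covariances, plus the unbiasedness
    # constraint (weights sum to 1) enforced via a Lagrange multiplier.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xs[:, None] - xs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xs - x0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ np.asarray(zs, dtype=float)
    variance = sill - b[:n] @ w - mu   # kriging (error) variance
    return estimate, variance, w

est, var, w = ordinary_krige([0.0, 300.0, 900.0], [10.0, 12.0, 9.0], 450.0)
print(est, var, w.sum())  # weights sum to 1 (unbiasedness)
```

Note that, as mwgrant says, the kriging variance comes out of the solve for free, which is exactly the error-estimation capability BEST left unexploited.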
Edim
You commented that essentially all of the CO2-induced AGW came after 1960.
If we assume the warming all came from CO2 we have
C0: 1750 – 280 ppmv (IPCC, based on ice core data)
C1: 1960 – 316 ppmv (Mauna Loa)
C2: 2012 – 393 ppmv (Mauna Loa)
ln(C1/C0) = 0.1210
ln(C2/C1) = 0.2181
So (excluding any time lags) CO2 warming after 1960 was theoretically around two-thirds of the total.
This is because of the logarithmic relation: the higher the concentration, the lower the impact of an added ppmv of CO2.
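Max’s arithmetic above is easy to reproduce; a quick sketch:

```python
# Back-of-envelope check: with logarithmic CO2 forcing, what share of the
# 1750-2012 forcing arrived after 1960?
import math

C0, C1, C2 = 280.0, 316.0, 393.0   # ppmv: ~1750 (ice cores), 1960 and 2012 (Mauna Loa)
pre1960  = math.log(C1 / C0)       # 0.1210
post1960 = math.log(C2 / C1)       # 0.2181
share = post1960 / (pre1960 + post1960)
print(f"{share:.3f}")              # ~0.643, i.e. roughly two-thirds
```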
But you’re right. The annual increase in forcing (and warming) from CO2 prior to 1960 was negligible.
That is why the early 20th century warming cycle (~1910 to ~1940), which is statistically indistinguishable from the late 20th century warming cycle (~1970 to ~2000) is difficult for the climate models to explain.
Max
Max,
There is a well defined consensus theory on temperature ‘anomalies’ and attribution since ~1900, there’s no need to guess:
http://earthobservatory.nasa.gov/blogs/climateqa/files/2010/05/natural_anthropogenic_models_narrow.png
There are many graphs like that and they all look the same; the divergence between ‘human’ and ‘no human’ starts in ~1960. This would be plausible IF CO2 had an effect, and I think it doesn’t. If anything, it has a cooling effect (atmospheric IR radiation to space).
I don’t think the early 20th century warming is that difficult to explain (mostly solar?), but the early 21st century cooling will be more and more difficult to explain with the consensus science. They will try with natural variations, aerosols, OHC and ‘future warming’, but sooner or later it will all collapse. I think almost one third of all human CO2 has been emitted since ~1998, and there’s been no warming since.
Edim: If anything, it has a cooling effect (atmospheric IR radiation to space).
Well, it certainly is true that those frequencies at which CO2 radiates strongly are radiated to space, so you’re right there. However, increasing the level of CO2 decreases that cooling effect. This is because the more CO2 there is, the higher the altitude that radiation comes from. (The altitude it used to come from is now blocked above by the additional CO2.) But higher altitudes are colder, and colder objects, whether solid, liquid, or gas, radiate less strongly. So the upshot is that more CO2 weakens the cooling effect you refer to.
The same principle explains why a thick blanket or jacket keeps you warm on a cold day better than a thin one (assuming no wind, which adds the cooling effect of convection to that of radiation). The outer surface of a thick blanket is colder than that of a thin one and so radiates less heat away.
This is why CO2 can be viewed as a heat-trapping blanket: it works essentially the same way as a blanket (on a cold day without wind, anyway). The back-radiation explanation of the greenhouse effect that people used to prefer until recently is less satisfactory because it has the kinds of problems I pointed out 16 months ago in an article on this blog, which was received at the time with only slightly more enthusiasm than Galileo’s heliocentric account of planetary motions. At least no one suggested I be placed under house arrest for it!
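The “higher, colder” argument can be put in rough numbers. A sketch with assumed values (255 K effective emission temperature, a 6.5 K/km lapse rate, and a 150 m rise of the emission level; none of these figures are from the thread):

```python
# Rough illustration of the "higher, colder photosphere" argument.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_emit = 255.0            # K, assumed effective emission temperature
lapse = 6.5e-3            # K per metre, assumed tropospheric lapse rate
dh = 150.0                # m, assumed rise of the effective emission level

T_new = T_emit - lapse * dh
flux_drop = SIGMA * (T_emit**4 - T_new**4)
print(f"{flux_drop:.2f} W/m^2 less emitted to space")
# With these assumed inputs the drop is on the order of the canonical
# ~3.7 W/m^2 figure quoted for a CO2 doubling.
```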
@vaughn and the CO2 blanket
Does more CO2 really mean a thicker blanket, or rather a denser blanket, wherein the mean free path for re-radiated IR from one CO2 molecule to another is now shorter?
Vaughan, the Earth’s surface is free to cool non-radiatively (and it does so predominantly; radiative cooling is secondary on average), while the atmosphere can only cool by LW IR radiation to space. Only the so-called GHGs can do this cooling; the bulk of the atmosphere insulates the surface. More than 90% of the terrestrial cooling to space is atmospheric radiation (GHGs and clouds), less than 10% is surface radiation.
http://science-edu.larc.nasa.gov/EDDOCS/images/Erb/components2.gif
Memphis,
The density of the blanket has little influence except at the outer edge, because it has little influence on the lapse rate. That the troposphere is denser with respect to radiative heat exchange means that a little less heat is transferred by radiation, but that’s compensated by a little more convection (and latent heat transfer). That leads to changes in the troposphere, i.e. to feedbacks. The primary change is, however, at the outer edge. There, more CO2 leads to a change in the altitude of the level whose energy balance is controlled by radiation without a significant convective component.
The atmosphere acts as a blanket up to that altitude, and a rise in that altitude means that the mantle gets thicker.
The full picture is a bit more complex, as some radiation can escape to space from all altitudes of the atmosphere and from the surface. A bit unexpectedly, a useful graph can be found in a paper that’s severely wrong in its conclusions, the 2010 paper of Miskolczi. Miskolczi has calculated the distribution of the altitude of origin of the emission for a clear-sky atmosphere. His method seems to be correct for this calculation and the result looks credible, but beyond that I cannot tell whether it’s really correct.
From the linear version of the graph (on the right) we can see that a large part of the radiation comes from altitudes of 8-15 km. In this altitude range CO2 is the dominant emitter, as there’s very little H2O left at the 210-240K temperatures found there. Adding CO2 enhances the share of high altitudes while the radiation from lower ones is blocked more effectively.
I have noted before that Miskolczi should have paid more attention to this part of his calculation as that would have forced him to reverse his conclusions on the effect adding CO2 has. He was, however, fixed in looking at the surface balance which is very difficult to interpret and which he used to draw badly erroneous conclusions based on faulty logic.
The link should have been:
useful graph
Does more CO2 really mean a thicker blanket, or is it rather a denser blanket, wherein the mean free path for re-radiated IR from one CO2 molecule to another is now shorter ?
All of the above. Certainly denser, and the mean free path is shorter. But because the altitude of the “photosphere” for any given frequency rises with increasing density, the blanket is also thicker when thickness is measured from the ground to that frequency’s photosphere.
@Edim: More than 90% of the terrestrial cooling to space is atmospheric radiation (GHGs and clouds), less than 10% is surface radiation.
Quite right (maybe as low as 6%). Did I say something that would suggest otherwise?
Vaughan Pratt
Just semantics, but at 393 ppmv, let’s call that “CO2 blanket” “a bit less dispersed” (rather than “denser”).
Otherwise folks get the wrong impression that there’s a “dense” CO2 blanket up there (which there isn’t).
Max
Good point, Max, at least on a non-technical thread. However I was treating this as a technical thread where the audience can be assumed sophisticated enough to know that “acidification” refers to reducing pH as opposed to decreasing the pH below 7. Your objection to “denser” is even less appropriate here since there is no scientific threshold of that kind between “dense” and “not-dense.”
+1
Especially about the step change associated with the super El Niño in 1998, which by even a casual glance is largely the only warming in the late 20th century record, and it all happened in a period of about 4 years. That certainly isn’t CO2. Pratt’s manipulations simply mask what happened by smoothing the series so step changes become curves. This is why you do not smooth a time series and then use the smoothed data as input into a subsequent analysis.
@DS: This is why you do not smooth a time series then use the smoothed data as input into a subsequent analysis.
To be consistent you should object to the Vostok ice-core data on the ground that it smooths out anything faster than a couple of centuries. And you should object to the monthly Mauna Loa data, which smooths out the daily data. (Didn’t we already have this conversation a long while ago?)
You also need to distinguish between smoothing data and analyzing data into frequency bands. My decomposition of HadCRUT3 as MUL + SOL + DEC does the latter. This is lossless analysis because it can be inverted, namely by summing the three bands to recover HadCRUT3. Convolution with multiple wavelets (how I analyzed HadCRUT3) is a practical method of losslessly analyzing signals into frequency bands.
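To make the lossless band-splitting idea concrete, here is a toy sketch in Python. It is my own construction with arbitrary moving-average windows and a synthetic stand-in series, not the wavelet convolution used for the poster; the point is only that bands defined this way sum back to the original exactly.

```python
# Split a series into three frequency bands that sum back to the original:
# low-pass with a moving average, low-pass the remainder again, keep the rest.
import numpy as np

def moving_average(x, n):
    # centred moving average with edge padding, same length as x
    pad = np.pad(x, (n // 2, n - 1 - n // 2), mode="edge")
    return np.convolve(pad, np.ones(n) / n, mode="valid")

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 0.1, 360))   # synthetic stand-in, 360 "months"

slow = moving_average(series, 75)             # slowest ("multidecadal") band
mid  = moving_average(series - slow, 21)      # intermediate band
fast = series - slow - mid                    # everything faster (residual)

# Lossless: summing the bands recovers the series to rounding error.
print(np.allclose(slow + mid + fast, series))  # True
```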
Incidentally, the authority you cite on not smoothing a time series, William M. Briggs, is only at Cornell during summers. He says on his website, “I’d rather be teaching, so if you hear of a school (especially a faithful Catholic college) that needs somebody, let me know.” Briggs’ objection to analysis by filtering makes no sense; filtering is a fundamental part of both signal processing and image processing. If one followed his advice not to input filtered data into a subsequent analysis, art critics could not analyze JPEGs.
Everything is more complicated than you think. (anon)
Compared to science involving prediction as verification, historical investigations study past events looking to the authenticity of an event, e.g. ‘how’ or ‘why’ or ‘what sort of event’ took place. Evidence is of the kind: identification of a problem situation of the period or place. Empiric data includes cross-referencing or contextual data; primary evidence may include physical and written records of the event. Written accounts may be more or less reliable depending on the likely bias of the writer, cui bono; e.g. the public statements of political leaders as players are more suspect than their private correspondence to a trusted associate.
Tony Brown and EM Smith discuss historical data, climate science and number crunching on this thread (5/12 @4.03/4.26am). In my view, considering the last 16 years of no warming, climate science has been unsuccessful in its role of prediction. Michael Mann’s tree-ring hockey stick, already criticized as a poor proxy for climate, didn’t predict this.
Tony Brown’s ‘Long Slow Thaw’, based on CET records and later supported by C Loehle’s multi-proxy climate study, cross-references voluminous empiric data on weather across regions, UK and Europe, e.g. frost fairs, crop failure data, famine reports and farmers’ accounts of seasonal shifts in planting times. These provide strong contextual confirmation supporting CET on the Medieval Warm Period, Little Ice Age etc.
They offer what Winston Churchill termed ‘taking a military map cross bearing’ on a situation, and re bias, the anecdotal evidence is not by generals but by farmers and other people who had nothing to gain by their comments on weather, such as ‘This year we have had no summer and the crops have failed.’
vukcevic | December 5, 2012 at 3:19 pm | Reply
Coincidence is highly unlikely, so is the Earth’s 22-year magnetic ripple induced by the solar cycle, or do they have a common source?
Read my explanation http://www.leif.org/research/suipr699.pdf
Your ‘findings’ are spurious and the ‘physics’ is wrong. Try to learn something.
Vaughan Pratt
Thanks for your patience and good humor.
I think that this is a fantastic exercise. Playing with your spreadsheet may become a new addiction for some of us.
Thanks, Robert. That was certainly my hope.
Everyone starts at Level 1. Unlocking the seven locked sliders gets you to Level 2. ;) From there on I know I’m dealing with serious commenters.
Level 3 is ocean and atmospheric dynamics, and level 4 unlocks the secrets of the universe. God is about to be revealed.
For you maybe, CK, but for me level 3 is mucking with the formulas in the cells while level 4 is reorganizing the columns.
Disappointing. Perhaps we will wait for level 5 to refine the ‘explanation’.
I was thinking more along the lines of challenging the explanation than refining it. It evolved to where it is by challenging it myself, but I seem to have stalled there so it’s time for others to challenge it by coming up with a better hypothesis of how HadCRUT3 should be analyzed into components. HadCRUT3 = MUL + SOL + DEC, but obviously that’s not the only possible analysis. Is there another good one that indicates different conclusions?
Vaughan, I am an open-source user, so the spreadsheet doesn’t work for me, but since HadCRUT4 has NH, SH and Tropics, wouldn’t it be interesting to compare the three?
Have you tried –
HadCRUT3 = MUL + SOL + DEC + TOARI
I understand that they have found the missing energy.
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=CERES_MODIS-1.gif
Vaughan Pratt
I’d second Robert’s “thanks” for posting this.
It is a fascinating statistical study that gets folks to thinking, even though many of us might not agree with the conclusion reached.
Max
Doha is Kyoto’s last gasp as the UN is ditched for the MEF.
‘Therefore, to completely escape from the discordant voices of a plurality and the ‘shackles’ of CBDR, the US may move away from the UNFCCC framework and seek an alternative policy platform. MEF seems to be a good candidate: it was proposed by the US, covers some 85 percent of the global emissions, and most importantly, makes no differentiation between its mere 17 members who would otherwise span both Annex I and Non-Annex I divisions. Through shifting its focus to work under the MEF, it is possible for the US to reach a consensus with the other 16 members instead of nearly 200 under the UNFCCC. And at the same time, the US can shift some of the burden which would otherwise have been borne by the Annex I countries to emerging economies such as China, India and Brazil. There are, therefore, logical reasons for the US to advocate the shift from UNFCCC to MEF.
However the consequence of the US abandoning the UNFCCC will be more far-reaching. Given its immense political and economic influence, if the US does take ‘leadership’ to move the talks from UNFCCC to MEF, many other parties with similar positions and interests (such as the Umbrella Group) might also follow. This may potentially trigger a mass exodus of the non-EU developed countries, which will seriously impair the integrity of the UNFCCC framework, and may even eventually destroy its efficacy all together.
And instead of a UNFCCC convention which will capture the view of all parties (albeit to different extent), MEF will at best produce a unilateral, lukewarm treaty that will preclude the interests of the nations which are desperate for urgent global-wide mitigation and adaptation efforts. For the US, this might be a victory: the commitment is now palatable at the domestic level, and developing countries like China and India are also on the mitigation boat together. But for the world, as we lose the UNFCCC, which is “the one and only place where formal negotiations and, above all, decisions take place and where treaties are negotiated” [1], the impact will be serious and irreversible.
Therefore, it is important for the US to stay committed to efforts under the UNFCCC framework, which will of course entail the US to not only have a greater domestic policy action, but also a long-term vision that sees “the pursuit of equity [not] as an obstacle, [but] as an opportunity to ensure all countries take on greater efforts.”
However, despite these speculations, exactly how the US’s position will play out in Doha remains uncertain. I will be following the US delegation and their negotiations closely, and report back on any first hand updates on the progress (which I sincerely hope, are progressive).’
The MEF set up by George Bush and John Howard has the benefit of being flexible and inclusive – as well as in ditching the opportunists and hive-bozos. So sad too bad.
A monomaniacal focus on humanity’s contribution to atmospheric CO2 has been Western academics’ Pyrrhic victory over reason. The mania reached a zenith with the obsessive compulsive melt down of Al Gore following his defeat by George Bush. Historians understand that for the 1000s of years before that it has been the Sun that captured the interest of scholars. Outside Western CO2-phobia the Earth’s climate is seen as the result of a holistic process; we don’t really understand it so we call it nature and we know that nominally it’s the Sun that is the cause of it all.
@Vaughan
http://judithcurry.com/2012/12/04/multidecadal-climate-to-within-a-millikelvin/#comment-
We know quantitatively, albeit roughly, from the work of Tyndall in the 1850s the extent to which CO2 itself … blocks the passage of thermal radiation; these days we infer this much more precisely from the HITRAN tables of spectral absorption/emission lines.
@Arno Arrak
http://judithcurry.com/2012/12/04/multidecadal-climate-to-within-a-millikelvin/#comment-
The theory of Ferenc Miskolczi says that the IR transmittance of atmosphere should not change when more CO2 is added to it … Using NOAA database of weather balloon observations Miskolczi was able to demonstrate that the IR transmittance of the atmosphere did not change
for 61 years while carbon dioxide percentage increased by 21.6 percent.
So who to believe ?
Actually Tyndall’s gear wasn’t sensitive enough to measure CO2 absorption of “calorific rays,” as thermal IR was called back then. CO2’s absorption band is narrow, and his best, most stable source of calorific rays was at the boiling point of water, which doesn’t generate a lot of energy in CO2’s absorption bands as it’s too hot. He worked with other gases that exhibited much stronger absorption, and most particularly for the greenhouse effect it was water vapor that held the greatest interest and was easy to measure. Tyndall had some particularly ingenious methods for drying his gases to avoid water vapor contaminating the results.
Be that as it may, it doesn’t speak to Miskolczi’s work, which you’d know if you knew anything at all about it. Miskolczi’s “saturated greenhouse” posits that as atmospheric CO2 increases, an equal but opposite decrease occurs in atmospheric water vapor, so that the net effect is no change in greenhouse efficacy. This in no way denies the IR absorptive properties of greenhouse gases, so any mention of Tyndall is irrelevant and only serves to give notice that the mentioner doesn’t know his ass from his elbow about Miskolczi’s hypothesis.
Using NOAA database of weather balloon observations Miskolczi was able to demonstrate that the IR transmittance of the atmosphere did not change for 61 years while carbon dioxide percentage increased by 21.6 percent.
How exactly was IR transmittance measured?
And if this is indeed possible, can they not just measure the TOA radiation imbalance, and promptly settle whether or not it moves with CO2 levels or not?
The accuracy of the balloon sounding data Miskolczi uses is questioned by the usual suspects as to whether it has the precision and accuracy required for the task. Funny, ain’t it, how amongst the warmists contrary data is rejected but data from the same source that is not contrary is accepted without question.
Radiosondes (I’ve launched many of them and was a technician responsible for calibration and repair of the equipment used in the early 1970s) return a constant stream of temperature, pressure, and humidity readings. Miskolczi found in the balloon sounding record that as atmospheric CO2 rose, absolute humidity declined in direct proportion, such that the extra greenhouse effect from CO2 was exactly cancelled by less greenhouse effect from water vapor. If the sounding data is not somehow discredited he’s got an airtight (pun intended) case.
More info:
Ferenc Miskolczi, “The stable stationary value of the Earth’s global average atmospheric Planck-weighted greenhouse-gas optical thickness,” Energy & Environment, Vol. 21, No. 4, 2010.
http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf
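As a side note for readers unfamiliar with optical depth: the flux transmittance implied by Miskolczi’s quoted τ = 1.87 follows from Beer-Lambert, t = exp(-τ). A quick sketch (indicative only, since his τ is a Planck-weighted global average rather than a single-beam value):

```python
# Convert Miskolczi's quoted optical depth to a flux transmittance.
import math

tau = 1.87
transmittance = math.exp(-tau)
print(f"{transmittance:.3f}")  # ~0.154: roughly 15% of surface IR escapes directly
```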
Spencer explains in this blog post and ensuing discussion why Miskolczi’s claims are totally unjustified.
The post is lengthy but the essential points come perhaps best up in the discussion where Spencer replies correctly to comments that try to defend Miskolczi.
Stated simply: Miskolczi looks at differences that are known to be small and finds them small. Then he concludes, without any real justification, that they are exactly zero, and this totally unjustified assumption leads to his conclusions. Observing that the differences are small is fully in agreement with the mainstream theory and therefore cannot contradict it in the least.
The whole Miskolczi paper is totally worthless and based on a combination of misunderstanding empirical data and making false assumptions.
No, that’s not what Spencer wrote. He wrote that he did that review reluctantly because he didn’t understand some of Miskolczi’s claims. He wrote that he doesn’t disagree with the conclusion supported by radiosonde data, but that he disagreed with Miskolczi’s theory behind the negative feedback from water vapor. Spencer points out that suspiciously high radiosonde humidity data during the 1950s and 1960s might not be correct, and if not then there’s no subsequent drying effect in the atmosphere.
Furthermore, Spencer goes on to say that negative feedback from water vapor is a hypothesis that both he (Spencer) and Richard Lindzen have proposed, but the theoretical explanation behind it differs from Miskolczi’s.
Now Pekka, you must weasel your way out of what you wrote since it’s clearly in error.
I have some reservations about Spencer’s response where he states that greenhouse gases allow the atmosphere to radiatively cool itself.
Really, Roy? I was taught that all matter with a temperature above absolute zero radiates. Given that nitrogen appears in the Periodic Table, it is ostensibly matter, so it must radiate.
I think what Roy meant to say is that greenhouse gases allow the atmosphere to be radiatively warmed. Absent radiative warming it will still warm through conduction and convection, and it will cool radiatively, because all matter above absolute zero radiates, and I’m pretty sure the nitrogen in our atmosphere is matter with a temperature above absolute zero, and therefore it radiates a continuous black-body spectrum characteristic of that temperature.
Perhaps you missed this exchange in your shallow perusal. Perhaps you intentionally missed it. Perhaps your brain is weasel-like in size and strength as well as attitude. I’m not sure. But in the interest of making sure your mistakes are not believed by others, here it is explained.
Ferenc Miskolczi says:
August 7, 2010 at 3:03 PM
Dear Roy,
Thank you very much for your time and effort to comment on my recent E&E article:
Miskolczi, F., 2010, Energy and Environment, 21, No.4, 243-272.
But why do you confuse people? In this article we are not talking about competing greenhouse theories. The main point of the paper is that in the last 61 years the global average infrared optical thickness of the real spherical refractive inhomogeneous atmosphere is 1.87, and this value is not changing with increasing CO2 amount. This means that no AGW exists based on the CO2 greenhouse effect.
This is a very simple statement. To conquer this statement you must come up with your own global average atmosphere and optical thickness, and show the methodology of its computation.
It is irrelevant what you or K. Trenberth, R. Lindzen, or the RC gurus like G. Schmidt from NASA or R. Pierrehumbert, P. Levenson, etc. may guess, assume or believe about the physics of greenhouse theories. Even my theory which supports the 1.87 value is irrelevant. Here no useless radiative budget cartoons or GCMs, or assumed feedback processes or arbitrary constants are needed. You do not need to worry about what the global H2O, temperature and pressure fields are doing and what the relationship among them is. The atmosphere and its radiation field know exactly what they should do to obey the laws of thermodynamics, the laws of conservation of energy, momentum and mass, and the energy minimum (entropy maximum) or Hamilton principles on local, regional or global scale.
If you really want to know what is going on with the global average IR radiation field and you or your experts have some knowledge of quantitative IR radiative transfer, you (or the others) may compute precisely this physical quantity using only first principles and real observations. There is no other way around. The true IR flux transmittance, absorption or optical depth is fundamental for any or all greenhouse theories.
If you do not trust my 1.87, compute it yourself, see how it is changing with time and verify or falsify my computation. Here there are no theories to choose, but the correct straightforward computation of a single physical quantity which gives the accurate information about the absorbed amount of the surface upward radiation. I am patiently waiting for your results. It is not very easy, but you or your group may give it a try. If you cannot do this with your resources, then further discussion of this topic here is useless.
After we agree on this issue, we may start our debate on the theoretical interpretations of the results that were outlined in my 2007 Idojaras article, or on the questions how to relate the absorbed surface radiation to the surface temperature or to the downward IR flux density.
Ferenc
Spencer replies:
Roy W. Spencer, Ph. D. says:
August 8, 2010 at 6:13 AM
If you read and understood what I posted, Ferenc, I agreed that the *observational* result from 61 years of radiosonde data of a constant GHE (tau=1.87) is indeed intriguing, and possibly even true. (That it depends upon high humidities from the earliest sondes in the 1950s and 1960s, though, will not convince many people because there are so many instrumentation problems that affect long-term trends.) This is indeed a useful contribution, as I previously stated.
But you have not addressed what I *was* objecting to, Ferenc: People are using your work to claim that Ed=Aa, and I was discussing in detail why that might APPEAR to be the case, but cannot be the case for a greenhouse atmosphere.
You could help clarify things by answering the following question:
If atmospheric layers A and B each contain greenhouse gases, under what conditions will we find that the rate of absorption by layer B of layer A’s thermal emission equals the rate of absorption by layer A of layer B’s emission? Your answer to that question could potentially remove all my objections to this key issue.
In answer to Spencer’s question above, Ferenc understandably refuses to take the bait, which would just lead to an irrelevant tangent about local thermodynamic equilibrium (LTE). Miskolczi is a physicist, and Spencer’s question is insulting. There is very likely a language barrier at work; Spencer should perhaps have assumed that Miskolczi, being a physicist, knows the difference between LTE and GTE, rather than inferring something that Miskolczi never meant to imply.
Anonymous says:
August 14, 2010 at 7:56 PM
Dear Roy,
I am very sorry, that you feel that you have to comment something that you do not understand. I admit that because of the very technical nature of the paper, it is difficult to digest.
In this debate the most important thing is that you and I must have a common understanding of the physical laws and the terminology. I suggested to you that the best way to proceed is for you and me to compute the same physical quantities – for example for your favorite global average atmospheric structure – and when we agree on tau, Ed, Su, Eu, OLR, etc., then we start to analyze the relationships among them. I do not really care what you believe. I only care what you know for sure and what you can prove.
If you do not mind, I shall not answer or elaborate on your quiz. I did answer such questions forty years ago in my first astrophysics course at the university. However, you may easily answer your question yourself, if you figure out what Eq. 7 means.
On the other hand you say: “…People are using your work to claim that Ed=Aa, and I was discussing in detail why that might APPEAR to be the case, but cannot be the case for a greenhouse atmosphere….”
Think about – qualitatively – the new Trenberth-Fasullo-Kiehl cartoon: 1.87 = -ln(1 - Ed/Su), or Ed = Su*(1 - exp(-1.87)), or Ed = Aa. (Ed and Aa are global average measured quantities.) If you agree that tau=1.87, in your view does this mean that the Earth’s atmosphere is not a greenhouse atmosphere???
We arrive again at the same problem. To make quantitative statements on the degree of anisotropy in the Ed field you must produce numbers. Those numbers will tell you what is the physical meaning of the spherical emissivity (fudge factor ??), and you will see that in monochromatic radiative equilibrium it is required by the law of the conservation of energy.
And finally, I think you should assume that everybody joining this discussion has his own independent and decent scientific view of the topic.
Ferenc
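An editorial aside for readers wondering where a number like 1.87 comes from: the identity Miskolczi invokes above, tau = -ln(1 - Ed/Su), can be evaluated from published global-average fluxes. The Su and Ed values below are the round numbers from the Trenberth-Fasullo-Kiehl (2009) energy-budget diagram, used here purely as illustrative inputs, not as Miskolczi's own radiosonde-derived data:

```python
import math

# Global-average flux values from the Trenberth-Fasullo-Kiehl (2009)
# energy-budget "cartoon", used purely as illustrative inputs (W/m^2).
Su = 396.0  # surface upward longwave emission
Ed = 333.0  # downward longwave ("back radiation") at the surface

# Miskolczi's flux optical depth: tau = -ln(1 - Ed/Su),
# equivalently Ed = Su * (1 - exp(-tau)) when Ed is equated with Aa.
tau = -math.log(1.0 - Ed / Su)
print(f"tau = {tau:.2f}")  # close to, though not exactly, the quoted 1.87

# The inverse relation recovers Ed from Su and tau:
Ed_back = Su * (1.0 - math.exp(-tau))
print(f"Ed recovered = {Ed_back:.1f} W/m^2")
```

The point is only that the quoted optical depth and the TFK fluxes are mutually consistent to a couple of percent; nothing here adjudicates the theoretical dispute.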
Then Spencer, poor fragile thing, feigns injury:
Roy W. Spencer, Ph. D. says:
August 27, 2010 at 6:23 AM
….everybody but me, apparently.
David,
If I recollect our earlier discussions correctly, we do agree that the surface radiative balance is not essential, because convection and latent heat transfer fill up the balance anyway. This is just one way of noting that the Miskolczi analysis concentrates on something that’s not the most essential point. Based on the earlier discussion we seem to also agree that Aa>Ed, although the difference is rather small.
The argumentation between Spencer and the supporters of Miskolczi, including Miskolczi himself, gets very confusing.
The Miskolczi side in the argumentation alternates between admitting that Aa=Ed is not based on theory and claiming that it’s still true based on empirical data. At one point in the paper Miskolczi presents correct arguments based on the physical understanding that the equality is not true when there are temperature differences. That occurs in the discussion related to Figure 5 of his paper. Thus he seems to be well aware that the relationship is always broken when the atmospheric temperature differs from the surface temperature (the size of this difference is given by the theory he is using in his calculations). This is certainly one of the two totally decisive points in understanding the radiative balance of the surface. The other point is the transmissivity of the atmosphere that he calculates. (The two components are defined in his Figure 1.)
It’s clear that he presents many parts of the radiative balance of the surface basically correctly but misses some equally important ones and uses false arguments to defend his results. He does that to the point that he contradicts explicitly his own understanding in the discussion of the (approximate) equality Aa=Ed and its significance.
All the above is of little importance for understanding changes in the strength of GHE because the surface radiative balance is not the right place to look at that. Changes in transmittance are one component in the strength of GHE but not the most important one. The more important parts concern the radiation from the atmosphere (both from gases and from clouds). Here again Miskolczi does one interesting part of the calculation. In Figure 6 he shows the altitude profile of the point of emission in his model. He shows how the emission originates over a wide range of altitudes with a significant contribution from altitudes 8-15km, where the amount of water is small and CO2 is the main source of emission and where the temperature is falling with altitude (Figure 2). A change of CO2 concentration affects this distribution. This change is one of the main mechanisms for the influence of the CO2 concentration on GHE. Here Miskolczi comes close to presenting a useful result but fails to bring it to a conclusion.
All the above is written accepting that the empirical analysis of Miskolczi is valid and that the model he is using describes all essential properties of the atmosphere. It’s seen that there are internal contradictions in his paper and that he dismisses the probably most important effect that he could have easily calculated. There are, however, many more questions. It’s certain that looking only at clear sky conditions cannot describe all essential factors. Clouds have an essential share in the emission that escapes to space. What happens to that must also be calculated. What one can really learn from the radiosonde data on the changes in H2O concentration is an interesting issue not answered conclusively by this analysis.
All in all the Miskolczi paper is an odd mixture of interesting calculations which use empirical data and a detailed radiation model and conclusions largely unsupported by the results of the analyses and in some cases internally contradictory.
Refining his calculation, concentrating fully on what it tells about the OLR rather than the surface radiative balance, and somehow adding clouds appears to be a valid way of estimating radiative forcing.
Thanks for applying some innovative ideas to data analysis, Vaughan.
The approach reminds me of the various transforms that stand alongside the conventional Fourier analysis. The choice of a sawtooth kernel function is similar to the approach with wavelet transforms. The idea is to use these kernels to reproduce the profile with fewer parameters than a Fourier series.
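To make the kernel comparison concrete: a sawtooth is itself an infinite sine series whose coefficients decay only as 1/k, so a basis built from sawtooth kernels can capture in a single term what a sine basis spreads over many harmonics. A minimal numerical sketch of that slow convergence (illustrative only, not Pratt's actual fitting procedure):

```python
import numpy as np

def sawtooth(x):
    """Sawtooth with period 2*pi, rising linearly from -pi to pi."""
    return np.mod(x + np.pi, 2 * np.pi) - np.pi

def fourier_partial_sum(x, n_terms):
    """Partial Fourier series of the sawtooth: sum of 2*(-1)^(k+1)*sin(kx)/k."""
    total = np.zeros_like(x)
    for k in range(1, n_terms + 1):
        total += 2.0 * (-1) ** (k + 1) * np.sin(k * x) / k
    return total

x = np.linspace(-np.pi + 0.3, np.pi - 0.3, 500)  # stay clear of the jump
err5 = np.max(np.abs(sawtooth(x) - fourier_partial_sum(x, 5)))
err50 = np.max(np.abs(sawtooth(x) - fourier_partial_sum(x, 50)))
print(err5, err50)  # convergence is slow (~1/k): hence the appeal of sawtooth kernels
```

One sawtooth term reproduces the sawtooth exactly; fifty sine terms still leave a visible error near the jump.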
Vaughan said
“Tiny areas like Central England are not at all representative of how the temperature of the globe has evolved in recent centuries.”
Yet Pratt goes right on ahead and blithely uses a temperature record extending back to 1850 like it was global in coverage and accurate to hundredths of a degree. The intellectual dishonesty in that is breathtaking.
It seems to be a continuing confusion that I’m claiming to explain actual temperature (whatever that might mean). Firstly it’s HadCRUT3 as observed temperature that I’m using, exactly that, not some proposed improvement to it as you’re implying exists. Secondly all I’ve done is separate HadCRUT3 as a sum MUL + SOL + DEC, and MUL as a sum SAW + AGW + MRES, backed up with a spreadsheet to make it easy to verify the accuracy of my claim. It’s all there for you to audit. If you find an error in it then and only then can you accuse me of deliberately putting the error in there.
If there is any intellectual dishonesty here it belongs to those accusing me of doing something that I’m not doing. That’s dishonest.
> If there is any intellectual dishonesty here it belongs to those accusing me of doing something that I’m not doing. That’s dishonest.
A lack of plasticity might be a simpler hypothesis.
David Springer 6/12 3.05 am … Ironic, ain’t it?
Vaughan Pratt | December 5, 2012 at 5:36 pm | Reply
“Unfortunately the satellite data only begins in 1978.”
You go to analysis with the data you have not the data you wish you had.
Write that down.
Vaughan said
“Tiny areas like Central England are not at all representative of how the temperature of the globe has evolved in recent centuries.”
Interesting. So we can discount anything we think we know about global temperature taken from ice cores. Thanks for clarifying that, Vaughan. LOL
David
Yeah, ice cores are well known to be highly representative of the tropics.
Tree rings taken from a tiny area are also scientifcally proven to be highly representative of the global temperature. Both are well known to be accurate to fractions of a degree over hundreds of years.
tonyb
climatereason, you shouldn’t support people taking quotes out of context. David Springer fails to quote the paragraph prior to what he provides:
Notice how he refers to “a tiny region so dense with technology.” In his next paragraph, he says, “Tiny areas like Central England are not” representative of global temperatures. The context makes it clear that when he refers to “tiny areas” he is referring to “tiny regions… dense with technology.” That is quite different than what you and Springer portray.
Misrepresenting people to score cheap points is a pathetic tactic. People shouldn’t do it.
Brandon said
“The context makes it clear that when he refers to “tiny areas” he is referring to “tiny regions… dense with technology.” That is quite different than what you and Springer portray.”
Sorry, I genuinely don’t get your point. Vaughan says this;
“Tiny areas like Central England are not at all representative of how the temperature of the globe has evolved in recent centuries.”
Are you saying that Vaughan believes that CET is a useful (but not infallible) proxy for a global temperature? No matter how you parse it, my reading is that Vaughan thinks that such a tiny area has little merit as any sort of global proxy, whether or not it is ‘dense with technology’. If he meant otherwise I will apologise.
tonyb
climatereason:
No.
The key is in the word “like.” One can say “tiny areas like x” to mean areas the size of x, or tiny areas that share certain traits with x. As in, an area like x that is tiny.
When you have two possible interpretations, you look at context. Two sentences prior to the quote, he said “a tiny region so dense with technology.” His next sentence explained why the density of technology in the area would make it unrepresentative. It was at this point he said, “Tiny areas like Central England are not at all representative.”
Your interpretation requires us to ignore his explicit explanation that comes immediately prior to the sentence in question and focus solely on the word “tiny.” I don’t think that makes any sense.
Brandon and tony b
“Tiny areas dense with technology” as a description to discount the meaning of the CET record carries two connotations for me.
Tiny – yes, but as tony has pointed out, there are climate influences which could make it more representative than Italy, for example.
Dense with technology – only applies after the mid 19th century; the historic record, to which tony refers, probably has no distortion from technology (or urbanization, for that matter).
But, again, the CET record (even as extended by Tony) is the only real continuous regional temperature record we have prior to the mid-19th century so, ignoring paleo data (which are arguably less representative than the CET) it’s the best “proxy” we have for a global temperature prior to ~1850.
And since this record covers a time span prior to industrialization (and CO2 emissions) it gives us a picture of how climate changed naturally, which in turn gives us food for thought considering the natural versus anthropogenic attribution of recent climate change.
If I’m not wrong, I believe this is tony’s point regarding the significance of CET.
Max
Brandon and tony b
My biggest problem with the CET as an indicator of anthropogenic warming is the start and end points.
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
It starts during the coldest part of the LIA in the middle of the Maunder minimum and ends today, after a 20th century with solar activity at its highest in several thousand years (see the Lean curve).
http://farm9.staticflickr.com/8202/8246230123_71547c34c5_b.jpg
There are “bumps and grinds” in the CET record but the overall warming trend is 0.9°C over 350 years, or a warming rate of 0.026°C per decade.
I believe that the extension back before 1659 made by tony b is an important addition, because it shows that prior to the Maunder minimum, when solar activity was higher, temperature was warmer. IOW it shows that solar influences may have been responsible for a large part of the warming after the Maunder minimum ended.
This leads to the basic question both Jim Cripwell and I asked Dr. Pratt on how solar forcing was handled in his analysis.
I believe if someone wanted to do so, they could make an analysis similar to that made by Pratt, but removing the “CO2 – or total GHG – signal” (without feedbacks), ending up with a correlation between natural solar forcing and temperature over time, IOW treating the GHG effect as “noise” and the solar effect as the principal driver. (But I’m not going to do this study, as I don’t think it would tell us much more than Pratt’s study does.)
Max
Max,
When you bring up solar activity you should note that it has not risen for 50 years but has rather gone down a little.
That means that the solar activity has not risen at all over the period of strong apparent AGW signal. Pretty poor for a supposed explanation. 50 years is already long enough to be climatically significant. The overall picture fits nicely with the mainstream interpretation that the warming up to the 1940s maximum had a significant solar component but that what has happened thereafter has not. The sun does, however, explain part of the recent plateau. Again, what we know about the sun agrees well with the mainstream views and supports them to a significant degree.
Pekka said, “That means that the solar activity has not risen at all over the period of strong apparent AGW signal. Pretty poor for a supposed explanation.”
Ah, but with a little smoothing and a lag or two…you can “rescue” that theory :)
Pekka
You are right that solar activity has slowed down after reaching the highest level “in several thousand years” in the late 20th century (Solanki 2004).
The Wolf numbers for the late 20th century solar cycles were:
http://www.warwickhughes.com/agri/Solar_Arch_NY_Mar2_08.pdf
152: SC 18 (1945-55)
190: SC 19 (1955-65)
108: SC 20 (1965-75)
158: SC 21 (1975-86)
160: SC 22 (1986-96)
Average Wolf number of these late 20th century solar cycles was 154.
Prior to SC 18 the Wolf numbers were all significantly lower than this average, as was SC 23, which just ended. SC 24 is starting off very inactive and is projected to be even weaker than SC 23, so it looks like the period of “unusually high solar activity” is over.
How this looks longer term can be seen from the Lean curve of solar irradiance:
http://farm9.staticflickr.com/8202/8246230123_71547c34c5_b.jpg
It is clear that the late 20th century saw a period of unusually high solar activity as Solanki suggests.
It is also clear that this period of high solar activity has ended.
Max
Pekka
You wrote:
Not so, Pekka (see my previous post).
The “unusually high level of solar activity” (Solanki) occurred over the second half of the 20th century, as can be seen from the max. Wolf numbers of the solar cycles or Lean’s curve on TSI.
Several independent solar studies (which I can cite if you are really interested) have suggested that around half of all the warming since industrialization can be attributed to the “unusually high level of solar activity” in the 20th century.
I do not know how much “time lag” (if any) these studies have built in, but the correlation with temperature doesn’t look that bad to me.
Of course, correlation does not provide evidence for causation.
Max
Pekka
There is another factor one must consider when comparing natural (solar) warming with AGW.
AGW is “driven” by the change in concentration of GHGs, supposedly in a logarithmic relation at the sort of levels we might see.
Solar warming is driven by the absolute level of solar activity, by TSI plus additional mechanisms we do not yet fully understand.
The second half of the 20th century saw solar activity at record highs historically (and for “several thousand years”, according to Solanki 2004).
So it could well NOT be correct (as IPCC claims) that:
“most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”
Until we truly understand ALL the mechanisms by which the sun drives our climate, it is premature to conclude that AGW was the principal driving force of global temperature over the late-20th century IMO.
Max
Max, “Until we truly understand ALL the mechanisms by which the sun drives our climate, it is premature to conclude that AGW was the principal driving force of global temperature over the late-20th century IMO.”
Kinda the never ending quest. Understanding the oceans would be a better goal. The legitimate “smoothing” and time delays are in the oceans and water cycle.
Max (second attempt)
The lag between solar cycles and the ocean SST (Atlantic and Pacific) is ~ 15 years.
Solar activity input is twofold:
1. TSI – relatively constant (changes Glob Temp ~ + or – 0.1C)
2. Geomagnetic factor, where the Hale cycle and the Earth’s magnetic field go in and out of phase; this effect is far greater than normally considered, and it is the source of the ~60-year cycle.
I’ve done some calculations from the existing data and emailed them to Dr. Leif Svalgaard of Stanford University, one of the world’s most prominent solar scientists.
He found the results so ‘unpalatable’ that he thought it necessary to make a very rare visit to this otherwise very respected blog and declare the result spurious.
Note: he didn’t challenge the accuracy of the calculations, but the interpretation.
Calculation shows that solar cycles could be both warming and cooling, depending on the orientation of resultant geo-magnetic vector.
http://www.vukcevic.talktalk.net/EarthNV.htm
Well, this is weird. Tony says CET indicates warming there in the 1600s, I offer a possible explanation (high tech concentrated in the region where the temperature is being measured), and somehow Tony thinks I’m contradicting him.
I see no contradiction between CET being correlated with global temperature and CET showing regional warming attributable to local industry. Neither has to override the other.
My point about CET not being representative is that while global changes can obviously influence CET, thereby creating a correlation, the converse is far less likely because CE is only .01% of the planet. By all means expect a correlation between CET and global temperature, but don’t interpret warming observed in CET as global warming.
Just because the sun never sets on the British Empire doesn’t mean it never sets on Central England.
Vaughan Pratt
See my query/post above on distinguishing anthropogenic warming from the null hypothesis of natural variability, which includes an “accelerating warming” due to the ~1500-year cycle identified by Loehle and Singer; note also that an exponential increase in CO2 gives only a linear response due to the logarithmic effect of CO2 concentration.
David L. Hagen | December 6, 2012 at 10:32 am |
Yes. I haven’t raised that point in a long while. The notion that CO2-induced warming is only relevant after 1950 is false. Due to the decreasing GHG efficacy, part for part, of CO2, the small annual amounts added at the beginning of the industrial revolution circa 1750-1800 (when the steam engine became widely deployed) have the same effect as the big bits being added now. Anthropogenic generation of CO2 happens to have grown inversely to its decreasing ability to warm the atmosphere since the beginning of the industrial revolution. This yields a roughly linear rate of temperature increase. In other words, ignoring the non-linear GHG efficacy curve of CO2 and the rate of anthropogenic deposition is just another attempt to hide a couple of inconvenient declines. Ignore the man behind the curtain. CO2 only became an anthropogenic greenhouse gas after 1950. Before then it somehow doesn’t work that way.
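The arithmetic behind that claim is easy to check: a response that is logarithmic in concentration, applied to a concentration growing exponentially, is exactly linear in time. A short demonstration with purely illustrative numbers (280 ppm baseline, 3 °C per doubling, doubling every century; none of these are Springer's or Pratt's fitted values):

```python
import numpy as np

years = np.arange(1800, 2001)
C0 = 280.0                                  # illustrative baseline CO2 (ppm)
conc = C0 * 2 ** ((years - 1800) / 100.0)   # pure exponential: doubles each century
sens = 3.0                                  # illustrative sensitivity, deg C per doubling

# Logarithmic response: T = sens * log2(C / C0)
T = sens * np.log2(conc / C0)

# Exponential concentration + logarithmic response = exactly linear warming:
increments = np.diff(T)
print(increments.min(), increments.max())  # constant 0.03 deg C per year
```

Any departure of observed CO2 growth from a pure exponential shows up directly as curvature in the temperature response.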
David L. Hagen
Of course he did. There’s a reason he cuts off the analysis in 1995. Continuance of the curve fails after that. If you go to the poster:
http://fallmeeting.agu.org/2012/files/2012/12/GC23C-1085.pdf
there’s an extra bit shown in figure 3 that explains that the data past 1995 is an “end effect artifact of filtering”, with the implication, I guess, that it should be ignored. How convenient. The actual temperature data reveal that’s not an artifact, as there is no significant trend beyond that date. Pratt, with a wave of his hand and a 21-year filter, discounts “the pause”. Isn’t that just precious?
Vaughan
Thanks for your clarification. Not sure I see your nuancing, but I think we agree that for some reason CET is a reasonable (but not perfect) proxy for global temperature when you say:
“By all means expect a correlation between CET and global temperature, but don’t interpret warming observed in CET as global warming.”
In this respect the reasonable but not perfect correlation with BEST seems to indicate a steady world wide warming (although there are bits that are cooling and I remain dubious about the validity of a global temperature) that has been continuing in fits and starts for some 350 years. This also correlates with observations of sea ice and glaciers. Manley observed that in general glaciers had ceased their advance by 1750.
My original point was that the warming is nothing new and to try to attribute it to CO2 is, I think, incorrect. I do believe that humans are a substantial noise in the climate system (especially locally with regard to forestry and agriculture), but the sound of CO2 within that louder noise is very muted.
Incidentally, yours was an intriguing and nicely written paper and well supported by data, irrespective of whether I agree fully with it or not.
tonyb
@Pratt
Instead of relying on your poor powers of inference why not just take a peek at CET data yourself? It’s not hard to find and I already linked to it once.
Now twice:
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
Yes there was a rapid temperature increase from 1690 to 1730. It was preceded by a rapid decline of the same magnitude from 1660 to 1690 and was followed by a rapid decline from 1730 to 1760 back to a baseline, where it then embarked on a drunkard’s walk for the next 100 years.
The period from 1660 to 1760 with an approximate 60-year cycle (1.5 cycles) looks suspiciously like the Atlantic Multidecadal Oscillation to me which then reappears in the record after 1880 for another ~2 cycles taking us to the present day. But it’s probably just sheer coincidence that the 60-year AMDO keeps popping up in all the temperature records huh?
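Whether a fixed ~60-year oscillation is really "in" a record can at least be quantified rather than eyeballed, for instance by least-squares fitting a 60-year sinusoid and inspecting the recovered amplitude. A sketch on synthetic data (the series below is made up, with an amplitude of 0.4 deliberately planted; real CET or AMO data would replace it):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1660, 1761)
# Synthetic record with a genuine 60-year cycle buried in noise
# (a stand-in for CET; the amplitudes are arbitrary).
series = (0.4 * np.sin(2 * np.pi * (years - 1660) / 60.0)
          + 0.2 * rng.standard_normal(years.size))

# Least-squares fit of A*sin + B*cos + constant at a fixed 60-year period.
omega = 2 * np.pi / 60.0
X = np.column_stack([np.sin(omega * years),
                     np.cos(omega * years),
                     np.ones(years.size)])
coeffs, *_ = np.linalg.lstsq(X, series, rcond=None)
amplitude = np.hypot(coeffs[0], coeffs[1])
print(f"fitted 60-year amplitude: {amplitude:.2f}")  # roughly recovers the planted 0.4
```

Of course, a fit with a free period can lock onto almost any slow wiggle in a short record, which is exactly the dispute here.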
David Springer wrote:
You’ve misunderstood – the filtered AGW curve in fig.3 isn’t based on HadCRUT3 observational data. It’s clearly stated just below fig.3 on the poster (and in Vaughan Pratt’s excellent spreadsheet) that
AGW(y) = 2.83 * log_2(287 + 2^((y-1834)/28.6))
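Reading the quoted expression with its parentheses balanced, it is Hofmann's raised-exponential CO2 curve (287 + 2^((y-1834)/28.6), in ppmv) fed through an Arrhenius log response at 2.83 K per doubling. It can be evaluated directly; since the plotted curve is an anomaly, only differences between years are meaningful:

```python
import math

def agw(y):
    """The poster's fitted AGW curve (parentheses balanced): an Arrhenius
    log2 response, 2.83 K per doubling, to Hofmann's raised-exponential CO2."""
    co2_ppmv = 287.0 + 2 ** ((y - 1834.0) / 28.6)
    return 2.83 * math.log2(co2_ppmv)

# Only differences matter; the absolute value carries an arbitrary offset.
rise = agw(2000) - agw(1850)
print(f"modelled AGW rise 1850 -> 2000: {rise:.2f} K")  # about 0.7 K
for y in (1900, 1950, 2000):
    print(y, round(agw(y) - agw(1850), 2))
```

This also shows why the curve is insensitive to the last decade of HadCRUT3: the formula is a smooth three-parameter function of the year, not a running transform of recent data.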
Springer
Once again, poor topic knowledge is your undoing.
See Parker et al. (1992) here. Please note: it’s a 10.8Mb pdf of a scan of the original. I’ve retyped this from the introduction. Please read this carefully:
Repeat: no reliable data before 1770. ‘Warming’ very likely an artefact.
Massive over-interpretation of unreliable data = deeply unwise. Corrosive to credibility.
Self:
Well, ok – three of the AGW formula’s parameters are derived from HadCRUT3, but the plotted AGW curve is mostly insensitive to changes in the last decade of the HadCRUT3 data (so changing the ‘pause’ into a steep ‘decline’ or ‘ascent’ barely affects fig. 3)
BBD
Hope this lands up somewhere near your post on CET.
I call in to the Met Office frequently to use their archives and library. David Parker is still around. He wrote a daily series to 1772, whereas Manley’s intention was to create a monthly one to 1659. Many people have reinterpreted the indoor temperatures (although many records were taken externally), including Camuffo and Jones, who got a substantial EU grant for doing so under the Improv project.
De Bilt borrows some of the early CET record but no other series is old enough to overlap back to 1660, hence the need for other records.
There is lots of material (diaries, crop records, payments to the poor, etc.) to demonstrate the ups and downs of the climate from 1660 onwards, including the sharp drop in temperature and its subsequent recovery. I have seen many of these records myself; some are in the Met Office Archives with annotations by Manley himself.
Of course we could bring out the broader argument that ALL old temperatures records are suspect for one reason or another. I wrote about it here;
http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/
This includes those from Hadley and GISS, as the Stevenson screen was not widely used until well after 1880 and many of the observers were untrained (many of the earlier observers had better credentials than the later ones, as it was considered a scientific job).
Modern stations also often leave something to be desired, so personally I wouldn’t bet the house on the reliability of the instrumental record at all, but it’s the only thing we’ve got.
As Lamb observed, ‘we can see the (temperature) tendency but not the precision.’
Do we know the temperature back to 1660 in fractions of a degree? Of course not, and that becomes even more true when trying to construct a global temperature. What we can reasonably safely say is that it’s been gently warming, in fits and starts, for some hundreds of years, with substantial downturns in the early and late 1600s. Warming is not a new phenomenon.
tonyb
Cap’n
Sure there’s more to it than just solar influence (“oceans and water cycle” incl. clouds). Maybe these are even linked.
All of these natural factors need more work.
The analysis by Dr. Pratt has essentially filtered these all out as “noise”, with an exponential CO2 curve as a result.
To me this is an oversimplification.
One could just as well filter out the signal from GHGs as “noise” to leave the natural factors as the principal “signal” (which would be just as wrong).
The reason this is wrong is because the climate system is too complicated and we just don’t know the answers yet – especially concerning all the many natural climate forcing factors.
That’s basically my point.
Max
BBD,
you retyped that from the original?
man you are a good typist.
hmm.
steven
Yes, I typed it from the original, as stated. Yes, I can type. I’ve been doing it for twenty years. It’s not difficult. Why the ‘hmm?’
tonyb/climatereason
The CET data pre-1770 are generally held to be unreliable. There are, I believe, more modern references that corroborate this. You don’t actually provide any substantial reason why everyone from Parker on is wrong to state that the CET data pre-1770 are unreliable, so that’s the state of play I’m afraid.
As a general rule, I don’t place much weight on global reconstructions before ~1900 as I understand that earlier data is not considered as reliable as C20th data. Steven would know more about this than me.
BBD
Parker’s is a daily series, Manley’s is a monthly one. These are the two most scrutinised temperature series in the world and have been taken apart, referenced, used as the basis for others such as De Bilt, and generally examined for many years by very many people. As such both are as ‘reliable’ as circumstances permit, but see my article that pointed out the drawbacks of any temperature series.
As I say, I wouldn’t want to believe either to fractions of a degree, but when combined with the numerous other records available we can be pretty sure that the warm and cold bits are pretty accurate, though we do not know exactly HOW cold or HOW warm.
tonyb
BBD
Huh?
If you’ve got “no reliable data” how can you conclude that “warming was very likely an artefact”?
Answer: you can’t.
Max
David, global temperatures are figments of AGWers’ imagination. NH temps are going down…. down…. down. That’s where most of the land mass is anyway.
The percentage of the earth’s surface is a fair criticism but the level of industrialization of England’s midland regions is not. Without modification, smokestack emissions exhibit a net cooling effect, not a warming effect. Lest Vaughan forget, the mid-twentieth century “global cooling” is blamed on sulfate particulates. Lately Hansen has tried to blame “the pause” on particulate emissions from China’s rapid and huge expansion of coal-burning electrical generation.
Pratt just makes crap up as he goes along. He might be very knowledgeable in arcane computer-related subjects but his knowledge outside that is shallow at best, yet he acts like it isn’t and substitutes what he believes are educated guesses for actual knowledge.
And by the way… here is CET graph
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
Where it can be compared to independent data I see no marked difference. It appears representative of HadCRUT from 1880 onwards. And where Pratt claims there should be no surprise in greenhouse warming in the English Midlands prior to 1880 the actual data show no warming prior to 1880. So that’s just more stream of consciousness bullchit from Pratt too as he either didn’t actually examine the data before commenting on it or ignored what he saw or can’t interpret what he saw. In any case it’s yet another poor reflection on him.
@David Springer: Without modification smokestack emissions exhibit a net cooling effect not a warming effect.
Certainly sulfate aerosols cool, being reflective. However other pollutants, especially black carbon, are not reflective and warm at low altitudes. (At high altitudes the lapse rate kicks in to reduce their warming effect.) An efficient power plant produces relatively little black carbon, but back when England was obtaining most of its industrial power by burning up all its forests, it seems unlikely that efficiency was on anyone's mind until most of England's forests were gone. So I would question your certainty that the emissions from England's energy sources in the 17th and earlier centuries had a net cooling effect.
> Certainly sulfate aerosols cool, being reflective.
Sulfate aerosols are to be emulated.
David Springer
Wrong again. High latitude ice cores are proxies for global temperature. The key indicator is the ‘heavy’ oxygen isotope δ18O. Less = colder.
See here for details.
You misinterpreted what I wrote. No surprise. See here for clarification. I was mocking Pratt not Proxies. Maybe the same two letters at the beginning of each word caused your confusion.
But I can mock ice core proxies too if you so desire. Air bubbles take at best about 70 years to seal, mixing with ambient air through diffusion all that time, with chemical changes and different diffusion rates for different gases thereafter. Ice cores are a target-rich environment for casting doubt on how well they perform as global temperature proxies. But I generally don't do that, as there are far richer target environments in more recent years. All the juiciest manufactured evidence is in the past 70 years (since 1950), and ice cores don't seal air bubbles younger than that. Even ignoring that, the bubbles are averages of air composition over the sealing interval, which, if you know what the Nyquist rate is, means they can't resolve records with periods shorter than about 140 years.
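[Editor's note: the averaging argument above can be checked numerically. A moving average of width W acts as a low-pass filter, strongly attenuating cycles shorter than roughly 2W, which is the Nyquist-style limit the comment invokes. A minimal sketch on made-up sinusoids; the 70-year window is the only number taken from the comment:]

```python
import numpy as np

def boxcar_attenuation(period_years, window_years=70):
    """Peak amplitude left after applying a moving average of width
    `window_years` to a unit-amplitude sinusoid of the given period."""
    t = np.arange(2000.0)                          # 2000 annual samples
    signal = np.sin(2 * np.pi * t / period_years)
    kernel = np.ones(window_years) / window_years  # 70-year boxcar average
    smoothed = np.convolve(signal, kernel, mode="valid")
    return smoothed.max()

# A 50-year cycle is cut to roughly a fifth of its amplitude by a
# 70-year averaging window, while a 200-year cycle mostly survives.
print(boxcar_attenuation(50), boxcar_attenuation(200))
```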
No David, you betrayed your ignorance and now you are trying to cover up your gaffe.
To make matters worse, you seem unaware that the δ18O analysis of ice cores is carried out on *water ice*, not gases trapped in the core.
You are hopeless.
‘GIS = Greenland Ice Sheet. As you would know if you were not somewhat vague about paleoclimate. It was the *region* that experienced extreme warming at the end of the YD. The *region* you persistently confuse with the entire planet. Or perhaps this is deliberate misrepresentation.’
So earlier it was the region and here it is global? Just what are we supposed to make of that? Other than that he freely invents whatever seems to suit his AGW groupthink narrative.
Chief Kangaroo
I’m trying to keep it simple for David. But for you, the detail. See Shakun & Carlson (2010).
Here’s a teaser from the abstract:
;-)
BBD | December 6, 2012 at 5:22 am |
As I already explained, I was mocking Pratt, not ice cores. I also explained I don't have any real beef with ice core data, but if you want to state something specific I'm sure I can find something to cast doubt upon, as very little in this debate is writ in granite: confirmation bias is rampant, overconfidence abounds, the race to publish by inexperienced youngsters on the tenure track is heated, and pal review lets just about anything that supports the consensus view get published while simultaneously quashing anything contrary.
Maybe you should read the Climategate emails instead of relying on the whitewashes that were called investigations on your crappy little island.
BBD | December 6, 2012 at 5:22 am |
Suggest you clue yourself in on what can be determined from oxygen-18 isotopes, by the way. It's something like tree rings: lots of factors other than temperature affect it, and the current relationship between temperature and oxygen-18 in the GIS doesn't hold true for the past. Factors such as the season the snow fell, the source of the moisture, the migration path of the ice, the distance the water vapor traveled, and the admixture of vapor from different sources. Documented below.
http://books.google.com/books?id=n-Fv4vYIQcIC&pg=PA361
So is that the best you got? LOL
Springer.
Spare me the so-called ‘Climategate’ conspiracy theory misdirection.
Really? This was you, just upthread:
Words fail me.
Let’s keep the focus on your bloopers. First, you slip up in a pool of ignorance and go face down. Splat.
Then you get up, dripping in the stuff, and have the brass neck to pretend that it was all a joke. In the process, you manage to slip up and fall over again – this time on your arse.
Now you have struggled back to your feet again and are *still* trying to pretend that it was all a ‘joke’.
It is painfully obvious that you have no idea what you are talking about. And even when confronted with evidence that would make a tart blush, you keep right on going.
It beggars belief.
Now, take a hint. Stay away from paleoclimate.
Posting links to books you've never read won't get you out of this. Nor will pretending to knowledge you do not possess. Does the name Kurt Cuffey mean anything to you? Of course it bloody doesn't. I know how the GIS isotopic temperature reconstructions have been validated – you don't. Stop pretending.
You know, I have seen your single reference for localised rather than global impacts – but this is an area of ongoing research, and hardly likely to be definitively resolved any time soon.
It was really your point scoring above with local and then point scoring below with global. Typically absurd misdirection.
You don’t have any depth of knowledge on anything. Springer is a dozen times more subtle and that’s amazin’. You’re a shopkeeper with a facile AGW space cadet narrative superficially in the objective idiom of science.
Local versus global effect. Since energy takes the path of least resistance, and different regions have different heat capacities and heat transfer rates, the study of local versus global climate impact should provide a great deal of job security.
http://web.mit.edu/karmour/www/Armour_JCLIMrevised_2col.pdf
This guy Armour may have realized that.
Captain Kangaroo
When are you going to admit that you have been caught using paleoclimate as a sandbox by somebody who demonstrably knows more about it than you do?
Come on man. Out with it.
Blah blah – you demonstrably know squat. When are you going to admit that you are a shopkeeper with nil scientific chops at all? Nothing but lies, misrepresentation and trivialities. You trying to prove you are not a shopkeeper but in fact a leading paleoclimate investigator in a mask? (Who was that masked paleoclimatologist?) It isn’t going to work – you are an idiot with your underpants on the outside of your trousers.
Let me again introduce some sanity from actual paleoclimatologists under the auspices of the NAS.
‘Now imagine that you have never seen the device and that it is hidden in a box in a dark room. You have no knowledge of the hand that occasionally sets things in motion, and you are trying to figure out the system’s behavior on the basis of some old 78-rpm recordings of the muffled sounds made by the device. Plus, the recordings are badly scratched, so some of what was recorded is lost or garbled beyond recognition. If you can imagine this, you have some appreciation of the difficulties of paleoclimate research and of predicting the results of abrupt changes in the climate system.’ http://www.nap.edu/openbook.php?isbn=0309074347
The wrong trousers as it happens to be. ‘The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values. Nor is that to be seen as a defect. Of course choices between competing values are not made by relying upon scientific knowledge alone. What is wrong is to pretend that they are.’ http://www.lse.ac.uk/collections/mackinderCentre/
Chief Kangaroo
But you keep getting shown up, again and again. And all you do is fulminate and post irrelevant quotes (again and again).
Luckily, you aren’t the person who gets to decide who is talking bollocks here.
You should reflect on something further. If a mere layman can show you up as a know-nothing when it comes to paleoclimate, then you aren’t doing very well.
It’s obvious that you are never going to work this out for yourself, so I’m obliged to prompt you. This would be further cause for embarrassment, if you were capable of the necessary self-awareness.
Spare me the so-called ‘Climategate’ conspiracy theory misdirection.
Generally the only people who drag out the stupid and tired old "conspiracy" strawman to refer to the systemic bias and corruption in government climate science clearly exposed in Climategate (and then also in the corrupt 'investigations' the institutions involved commissioned to exonerate themselves)
are those who seek to promote their underlying totalitarian leanings by trying to blind us to the motivation for this endemic corruption: the obvious vested interest government has in promoting CAGW, namely the opportunity it offers to expand its reach through more taxes and controls.
You don’t need a “conspiracy” to explain why an organisation acts to promote its own interests. That is the norm. It would require a conspiracy if it didn’t act to promote its own interests; in other words, for government climate science to strive to be objective – now that is something that would require a conspiracy. And we don’t believe in those much, do we?
BBD, The pretense that David wasn’t joking about the ice-cores is plain moronic (ditto your desperate and dreary ‘story’ to support your misinterpretation).
I guess the famed British sense of humor is much overrated. Either that or I greatly underestimate your British dissembling (euphemism: reserve).
memphis
Only a moron would make this claim with the contrary evidence in plain sight.
The science of isotopic analysis is not a “desperate and dreary ‘story'”, nor is it ‘mine’. This sort of imbecilic and transparent misrepresentation is the blog equivalent of self-harming.
BBD, when are you going to admit that you have been caught using paleoclimate as a sandbox by somebody who demonstrably knows more about it than you do?
+1 on that arsecount
BBD | December 7, 2012 at 3:32 am |
Chief Kangaroo,
But you keep getting shown up, again and again.
The problem here is that BBD is the only person who believes BBD’s claim.
BBD, The pretense that David wasn’t joking about the ice-cores is plain moronic
> Only a moron would make this claim with the contrary evidence in plain sight.
And with zero contrary evidence anywhere at all, only a moron would claim that.
Perhaps you’d care to tell us what you carefully misinterpreted as “evidence” he wasn’t joking.
> (ditto your desperate and dreary ‘story’ to support your misinterpretation).
Your bad faith here is legend; ever considered becoming a government consensus climate scientist?
This time you slyly pretended to think I was referring to ice-cores etc., when obviously the ‘story’ in question was your feeble story about David falling on his face etc. (in your own mind, of course).
Max
The lag between solar cycles and the ocean SST (Atlantic and Pacific) is ~ 15 years.
Solar activity input is twofold:
1. TSI – relatively constant (changes global temperature by ~ ±0.1 °C)
2. The geomagnetic factor, where the Hale cycle and the Earth’s magnetic field go in and out of phase; this is far greater than normally considered, and it is the source of the 60-ish year cycle.
I’ve done some calculations from the existing data and emailed them to Dr. Leif Svalgaard of Stanford University, one of the world’s most prominent solar scientists.
He found the results so ‘unpalatable’ that he thought it necessary to make a very rare visit to this otherwise very respected blog and declare them spurious.
Note: he didn’t challenge the accuracy of the calculations, only the interpretation.
The calculation shows that solar cycles could be either warming or cooling, depending on the orientation of the resultant geomagnetic vector.
http://www.vukcevic.talktalk.net/EarthNV.htm
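[Editor's note: a claimed ~15-year lag between a solar driver and an SST response is the kind of thing one can estimate with a simple lagged correlation. A minimal sketch on synthetic data; the 11-year period, the noise level, and the 15-year shift are invented here for illustration, and this is not vukcevic's actual calculation:]

```python
import numpy as np

def best_lag(driver, response, max_lag=30):
    """Return the lag (in samples) at which `response` correlates
    best with `driver`, searching lags 0..max_lag."""
    corrs = [np.corrcoef(driver[:len(driver) - k], response[k:])[0, 1]
             for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

# Synthetic annual "solar" series and a response delayed by 15 years.
rng = np.random.default_rng(0)
years = np.arange(300)
solar = np.sin(2 * np.pi * years / 11) + 0.1 * rng.standard_normal(300)
sst = np.roll(solar, 15)   # shift the whole series 15 years later
sst[:15] = 0.0             # zero out the wrapped-around samples
print(best_lag(solar, sst))
```

The lagged correlation peaks at the built-in 15-year shift; on real series the peak would of course be broader and noisier.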
Say Tony, ironic ain’t it )
Say, Peter, and in the southern hemisphere, down in the Antarctic,
ice is up, up, up! Hoopla!
Yay!!! Chief says that Oz will be cooler over the next decade or three ;) Not sure about the tropics though!
Beth
Yep. Ice is up, not only here in Switzerland but also in the Antarctic. The poor penguins must be struggling.
And in the Arctic, despite all the wailing and lamenting and gnashing of teeth for those cute, cuddly polar bears, it’s déjà vu all over again.
http://farm9.staticflickr.com/8338/8248714491_aa61fbbd96_b.jpg
Max