by Judith Curry
UPDATE: comments on McKitrick’s paper
With 39 explanations and counting, and some climate scientists now arguing that it might last yet another decade, the IPCC has sidelined itself in irrelevance until it has something serious to say about the pause and has reflected on whether its alarmism is justified, given its reliance on computer models that predicted temperature rises that have not occurred. – Rupert Darwall
The statement by Rupert Darwall concisely captures what is at stake with regard to the ‘pause.’ This seriously needs to be sorted out. Here are two recent papers that contribute to setting us on a path to understanding the pause.
HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series
Ross McKitrick
Abstract. The IPCC has drawn attention to an apparent leveling-off of globally-averaged temperatures over the past 15 years or so. Measuring the duration of the hiatus has implications for determining if the underlying trend has changed, and for evaluating climate models. Here, I propose a method for estimating the duration of the hiatus that is robust to unknown forms of heteroskedasticity and autocorrelation (HAC) in the temperature series and to cherry-picking of endpoints. For the specific case of global average temperatures I also add the requirement of spatial consistency between hemispheres. The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator which is valid as long as the underlying series is trend stationary, which is the case for the data used herein. Application of the method shows that there is now a trendless interval of 19 years duration at the end of the HadCRUT4 surface temperature series, and of 16 – 26 years in the lower troposphere. Use of a simple AR1 trend model suggests a shorter hiatus of 14 – 20 years but is likely unreliable.
McKitrick, R. (2014) HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series. Open Journal of Statistics, 4, 527-535. doi: 10.4236/ojs.2014.47050. [link] to full manuscript.
JC comment: I find this paper to be very interesting. I can’t personally evaluate the methods, although I understand the importance of the heteroskedasticity and autocorrelation issues. The big issue with the length of the pause is comparison with climate model predictions; I would like to see the climate model simulations analyzed in the same way. I would also like to see the HadCRUT4 results compared with Cowtan and Way and Berkeley Earth. I also seem to recall reading something about UAH and RSS coming closer together; from the perspective of the pause, it seems important to sort this out.
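As an aside for readers who want to experiment with the general idea, here is a minimal sketch of scanning a temperature series for the longest terminal subsample whose OLS trend is statistically indistinguishable from zero. It uses statsmodels’ Newey-West HAC covariance as a stand-in for the Vogelsang-Franses estimator McKitrick actually uses (VF is not in statsmodels and generally gives different intervals), and the file name, function name and bandwidth rule are illustrative assumptions, not anything from the paper.

```python
# Minimal sketch (not McKitrick's code): find the longest terminal subsample
# of an annual anomaly series whose OLS trend has a HAC-robust 95% CI that
# includes zero. Newey-West HAC is used here as a stand-in for the
# Vogelsang-Franses estimator used in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def trendless_span(anomalies, min_years=15, maxlags=None):
    """Return the earliest start year such that the OLS trend from that
    year to the end of the series is statistically indistinguishable
    from zero (HAC-robust 95% CI contains zero)."""
    years = anomalies.index.values.astype(float)
    longest_start = None
    for start in range(len(years) - min_years, -1, -1):
        y = anomalies.values[start:]
        X = sm.add_constant(years[start:])
        lags = maxlags if maxlags is not None else int(len(y) ** (1 / 3))
        fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": lags})
        lo, hi = fit.conf_int()[1]            # 95% CI on the slope
        if lo <= 0.0 <= hi:
            longest_start = years[start]      # still trendless; keep extending
        else:
            break                             # trend now significant; stop
    return longest_start

# Usage (file name is hypothetical): annual HadCRUT4-style anomalies indexed by year.
# series = pd.read_csv("hadcrut4_annual.csv", index_col=0).squeeze()
# print(trendless_span(series))
```

A full reproduction would also impose McKitrick’s spatial-consistency requirement (NH and SH checked separately) and use the VF fixed-b critical values rather than the Newey-West stand-in above.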
UPDATE: The blog Musings on Paleoecology has a post on McKitrick’s paper Recipe for a hiatus, that critiques McKitrick’s method. McKitrick posted a comment:
Hello Richard
Thank you for your interest in my paper. Let me make a couple of observations.
McKitrick uses a regression technique that is supposed to be robust to heteroscedasticity (unequal variance) and autocorrelation to find the trend in the temperature time series.
I use OLS to find the trend. The HAC method is used to compute the robust confidence intervals. I can’t tell if by your phrase “supposed to be” you are dubious about the robustness of the VF method but if you look at the article cited (V&F 2005), it contains all the power curves, null rejection rates and size estimates you are seeking.
What you are referring to in this post is a null distribution around Jmax. In 100 simulations assuming AR(2) noise around a positive trend you show that a 1995 or earlier start date occurs 10% of the time. It would be helpful if you also verified in each of those simulations that all the conditions of the definition were met (that the trend CI includes zero across the entire time subsample, and that this applies in both the NH and SH). Assuming that those things are the case, and you were to get roughly the same answer in 1000 or 10,000 simulations, what you are saying is that under the assumptions of your null, a pause of 19 years is now in the lower 10% tail of the null distribution. And by the looks of it in your Figure, in another 3 years it will be in the lower 5% tail. That’s an interesting additional bit of information on the topic and I encourage you to publish it, especially if you also add in the UAH and RSS computations as well.
However the problem with this kind of estimation – and what I expect a stats journal would point out – is that if what we really want to know is whether Jmax is significantly different from zero, you need a null that assumes it is zero and works out the corresponding distribution. And the difficulty with that is the well-known ‘Davies problem’ in which the parameter to be estimated is not identified under the null. There are simulation methods for handling this problem, which Tim Vogelsang and I briefly review in our new paper comparing models and observations in the tropical troposphere, again using HAC-robust methods (http://onlinelibrary.wiley.com/doi/10.1002/env.2294/abstract). We also outline a simple bootstrap method that gets around the simulation problem, but you’d need to verify whether you need to use a block bootstrap since you have assumed an AR2 error structure. You might get a wider or narrower CI around Jmax than the one you drew above, it’s hard to tell, especially since it will likely be a non-standard distribution.
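To make the Monte Carlo exercise discussed in this exchange concrete, here is a rough sketch of the kind of simulation the Musings on Paleoecology post describes: AR(2) noise around a positive trend, with the apparent pause-start date recorded in each run. It reuses the hypothetical trendless_span() helper from the sketch further up; the trend, AR coefficients and noise scale are placeholders rather than estimates from any dataset, and this is neither McKitrick’s nor the blogger’s code.

```python
# Rough sketch of a null-distribution simulation: how often does a long
# "pause" appear by chance when the truth is a steady trend plus AR(2)
# noise? All numerical values below are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def simulate_series(n_years=60, trend=0.015, ar=(0.5, 0.2), sigma=0.1):
    """Annual anomalies: linear trend plus AR(2) noise (placeholder values)."""
    eps = rng.normal(0.0, sigma, n_years)
    noise = np.zeros(n_years)
    for t in range(2, n_years):
        noise[t] = ar[0] * noise[t - 1] + ar[1] * noise[t - 2] + eps[t]
    years = np.arange(2014 - n_years + 1, 2015)
    return pd.Series(trend * np.arange(n_years) + noise, index=years)

# Earliest trendless start year in each simulated series (None = no pause found).
starts = [trendless_span(simulate_series()) for _ in range(1000)]

# Fraction of runs producing a pause starting in 1995 or earlier, i.e. at
# least as long as the observed one discussed in the post above.
frac = np.mean([s is not None and s <= 1995 for s in starts])
print(frac)
```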
Return periods of global climate fluctuations and the pause
Shaun Lovejoy
Abstract. An approach complementary to General Circulation Models (GCMs), using the anthropogenic CO2 radiative forcing as a linear surrogate for all anthropogenic forcings [Lovejoy, 2014], was recently developed for quantifying human impacts. Using preindustrial multiproxy series and scaling arguments, the probabilities of natural fluctuations at time lags up to 125 years were determined. The hypothesis that the industrial epoch warming was a giant natural fluctuation was rejected with 99.9% confidence. In this paper, this method is extended to the determination of event return times. Over the period 1880–2013, the largest 32 year event is expected to be 0.47 K, effectively explaining the postwar cooling (amplitude 0.42–0.47 K). Similarly, the “pause” since 1998 (0.28–0.37 K) has a return period of 20–50 years (not so unusual). It is nearly cancelled by the pre-pause warming event (1992–1998, return period 30–40 years); the pause is no more than natural variability.
Published in Geophysical Research Letters [link] to full manuscript.
The conclusion states:
“Unless other approaches are explored, the AR6 may simply reiterate the AR5’s “extremely likely” assessment (and possibly even the range 1.5–4.5 K). We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin. To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation.”
JC comment: I like Lovejoy’s general approach, but convincingly rejecting a centennial scale giant fluctuation requires more robust paleo proxy reconstructions. Lovejoy identifies a magnitude of the natural fluctuations of ~0.4C, which is the largest such estimate I’ve seen.
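As a very rough illustration of what a ‘return period’ for a temperature fluctuation means, the sketch below simply counts how often changes of a given size over a given interval appear in a long annual series, and converts the exceedance frequency into an approximate waiting time. This is emphatically not Lovejoy’s method, which rests on scaling arguments and fat-tailed fluctuation statistics; the file name and the 16-year/0.3 K example values are hypothetical.

```python
# Crude empirical return-period estimate from overlapping differences.
# Not Lovejoy's scaling-based method; purely an illustration of the concept.
import numpy as np
import pandas as pd

def empirical_return_period(series, dt_years, amplitude):
    """Approximate return period (years) of a temperature change of at
    least `amplitude` over `dt_years`, counted from overlapping windows."""
    x = series.values
    changes = np.abs(x[dt_years:] - x[:-dt_years])   # all overlapping dt-year changes
    p = np.mean(changes >= amplitude)                # exceedance probability per window
    if p == 0:
        return np.inf
    # Roughly one independent dt-year event per dt years of record.
    return dt_years / p

# Usage (file name and numbers hypothetical): an annual preindustrial multiproxy series.
# proxy = pd.read_csv("multiproxy_annual.csv", index_col=0).squeeze()
# print(empirical_return_period(proxy, dt_years=16, amplitude=0.3))
```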
JC reflections
The climate community is in a big rut when it comes to climate change attribution – as I’ve argued in previous threads, climate models are not fit for the purpose of climate change attribution on decadal to century timescales. Alternative methods are needed, and the two papers discussed here are steps in the right direction.
We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data. The paleo proxy community also seems to be in a rut, with continued reliance on tree rings and other proxies that have serious calibration issues.
The key challenge is this: convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting natural variability as a predominant explanation for the overall century scale warming and also the warming in the latter half of the 20th century. Global climate models and tree ring based proxy reconstructions are not fit for this purpose.
It is not the arguments of the skeptics they are fighting; rather, Mother Nature is rejecting their pathetic dogmatism. Thus far skeptics have been proved right just by being properly skeptical of the massive, entrenched yet unjustified hubris of the climate cabal.
And let’s not forget the public trough at which the alarmists feed.
Waiting for Webby to proclaim Mother Nature performed an own goal.
Anthony Watts posts McKitrick’s paper with major figures:
New paper on ‘the pause’ says it is 19 years at surface and 16-26 years at the lower troposphere
Actually things are really pretty simple. The return periods can be estimated from the multiproxies, and the multiproxies are quite reliable up to 125 years or so (the time scale of the corresponding warming period since about 1880). Indeed, using the same methodology as described in my comment below (about the accuracy of the instrumental series), the three I used agreed to about ±0.07 °C at 125 year time scales. (The issues about medieval warming are indeed where there is disagreement, but below about 125 year scales all the multiproxies agree about the amplitudes of change to about the level indicated. Indeed, one of the three multiproxies that I used was based on boreholes, so it has no paleo calibration issues, yet for 125 year statistics it is very close to the others.)
You can then estimate the return periods of natural fluctuations from graph:
http://www.physics.mcgill.ca/~gang/Anthro.simple.jpeg/Simplified.Anthro.GRL.figs.return.small.3.8.14.jpg
(other stuff can be seen at: http://www.physics.mcgill.ca/~gang/Lovejoy.htm or:
http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthropause.GRL.final.13.6.14bbis.pdf)
A cursory glance at the figure:
http://www.physics.mcgill.ca/~gang/Anthro.simple.jpeg/Simplified.Anthro.small.forcing.jpg
shows that it was only in 2012 that the globe recovered from the enormous pre-pause warming (1992-1998, return period 20 – 30 years) and the temperature finally went below the long term trend line at the far right of the figure.
Indeed, if there hadn’t been a pause, the temperature increase would have been so large as to invalidate the anthropogenic warming hypothesis!
According to a new stochastic analysis (under review), assuming that emissions and other anthropogenic forcings continue to grow at their current rates, the pause will have to continue until 2019-2020 before it can be used to reject the anthropogenic warming hypothesis with 95% certainty. Until now, the pause is exactly as expected and will not be very unusual for another couple of years- assuming that it hasn’t already ended….
Shaun
Is 125 years really long enough?
The globe has been warming for over 300 years. In 2006 Phil Jones examined the extraordinary warming from 1700 to 1740, which came to an abrupt end in 1740 with one of the most savage winters in the entire record. The 1730s were the warmest decade until the 1990s fractionally eclipsed them.
He came to the conclusion that natural variability was much greater than he had previously realised. That natural variability is not reflected in the period since 1880, so I am not sure it is an adequate period to ascertain the potential levels of natural variability.
Tonyb
Shaun Lovejoy: Until now, the pause is exactly as expected and will not be very unusual for another couple of years- assuming that it hasn’t already ended….
It is a shame that it had not been “expected” until after it had been well-underway for a while, and had been disputed by famous scientists. Of course it is merely the “expected value” of a model developed post hoc and not yet tested against out-of-sample data. Had the IPCC expected it 15 years ago, or Hansen in 1988, their prospective global mean temperature graphs for the early 21st century would have been much different.
How long is the pause now “expected” to last? That was Prof Curry’s question, rephrased in terms of expectancy.
As I indicated in a comment below, the pause will have to last until 2019 or 2020 before its probability gets down to the 5% level which would be needed to reject the anthropogenic hypothesis at the 95% level.
Tonyb: Is 125 years really long enough?
While we await Prof Lovejoy’s response to this and my comment below, my answer is “No”. It is too short to estimate the recurrence times between large excursions such as the Roman Warm Period and Medieval Warm Period.
To Tonyb:
There are two issues for the time scales and one issue for the spatial scales. For the spatial scale, it is important to consider global scale (at least hemispheric) temperature changes – the changes over small regions (such as central England) are always much larger (not much averaging). For the time scales, first, if you want to know the statistics of 125 year changes, then you only need probabilities of 125 year changes. But, second, to get this information, you’ll need long pre-industrial records; that’s where the multiproxies come in. However – a commonly misunderstood point – they do not need to accurately measure changes at periods longer than 125 years. They can totally disagree with each other about the medieval warming, for example (800 year changes): taking differences is a high pass filter.
That being said, direct use of the existing multiproxy statistics is insufficient to measure changes of the order of 0.8–1 °C; these are much larger than any of those observed over 125 years (again, for global spatial scales). Assumptions have to be made about the nature of the extremes. This is done all the time in statistics – usually the bell curve is chosen. With this choice, the change since 1880 is between 4 and 5 standard deviations – a probability of about one in a million. That’s why it needs a more refined analysis. In the end, one finds that extremes occur about 10 – 100 times more often than the bell curve would allow; that’s why the probability of the change since 1880 being natural is as high as it is – about one in a thousand rather than one in a million!
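A small numerical companion to this comment: the snippet below just evaluates the Gaussian tail probability at 4–5 standard deviations and applies the 10–100x factor quoted above, to show the order-of-magnitude difference fat tails make. It does not reproduce Lovejoy’s actual fluctuation distribution.

```python
# Gaussian tail probabilities at 4-5 sigma, and the same probabilities
# inflated by the 10-100x fat-tail factor quoted in the comment above.
from scipy import stats

for z in (4.0, 4.5, 5.0):
    p_gauss = stats.norm.sf(z)          # one-sided Gaussian exceedance probability
    print(f"{z} sigma: Gaussian {p_gauss:.1e}; "
          f"with a 10-100x fat-tail factor: {10 * p_gauss:.1e} to {100 * p_gauss:.1e}")
```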
Dr. Shaun Lovejoy: Dr Tony Brown made a rather convincing case on this blog a few years ago – http://judithcurry.com/2011/12/01/the-long-slow-thaw/ – that the multiproxies are probably not as reliable as the CET data. Seems that the only argument (from Mann) against using the CET data is that it’s not global enough, but the counter is that they match the NH instrument data and are close to global.
H/T Tony
All of England is 0.04% of the globe and the Central England Temperatures (CET) represent an even smaller part of the earth. It may be more reliable than multiproxies, but it isn’t very useful for our purpose, which requires global scale values. The CET variability is very much larger than the global scale variability. For example, the “Friends of the Earth” tried to debunk my paper by referring to a 0.9 °C change in CET from 1663–1763 (this is roughly the global change from 1880–2004). However, over the same period, the global scale temperature change was only 0.21 °C (a typical century scale natural variation), which proved my point rather well. This is a much smaller variation because while the CET might have increased by 0.9 °C, other regions decreased their temperature, partially offsetting the increase. The globe contains more than 2500 regions the size of the CET and many did not change temperature by the same amount – or even in the same direction – during this period!
So your second plot allows one to estimate the ‘equilibrium climate sensitivity’ for 2x[CO2], as we know 280 to 390 ppm gives a change in the ‘equilibrium’ global temperature of 1 degree.
Thus, your analysis generates an ECS of 2.1.
The equilibrium climate sensitivity is a theoretical/model concept; I do not pretend to estimate it. The fact that the effective climate sensitivity is not so different from estimates of the climate sensitivity to CO2 is presumably because contributions from methane and other GHGs roughly cancel out the cooling due to aerosols.
But the nice thing is that all this is irrelevant to the conclusions (it just makes them more plausible, since they are compatible with a totally different approach).
DocM, he is careful to call it ‘effective climate sensitivity’ because it uses CO2 as a proxy and because it takes no account of ocean delay.
The ocean delay was taken into account – the delay was bounded between zero and twenty years using cross-correlation analysis and then the zero and twenty year delayed sensitivities were used. The uncertainty in the delay is the biggest cause for uncertainty with this method.
Furthermore, in the paper he shows that the fit works just as well for up to a 20-year lag where the ECS becomes over 3 C per doubling because adding the lag means more warming for a given earlier CO2 increase.
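For readers who want to see what this kind of ‘effective sensitivity’ regression looks like in practice, here is a minimal sketch: regress temperature anomalies on log2(CO2/280) shifted by a trial lag, so the slope reads directly in degrees per CO2 doubling. The file and column names are hypothetical and this is not Lovejoy’s code; it only illustrates why the assumed ocean lag moves the estimate.

```python
# Sketch of an "effective sensitivity" regression: slope of temperature
# against lagged log2(CO2/280), in deg C per doubling. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def effective_sensitivity(temp, co2, lag_years):
    """OLS slope (deg C per doubling) of temperature on lagged log2(CO2/280)."""
    forcing = np.log2(co2 / 280.0).shift(lag_years)   # CO2 leads temperature by lag_years
    df = pd.concat({"T": temp, "F": forcing}, axis=1).dropna()
    fit = sm.OLS(df["T"], sm.add_constant(df["F"])).fit()
    return fit.params["F"]

# Usage (file names hypothetical): annual series indexed by year.
# temp = pd.read_csv("gmst_annual.csv", index_col=0).squeeze()
# co2  = pd.read_csv("co2_annual.csv", index_col=0).squeeze()
# for lag in (0, 10, 20):
#     print(lag, effective_sensitivity(temp, co2, lag))
```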
“Is 125 years really long enough? . . . ”
Not really, though it is more rational than many of the shorter proposed periods. However, it sturdily ignores much longer period natural variation like Dansgaard-Oeschger events, which can span a millennium or more. And it ignores the overall cooling trend of the Holocene, spanning the last 8,000 years or so. Not to mention ice ages and even longer patterns that are all known contributors to what we laughingly call global climate over geological spans. Still, it does head in the right direction, and it does pose the very appropriate point that until natural variation of climate is understood, anthropic effects are going to be lost in the noise.
Dr. Lovejoy’s 95% confidence is what bothers me. One chance in 20 of being wrong is gambling odds and I have even thrown dice series that were far, far less likely. So, ideally, rather than two-sigma significance, which is the QC level that the less effective industrial quality control systems shoot for, how about a four or five sigma certainty? We might run our lives on the precautionary principle, but we really can’t do science that way.
It is 4–5 standard deviations, but because of the frequent extremes a Gaussian is not appropriate, so rather than one chance in about a million this translates into a one in a thousand chance.
I get different results using a longer air temperature series than giss.
Kinda concerned about borehole data.
And it’s similarity to other paleo data indicts the latter.
Steven Mosher, “I get different results using a longer air temperature series than giss. Kinda concerned about borehole data, and it’s similarity to other paleo data, indicts the latter.” ( lightly edited :)
How about posting a comparison?
Shaun Lovejoy: As I indicated in a comment below, the pause will have to last until 2019 or 2020 before its probability gets down to the 5% level which would be needed to reject the anthropogenic hypothesis at the 95% level.
Let me try again: How much longer do you expect the pause to last?
I did partially answer; I’ll give a more detailed one. If we assume that the anthropogenic effects continue as at present, there is a 67% chance that the pause will end before the end of 2016 and a 95% chance before 2019–2020. Is that good enough? This is the result of stochastic forecasting (work in progress).
Captain
Lovejoy says he used the 2010 GISS global and NH.
It’s unclear whether he used land ocean or land only.
If he used land ocean we have a problem.
The proxies he used, Moberg and Wahl, are also problematic.
I will have to dig more but I show amplitudes higher than his.
Since he used the NH, the Ljungqvist, F.C. 2009 northern extratropics reconstruction might be interesting.
Ljungqvist, F.C. 2009.
https://lh4.googleusercontent.com/-F7C4538ir3Q/VAUbfr9hyOI/AAAAAAAALcI/uKTnyeFx_c4/w774-h484-no/2000%2Byears%2Bn.%2Bextra%2Btropics.png
You know, with that land amplification and all.
If I’m extrapolating correctly from this chart:
– Trend: +0.2 C per +0.1 log-CO2 (i.e., about 2.0 C per doubling).
-> So if 2000 is at 0.4 on the X-axis (the canonical 2050/2x-CO2 is 1.0), we’d expect 0.6 × 2.0 = +1.2 C of further warming.
This is closer to Lindzen estimates than AR4/5 estimates; isn’t warming *very unlikely* to be less than +1.5 C? In other words, why so skeptical? Maybe I’m reading it wrong?
Your calculation is about right (you used an effective sensitivity of 2.0 °C/doubling, which is close to the slope of the graph, 2.33). However the main uncertainty in this approach is the lag. The graph and slope indicated are for a relation between CO2 and temperature at an annual scale. If you lag the CO2 and response by up to 20 years then the relationship is just as linear but the slope (sensitivity) increases to 3.73 °C/doubling, which is a fair bit more. That explains the uncertainty range in my papers.
The argument for the lag is that most of the heating goes first into the ocean; however, the feedbacks work both ways, so things may not be so simple.
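A worked version of the back-of-the-envelope extrapolation in this sub-thread, using only the numbers quoted in the comments (about 0.4 of a doubling realized by 2000, and slopes of roughly 2.33 and 3.73 °C per doubling for 0- and 20-year lags). The figures are taken from the comments, not re-read from the linked graph.

```python
# Remaining warming to 2x CO2 implied by the slopes quoted above.
realized_doublings = 0.4                    # position of year 2000 on the x-axis (from the comment)
for slope in (2.33, 3.73):                  # deg C per doubling at 0- and 20-year lag (from the comment)
    remaining = (1.0 - realized_doublings) * slope
    print(f"slope {slope} C/doubling: about {remaining:.1f} C more warming by 2x CO2")
```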
Speaking of borehole temperature…
“For central Greenland (Cuffey et al. 1995, Cuffey and Clow 1997, Dahl-Jensen et al. 1998), results show a warming over the last 150 years of approximately 1°C ± 0.2°C preceded by a few centuries of cool conditions. Preceding this was a warm period centered around A.D. 1000, which was warmer than the late 20th century by approximately 1°C. ”
Surface Temperature Reconstructions for the Last 2,000 Years ( 2006 )
http://www.nap.edu/openbook.php?record_id=11676&page=81
Shaun: Thank you for your reply. Tony has a paper that compares CET to BEST and I don’t see the 0.9 C change between 1663 and 1763. Here is an excerpt from the paper:
“According to studies made by a number of climate scientists, CET is a reasonable proxy for Northern Hemisphere – and to some extent global – temperatures, as documented in ‘The Long Slow Thaw’. However, as Hubert Lamb observed, it can ‘show us the tendency but not the precision’.”
http://wattsupwiththat.com/2012/08/14/little-ice-age-thermometers-historic-variations-in-temperatures-part-3-best-confirms-extended-period-of-warming/
For the values of temperature change from the CET from 1663–1763, I’m just quoting from the Friends of Science site.
It seems that you are arguing that the centennial scale trends of the CET may be indicative of global scale trends. While that may be the case, there is clearly a large reduction in the amplitude of global fluctuations with respect to the CET fluctuations, precisely because of the huge amount of spatial averaging that goes into the global average compared to the CET values. The numbers I quoted (0.9 °C for CET compared to 0.2 °C for the northern hemisphere) illustrate this well. Using classical statistics – which assume independence – if the global area is 2500 times larger than the CET area, we would expect 50 times smaller fluctuations (square root of 2500), whereas in reality they are only a factor of 4 – 5 times smaller. This difference is because of the spatial correlations that act over long distances (scaling, nonclassical statistics).
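The square-root-of-N point in this reply is easy to illustrate with synthetic data: averaging 2500 independent regions shrinks year-to-year fluctuations by a factor of about 50, while even a modest shared component between regions leaves the reduction at only a factor of a few. The numbers below are entirely synthetic and only demonstrate the statistical mechanism, not the actual long-range spatial scaling Lovejoy describes.

```python
# Toy demonstration: variance reduction from spatial averaging with and
# without inter-regional correlation. Synthetic numbers only.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_years, rho = 2500, 200, 0.05     # rho: pairwise inter-regional correlation

# Independent regional anomalies.
indep = rng.normal(0, 1, (n_years, n_regions))

# Correlated regional anomalies: shared component plus local noise.
common = rng.normal(0, 1, (n_years, 1))
corr = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.normal(0, 1, (n_years, n_regions))

for name, field in (("independent", indep), ("correlated", corr)):
    ratio = field[:, 0].std() / field.mean(axis=1).std()
    print(f"{name}: regional std / global-mean std = {ratio:.1f}")
# independent -> about sqrt(2500) = 50; correlated -> only about 4-5
```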
Rls
Thanks for your references to two of my papers. I think Mosh also appears to be a bit startled by the low temperature variability built into Shaun’s work. Having looked at borehole and tree ring data in great detail, I do not have confidence that they should be used as highly accurate proxies. They cannot show the annual and decadal variability of the real world.
I wrote about the relative unrepresentative stability of such proxies compared to observational data in this article
http://judithcurry.com/2013/06/26/noticeable-climate-change/
In looking at a global average supplied by proxy data or instrumental data covering only the last 125 years we are not looking at the real world over the last many hundreds and thousands of years, where variability is at times much greater than proxies or the modern record appear to illustrate.
I admire Shaun’s work and his vigour in supporting his highly theoretical findings covering such a short time scale, but my question remains: how can you put forward a case for, in effect, only human-made warming, when in the past it has been both warmer and cooler than today, illustrating a range of possible natural variability that surprised even Phil Jones back in 2006?
tonyb
Tony: Thank you. Judith says “We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data.”. And you know how to do this. Are any of the consensia listening to you?
We should work with raw data during periods they are available and not proxies. NASA GISS mean global temperature vs. Mauna Loa CO2 data cross plotted have zero correlation during two thirds of the reporting period from 1959 to present, including 1959 to the mid/late 1970s and 1998 to 2014 to date – during these two periods the X-Y data are a double-aught shotgun pattern – no correlation. Focus should be on understanding and explaining that. Then, how does one conclude that extension of the no-increase/hiatus period to 2019–2020 is required to refute the hypothesis? And what would be required to prove the hypothesis true?
Shaun Lovejoy: Is that good enough?
Yes, that’s what I was after. Thank you.
The argument for the lag is that most of the heating goes first into the ocean; however, the feedbacks work both ways, so things may not be so simple.
The argument does not hold water in the SH, as the growth rate of CO2 has decreased in the SH, i.e. the lag in matching MLO has increased from 18 months to around 4 years at the SH midlatitude stations – a difference of around 7 ppm in the 21st century.
Shaun, is there any difference in the rate at which the ‘average’ temperature recovers from a cooling event vs. a warming event?
There is a long term memory in the process so that it’s the entire past temperature history that determines the answer (work in progress). But there appears to be a positive/negative symmetry (at least as far as I can tell).
@Tony
Paleo-temperatures from ice boreholes are supposedly very accurate and need no calibration because they are temperature readings all the way down.
The only caveat is that they become increasingly smeared (averages of longer and longer time periods) as depth increases. So a temperature anomaly 1 C greater than the late 20th century happening in the year 1000 means that high temperature must have persisted for many decades.
Nothing unusual is happening today in the NH if Greenland was as good a proxy for the entire NH then as it is now.
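The ‘smearing’ point is easy to visualize with a toy calculation: pass a synthetic surface-temperature history through averaging windows that widen with depth, and a brief spike fades away while a century-long warm period survives. Real borehole inversions solve a heat-conduction problem rather than applying a simple running average, so the sketch below (with made-up dates and widths) is purely illustrative.

```python
# Toy illustration of borehole smearing: short events vanish, long ones survive.
import numpy as np

years = np.arange(800, 2001)
history = np.zeros_like(years, dtype=float)
history[(years >= 990) & (years <= 1010)] += 1.0    # brief ~20-year spike
history[(years >= 1100) & (years <= 1200)] += 1.0   # century-long warm period

def smoothed(signal, width_years):
    """Gaussian running average with the given width (in years)."""
    half = 3 * width_years
    k = np.exp(-0.5 * (np.arange(-half, half + 1) / width_years) ** 2)
    return np.convolve(signal, k / k.sum(), mode="same")

for width in (10, 50, 150):          # effective averaging widens with depth/age
    s = smoothed(history, width)
    print(f"width {width:3d} yr: spike peak {s[years == 1000][0]:.2f}, "
          f"warm-period peak {s.max():.2f}")
```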
Shaun: The graphs in the link in my last reply to you do indeed show greater fluctuations for CET; however, starting about 1780 the moving averages of both sets closely follow each other, with CET about 0.5 C higher. Can’t this correlation be used to give CET greater confidence and greater usefulness?
Also, you stated “the multiproxies are quite reliable up to 125 years or so”. How is this known?
Thank you for your time and patience.
According to the original Lovejoy post and various replies, it is practically impossible that the warming since 1880 could occur without human contributions, but a pause of the current duration from entirely natural causes is possible.
Am I the only one to see the asymmetry in this argument?
Additionally, if the pause is natural, that means that nature could be responsible for at least half of the 20th century warming (having countered recent warming from man). What does that imply for models and co2 sensitivity?
The problem is that for 50 years, atmospheric science has been based on incorrect physics originating from Carl Sagan. This is to confuse radiative emittance with a real energy flux, the vector sum of emittances. There is net zero surface IR in the self-absorbed atmospheric GHG bands.
There are two other major errors. The first is to misinterpret the Tyndall Experiment. Even if there were surface IR to absorb**, there can be no thermalisation of GHG-absorbed IR in a gas because that would mean absorptivity would exceed emissivity, a breach of Kirchhoff’s law of Radiation.
The second error, from 1981_Hansen_etal.pdf, is the claim of a single -18 deg C OLR emitter radiating up and down; there is no such zone, -18 deg C being a virtual temperature for emission from 0 to ~20 km. with zero downward IR flux except for the stratospheric CO2 and O3 emission, very minor.
The IPCC climate modellers imagine that the mean heating rate of the atmosphere is 238.5 + 333 – 238.5 = 333 W/m^2. The 40% increase is imaginary. Processes in the atmosphere ensure there is mean zero warming by well-mixed GHGs apart from the stratospheric emission.
So, the pause is the norm: the previous heating in the 1980s and 1990s was about half AGW but not from CO2.
**The 23 W/m^2 absorbed in the atmosphere is over a few km, so local thermalisation (at aerosols) is minimal on a unit volume basis.
AlecM commented
I’ve been measuring the zenith with an IR thermometer (8–14 µm), and on anything but hot and humid days it’s quite cold, 70–100 F colder than surface air temps. On hot humid days the difference is as low as 60–65 F.
Cloud bottoms however are much closer to surface temps, only 10 to 40–50 degrees colder.
During this winter I measured -80 F zenith temps.
Even if you then add the CO2 band DWIR, the sky is still very cold, even in the middle of the summer (-40 F).
AlecM – when a CO2 molecule absorbs a photon, it can collide with other air molecules and transfer the energy to them as kinetic energy. So, the point is you can’t just look at radiated energy in and out. You have to consider all pathways the energy can come and go. Meaning, you are wrong about the Kirchhoff’s law of Radiation part.
Reply to Jim2: your physics is wrong. Statistical thermodynamics shows that although there is thermalisation of the GHG-absorbed IR by the mechanism you quote, the ‘excess energy’ in the IR absorbed density of states, a discrete function of temperature, must be ejected from the local gas volume.
This is basic physics and Climate Alchemists, specifically Ramanathan, failed to do the science correctly.
AlecM – the heated air molecules, they aren’t ejected, they just bump into their neighbors more frequently, thereby raising the pressure. The additional pressure expands the heated parcel of air thus reducing its density, then it rises as a thermal. This isn’t the only scenario, but it is a common one.
jim2 commented
This warming should be present in any DWIR that a “regular” IR Thermometer would detect.
So, if the temp of the air is that cold, then why don’t you freeze to death?
Not surface air temps – the zenith IR temp, the radiative temperature that would be what the surface radiates to in an S-B equation.
You can.
http://www.kilty.com/freeze.htm
But I did want to add that the very cold temps are straight up, and as you move toward the horizon the temps go up; I would presume that by the time the thermometer is pointed horizontally it will read ground temps. So this would go into the S-B equation as well.
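For anyone who wants to put rough numbers on this exchange, here is a simple Stefan-Boltzmann sketch that treats the IR thermometer’s zenith reading as an effective sky radiating temperature and computes the implied net longwave loss from the surface. It ignores emissivity, the instrument’s 8–14 micron band limits and angular integration, so it is only an order-of-magnitude illustration; the example temperatures are taken from the comment above.

```python
# Net upward longwave flux for a surface at Ts radiating toward an
# effective sky temperature Tsky (Stefan-Boltzmann, emissivity ignored).
SIGMA = 5.670e-8            # W m^-2 K^-4

def f_to_k(t_f):
    """Fahrenheit to kelvin."""
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def net_longwave(surface_f, sky_f):
    ts, tsky = f_to_k(surface_f), f_to_k(sky_f)
    return SIGMA * (ts**4 - tsky**4)    # W/m^2

# e.g. a 70 F surface under a clear sky reading 80 F colder at the zenith,
# versus under a cloud base reading only 20 F colder:
for sky in (70 - 80, 70 - 20):
    print(f"sky at {sky} F: net longwave loss ~ {net_longwave(70, sky):.0f} W/m^2")
```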
Dr. Curry
Here is my question as a layman (with a brief set up).
Back in the 70’s the prevailing scientific wisdom was that the earth was cooling.
Then in the 80’s a couple of scientists came up with a hypothesis that CO2 is a major driver of global temperatures. The scientific community seemed to say…”yeah, that sounds right.”
Then a few scientists came up with models to predict future global temperatures based on a direct relationship to CO2 levels.
Then came all the dire predictions of “global warming.” The concept really caught on with the public. Yes, they said, man must be driving up the earth’s temperatures with the increasing release of CO2 into the atmosphere.
Then, 17 years ago, the earth’s temperatures stopped rising even though CO2 levels continued to increase.
My question is this…. How can the scientific community be so certain that the trace gas CO2 is so influential in affecting the temperatures of the earth??? It seems to me (and several scientists whose opinions I have seen) that the proof is just not there.
I believe this is a very dark period for science, in which political and social agendas, and not science, are actually driving the debate.
Thanks for your time.
Dave Andrews
Dave,
The notion that CO2 affects global temperature goes back a lot further than the 1980s. Tyndall discovered the radiative properties of CO2 in 1859 and Arrhenius predicted that burning fossil fuels would cause global warming and made the first calculations of the amount of warming back in 1896.
There is a good summary of the history of climate science here.
http://www.aip.org/history/climate/index.htm
There is a good technical introduction to how CO2 and other GHGs affect the temperature of the earth here
http://scienceofdoom.com/2009/11/28/co2-an-insignificant-trace-gas-part-one/
I particularly like the figure from Goody & Yung which compares calculations of outgoing IR radiation with observations – that seems to me to be pretty good “proof” that our understanding of the way GHGs behave in the atmosphere is correct, along with the fact that the average surface temperature of the earth is about 15C and not -18C.
As for the “global cooling consensus” of the 1970’s, that is actually a myth. There were a small number of papers suggesting this and the idea got some traction in the media but the large majority of papers supported the notion of global warming caused by human GHG emissions. See
http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1
Andrew adams
Regarding your last link and comment on global cooling. We have had this discussion before with one of the authors, William Connolley. The global cooling scare began in the 1960s and was supported by many papers and the leading climatologists of the era, including Hubert Lamb and Budyko, who wrote of their concerns in books as well as papers. By the time the 1970s came around the tide was already starting to turn against the notion of cooling as temperatures started to deviate from the previous concerns.
The CIA wrote a major paper on the subject in the 1970’s. There was great concern at the time and it is a myth to claim it was a small number of papers.
Tonyb
If the seventies global cooling thing is a myth, then it is a myth that at the time was supported in writing by current White House science advisor Holdren. Global Ecology pp. 64-78 (1971). Some media attention includes three cover stories plus two feature articles (both titled ‘Another Ice Age?’ 11/13/72 and 6/24/74) in Time. And a myth with sufficient traction that on 8/1/1974 Nixon ordered Commerce to set up a subcommittee on climate change (cooling, not warming) chaired by NOAA with an initial annual budget of $39.8 million.
I think your myth assertion is itself mythical. Merited an essay of its own in the next book.
Even if you think it isn’t a myth, it’s irrelevant. Unless you live in some cave somewhere, cooking and heating with fire, then you rely on science despite scientists having been in error in the past.
Logical fallacies are fallacious. I’m writing a chapter about that in my next book.
Tonyb,
“The CIA wrote a major paper on the subject in the 1970’s.”
It didn’t. It assigned someone to investigate, and he went and got an earful from Kutzbach at Wisconsin. His report was tagged:
“This document is a working paper prepared by the Office of Research and Development of the Central Intelligence Agency for its internal planning purposes. Therefore the views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official position, either expressed or implied, of the Central Intelligence Agency.”
My own story of the times is this. In 1976 I took up a job with CSIRO in Perth. The WA government was under pressure to extend their wheat transport facilities into more marginal areas, where cropping was now more feasible via mechanisation. CSIRO was asked for a climate outlook. I came from a Division with close contacts with Atmospheric Physics in Aspendale, so they asked me to ask them. The answer was unequivocal. The GHE was the future, and it would mean an expanded Hadley cell. The westerlies that sustain wheat in WA would move further south – lower rainfall. Bad investment.
The WA Gov’t took our advice, which proved right.
The “global cooling scare” is largely a myth, based on a few news mag articles.
Tonyb is correct, as far as I can see. The CIA wrote that paper, and it concerned cooling, but also cooling related disruption, including warm spells.
CIA, August 1974. And it might well be deemed a major paper. Simple as that. It was indeed tagged for internal use, but it was prepared by the CIA. Maybe they were impressed by the loss of much of the Soviet wheat crop in 1972. Some agricultural follies in China might have got them thinking also.
Obviously the agency got someone to do it. It was either that or wait for a million monkeys to type it by accident. The paper was theirs, under their letterhead.
Why fudge this?
Mosomoso
Yes, it was an official document. You often get disclaimers such as the one Nick cited. It doesn’t get away from the fact that it was an official document drawing together a lot of research and mentioning some of the leading climatologists of the day, whose academic work I then cited in my post.
People are trying to rewrite history, which, bearing in mind it’s only ever ‘ANECDOTAL’, seems a lot of work for nothing.
tonyb
Mr. Stokes, you are being a tad disingenuous to label the global cooling fad a myth. At least without mentioning Stephen Schneider’s paper on it.
An early numerical computation of climate effects was published in the journal Science in July 1971 as a paper by S. Ichtiaque Rasool and Stephen H. Schneider, titled “Atmospheric Carbon Dioxide and Aerosols: Effects of Large Increases on Global Climate”. The paper used rudimentary data and equations to compute the possible future effects of large increases in the atmospheric densities of two types of human environmental emissions: greenhouse gases such as carbon dioxide;
particulate pollution such as smog, some of which remains suspended in the atmosphere in aerosol form for years.
The paper suggested that the global warming due to greenhouse gases would tend to have less effect with greater densities, and while aerosol pollution could cause warming, it was likely that it would tend to have a cooling effect which increased with density. They concluded that “An increase by only a factor of 4 in global aerosol background concentration may be sufficient to reduce the surface temperature by as much as 3.5 K. If sustained over a period of several years, such a temperature decrease over the whole globe is believed to be sufficient to trigger an ice age.”
For a compilation of media articles you can wander over to your favorite site WUWT: http://wattsupwiththat.com/2013/03/01/global-cooling-compilation/.
Or the National Science Board’s statement, “Judging from the record of the past interglacial ages, the present time of high temperatures should be drawing to an end . . . leading into the next glacial age.”
Some quotes: The continued rapid cooling of the earth since WWII is in accord with the increase in global air pollution associated with industrialization, mechanization, urbanization and exploding population. — Reid Bryson, “Global Ecology; Readings towards a rational strategy for Man”, (1971)
……or this:
This [cooling] trend will reduce agricultural productivity for the rest of the century — Peter Gwynne, Newsweek 1976
…..or this:
This cooling has already killed hundreds of thousands of people. If it continues and no strong action is taken, it will cause world famine, world chaos and world war, and this could all come about before the year 2000. — Lowell Ponte “The Cooling”, 1976
or this:
….If present trends continue, the world will be about four degrees colder for the global mean temperature in 1990, but eleven degrees colder by the year 2000…This is about twice what it would take to put us in an ice age. — Kenneth E.F. Watt on air pollution and global cooling, Earth Day (1970)
Tom,
You lecturing Nick Stokes is hilarious.
Just to remind you, this was the first comment on this little sub-thread;
“Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.
That is a myth.
What a very instructive sub-thread. Often I read comments from tony, tom, mosomoso, and rud and think they sound like intelligent and well-informed people.
Then I read their comments in this thread and the paper Adam linked.
It is apparent that they are all, despite their intelligence and being informed, avoiding what Michael pointed out – the statement that spurred this sub-thread was the following:
“Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.
None of them actually addresses the inaccuracy of that statement or the accuracy of Adam’s response:
“As for the “global cooling consensus” of the 1970’s, that is actually a myth.”
Talk about disingenuity! Why is it so important to them that they offer individual examples that don’t speak to the original inaccuracy?
From William’s paper: [figure not reproduced]
Let’s look further. In his initial response, tony says this:
“The global cooling scare began in the 1960’s …”
And further down, in support of that claim, he cites scientists speaking of a cooling phase, not evidence of scientists supporting a “global cooling scare.”
Rud speaks of funding of research into climate change. Not on point.
Mosomoso points out that, well, the CIA wrote a paper. Not on point.
And Tom tells us that Stephen wrote a paper. Not on point.
What an instructive sub-thread. Why is this question so important that our much beloved “skeptics” would offer responses that don’t speak to the inaccuracy of the original statement?
Even if it weren’t inaccurate, it would be fallacious to argue that because there was a consensus about cooling in the ’70s we should assume anything in particular about scientists who say today that there is reason to be concerned about a risk posed of significant climate change as a result of continued ACO2 emissions.
So this compulsion to avoid that actual inaccuracy is doubly interesting.
Sometimes being intelligent and well-informed just ain’t enough.
Joshua
I have taken the trouble to post 3 or 4 replies all with actual links or information that directly relate to the leading climatologists of the time and have quoted their words. I have their books in my Bookcase here and much more of their work stored on my computer. Who is it you disbelieve? Budyko? Lamb? The Cia? The 20 leading climatologists?
The scientists of the time were quite right to point to fears about cooling as after the warm decades preceding the downturn it looked for a time as if it might be severe and ongoing. To claim cooling was a myth is to deny the papers written, the conferences held, the books written and the general consensus of the time which gradually gave way in the 1970’s to a realisation that the cooling was turning to a warming.
Which papers and climatologists do you claim were lying?
It would be surprising if present day scientists weren’t warning about warming. Why do you think I would have a problem with that?
The problem is that I don’t believe they provide a sufficient historic context to demonstrate this, nor access to data that demonstrates it. Shaun quoting 125 years as proof of natural variability is not good enough, nor is the belief that we have for instance a good knowledge of global sea surface temperatures to 1850 enough to demonstrate this is so.
tonyb.
It’s plenty good enough, LOL. See Stadium Wave.
tony –
==> “Which papers and climatologists do you claim were lying?”
I am not claiming that anyone was “lying.”
I am pointing to the fact that none of what you have spoken about speaks to the evidence provided in William’s paper – evidence that supports a conclusion that David’s statement was inaccurate and Adam’s was accurate (the statements that spurred this thread). Can you show me that William’s evidence is inaccurate – that by a matter of some 7 to 1, published papers supported a conclusion of warming rather than cooling?
I am pointing out that you are equating cooling, or a cooling phase, with evidence of a “scare.”
And I am pointing out that this whole argument takes place in the name of fallacious rhetoric – that even if there were a consensus towards cooling (which it appears there wasn’t), it wouldn’t tell us much of anything useful w/r/t evaluating today’s science. The whole argument is embedded in fallacious reasoning.
I actually find it unfortunate to see you stain your solid efforts at collecting evidence by signing on with this kind of partisan “skepticism.” I pretty much expect it of Rud, Tom, and mosomoso to some extent, but I like to think that you’re better than that.
Heh. Some 6 to 1. And of course, the neutral papers also speak to the “cooling scare” myth (as it refers to the scientific literature).
Joshua: I remember it and, not being part of the climate science community, learned of it through the media and word of mouth. And I can say this with confidence: the message was widespread and scary. However, like today, it was at the bottom of concerns for most people; it was the time of Watergate, recessions every three years, and high inflation. If you want to understand, perhaps the newspaper articles of the day would be the best source.
Which papers and climatologists do you claim were beating their wives, Joshua?
It takes a physicist to model an unknown fear today.
“Physicist Alessandro Vespignani of Northeastern University in Boston is one of several researchers trying to figure out how far Ebola may spread and how many people around the world could be affected. Based on his findings, there will be 10,000 cases by September of this year and it only gets worse from there.
(A model created by Alessandro Vespignani and his colleagues suggests that, at its current spread, Ebola may infect up to 10,000 people by September 24. Other models suggest up to 100,000 infected globally by December of this year. The shaded area is the variability range.)”
I guess.
Joshua
Despite being quoted chapter and verse on the cooling fears articulated by the likes of Budyko and Lamb, you still harp on about no evidence.
Put your logical sceptical hat on for a moment. Here is what I said.
‘Regarding your last link and comment on global cooling. We have had this discussion before with one of the authors, William Connolley. The global cooling scare began in the 1960s and was supported by many papers and the leading climatologists of the era, including Hubert Lamb and Budyko, who wrote of their concerns in books as well as papers. By the time the 1970s came around the tide was already starting to turn against the notion of cooling as temperatures started to deviate from the previous concerns.’
Imagine yourself to be a climate scientist. There had been a 30 year warming from 1940, then this suddenly reversed, with some notably cold winters. We then have 25 or so years of cooling. It’s there in the graphs.
Do you look at the evidence and say its cooling, or do you ignore the evidence and claim warming that isn’t there?
William C took papers up to 1979. By that time the cooling concerns had long passed. Why would scientists still be writing papers about it by then? They would look foolish in light of the evidence showing a warming trend was well established.
Please note what I wrote; that this was a 1960’s scare based on the evidence. By the time the 70’s came round the evidence was showing something else to be happening and by the end of that decade warming was well established.
Of course William would find more papers if he went up to 1979. If he looked objectively at the literature of the 1950s/1960s era – with an overlap, as it always takes time for an idea to lose momentum – he would find the general thinking was that there was a problem with cooling. By the end of the 1970s there were very few making that claim.
tonyb
Joshua
I said here
http://judithcurry.com/2014/09/01/how-long-is-the-pause/#comment-623985
‘….Imagine yourself to be a climate scientist. There had been a 30 year warming from 1940 …’.
I did of course mean UNTIL 1940.
tonyb
tonyb,
It’s a long way from a few papers and some concerns to “prevailing scientific wisdom”.
Only takes one valid observation…
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0105948
to end all the speculation.
tony –
You raise an interesting point w/r/t the timing. First, somewhat less importantly, let’s look at what was said.
==> “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling.”
So technically speaking, in that the ’70s would mean from 1970-1980, even with your explanation the original statement would be inaccurate and Adam’s statement would be accurate.
But more importantly (if anything can ever really be considered to be important in the silly squabbles), the underlying debate is whether (1) there was a prevailing “scare” promoted by scientists that catastrophic cooling was on the horizon, and (2) whether that has relevance (in a meaningful sense as opposed to a partisan rhetorical sense) to today’s debate about climate change.
As for the first point… even if we were to break down the range of the 70s into the early 70s and the later 70s, or even if we were to extend that to the mid-60s or early 60s – there is a large gap between saying that some scientists tentatively spoke of a “phase” of cooling and saying that it was a “scare” being promoted by scientists. You seem (to me) to be deliberately jumping over a lower bar. That’s kind of OK, IMO, if you make it clear what you’re doing. The information that you add w/r/t the different time periods involved and how that relates to William’s data is relevant – but your approach to enlarging the discussion to include that point seems to me to be more in service of advancing a fallacious agenda. It would seem easy enough to break down William’s data by publishing date. Have you done so? If so, then it would seem that you could easily show me that for a different period of time than what David referenced, there was a “prevailing scientific wisdom” that the Earth was cooling – for at least a time-limited period. But even that wouldn’t justify your “scare” language, as to do so you’d have to cross yet another higher evidence bar – to show (1) that the prevailing wisdom was that there would be long-term cooling and (2) that this long-term cooling trend would be worthy of being “scared” of.
As for the 2nd point, I’ve said it multiple times. This entire discussion is rooted in a fallacious attempt by “skeptics” to argue that (if it were true that) scientists’ believing something 50 years ago has direct relevance to what they believe now – with the state of the science being completely different.
You know, tony, when I read the technical arguments of “skeptics,” sometimes they make sense to me. But I know that my ability to understand the technical arguments is extremely limited, so I have to use other tools to try to evaluate the scientific debate. One of those tools is to look for arguments made by participants in the climate wars that I can tell are fallacious – that I don’t need technical scientific skills to evaluate. This is one of those cases. I also add into that consideration of the general approach of certain individuals to debate. You’re actually one of those “skeptics” that I think is more reliable in your general approach.
When Rud or Tom or mosomoso employ fallacious rhetoric to advance their views, it is instructive as to the probabilities of the technical arguments they make. Not dispositive, of course. They could simply advance fallacious arguments in one area and make completely valid arguments in a more technical area. But it is evidence that helps me to evaluate probabilities. Overt politicization might be more information about probabilities. Refusal to acknowledge the symmetry of motivated reasoning is more information about probabilities. Prevalence of a view among highly educated and specialized “experts” is also information about probabilities. Consistency in application of criteria (say, about uncertainty) is also information about probabilities.
So when someone who I generally consider to be more reliable within that matrix of probabilities makes what seems to me to be weak arguments in the service of fallacious rhetoric – I have to ask myself why he does that – and whether or not it is more information about the probabilities. If a more reasonable “skeptic” (IMO) is willing to engage in weak and fallacious debate, then does it say anything about the larger spectrum of “skeptical” arguments? I think it does.
Of course, the same processes apply on the other side of the debate with “realists.” Yes, I do tell them that. But I can guess that, predictably, some of my much beloved fans like mosher and Don will sign on to dismiss the probabilities to which I speak by fallaciously attacking me and personalizing the argument. Such identity-aggressive and identity-defensive behaviors are well-predicted by the empirical evidence that supports the role of motivated reasoning in highly polarized debates like the debate about climate change. It’s funny that smart and knowledgeable people fall into the trap of thinking that whether or not I apply standards evenly speaks to whether or not other people do.
But as always, I do thank you for the respectful exchange. It is one of the reasons that I hold your opinions in higher regard than the opinions of many of my much beloved “skeptics.”
And tony –
One more point (because the previous comment wasn’t long enough and my fans just can’t get enough of my long comments)…
It’s also rather interesting that even if it were true that there was a “prevailing scientific wisdom” in, say, the mid-60s that there was a “scary” imminent long-term cooling, if anything the fact that such a view was relatively quickly corrected because of increased information and a longer data period is all the more reason to reject much of what I see in “skeptical” arguments about the state of the science of climate change. But then again, trying to extrapolate from those events is more fallacious than meaningful, and something only likely to be done by someone engaged in motivated reasoning… so I won’t make that argument here.
Heh.
I like ‘relatively quickly corrected’. Now? Not so much.
=============
Joshua
I think you are paying me a back handed compliment. Here is the paper in question;
http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1
entitled ‘the myth of the 1970’s global cooling consensus.’ It is a well written paper but from the title we can see what view point the authors are putting over.
See table 1.
As I have said at least three times on the thread, it wasn’t a 1970s thing. The concerns started in the 1950s and built up in the 1960s when the trend had become obvious. As Budyko said:
‘in the 1940’s the warming trend was overcome by a cooling trend which intensified in the 1960’s and in the mid 60’s the mean air temperature of the Northern Hemisphere (once again) approached the level of the cold seasons of the late 1910’s’
However, this strong trend had tailed off by the 1970s, so a search of that last decade for evidence on cooling is rather pointless, as by that time the concerns were about warming, and as the decade wore on more papers explored the emerging warming trend.
It is a shame that a detailed search of the earlier papers and books was not made for the article, some of which have been quoted here including Lamb and Budyko, who changed their minds concerning the prevailing climate state in the early 1970’s as new evidence came along.
tonyb
tony –
No backhanded compliment. Unlike most of the “skeptics” I encounter here, you are willing to exchange views in good faith, and you tend to avoid conflating personal attacks with logical arguments. That really does increase the validity of your perspective, IMO – particularly if it is w/r/t arguments being made where I can’t evaluate the technical merits of different viewpoints.
On the topic of the sub-thread – AFAIAC, you still have not addressed the points that I’ve made (some of them a couple of times). Not sure that there’s any further for us to go on this one unless you do.
Joshua, it’s funny to watch you employ the tactics you claim to abhor in others.
Tony, myself and others did not claim that global cooling was prevailing scientific opinion in the 70s. But that’s a convenient strawman for you to pin your obsessions to.
What we claim is that there was widespread media coverage of a potential cooling with potential damaging impacts. This was in part fed by peer reviewed papers including one by Stephen Schneider and in part fed by grave pronouncements from national associations.
As others have pointed out to you, I too was there and I remember the coverage quite well.
We don’t obsess on it–that seems to be your schtick, although it’s a blessed relief for you to be whining about us instead of Judith–so please continue.
We do note that some of the current whining by alarmists has a familiar ring to it. Perhaps that grates on you most of all.
But I guess when fallacious reasoning leads to felicitous conclusions it’s acceptable.
In Search Of… The Coming Ice Age
https://www.youtube.com/watch?v=L_861us8D9M
Watched it in 1977. Also watched Tomorrow's World, spreading soot on Arctic ice to melt it. Dick Emery doing his sketch on surviving the ice age in his Inuit costume.
Above I show why Tyndall's experiment has been misinterpreted, and also why net surface IR emission has been exaggerated 5.1x.
Correction; the exaggeration of net GHG-absorbed surface IR is (396-40)/23 = 15.5x!
Hey Joshua, check out the Stephen Schneider comment
on a coming ice age @ 21.19 on the video that Doc Martyn
posted. Guess those geologists, meteorologists et al, (not Al)
and people stuck in snow on the roads were quite worried at
the time.
We’d be worried too if the cold hits now what with greater
populations to be fed, more people crowded in high rise
living and dependent on heating, and reduced energy
efficiency as intermittent solar and wind energy replace
fossil fuels thanks ter the back ter the, er, golden age,
campaigning by Greens.
Dave Andrews,
Actually the predictions of global cooling were communicated mostly in the media and in some policy circles. The scientific community as a whole did not articulate a predominant view on the future direction of climate. The behaviour of CO2 as a “greenhouse” gas was well known by the 1970s, but it was not clear when and if it would become a dominant driver of climate. In 1965 a report to the Johnson Administration anticipated warming due to CO2: Environmental Pollution Panel (1965), Restoring the Quality of Our Environment, 133 pp, President’s Science Advisory Committee, Washington D.C.
see
http://climatechangenationalforum.org/fears-of-freezing-the-1970s-are-calling-they-want-their-climate-policies-back/
wrhoward
It is not correct that the concerns over cooling were a media creation.
Lamb, Mitchell, Budyko, Ladurie etc etc sparked the concerns.
As Budyko himself says (he seems to have subsequently changed his mind about cooling, as did Lamb, as scientists should do when new evidence comes to light) in his book 'The Earth's Climate: Past and Future', page 148:
‘it was generally accepted that a tendency towards climatic cooling appeared during the last few decades; since the sign of temperature fluctuations changes relatively rarely, the scientists concerned with climatic change almost UNANIMOUSLY (my capitalization) believed that the temperature would continue to decrease in the near future…Lamb 1973 mentioned that more than 20 forecasts of the early 70’s concerning climatic change predicted a cooling trend in the next few decades, but (then) indicated a lack of sufficient scientific grounds for these forecasts and two years later obtained the FIRST (my capitalization) evidence of a possible climatic change towards warming.”
(The temperature cooling can be seen in the Willett/Mitchell curves of the time)
Budyko continues;
‘in the 1940’s the warming trend was overcome by a cooling trend which intensified in the 1960’s and in the mid 60’s the mean air temperature of the Northern Hemisphere (once again) approached the level of the cold seasons of the late 1910’s .”
To summarise, here is what seems to have happened. As you know, there was a very substantial warming from the 1920's to the 1940's. This reversed itself. By 1962/3 the dropping temperature made Callendar himself doubt his greenhouse theory. Budyko, Lamb and an almost 'unanimous' body of climate scientists believed we were heading into a significant cooling phase. Lamb eventually pointed out in 1973 that the cooling had not lasted long enough to constitute a scientifically meaningful climatic trend of at least 30 years. The widespread scare of cooling changed into a scare of warming as temperatures started to recover.
Here are a couple of additional links and a quote;
“The second important group analyzing global temperatures was the British government’s Climatic Research Unit (CRU) at the University of East Anglia, founded by Lamb in 1971 and now led by Tom Wigley. Help in assembling data and funding came from American scientists and agencies. The British results agreed overall with the NASA group’s findings — the world was getting warmer. In 1982, East Anglia confirmed that the Northern Hemisphere cooling that began in the 1940s had turned around by the early 1970s.
http://www.aip.org/history/climate/20ctrend.htm
Also see;
http://earthobservatory.nasa.gov/Features/GISSTemperature/giss_temperature2.php
So the 25 year long (very real) cooling scare was most rife during the 1960’s and came to an end in the early 70’s.
tonyb
wrhoward, it should be fairly easy to find the letters to the editors of the media where the bulk of the scientists were trying to set the record straight. I'd love to see the collection from that time period. Are you aware of one? I wouldn't think it would be very flattering to the climate science community if they were perfectly willing to let a few alarmists manipulate the public in such a way and not try to stop it. It would probably be better to just claim to have been wrong then but right now.
Steven ” it should be fairly easy to find the letters to the editors of the media where the bulk of the scientists were trying to set the record straight. I’d love to see the collection from that time period. Are you aware of one?”
My point was that *as a community* climate scientists did not communicate a prevailing view. Today, the prevailing paradigm is that global warming due to accumulating greenhouse gases is the main climate risk we face. This view is communicated widely through IPCC, national academies, scientific societies, government science agencies, and the media.
In the 1970s climate scientists as a community were more divided, and had not come to a predominant view on which way the climate’s trajectory would proceed from the 1970s. Some studies had already suggested GHG-driven warming would dominate climate, others anticipated cooling as some of the posts here show.
http://climatechangenationalforum.org/fears-of-freezing-the-1970s-are-calling-they-want-their-climate-policies-back/
see also:
Peterson, T. C., W. M. Connolley, and J. Fleck (2008), The Myth of the 1970s Global Cooling Scientific Consensus, Bull. Am. Meteorol. Soc., 89(9), 1325-1337, doi:10.1175/2008BAMS2370.1.
Weart, S. R. (2010), The idea of anthropogenic global climate change in the 20th century, Wiley Interdisciplinary Reviews: Climate Change, 1(1), 67-81, doi:10.1002/wcc.6.
http://scienceblogs.com/gregladen/2013/06/04/the-1970s-ice-age-myth-and-time-magazine-covers-by-david-kirtley/
climatereason
“It is not correct that the concerns over cooling were a media creation.”
Indeed they were not a media *creation* but there were media stories communicating the concerns. Nothing like the current coverage of climate change, but stories were there.
The notion that in the 1960s and 1970s climate scientists were *unanimous* about anything is incorrect.
http://scienceblogs.com/gregladen/2013/06/04/the-1970s-ice-age-myth-and-time-magazine-covers-by-david-kirtley/
The little ice age of the mid 20th century was widely discussed in the astrophysics community. eg Sagan
Although the radiative transfer time scale of photons from the core to the photosphere of the Sun is long, all the models identify epochs of anomalously low neutrino flux with epochs of anomalously low surface luminosity; and the temptation to connect the present “ice age” on Earth with low solar luminosity, L, has been unsuccessfully resisted.
http://www.nature.com/nature/journal/v243/n5408/abs/243459a0.html
wrhoward, I’m aware of the argument you were making. I’ve seen it before. I don’t have a problem with your argument being that the majority of the climate science community sat in silence as they watched the general public being told things they felt were in error.
Tom,
You lecturing Nick Stokes is hilarious.
Just to remind you, this was the first comment on this little sub-thread;
“Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.
That is a myth.
wrong spot.
Michael
Various of us have quoted the leading climatologists of the time and relevant contemporary papers to demonstrate that what we remember is exactly what happened. Global cooling was a hot topic advanced by (97.5%) of the climate establishment of the day. By the early 70’s it had turned round to generally agreed warming.
tonyb
Michael, perhaps you have a different definition of lecture. Reminding Mr. Stokes of what was written by one of the leading lights of the Consensus is not a lecture.
I would show you what a lecture is, but you’re not worth the words.
Run along now–Joshua is downthread and up. You’ll find him.
Tom, it was one paper.
That’s not “the prevailing scientific wisdom”.
Mikey, the point is it wasn’t just Schneider–if you can read you’ll see evidence here on this thread. There were at least seven papers in a two-year span that were peer reviewed predictions of global cooling. Not to mention a lot of media play and concerned talk from respected institutions.
Yes Michael, there’s evidence.
Of course, the evidence doesn’t refute that it is a myth that there was a “prevailing wisdom” of cooling.
But there’s evidence. Look at the evidence. There were 7 papers.
They reflected a minority opinion of a cooling phase, and weren’t scientific reports supporting a “scare,” and yes, the question of “prevailing wisdom” and “consensus” were what the sub-thread started with, but…..er….uh…..there were 7 papers.
Tom, are you disingenuous, or just a foo1?
But Michael, there were 7 papers.
OK then, I concede……ignoring the 50 or so that said cooling was not likely in the future.
So we’ve gone from it being a myth to not being prevailing wisdom.
Even f**ls can make progress. Mikey–you found your Joshmate! See? All is well…
==> “So we’ve gone from it being a myth to not being prevailing wisdom.
It is a myth that it was the “prevailing wisdom.”
Once again, we go back to what spurred the subthread:
That is a myth. As Adam said.
As Michael said.
And that’s without even getting into the myth of a “scare” among scientists.
What an instructive sub-thread!!!
Glad to help Tom.
LMAO. This is the progress. Your position remains a despicable torturing of the truth. Fully supported by a bunch of puffed-up crap. Used-car level stuff.
The common theme to both scares was to blame industrial society.
Hey, AnthroCO2 is like an injection of sugar into a hypoglycemic patient.
==================
JCH, this is how they do everything. Quotes from the media? Doesn’t matter. Expressions from august institutions? Piffle. Academic papers, including one by Saint Schneider? Ignored.
This is how they do everything. Which may explain why they’re getting their keisters handed to them.
When was the last time you heard a skeptic complaining about the state of climate communications?
kim doubles down.
Popcorn please!
Plants have certainly revived, Michael. Had we an easy metric it could be shown in the Animal Kingdom, too.
Gaia stirs restlessly on the gurney. “Air, air’, she mumbles. Of course she means Carbon Dioxide. That Oxygen mask doesn’t really do much for her.
==========================
Tom Fuller | September 2, 2014 at 9:31 am |
” Piffle. Academic papers, including one by Saint Schneider? Ignored.”
Poor Tom.
What are ignored are the many caveats in the paper; instead just the abstract is quoted.
JCH –
==> “LMAO. This is the progress. Your position remains a despicable torturing of the truth.”
It might help if you were a bit more specific.
I’ve seen a video of Stephen Schneider from that time, bell bottoms and all, in which he solemnly tells us that we simply don’t know enough to tell which way climate is going. He has fooled a lot of people since, but first he had to fool himself.
===================
"What are ignored are the many caveats in the paper; instead just the abstract is quoted."
The realist’s dictionary:
Caveat – see also “exit strategy.” In climate science, a “caveat” is all-important from roughly 1960-1979 and applies only to papers and text books that describe cooling. Caveats are plentiful in modern climate papers and texts, however to notice them is to risk being labeled a “denier.” Following general trends in politicized science, expect a phase shift in the next decade or so where today’s caveats become all important and only “deniers” note the caveats in papers produced in the 1960s and 1970s. The basic rule is that climate science is always 100% correct, especially when it is 100% wrong.
Joshua writes- “It might help if you were a bit more specific.”
Imo, when it comes to purely political discussions, being specific seems to occur infrequently, as it can lead to people pointing out where a person was wrong.
“Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”
From the mid-1940s to ~ the mid 70s the data said the earth *was* cooling. So if what you mean by “prevailing scientific wisdom” is that scientists recognised what the data indicated, then in that sense your statement is correct.
Maybe now is the time to divorce policy from climate modeling, and consider both the other risks of CO2 and what can be done about it while minimizing the risks to the economy. Climate is hardly the only non-linear system involved.
Print your own castle in the future…what future,
http://www.3ders.org/articles/20140826-minnesotan-world-first-3d-printed-concrete-castle-in-his-own-backyard.html
mad scientists say.
What is needed is a discussion of the risks and benefits of CO2.
I believe we need a CO2 level of about 600 PPM.
Reducing CO2 will require more water and land to grow food, which will lead to draconian population controls to avoid starvation by 2100, since we are currently wasting land to grow fuel.
The problem with Lovejoy and similar studies is the Argos system and satellites were not deployed in the Roman Period or even the middle ages.
We do not have good data from previous warm periods or even previous natural cycles.
His claimed accuracy for mean global temperature anomaly is ±0.03 K.
I’m not sure how you get that from historic data. Climate scientists can’t even agree on whether there was a significant medieval warming.
Totally unwarranted, considering how little we know about bio-feedbacks and risks to crops from emergent weeds and animal (primarily insect) pests. (Also, IMO, fungal/lichen).
Depending on how accurate Salby’s work is, if the atmospheric pCO2 has really been constrained through the Pleistocene the way current “consensus” science says it has, raising it significantly above that level carries a real risk of non-linear responses from the ecosystem(s). The assumption that “more CO2 is better” for unprotected crops, just because it works in greenhouses is totally unwarranted. Greenhouse crops are much better protected…
Paleo helps you there, AK. It’s safe.
==========
No it doesn't. AFAIK current "consensus science" says it's been since before the beginning of the Pleistocene that the global pCO2 was higher than 400ppm.
Don't you think most of the biome has had plenty of time to adjust, and evolve new features without the risk from a higher global pCO2? Granted, there are localized places/times where it's much higher, naturally. But those are exceptional situations, where opportunistic populations (species, sub-species, or strains) can take advantage.
So when the entire globe crosses some tipping point or other, what is the risk that one or more of the opportunistic populations will experience a population explosion?
Such opportunistic explosions can result in much more rapid evolution of the strain(s) involved.
The risk isn’t from failure to grow, it’s from too much growth, and evolution, of the wrong things.
And parallels from earlier epochs don’t matter, because they weren’t the same populations.
AK – I see no reason to assume that evolution would produce the “wrong things.” Why would a change in CO2 produce proportionally more “wrong things” than any other change?
Splain that one.
Meh, tipping points, all over paleo.
We can’t have evolution of the wrong things. Nope, nope, nope.
Those ‘unsame’ populations still have the genetic machinery to adapt.
Relax, everything is for the best in this best of all possible worlds.
=============
Jim2 beat me to ‘wrong’. Some people think humans are ‘wrong’.
=========
AK – I am unaware of any benefit from limiting CO2 that justifies starving billions of people.
AK = Alarmist Klaptrap
I’m not sure how you get that from historic data. Climate scientists can’t even agree on whether there was a significant medieval warming.
Real Scientists agree there was a Medieval Warming. Real historians also agree that the Vikings moved to Greenland and grew crops there.
CO2 Alarmists need the warm periods and cold periods of the past to not be true. That is why the hockey stick was promoted. The IPCC can no longer use the hockey stick because too many people really do know that temperature does vary naturally, in a cycle that goes up and down and repeats. That the IPCC used the hockey stick even once does demonstrate they are not really about real science.
Science is always skeptical. Look for real scientists outside the Consensus Clique. There are many really good scientists.
“Wrong things” for our crops.
I listen to both jim2 and AK. We’re all ignorant fools.
===============
Crop plant genetics are managed by humans; I would say those are the least of our worries WRT any warming that might occur. Cooling would be very much harder to deal with, though.
That being said, over time we will see cheaper energy via some mix of nuclear fusion and fission. We just have to put the enviro-not-sees aside, put government back in its rightful place, and continue with the technological progress that afforded us the great standard of living we now enjoy.
I might point out that the modern era in the US was ushered in by independent, monopolistic capitalists. They brought us the coal, petroleum products, steel, and electricity upon which the modern era was built. No matter how you slice it, billions of people have benefited from their efforts.
We’ve been doing fine at 350-400.
Actually, we have done much better at 350-400 than at the 280 we had before we started using a lot of carbon fuel.
Most every real study on how green things grow indicate that more CO2 will be even better.
Only comparisons of quite different estimates based on different methodologies are capable of quantifying the accuracy of global temperatures.
My accuracy estimate comes from four instrument-based annual global mean data sets since 1880. One of them (the 20th Century Reanalysis) uses only monthly Sea Surface Temperatures and pressure data from stations. It uses no land based temperature series, so there are no issues about heat islands, changing the locales of stations, etc.
One simply calculates the mean of the four series and then the differences of each from the mean. One then analyses the amplitude of the fluctuations in these differences as a function of time scale. One finds that the fluctuations in the differences are roughly constant as a function of scale, corresponding to the cited figure (±0.03 °C; see http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/ESDD.comment.14.1.13.pdf). Notice that this is actually a surprising result, since one would expect that the fluctuations at longer and longer scales would instead diminish (due to the increased amount of averaging in the longer time periods).
This lack of convergence is an indication of the nontrivial measurement issues (they don't follow classical statistics), yet the actual level of uncertainty is still much lower than that needed to check the anthropogenic warming issue which is closer to a 1o C fluctuation.
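For readers who want to see roughly what this kind of inter-dataset comparison involves, here is a minimal sketch (not Lovejoy's actual code), assuming the four annual global-mean anomaly series are already available as equal-length NumPy arrays; the dataset names in the usage note are placeholders.

```python
# Minimal sketch of the comparison described above: average four annual
# global-mean series, take each series' difference from that mean, then
# look at the RMS amplitude of those differences after averaging over
# longer and longer time scales.
import numpy as np

def block_average(x, scale):
    """Average consecutive non-overlapping blocks of length `scale` (years)."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def difference_fluctuations(series, scales):
    """RMS spread of each series about the multi-dataset mean,
    evaluated after block-averaging at each time scale."""
    series = np.asarray(series)          # shape (n_datasets, n_years)
    mean = series.mean(axis=0)           # mean of the datasets
    diffs = series - mean                # each dataset minus the mean
    out = {}
    for s in scales:
        coarse = np.array([block_average(d, s) for d in diffs])
        out[s] = np.sqrt(np.mean(coarse ** 2))
    return out

# Hypothetical usage with four annual anomaly series covering 1880 onward:
# rms_by_scale = difference_fluctuations([hadcrut, giss, ncdc, reanalysis20c],
#                                        scales=[1, 2, 4, 8, 16, 32])
# If the RMS stays near ~0.03 C at all scales rather than shrinking,
# that is the "lack of convergence" noted above.
```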
Shaun Lovejoy: yet the actual level of uncertainty is still much lower than that needed to check the anthropogenic warming issue which is closer to a 10 C fluctuation.
Ten C?
Kim says, “Some people think humans are ‘wrong’.”
jim2 thinks American humans are ‘wrong’
AK “Don’t you think most of the biome has had plenty of time to adjust, and evolve new features without the risk from a higher global pCO2?”
For a good part of the late Pleistocene pCO2 has fluctuated between ~ 180 ppm (glacial stages) and ~280 (interglacial). Most extant species (obviously not individual organisms), including us, have adapted to this range. We have now taken pCO2 well beyond that Pleistocene range, more into Pliocene levels.
A good thing? A bad thing? I don’t know, but I think it will be interesting.
Lüthi et al. (2008), High-resolution carbon dioxide concentration record 650,000-800,000 years before present, Nature, 453(7193), 379-382, doi:10.1038/nature06949.
(Fig 2 shows the whole 800,000-year CO2 reconstruction)
Seki, O., G. L. Foster, D. N. Schmidt, A. Mackensen, K. Kawamura, and R. D. Pancost (2010), Alkenone and boron-based Pliocene pCO2 records, Earth Planet. Sci. Lett., 292(1-2), 201-211, doi:10.1016/j.epsl.2010.01.037.
“during the warm Pliocene pCO2 was between 330 and 400 ppm, i.e. similar to today. The decrease to values similar to pre-industrial times (275–285 ppm) occurred between 3.2 Ma and 2.8 Ma”
O we are all ignorant fools, sometimes even
radiant fools … the glare that obscures.
Good enuff fer Socrates,
good enuff fer Thurber,
good enuff fer serfs.
Beth: +1
Beth: We are like 5 year olds playing soccer; just playing the ball.
AK | September 1, 2014 at 8:13 am | Reply
“…the other risks of CO2…”
Please articulate the other risks you are referring to.
thanks
I have many times, for instance just now here.
The biggest risk associated with CO2 is that if we ever reduce it, green things will not grow as well, it will require more water, and life on earth will get more difficult.
The action of the sun on the biome inevitably, virtually permanently, sequesters CO2. Gaia is very grateful that she has evolved a species (Human Carbon Volcano) which is supplementing the historically failing natural vulcanism, which has been failing to keep CO2 in an optimal range.
==============
What’s the optimal range? I dunno, but we are pretty clearly below it.
=============
PCT perhaps this is helpful:
http://matadorco2.com/wp-content/uploads/co2_enrichment_chart3.jpg
The chart shows that photosynthesis shuts down at 200 PPM (C4 plants can function with less CO2). There was a terrestrial photosynthesis crisis before the current interglacial. At 280 PPM – a near starvation level – plants need larger amounts of water than they do at more normal CO2 levels. When plants first evolved, the CO2 level was about 17.5 times the current 400 PPM and 25 times the pre-industrial 280 PPM.
http://www.csiro.au/Portals/Media/Deserts-greening-from-rising-CO2.aspx
http://russgeorge.net/wp-content/uploads/2014/06/Global_greening_map1.png
The deserts are greening because of the increased CO2 and the reduction in water consumption.
If we don’t increase CO2 before the interglacial ends the result will be catastrophic.
The only effective source of CO2 is mankind. Nature tends to bind CO2 and remove it from the atmosphere. It is fortunate we came along when we did because by increasing CO2 we can save life on this planet from extinction.
Human Carbon Cornucopia.
=====================
“I’m sorry Mr. President, there’s simply too much carbon pollution in this salad for it to be politically safe for consumption……would you like me to taste the lobster instead?”
“…what can be done about it….”
“A strange game.The only winning move is not to play.”
If I understand your post
'AK | September 1, 2014 at 9:42 am | "Paleo helps you there, AK. It's safe." No it doesn't.'
the risk you cite from increased levels of CO2 is greening of the earth. I know of no science supporting a "tipping point." Presumably the risk is that unwanted flora would increase as well as beneficial flora. Unless there are credible studies that identify specific and significant risks of unwanted flora, I personally would call that an unknown, not a risk. Particularly since starvation, which is a known and immediate problem and not a risk, would be mitigated by "greening" of crops. Unlike Pangloss I do not believe "we live in the best of all possible worlds" or that there is anything special or sacrosanct about the current levels of CO2 or temperature.
While not wishing to endorse a specific level of CO2, I agree with the sentiments of
"PA | September 1, 2014 at 8:44 am | Reply
Reducing CO2 will require more water and land to grow food, which will lead to draconian population controls to avoid starvation"
I do concur with your comment
” AK | September 1, 2014 at 8:13 am | Reply
Maybe now is the time to divorce policy from climate modeling”
Other than “greening” of the earth, did I miss any other risk you are concerned about?
The risk of any specific “unwanted flora” is an “unknown unknown”. The probability that some of the new, opportunistic flora would be unwanted competitors to our crops is probably beyond specific evaluation, but certainly not zero.
As to how much competition? Probably not much, given 5-10 years to develop treatments. What if we start getting 1-2 new competitors every year?
A bigger risk, however, is the emergence of new, fast-breeding insect pests among the new wild flora, with abilities to compete with us (and our domestic animals) to eat our crops.
AFAIK there's no real consensus among students of eco-evolution, but there's good evidence that many evolutionary innovations occur via sudden, wild population explosions. Moving the pCO2 far outside the bounds it has maintained since before the start of the Pleistocene increases the chance of that. By how much? Nobody knows. Could be negligible, could be a major risk.
Note that I said the Pleistocene because of its glacial oscillations, with repeated evolutionary stress at every phase switch. AFAIK the best estimates of the last time the pCO2 was much over 400ppm go much farther back, but IMO it's the Pleistocene glaciation that has produced the highest evolutionary stress on all land populations, resulting in evolution of new characteristics without the need to allow for higher global pCO2.
And more CO2 in the atmosphere isn’t the only issue. That fossil carbon is going somewhere, and we don’t know how much, if any, damage it’s doing along the way. Increased deposition of organic carbon in the deep ocean, and on its floor, when it oxidizes, may well be contributing to progressive anoxia of the deeps. Whether that could have any evolutionary effects that could return to surface and bite us (in the near future, longer term probably isn’t an issue) we just don’t know. Not probable perhaps, but not impossible.
My position has long been that the climate debate can be made redundant by looking at the bigger picture. What the world really needs in the long run in order to develop the technology to support a 10+ billion poverty free population is practically free energy available for everybody. Obviously, fossil fuel cannot do this. If we can’t agree on climate change, why not try to agree on developing nearly free energy, whose side effects include stopping CO2 emissions as well as possibly solving poverty problems. It’s something we need to tackle eventually anyway.
Fossil fuel is a primitive technology, but we mustn't forget that it's been a necessary step on the ladder towards superior technology. The trouble is that we should have been well along the road towards the next steps by now, namely nuclear power, but I think fear has been holding us back. If the world had made sense, environmentalists would be fighting for funds to develop better nuclear power, possibly even fighting for extending the life of old plants despite the waste issues, if they really fear the effects of CO2.
Steinar
I have said many times that we need a CERN or Apollo type programme of concerted effort over a short time scale of a decade or so using the best brains and facilities to create alternative forms of cheap energy sources which would likely include storage technology
Tonyb
Steinar, it is doubtful that total future arable land can support more than about 9.1 billion. And that is IF meat calories do not increase, best practices spread where they are not used (requiring wealth and machinery and GMO crops everywhere), yields continue to improve on top of best practices through hybridization and such, and pests do not evolve (weeds, root worms, and wheat rust already have). Read the detailed calculations on water, virtual water, and food production in Gaia's Limits.
And by then, petroleum cost for transportation fuel will have risen enormously, independent of CAGW and the 'war on coal'. Two previous guest posts here are about that: one on the IEA projections, and one on Maugeri from Harvard.
@Steinar Midtskogen…
Exactly my point. The risks from dumping all that fossil carbon into the system certainly don't justify killing the Industrial Revolution. Only a socialist (and perhaps not all of those) would say otherwise. But there are approaches that would enable "rolling back" the pCO2 from a peak of 500-600 ppm to around the 400 it is today, later in the century. If, in the light of experience and better science, it's deemed necessary.
And, IMO, we shouldn't stop now. "Full steam ahead" with gas. Once "carbon-neutral" sources become available, all the investment in infrastructure to store, transport, and use it will continue to pay off. Using it for generating electricity, as well as for direct heat.
Actually, we are! Using that big nuclear reactor in the sky.
@Tonyb…
I doubt it. I’ve posted many links here (CE, not this particular post) to pages demonstrating the very rapid course we’re making towards solving those problems. It might speed it up a little, but the current situation seems to be driving a great deal of innovation. Via the “free enterprise” path, although with various incentives, financial and otherwise, from the political situation.
@Rud Istvan…
It’s past time that we stopped using natural land for agriculture. “[A]rable land” should be created in factories, used in enclosed structures (greenhouses, etc.), and occupy sunlit areas not suitable for natural growth. Existing “arable land” should be allowed to revert to wild ecosystems.
All these problems are much more easily addressed with enclosed agriculture. Water can be recycled. Pests and competition can be excluded, without constantly needing new pesticides, with their attendant costs and health risks.
Maybe. Or maybe Joule Unlimited’s new process, and others like it, will have crashed the price so much it’s no longer worth digging it out of the ground. Or, most likely IMO, something in-between.
@ Steinar Midtskogen and Rud Istvan
Rud, this quote from Steinar nails it: “What the world really needs in the long run in order to develop the technology to support a 10+ billion poverty free population is practically free energy available for everybody. ”
Your reply to Steinar: “Steinar, it is doubtful that total future arable land can support more than about 9.1 billion. ” misses the fact that as Jerry Pournelle is fond of saying ‘Cheap, plentiful energy is the key to freedom and prosperity.’
Given cheap, plentiful energy, arable land is a non-issue. With a large enough supply of cheap energy, ALL of our food can be grown indoors, under ideal conditions for the crop of interest, using desalinated water from the oceans.
Where do we get this cheap energy, postulating the exhaustion or outlawing of fossil fuels? Nuclear, ignoring political pressure and using whatever technology was most effective from an engineering standpoint, including breeder reactors, thorium reactors, or whatever else makes engineering sense, could come close. Other things like Rossi’s Ecat (should it actually turn out to be real) and similar technologies, whatever. Fusion? So far it is always 40 years in the future (According to jim2, a company called Helion claims that it is here now. Is it?) It won’t be solar and windmills.
The point is that as Steinar points out: Given cheap, abundant energy, the entire surface of the Earth is ‘arable’.
The tall pole in the tent right now is that we don’t have it and every attempt to develop it is blocked by the ‘environmentalists’ who are so concerned for our welfare that they are willing to kill off 90+% of the human race for our own good.
AK: That idea of enclosed agriculture is an idea I've long had, but not in the context of reducing land use. My idea is to use vacant multi-storied buildings in the US inner city poverty areas, to help the poor find work. I read that there is such a project ongoing.
@rls…
Perhaps, near term. Longer term, looking farther out, there are several things I foresee that will, IMO, lead to a switch to enclosed agriculture.
First, there's a "sleeper technology" based on using inflated structures. Today, such structures have to maintain their mechanical stability through simple negative feedback: push on it enough to distort its shape, and it will push back, in proportion. But if you supply such inflated structures with "muscles" in the form of internal tensile members with some sort of active tension systems, and you supply "brains" in the form of active control systems to react to stress (especially wind stress) in proportion to the stress rather than needing to distort the structure, you can build it using much less material, while still making it more robust. The key to making such structures very cheap is the control systems. IT. And IT follows "Moore's Law".
Another potentially very useful technology is “light pipes”. These can be hollow tubes of inflated plastic, aluminized on the inside. They can be made very light, cheap, and moderately robust, while easily and cheaply replaceable when the occasional storm takes them out.
Combining multi-story inflated structures with light pipes, and collection technology using similar cheap, replaceable inflated structures, agriculture could become very much like a factory. No tractors, just overhead cranes. Hundreds of square kilometers of “arable land” squashed into a few square kilometers of building, with the light being gathered by cheap, replaceable structures.
Of course, as Solar energy continues its exponential cost reductions, it might make more sense to use internal lights powered by solar cells. This is a use, one of very many, for solar energy that isn’t impacted by its intermittent nature.
And the whole shebang can be put on/under the oceans, in areas that currently have almost no life. No impact to the environment/ecosystem(s), no impact to land wanted for other purposes, no significant price for surface area, near term, except what politics places in the way of it. Thus making the solution of the agricultural problem(s) political rather than technical.
And notice how robust such a system would be to “climate change”.
Mr. Istvan, much of what I have read on food production is in sharp contrast to what you write. The FAO states that cereals production is increasing 1.5% annually without bringing any new net land under the plough. As population increase is 1.1% (and dropping), I find it difficult to see the rationale for your pessimism.
Most agriculture outside the West is practiced at something close to medieval levels of technology. There’s lots of room to improve productivity before looking for new land to till.
Annual Energy Outlook Projections and the Future of Solar Photovoltaic Electricity by Noah Kaufman
Of course, the key driver of uncertainty here is the extent to which the enormous subsidies granted solar PV will continue. The rest is just foam on the wave.
AFAIK those subsidies aren’t any greater in proportion than those granted to oil. And, in its time, coal. And certainly nuclear.
And when it comes to subsidies, don’t forget the existence of “nation-states”, and their hybrid descendants. Such political entities have strong strategic incentives to support the development of “appropriate” energy resources.
And don't forget, also, that subsidies can take many forms. (As the highly unsuccessful feed-in tariffs have demonstrated.) Even intellectual property (e.g. patents) is really a form of subsidy, when you come down to it. As were the 19th-20th century military adventures in support of oil resources. Resources for nations and businesses with national strategic implications.
Before the nifty new methods take over, it would be good to understand the uncertainty principle again. You can't, cannot, distinguish a cycle from a trend with data short compared to the cycle. The eigenvalues of the discriminating matrix explode.
An out exists in the case of strong signal to noise, called supergaining in the beam forming field, but it amounts to overwhelming the exponential explosion with signal.
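A small numerical sketch of the cycle-versus-trend point, assuming a simple least-squares fit of an intercept, a linear trend and a single sinusoid; the 60-year period and the record lengths below are illustrative, not taken from any particular dataset.

```python
# Illustration of the ill-conditioning: when the record is short compared
# with the cycle period, the columns that represent "trend" and "cycle"
# become nearly collinear and the normal matrix blows up.
import numpy as np

def condition_number(record_years, period_years):
    """Condition number of the normal matrix for a trend + sinusoid fit."""
    t = np.arange(record_years, dtype=float)
    X = np.column_stack([
        np.ones_like(t),                       # intercept
        t,                                     # linear trend
        np.sin(2 * np.pi * t / period_years),  # cycle (sine)
        np.cos(2 * np.pi * t / period_years),  # cycle (cosine)
    ])
    return np.linalg.cond(X.T @ X)

for years in (15, 30, 60, 120, 240):
    print(f"{years:3d} yr record, 60 yr cycle: "
          f"condition number {condition_number(years, 60):.2e}")
# The condition number drops by orders of magnitude once the record grows
# past the cycle length; for records much shorter than the cycle, trend
# and cycle are effectively indistinguishable.
```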
rhhardin
Re: “uncertainty principle . . .You can’t, cannot, distinguish a cycle from a trend with data short [compared] to the cycle.”
How then are we to reliably predict and validate models, to identify whether a "pause" in global temperature is the consequence of a 22-year Hale cycle, a 60-year ocean oscillation, a 1,500-year cycle, or the descent into the next glaciation?
How can we ensure we are achieving enough global warming to prevent the next glaciation? e.g.,
"Impact of anthropogenic CO2 on the next glacial cycle", Carmen Herrero, Antonio García-Olivares, Josep L. Pelegrí, Climatic Change (2014) 122:283–298, doi:10.1007/s10584-013-1012-0
What if they are wrong? A one mile high glacier would have rather a major impact when it ploughs through a city.
Statistical downscaling of a climate simulation of the last glacial cycle: temperature and precipitation over Northern Europe, Clim. Past, 10, 1489–1500, 2014, doi:10.5194/cp-10-1489-2014, http://www.clim-past.net/10/1489/2014/
If the previous model is now disrupted with anthropogenic CO2, how can we reliably modify it without experiencing the changes? – which could be catastrophic cooling!
How then are we to reliably predict and validate models
It won’t be by distinguishing cycles from trends.
I’m not in charge of the constraints. I just report them.
"The next 20 kyr will have an abnormally high greenhouse effect which, according to the CO2 values, will lengthen the present interglacial by some 25 to 33 kyr." Whatever discount rate you use, that would have to be beneficial. What possible dangers from further warming could possibly justify not extending the interglacial by 25,000+ years? Who would support reducing GHG emissions if this is well-validated and disseminated?
heteroskedasticity
http://www3.wabash.edu/econometrics/econometricsbook/chap19.htm
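For anyone following the link, here is a minimal sketch of the generic HAC (Newey-West) idea using statsmodels on synthetic data. This is not the Vogelsang-Franses estimator McKitrick uses; it is only an illustration of why naive trend confidence intervals are too narrow when the residuals are autocorrelated and heteroskedastic, and the series, lag choice, and noise parameters below are all arbitrary.

```python
# OLS trend with heteroskedasticity-and-autocorrelation-consistent
# (Newey-West) standard errors, compared with naive OLS intervals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                   # 60 "years" of annual anomalies
t = np.arange(n, dtype=float)

# Synthetic flat series: zero trend plus AR(1) noise with growing variance.
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal(scale=0.1 + 0.002 * i)
y = 0.0 * t + eps

X = sm.add_constant(t)
ols = sm.OLS(y, X).fit()                                     # naive CIs
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})

print("trend estimate:", ols.params[1])
print("naive 95% CI for the trend:", ols.conf_int()[1])
print("HAC   95% CI for the trend:", hac.conf_int()[1])
# The HAC interval is typically wider, reflecting the serial correlation
# and changing variance that the naive interval ignores.
```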
jim2 – heteroskedasticity
supercalifragilisticexpialidocious still leads, though hetero… is right up there.
I’m worried about the future of humor in climate science. If it’s like the past, we’re all gonna die.
Justin
Being stationary is a very strong requirement, which may not apply. I'd bet, but do not know, that he also needs Gaussianity.
Um, no. Go read the definition of “heteroskedasticity” again.
I am encouraged to see people taking the base CO2 climate sensitivity as the null hypothesis. Gavin Schmidt (among many others) seems to claim that high climate sensitivity should be the null hypothesis and that claims of lower sensitivity have the burden of proof. That claim is central to his criticism of Dr. Curry's 50-50 default position.
But using a model as the null hypothesis is exactly backwards from how competent science is done. In the recent discussions, my respect for Gavin Schmidt as a scientist has certainly gone down.
When does "a pause" become a trend, considering that the basis for the claims of "Climate Change/CAGW" is primarily a record of 30 years or so?
Pause, pause, pause, pause; goblins bugger cause.
================
How long is the pause?
As shown in the following data, the two previous hiatus periods lasted about 30 years each (from 1880 to 1910 and from 1945 to 1975).
http://www.woodfortrees.org/plot/hadcrut4gl/mean:60/detrend:0.9/from:1880
From this pattern, the current hiatus period should last until 2030.
Girma, until we understand causation there is no pattern. It could just as easily go up, down, or continue on as is, with no one the wiser.
Statistical forecasting does not require any causation. It just assumes the previous pattern continues. There is a clear pattern of an oscillation of 60 years period in the GMST data as shown above.
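A minimal sketch of this kind of pattern-continuation fit, assuming an annual global-mean anomaly series is available as a NumPy array; the 60-year period is imposed rather than estimated, the array names are placeholders, and nothing here implies the pattern must continue.

```python
# Fit a linear trend plus a fixed 60-year oscillation by least squares,
# then extrapolate the fitted curve forward.  Purely a curve fit, with
# no causal content.
import numpy as np

def fit_trend_plus_cycle(years, temps, period=60.0):
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(years), years,
                         np.sin(w * years), np.cos(w * years)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)

    def model(yr):
        yr = np.asarray(yr, dtype=float)
        return (coef[0] + coef[1] * yr
                + coef[2] * np.sin(w * yr) + coef[3] * np.cos(w * yr))

    return coef, model

# Hypothetical usage with an annual anomaly series for 1880-2013:
# years = np.arange(1880, 2014, dtype=float)
# coef, model = fit_trend_plus_cycle(years, hadcrut4_annual)
# projection_2030 = model(2030.0)
```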
angech: Girma, until we understand causation there is no pattern.
Until there is a pattern, we have no understanding of causation.
Simple pattern analysis predicted the pause, and predicts that it should last until “about” 2030. So far so good for simple pattern-matching. Should the pause last until 2030, all the better for evaluating claims about causation, where the hypothesized causes will be judged by how well they match the pattern.
Statistical forecasting does not require any causation.
The last ten thousand years are a really good pattern for the years ahead.
Agent Provocateur, cute graph. Thanks.
There is nothing magical about any given averaging period.
Still this one is fun:
http://climatewatcher.webs.com/NCDC_TrendOfTrends_34.gif
Also note how the green trend graph matches AMO:
Contrast the Golden Rule of Forecasting: Be Conservative, Armstrong, Green & Graefe, 2014
WRT Lovejoy.
1. The multiproxies are only a little over 100 years. Wouldn’t he need at least a 200 year period to reliably detect a 100 year cycle? And what of cycles longer than 100 years?
2. The assumption that all anthropogenic warming is due to CO2 is wrong and will exaggerate the residual warming that he attributes to natural variability. But this problem is probably overshadowed by the short proxy periods.
He can use stochastic estimates of solar and volcanic forcing which are the main components of natural variability to extend the effective record length before anthropogenic effects began. He uses 1500 as a starting point. Its standard deviation is about 0.2 C.
Estimates of solar and volcanic forcings are Wild Animal Guesses, right now.
=============
I claim the longest pause at 100 years from 1820 to 1920.
http://curryja.files.wordpress.com/2013/06/graph09.png
However, a pause is an average that disguises the many ups and downs between the start and end point.
Also, the pause will be different according to your location. We would be better off looking at Koppen classifications rather than some non-existent 'global' average, whether of temperature or sea level.
tonyb
Hi Tony, as long as the procedure to obtain a “global temperature” is well defined, I see no problem with it. Mathematicians have been conjuring abstract concepts for hundreds of years to good effect.
Jim2
How useful is a global weight average or a global height average? They wouldn't tell us anything about the underlying trends, and the fact that many people are starving whilst others are overeating, for example, would be lost.
Similarly with a ‘global’ average we are missing out on the nuances of what places are warming, which are cooling and which are static and over what period, all of which would tell us much more interesting things than a bald global average.
Paging Mosh. Come in Mosh.
tonyb
A global average is more uncertain when one changes the past, the trends, and invents the numbers over the unmeasured oceans, southern hemisphere, Arctic and Antarctic. What would be the error bars of the observations when added together, and how can the absolute delta be estimated to within 0.01 degrees?
Too made up to initiate massive actions.
Scott
Tony,
Please don’t convolute issues. Having a global average in no way precludes the use of regional climate metrics. I mean really!
And the state of various societies is related, but certainly they have other problematic metrics other than climate. Vicious governments, cultural elements that prevent modernization, … the list is long and the interactions complex.
Jim2
You said
‘Having a global average in no way precludes the use of regional climate metrics. I mean really!’
Correct, but we don’t really look at these regional climate metrics do we as we are too fixated on the global average. We could learn much about the climate by examining the interrelationship between the various climate states we can observe.
tonyb
tony –
==> “How useful is a global weight average or a global height average?
Please explain.
Is there some parallel w/r/t weight and/or height to the concept of a global climate? Do you not think that there is anything that affects the climate on a global scale? If so, then how do you explain winter/summer alternations, or ice ages, or global temperature patterns, etc.? There are mechanistic links to climate on a global scale. There are no such links to height and weight on a global scale, (except, perhaps, in that global climate might affect nutrition on a more or less global scale).
How is your analogy useful?
Joshua
It was useful because it was an easily understood analogy.
Tonyb
tony –
==> “It was useful because it was an easily understood analogy.”
Seems to me that analogies are only useful when they are: (1) instructive and, (2) analogous.
You compared a phenomenon where there is no global-scale mechanism to a phenomenon where there is a global-scale mechanism to suggest that there’s no value in a global-scale metric.
So I am asking whether your analogy is either instructive and/or analogous. Are you conceding that neither is the case, but that it is “useful” anyway simply because it is easily understood?
But hey, if it’s useful for you….
Joshua
If you backtrack you will see I mentioned global temperature and sea level as not being useful averages. At which point I introduced analogies to point out that global averages miss the nuances.
Let's assume you are a hydrological engineer looking after two projects to build sea flood defences.
In one location the sea level is falling and has been for years. In your other project it has been strongly rising at, let's say, 10mm a year.
Now, do you take the 'global' average of around 3mm a year in order to plan, or do you look at the data relating to the dynamics of each project?
If you don't want to get fired by either under-engineering or over-engineering your defences, with the resultant cost or safety implications, you would ignore the global average in both cases as not being relevant to your individual projects.
Global averages have their limitations if you want to look at nuances. We are all concerned with what is happening in our locality whether that relates to temperatures that may be falling not rising or sea levels which don’t conform to the norm.
Tonyb
climatereason | September 1, 2014 at 11:31 am |
Tony, follow the url in my name, I have broken surface measurements into various sized blocks.
Mi Cro
Thanks for the info. Will look at it tomorrow.
tonyb
Was that a pause in cooling before warming started?
Koppen DEPENDS on the accuracy of the data. It's not an explanatory variable; it's an effect.
You are mentioned, tony:
http://neven1.typepad.com/blog/2014/09/ever-sailed-to-85n.html#more
It's the PDO and then the AMO. This is not only going to last, but there is likely to be a return to the satellite-measured temps at the start of the era, when the PDO suddenly flipped. I have written extensively about this idea since 2007. The fact is that the "warming" is a distortion of temps brought about by major natural cycles, not the least of which is the PDO/AMO cycle. However, the bulk of the warming has been in the arctic winters, which has very little effect on the number one GHG, WV. The cooling PDO leads to less frequent and less strong ENSO events, and an overall reduction in water vapor over the tropics (hence the global downturn in ACE; I spoke about this and showed the mixing ratio flip at Heartland). This in turn drives yet another dagger into the heart of the trapping hot spot; on the contrary, the reduction of WV over the tropics, in an area where there is much bang for the buck, will lead to the opposite. This is clearly seen in the NCEP temps since the PDO flip: the spiking for ENSO events, but the overall downturn that has begun.
http://models.weatherbell.com/climate/cfsr_t2m_2005.png
The pause will end when the AMO, which will flip to cold in the coming years, has matured and the PDO warms again. The test of the theory is already on. The basic equation is that the sun/oceans/stochastic events effect is far, far greater than that of CO2, which is rendered immeasurable in the noise, except to those who think they have found the needle-in-the-haystack holy grail of climate, at the cost of 165 billion to the American taxpayer (and God knows how much other treasure and effort, given all we have lost).
Its simple..like any forecast.. watch and verify
Hi Joe,
What do you make of the record-breaking (in the satellite era) of global ocean surface temperature?
If you have a rising trend away from the ending of a mini ice age, every year will be a record year. The point is we have no satellite record of the Medieval Warm Period or the Roman warm period to compare it against. Therefore your obsession with record years is meaningless. All we can say is that the satellite record registered a rise in temperature that has now plateaued, nothing more or less. The satellite record simply isn't long enough to say otherwise.
Down, Kevin. It is a satellite era record and I made that very clear. I’m well aware satellites didn’t exist during the LIA or MWP.
What bothers me is that the sat record DOES include 1998.
The concatenation of cooling phases of the oceanic oscillations. I’ll bet I got that from you around seven years ago.
Also, Cheshire Cat Sunspots. He’s grinning for a reason.
====================
The effect of the PDO and AMO can be eliminated by looking at the last time these were in the same phase and taking the temperature difference since then, e.g. 1950 if they have a 60-year cycle. Or you could go back to 1910 when the sun was also less active like now. Someone needs to try these methods out and see what they get. It is surprising that the natural variability proponents have not tried this yet, as it seems kind of obvious to do.
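A quick sketch of the suggested calculation, assuming annual anomalies in a NumPy array; the epoch years, decade width, and array names are illustrative choices, not a published method.

```python
# Same-phase comparison: if the oscillations have a ~60-year period,
# comparing decadal means one full cycle apart largely cancels their
# contribution, leaving the longer-term change.
import numpy as np

def decadal_mean(years, temps, center):
    """Mean anomaly over the decade centred on `center`."""
    mask = (years >= center - 5) & (years < center + 5)
    return temps[mask].mean()

def same_phase_change(years, temps, early=1950, late=2010):
    """Temperature change between two epochs one presumed cycle apart."""
    return decadal_mean(years, temps, late) - decadal_mean(years, temps, early)

# Hypothetical usage:
# years = np.arange(1880, 2014)
# delta = same_phase_change(years, hadcrut4_annual)   # degrees C over ~60 yr
```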
Three times in the last century and a half temperature rose at the same rate, and only in the last of these was CO2 rising. Phil Jones heself told me so.
==========
The previous times it rose then fell, consistent with perturbations around the mean, while this time it rose, went flat then rose again. Quite different when you think about it, almost like something else is happening too.
I thought “the science is settled.” That’s all you hear from the warmists.
You’ve not been listening
What about the pause (actually a decrease) from 1941 to 1976?
Saying the pause is no more than natural variability is pointless unless one knows what natural variability is.
But natural variability is purely an expression of no knowledge, of uncertainty.
Adding the uncertainty factor, upsilon or Y, or uncertainty monster as Judy prefers says we do not know what is going on.
Hence natural variability is as long as a piece of string, indeterminable.
All the heteroskedasticity and attribution in the world is pointless without understanding the causes.
There are forced and unforced components of natural variability. Lovejoy accounts for the forced components from solar and volcanic effects assuming that the unforced components get smaller with the averaging time. This is most obvious with the ENSO cycle that has an amplitude of 0.5 C but is effectively removed by a 10-year average. Other longer cycles can be removed by longer averaging times, but are smaller anyway. As seen by the rapid restoring time after an El Nino the background radiative (Planck) response is a strong constraint to bring surface thermal perturbations back to the mean (like the dog on the elastic leash analogy), but that mean is changing with time too.
So you think the ‘lag’ is <<10 years for 'forcings' like ENSO but much bigger for CO2; as in TCS and ECS being very different due to a 'lag'?
ENSO is not a forcing, it is a state that starts with ocean temperatures. CO2 takes a lot longer to affect ocean temperatures because it affects them from outside.
JimD, that was completely authentic climatic thermodynamic/kinetic meaningless babble; you should have a Chair for that word salad.
OK, I will just say ENSO is not a forcing.
A cloud is a forcing, the wind is a forcing, but a combination of wind and cloud isn’t.
DocM, then your definition of forcing has no value in climate. What isn’t forcing?
In 1976 all you heard from the scientific world was that global cooling was happening. Now, 38 years later, it’s just the opposite from the “97 percent.” When did logic, common sense, and the scientific method escape the field of climatology? When did dogma and hard-headedness take over? Follow the money. It is truly amazing… and sad… to watch.
Logic, common sense and the scientific method seem to have escaped you, daveandrews723. I would suggest finding them by learning about that which you write. A good place to start would be the IPCC reports, which are available online.
Too bad the Working Groups can’t cash the check the Summary for Policymakers wrote.
===================
"We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin. To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation."
What happened to "let's look, in a dispassionate, scientific way, at all the evidence we have, try to assess what's missing and then see what hypothesis that leads us to"?
Is it a bird, is it a plane…?
We don’t want to share code and data with skeptics – they’ll just try to find something wrong with it.
The skeptics seem incapable of developing their own code, and their foray into the data (BEST) didn't work out so well for them. Watts had a separate data effort that McIntyre didn't want his name on, and it has been delayed as a result.
But jimmy, the pause is still killing the cause.
Jim D. Show me where McIntyre said that.
Watts gave him a manuscript and he wanted to check for himself, and nothing more was heard because they disagree on TOBS.
http://climateaudit.org/2012/07/31/surface-stations/
In the link you posted, SM says he will continue to work on TOBS. So I'm not sure what your issue is. I don't know what has transpired since, but again, I don't see anything SM said that indicates he won't help Anthony.
It’s been two years now…
Mosher knows a lot more about this, but until Watts uses TOBS McIntyre will not likely be on board.
Jim D said “The skeptics seem incapable of developing their own code, and their foray into the data (BEST) didn’t work out so well for them.”
Why should sceptics pay again to develop code? It’s already our tax pounds/dollars that went into the current stuff. Frankly, what is most upsetting is how bad the forecasting has been given the amount of money that has been spent. If my company had subcontracted this task out to the IPCC, I’d be looking for a new contractor by now!!
How can they not have reduced the number of models by now? Which is the most skilful, and why? Which is the least skilful, and why?
Value for money…not.
The kiss of death for the idea that climate science is in fact science is the high degree of paranoia regarding the notion that others should independently examine the models on which all the alarmist claims depend.
The skeptics want to see if they can replicate the results claimed with the code actually used.
Anyway, if skeptics did develop code, and found something different from what the radical warmists like, the latter would just dismiss it as “oil-industry funded FUD”. Probably without bothering to try to replicate the results.
AK commented on How long is the pause?.
in response to jim2:
I developed my own code to look at temp data, no pre-processing other than doing some minor cleaning of incomplete records, no homogenization, just straight averaging of various sized areas, plus I look at rate of change over a 24 hour period (yesterday’s warming, last nights cooling).
And you know what? It shows no global warming trend; there are some areas that have warmed, some that have cooled. It also shows that nightly cooling is equal to the previous day's warming, and seasonal warming is based on the changing length of day.
It also shows how poorly the past and some remote areas were sampled; one can make a good case that surface records prior to the 70's at worst, or the 50's at best, do not adequately sample surface temps, and that a global average is mostly made up.
The reaction to this is Mosh tells me it’s just wrong, and because it does not agree with the published series the people who matter, IMO don’t know what to make of it, and more or less ignore it.
I’ve made a dozen different views of surface data that represents hundreds of areas that have been analyzed, all of it is available (follow my name), including the code. It shows a very different picture of surface temps, basically global surface temps do not show a warming trend.
@ Mi Cro…
That was actually Jim D.
‘Course, I have a hard time telling them apart. Both future-blind in many ways.
“Mosher knows a lot more about this, but until Watts uses TOBS McIntyre will not likely be on board.”
Anthony won’t use TOBS. He has selected stations where the metadata, if trusted, indicates that TOBS was not applied.
Results are pending, but I don’t know if Steve will be in the paper.
Other guys have been brought in.
“The reaction to this is Mosh tells me it’s just wrong, and because it does not agree with the published series the people who matter, IMO don’t know what to make of it, and more or less ignore it.”
It’s not wrong just because it disagrees.
Here is what you need to do:
1. Describe your method in words and math.
2. Supply the code for your method.
3. Demonstrate that your method WORKS to quantify a trend in synthetic data (a minimal sketch of this step follows below).
4. Publish.
It’s easy; pick any open journal.
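For step 3, here is a minimal sketch of the kind of check meant: generate synthetic monthly data with a known linear trend plus AR(1) noise (the noise parameters here are illustrative assumptions, not anyone’s published test), and verify that a plain OLS trend estimator recovers the known trend on average.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n_months=360, trend=0.015, phi=0.6, sigma=0.1):
    """Monthly series: known linear trend (deg C per year) plus AR(1) noise."""
    t_years = np.arange(n_months) / 12.0
    noise = np.zeros(n_months)
    eps = rng.normal(0.0, sigma, n_months)
    for i in range(1, n_months):
        noise[i] = phi * noise[i - 1] + eps[i]
    return t_years, trend * t_years + noise

def ols_trend(t, y):
    """Plain OLS slope, deg C per year."""
    return np.polyfit(t, y, 1)[0]

# Check that the estimator recovers the known trend on synthetic data.
true_trend = 0.015
estimates = [ols_trend(*make_synthetic(trend=true_trend)) for _ in range(1000)]
print(f"true trend   : {true_trend:.4f} C/yr")
print(f"mean estimate: {np.mean(estimates):.4f} C/yr")
print(f"1-sigma      : {np.std(estimates):.4f} C/yr")
```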
The basic problem with the models is their built-in super-sensitivity to a doubling of CO2. If the TCR is significantly less than one, which is what the pause implies, then the theories of Hansen and Mann go out the window.
Now the response of the atmosphere to CO2 doubling is theoretically robust but practically limp.
Many reasons for negative feedbacks can be posited with one overwhelming good reason.
Life has existed for several billion years on earth. Most of the CO2 extant that we have to burn has been made by life forms.
The carbonate in rocks and bones, the carbon based molecules in fossil fuels all owe their origin and deposition to life that has been sustained for billions of years in a non hostile environment due to feedbacks that keep the atmospheric and sea temperatures in the Goldilocks range of life.
A few years of minute CO2 rise will be forgotten by the planet as mechanisms like extra cloud and moisture production negate any possible major temperature rise.
We call this natural variation meaning we do not understand it, yet.
The sea levels rise and fall like tides with the atmospheric CO2 content too and wash away those coastal dwellings that are insignificant to nature.
Let’s hope Al Gore is in his Tennessee palace then instead of his San Francisco shoreline manse.
==================
Facts never matter on Climate Etc., but his beach house in Montecito is on East Mountain Drive. Why? Because it is on the side of a mountain.
Do you know of a beach house?
“…the IPCC has sidelined itself in irrelevance until it has something serious to say about the pause and has reflected on whether its alarmism is justified…”
Thank god this isn’t just more pointless hyperbole from some free-market ideologue……oh, wait….
As soon as I see “alarmism” I know it’s a polemic. You’d think that Judith, with her consistent focus on the use of “denier,” would eschew similar polemics.
But I have to admit, I’m beginning to doubt whether that might happen.
What’s not OK for the goose is just fine for the gander, eh?
Well, considering that over 90% of the models were wrong to the point of alarmism (not to mention that anyone who pointed this out was called “anti-science”), if the shoe fits…….
Whether the term alarmist is accurate is one thing. Then there’s the question of whether it is polemic (and whether people who decry the use of “denier” should be so comfortable with polemics). And then there’s the question of whether you or someone else should be anti-science.
Conflating the three is childish, unskeptical, and not productive.
sorry….whether you or someone else should be called anti-science.
Joshua imagines that pragmatic realists should take space cadets seriously. It is not so. Having got it all so spectacularly wrong – we should play their game all the harder.
It is the model problem all over again – it is not that the models are wrong but that the solution – one of many feasible – pulled out of the arse end is determined by specious expectations.
The argument is not that the models are wrong but that they inherently – by fundamental laws of nature – are incapable of predicting the future.
The thing about alarmism is that it doesn’t intentionally compare alarmists to skinheads who pretend the Holocaust never occurred.
In fact, by comparison, it’s a little wimpy.
One member of a vast chorus. Can’t you hear the music?
=============
‘Alarmist’ is just fine, considering that what most skeptics deny is the alarm.
=================
On the contrary, most ‘sceptics’ seem nothing but alarmed. How alarming is the EPA to you kim?
Eric, Eric, Eric, you are just devastating.
==========
If the shoe fits. ‘Alarmist’ will stand far longer than ‘Denier’.
===========
‘Alarmist’ is apt, ‘Denier’ is not.
=======================
We know kim is an alarmist
Beware the coming cold apocalypse.
Yup. Nature is capable of far more alarming cooling than man is capable of with warming.
Seeking balance, yessirree.
==========================
If we are false-footed into mitigating a warming that isn’t happening instead of adapting to a cooling that is happening, there’ll be Hell to pay. Aren’t you alarmed at that bill?
We’ll know a lot more in a few years which way temperature is going(ducks incoming from the chief hydrologist).
So patience, my dear. Meanwhile, what do we do with all these white elephant models? They’re eating into house and home.
======================
An ice age is as imminent as a perpetual motion machine.
kim the kold catastrophist.
We’re at half precession cycle and AnthroCO2 just might get us over the hump for another 11,000 years of Holocene. I fear the effect will not be strong enough. We’ll see. Well, not me.
===========
Well…
The acolytes of catastrophic anthropogenic global warming treat historic data with an approach reminiscent of creationists.
The acolytes of catastrophic anthropogenic global warming appear to be members of a religious or quasi-religious group.
Andy West wonders whether this meme will ultimately be constructive or destructive. My take? Catastrophic Anthropogenic Global warming is a destructive meme, and Anthropogenic Global Warming is a constructive meme.
A warmer world sustains more total life and more diversity of life.
Simples, ain’t it? When will they ever learn? When will they ever learn?
==========
Yes indeed – we’re on a long-term cooling trend, a short-term dimming of the sun, and still global average surface temps stubbornly refuse to go down.
Curious.
kim | September 1, 2014 at 10:48 am |
“A warmer world sustains more total life and more diversity of life”
Of course…..temperate rainforests have nothing on the Sahara.
Heh, Michael, I thought the sun didn’t have much influence.
Re: regions. Sure there will be regional costs. In total, there will be net benefit. Observe the bottom line.
===========
Kim – there is a better term for it since “anthropogenic” global warming (the pattern of the last 100 years) is really about increasing low temperatures and lengthening growing seasons… beneficial anthropogenic global warming (BAGW).
Yes, PA, anthropogenic warming will eventually be seen to be a good thing.
==============
Easily the safest way to geo-engineer a better world for the biome.
==============
kim | September 1, 2014 at 10:58 am |
“Sure there will be regional costs. In total, there will be net benefit. Observe the bottom line.”
Based on what?…..besides wishful thinking.
Paleo. Warmer is always better than cooler.
====
Paleo what??
120m higher sea-levels???
Nutty kim.
Hyperbole doth not become thee. We’re not melting those two icecaps with man’s feeble contribution.
=========
What’s the magnitude of the orbital forcing that you’re so alarmed about??
Nutty kim.
Half precession is a risk. That was the end of recent interglacials.
==========
kernal kim.
========
Yikes! Always getting my ‘e’s’ and ‘a’s’ mixed up.
======
Way better: Not minding my e’s and a’s.
==================
kim thinks it prident to geo-enginenr today for events likely thousands of years into the future, but cries ‘alarmist’ for people concerned about the coming century.
That’s ‘skepticism’ for you.
‘prudent’…..but prident is interesting too.
Well…
1. “Of course…..temperate rainforests have nothing on the Sahara.”
Rainforests are “tropical” rainforests and much of the Sahara is tropical (below 23° latitude). Temperate rainforests are on the west side of coastal mountains (they are forests where it rains a lot – not self-sustaining rainforests in continental interiors). Per another post the Sahara is greening (CO2 reduced water consumption to the rescue). Low CO2 helped make the Sahara a desert. 1/3 of the land area is desert – shrinking the deserts will be a boon for mankind (and wildlife, and plants, etc.)
2. “120m higher sea-levels???”.
http://www.cmar.csiro.au/sealevel/images/fig_hist_1.jpg
Well, yeah – but you are measuring from the pre-interglacial period. The sea level was higher during the last interglacial. The GMSL is computed with +0.3 mm added to account for the 0.4 mm the sea is sinking (look up GIA; yup, 0.3 mm of that GMSL is fudged). If the sea is sinking 0.4 mm per year, we have pretty much tapped out the sea level rise.
BAGW could be interpreted as bhagawat – a path to achieving salvation through loving devotion to a particular deity, open to all persons irrespective of sex or caste – which in this case one might take as the positive impact of those humans who embrace life in all its diversity and understand the great benefits it brings. “We have seen the gods, and they is us!”
It is astonishing that Michael can write ‘120 meters of sea level rise’ and think that anyone should pay attention to anything he writes.
Don’t blame me for your ignor@nce Tom.
I love RSS for measuring the pause; it drives the warmists crazy. Instead of discussing it rationally (here is a proven data set, adjusted as best we can, showing no warming over a prolonged period), it suddenly becomes an object of derision and hatred.
This, more than anything else, shows that rational debate has been sacrificed on the altar of expediency.
Unfortunately the satellites cannot stay up for ever.
Bob Tisdale has posted the latest SST charts. These show that we have a new SST high, largely due to the rapid (sunlight fuelled) warming of the N.Pacific. So tin hats and perhaps a stick to chew on as the extremists make the most of this. So is the pause over? Discuss.
From Bob’s blog – too funny!
Preliminary Note: An “alarmism warning” indicates alarmism is imminent. On the other hand, an “alarmism watch” indicates alarmism might occur, but that’s all the time.
OMG – it’s a HOCKEY STICK!!!
So, when the North Pacific was colder the sun was not shining as much?
We do not understand how the sea develops hot and cold spots and blaming the sun is not helpful.
Cloud formation, or the lack of it, is more relevant to a new area of heat in the ocean. The sun was not suddenly hotter or colder and was not the cause of the change in SST.
angech,
I believe that Bob Tisdale’s perspective is that changes in SSTs are sunlight fuelled rather than LWIR fuelled. I don’t believe he is suggesting that changes in insolation are responsible.
Pingback: How Long Is The Pause? | The Global Warming Policy Forum (GWPF)
Judith, I would not bother comparing Cowtan and Way to anything in the way of respectable science. Two co-contributors to Skeptical Science, the most warmist cabal of warmists, put up a method of divining temperatures; a self-admitted (Cowtan) method that treats temperatures from places further away as more accurate than those from places next to each other is not science.
Their method is “failure proof” in that it has never dropped a temperature anywhere.
The only people around who could give a scientific view seem to be outer-circle supporters like Mosher, Zeke and Nick, who lend blind support.
Where are the McIntyres, Currys, Chiefios, Carricks and Jean S’s when you need them?
Someone needs to diss their paper soon and thoroughly.
“Someone needs to diss their paper soon and thoroughly.”
Here’s a list of links to WUWT rebuttals:
http://wattsupwiththat.com/?s=cowtan
none of those “rebuttals” actually tests the method.
Recovering north polar ice will make his paper academic. It was the currents not the temperatures anyway, so there is that. What hogwash.
===============
The assumption is that an expanding Arctic ice sheet will promote cooling – but will it do so in reality?
Chiefio doesn’t have the skills. McIntyre has used a similar approach.
There is nothing wrong with their approach. I’ve tested it using independent data. It’s valid.
But you didn’t test it. You don’t like the answer, therefore it must be wrong.
You do know how to cover yourself.
A method being valid does not mean it is right, only that you put in a formula correctly and got an output.
Years from now you will be able to hands on heart say “I didn’t say it was right, only valid”.
Like your Eli-like claim that “a sum of all models is better than one”: a valid statement, yes, but true? Hardly.
You know better than anyone else, because you work with the data and with the people pushing warming [cringe, cringe], who want methods that are valid [warming in gives warming out] but rubbish.
The answer is obvious and you have never denied it.
In all their Arctic work they have never shown a “station” get colder. Why?
They have also shown near 100% correlation [dare I use the 97% meme, heck why not] with any other data sets they have tested.
Inconceivable!
Inigo Montoya: You keep using that word. I do not think it means what you think it means.
Why, Eli might say because they front loaded those results in, but he and I wouldn’t.
Mosher, this time you really stepped in it. On 2/25/14 you guest posted the newest BEST results. You plotted HadCRU, NASA GISS, NOAA NCDC, Cowtan and Way, and BEST land plus ocean. And the pause that CW sought to erase was still there in all your graphed data sets, including CW. Because you charted anomalies conventionally, unlike CW.
How to lie with statistics 101, from a book written if I recall correctly in 1939. Clever CW, thinking most people are not well read.
Arguing against your own figure, posted here in 2014, takes selective memory to a new level. Darned that Internet, and archiving.
BTW, makes a lovely little example from an essay in the forthcoming book. So many thanks, even if you and CW did not intend the help.
the “pause”…
hearing about this is what got me interested in climate science –
the other side of this issue is that CO2 ppm has gone way up steadily during the “pause” (by about 80% unless I’m mistaken)
still don’t get how the “greenhouse” effect isn’t called into question as a result –
can heat, with complete stealth, enter any system and then hide… become “missing”?
(read the SkS incoherent explanation … ugh!)
The way I understand it is that the latest theory (excuse) from the warmists is that all the added heat because of rising CO2 levels is now being “stored” in the oceans. It must be, the warmists claim, because their models are so “reliable.” I imagine they will now start predicting that the oceans will release this heat with a vengeance (time to be determined apparently) and the wrath of CAGW will be even worse in the decades to come. They have an answer for everything.
Alarmists are already touting this but it is thermodynamically highly unlikely.
That won’t matter. Narrative above all.
===================
kim: Alarmists are already touting this but it is thermodynamically highly unlikely.
the current of warm, dense (because salty) water carries the heat energy into the deep water. There is no thermodynamic argument that makes this unlikely except by ignoring the density of the salty water.
At WUWT, Tisdale is pretty excited that August was the warmest month in SST history.
Matt, I meant the return of heat to the surface.
============
“missing” heat … alternative universe perhaps?
Far away, but still in this galaxy.
========
I think the dolphins took it with them.
Does it matter if the pause ends? In previous warmings the temps may well have gone higher than now. Chinese researchers reckon their MWP was a bit up on this current lot. (But cooling is not at all nice in China, very dry as well cold, so maybe they’re biased.)
Really, when it’s not cooling it’s warming…and what do we expect to happen in a warming? Warming! (Along with very short flat spots and mini-coolings). The Hockeystick…it’s not real. They totally made that up.
For this we are trashing the energy supply? And handing billions over to villains we neglected to lock up after 2008?
China has their own Tibetan tree ring temperature reconstruction untouched by huMann hands. I’ve long suspected that they’ve snapped to the fact that mild global warming benefits the Middle Kingdom.
===================
You couldn’t sell the late Ming on more cooling. Not interested at all. Talk to the lacquered hand.
Judith –
Given that you’ve highlighted the Lovejoy article, I thought you’d find this interesting – also from a paper he authored:
And also this:
Apparently from the paper connected to the abstract that you highlighted in your post.
Did you only read the abstract? Or is there something I’m missing here?*
*(Don, don’t answer that question.)
Oh…wait…you quoted from the conclusion. So you did read the paper…So I wonder what you make of that whole “confident rejection of the natural variability hypothesis” thingy.
She already said we need better paleo reconstructions. So did Lovejoy.
===========
I was hoping for some more elaboration, kim. You know, so that I can ratchet back my alarmism. Why does she seem to accept some of their results but then reject their conclusion? It’s one thing to say that she doesn’t think that their certainty about rejecting the “natural variability hypothesis” is justified, but I would think that then she would give some alternative range other than levels greater than 90% certainty, and explain the thinking behind her range of certainty.
So does she reject their results as being inadequate when examining centennial scale fluctuation but accept their results when attributing the “pause?” How does she decide when to agree with their methodology and results and when to find them inadequate? Is it merely on the basis of whether their conclusions align with hers, or is there some technical reason why their methods produce valid results when discussing the pause and inadequate results when discussing natural variability on a longer scale? What is her cut-off point, and what is the technical reason for picking that particular time frame?
1. See how you assumed she didn’t read the paper. Apologize and explain why you jumped to that conclusion.
2. You don’t get how one can like a method AND point out a limitation?
For reference, there is a reason why Judith likes my method and Lovejoy’s. What would that be?
1. Curry is encouraging people to look at non-GCM-dependent approaches to estimating natural variability patterns. Lovejoy and McKitrick are both doing that in very different ways, but in the same spirit of not allowing GCM hubris to circularly support GCM assumptions and GCM runs. Whether the specific conclusions from these methods agree with her individual scientific judgment about what is probably going on with the climate is distinctly secondary to the main point: these papers join the battle rather than dodging it. Judith likes that. She is more interested in steering the science back on track than in making sure she wins the argument about where climate has gone/is going/is affected by forcings. Joshua can’t see that because he is a partisan and a non-scientist, and so can’t imagine that someone might be more passionate about getting the scientific process right than about achieving a policy win.
2. Lovejoy’s statement rejects at 99% or whatever the hypothesis that ALL the warming during the anthropogenic CO2 era is natural. Not sure what it says about 33-66% of the warming being caused by natural fluctuations, but as argued ad nauseam on the earlier thread, that middle tercile is Curry’s favored zone of belief.
3. Apparently Curry also has the usual sort of scientific back-and-forth technical issues with whether Lovejoy’s method can do what he says it does with the confidence he claims. That technical discussion is one for research participants to hash out with interested observers (our peanut gallery) interpreting the various puffs of smoke and pieces of ejected debris over time to see which view (if either) is more plausible.
If a dove descended on a ray of light from the sun bearing an olive twig in its beak and spake to you that there was no reason for alarm, you would not ratchet back your alarmism.
If James Hansen knocked on your door and said there was no reason for alarm, you would not ratchet back your alarmism.
There are no circumstances under which you would be anything but alarmed.
That’s the true meaning of unfalsifiable.
I think the get-out clause is that Judith thinks half the warming is CO2, while Lovejoy dismissed the 0% hypothesis at 99.9%.
Ah. Interesting. Kind of an apples and oranges situation. Thanks,
No. You guys need to learn to read.
1. Judith prefers observational approaches over GCM approaches.
2. Lovejoy has an observational approach.
3. She likes the approach.
4. She notes an issue with the data. For this approach to work, you need better proxies: good temporal resolution, small error.
Remember, your first job is to understand the strongest version of your opponent’s argument.
Steven Mosher you have to compare the hypothesis dismissed by Lovejoy with Judith’s 50% hypothesis. They are not the same. Judith also dismisses 0%, but then goes on to dismiss 100%, the IPCC central estimate.
err Jim. no you dont.
Josh, I know this is going to go over your head but one can attempt to look at this simply.
Take the temperature over the last 34 years; the recent flat line is CO2 and natural cooling and the 17 years before that is CO2 and natural warming.
Plot it and get something like this:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1989/to:2104/plot/hadcrut3vgl/from:1989/to:2014/trend
This is the basic position: oceanic oscillations cause periodic warming and cooling. Over a whole cycle the total warming/cooling is nil, but if one examines a period of warming, one observes warming, whereas if one examines a period of cooling, one observes cooling. This is not neurochemistry; it is very, very simple.
The vast majority of people you sneer at as ‘deniers’ believe that increasing atmospheric CO2 will cause an increase in steady-state temperature. However, this is not what Lovejoy is testing; he is testing the hypothesis that all changes in global temperature are due to natural variation. He is doing this because, like you, he treats anyone who is not afraid of increases in atmospheric CO2 with contempt.
He, like you, asks the wrong questions.
Doc –
==> “:The vast majority of people you sneer at as ‘deniers’”
I was reading along happily, willing to look past the first slight to assume the potential of good faith…until I ran across that comment.
You see, you disqualified yourself not only on the basis of bad faith, but also on the basis of terrible reasoning.
You have never, ever, seen me “sneer” at people as “deniers.” And it’s not as if this is the first time that you’ve made that mistake (in one form or another, completely mischaracterizing what I do and don’t say), or the first time I’ve seen the same interaction take place with other “skeptics.”
It’s not clear to me why you would take the time to respond as you did, but irrespective of the reason, you once again can’t gain enough credibility for me to even follow the rest of your argument.
Try rethinking your position. Try recalibrating how you approach analyzing situations where your motivated reasoning is likely to be activated, and then you have a hope of presenting a valid argument – one worthy of a serious effort to understand and respond.
I promise, I won’t give up. Even though you have made the same mistake countless times in the past, I will extend you the good faith of possibly rising above that analytical flaw in the future. I have every reason to believe that you have those skills when you are engaged in analysis where you are less invested from the perspective of identity. Even though the evidence suggests that those with those skills are that much more likely to engage in these kinds of polarized situations with a greater degree of “motivated reasoning,” the causality isn’t lockstep.
I have faith in you, Doc. Don’t give up on yourself.
It was way over joshie’s head, so he picked out a word to yammer on about, thus avoiding the substance of the comment. Nice work, joshie.
Josh, you are just a first class jerk.
The problem with Lovejoy’s approach is the same as the problem with Mosher’s from the previous thread (well, they’re basically the same. Maybe Mosher got the idea from this paper?): It doesn’t just throw out GCMs, it throws out useful information about the timing and relative magnitude of major volcanic eruptions. These would provide an ability to estimate expected natural variability over a particular period, rather than assuming all periods are the same, which we already know is wrong.
The period Lovejoy uses to determine natural variability amplitude (1500-1900) is, on average, much more active in terms of forced natural variability than the period 1880-2014 or 1951-2010.
An MPI-ESM-P historical run exhibits a trend over the last 64 years (1942-2005) amounting to 0.67ºC. If I look at 64-year trends over 1500-1900 in the model’s past1000 experiment, I find 5% exceed 0.66ºC. Essentially, based on this method, I wouldn’t be able to rule out at the 95% level that the model could have produced all the warming over 1942-2005 without any anthropogenic influence. Does anyone believe that could be true?
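A minimal sketch of that kind of check, with a synthetic AR(1) series standing in for the model’s past1000 global-mean output (loading the real model data is left aside; the noise parameters here are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a 400-year (1500-1900) unforced global-mean series;
# a real check would load the model output instead.
n_years = 400
phi, sigma = 0.7, 0.12
t_series = np.zeros(n_years)
eps = rng.normal(0.0, sigma, n_years)
for i in range(1, n_years):
    t_series[i] = phi * t_series[i - 1] + eps[i]

window = 64        # years
threshold = 0.66   # deg C change over the window

years = np.arange(window)
changes = []
for start in range(n_years - window + 1):
    seg = t_series[start:start + window]
    slope = np.polyfit(years, seg, 1)[0]      # deg C per year
    changes.append(slope * (window - 1))      # total change implied by the fit

frac = np.mean(np.asarray(changes) >= threshold)
print(f"fraction of {window}-yr windows with trend change >= {threshold} C: {frac:.3f}")
```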
Not so unlikely as you might think. Though I don’t think so.
Marvelous critique and thanks. I’d never have come up with that.
===========
“The problem with Lovejoy’s approach is the same as the problem with Mosher’s from the previous thread (well, they’re basically the same. Maybe Mosher got the idea from this paper?): It doesn’t just throw out GCMs, it throws out useful information about the timing and relative magnitude of major volcanic eruptions.”
No, I did not get the idea from Lovejoy.
I just flipped Gavin’s approach on its head.
I don’t throw out GCMs; I just don’t run them with anthro forcing.
Yes, there are problems with the approach. But my position is: you calculate the fricking answer and state the caveat.
Fair enough, just pointing out the caveat.
Specifically looking at the period since 1950, the evidence suggests solar forcing went down, if anything, and it is not clear that volcanic events have had a net effect on the trend. So when we see a rise of 0.7 C, it is hard to say that Lovejoy’s definition of natural variability has contributed since 1950, even though it could have in past 60-year periods.
The sun’s got in your eyes. Come about.
==============
The solar forcing used in AR5 is wrong.
Statisticians McShane and Wyner were not the first to find that there was no ‘signal’ in the proxy data that Michael Mann used to fabricate the ‘hockey stick’ hoax-graph (MBH98/99/08) that the UN-IPCC showcased and Al Gore pointed to as proof that humans were the cause of global warming. All of the junk science published after that was nothing more than the findings of Mann’s sycophants, based on the same phony data.
“The key challenge is this: convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting natural variability as a predominant explanation for the overall century scale warming and also the warming in the latter half of the 20th century. Global climate models and tree ring based proxy reconstructions are not fit for this purpose.”
The fact that Dr. Curry can make this statement and expect most readers to understand it shows a huge advance in our understanding of attribution over the last two years. The key change is that “natural variability” is once again in its proper place as the focus of scientific endeavor. However, much work remains. To do justice to natural variability, experimental and theoretical scientists must search for regularities that are found in nature and that have a physical existence apart from modelers’ assumptions about “internal variation” in their models. (For those who think that natural regularities must be perfect cycles, please ask some mature woman.)
“How long is the pause”?
How broad is the peak?
+1 to lovejoy as well.
See Paul S. above.
=========
+.95
How about this: let me propose a heretical hypothesis.
Starting in 2020, solar flux flatlines into a minimum lasting 22 to 33 years. The oceans cool, providing an enhanced CO2 sink and dragging down CO2, lagging temperature. Temperature actually decreases. After that, warming resumes, albeit slowly at first.
Such a scenario, being the opposite of the mainstream one, could be described as doomsday cooling. Certainly not recommended for a career in climate science, unless you live in Montana. Still, not considering all of the possible projections would be like reaching a point in market swings where you completely discount a crash. It can go up, it can go down and it can go sideways; there are always those three possibilities.
The largest solar variations are a few tenths of a W/m2 and we see what that can do (the LIA and MM), so when CO2 forcing can go up 5 W/m2 in a couple of centuries, you can put that in perspective.
How many W/m2 has CO2 forcing gone up since 1950, jimmy dee? And since the pause started?
Strictly speaking there would be a 33 1/3 % chance of any of the three. However, based on what you know about P/E ratios, the bulls-to-bears index (a contrarian indicator), Fed policy, fiscal policy, etc., one could judge one or another of them to be the more likely.
CO2 additions have provided 1.35 W/m2 since 1950 on their own, so the rise of 0.7 C is not surprising in that perspective either, and that includes all of the pause.
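For what it’s worth, a minimal check of those numbers using the standard simplified expression ΔF ≈ 5.35 ln(C/C0) W/m2; the CO2 concentrations below are approximate annual means, used only for illustration:

```python
import numpy as np

# Simplified CO2 forcing: dF = 5.35 * ln(C / C0) W/m^2 (Myhre et al. 1998).
# Concentrations are approximate annual means, for illustration only.
def co2_forcing(c, c0):
    return 5.35 * np.log(c / c0)

print(f"1950 -> 2014 (~311 -> ~398 ppm): {co2_forcing(398, 311):.2f} W/m^2")
print(f"1999 -> 2014 (~368 -> ~398 ppm): {co2_forcing(398, 368):.2f} W/m^2")
print(f"per doubling                   : {co2_forcing(2, 1):.2f} W/m^2")
```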
ordvic, the table is rather tilted by the CO2 effect which is the elephant in the room when it comes to forcing.
Jim,
If one believed in the 50/50 proposition, you’d have 50 go to up and 50 go to sideways and down. I would preface that by saying natural variability can cause up as well, but for the sake of simplifying my premise I’ll give all of natch to the other two. If it were 50 to sideways and 0 to down, the choice would still be hard. If it were 25/25 there would still be three choices, but up would be the safest bet. Given that the market has historically always trended up, many get sucked in, oblivious of the occasional crash. Keeping money in is conventional wisdom and the Warren Buffett way. However, inexperienced investors are always hurt the worst by a crash. If you had invested across the board in 1929, it took until 1952 to get your money back. That is, unless a lot of the picks went bankrupt, and they did.
It seems to me that climate would be harder than the market, given that it hasn’t trended up in the long, long term (temps were much higher in most of the Ordovician and have trended down since). If one looks at temps since the LIA there is indeed a nice upward trend, but that still doesn’t discount a crash entirely.
50/50 means your choices are up, sideways, or faster up.
How much since the pause started, jimmy dee?
You better put a stop loss on that just in case :-)
There is no pause in the 30-year climate. The last climate pause was in the 1960’s.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1900/mean:360
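As I understand it, the woodfortrees “mean:360” option is a 360-month (30-year) running mean; here is a minimal sketch of that smoothing, with synthetic monthly anomalies standing in for the HadCRUT4 download:

```python
import numpy as np

def running_mean(x, window=360):
    """Running mean over `window` samples (360 months = 30 years);
    the result is shorter than x by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Illustrative use with synthetic monthly anomalies; a real check would
# load the HadCRUT4 monthly series instead.
rng = np.random.default_rng(2)
months = np.arange(12 * 140)                              # 140 years, monthly
anoms = 0.005 * (months / 12) + rng.normal(0, 0.15, months.size)
smoothed = running_mean(anoms, 360)
print(len(anoms), len(smoothed), smoothed[:3], smoothed[-3:])
```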
Since you are a pause denier, I will rephrase the question: How much in the last 15 years?
Jim,
That graph only goes to 2000. You better try again.
ordvic, the 30-year average at 1999 covers 1984-2014.
If you had done the same thing back in 1965, you would have also reported no pause:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1870/to:1965/mean:360
…and 1965 was also 30 years into a pause:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1870/to:1965/mean:36/plot/hadcrut4gl/from:1935/to:1965/trend
The forcing in the last 15 years has changed about 0.4 W/m2 and the temperature of this 15-year period is 0.25 C warmer than the previous 15-year period.
…meant to say actually 30 years into the pause
That does not refute the pause.
phatboy, by 1965 the decade average was cooler than the previous decade average, while now the difference is 0.1 C between the last decade and the previous one. If you think 30 years is too strict, you can use that 20-year measure.
jimmy dee says: “The largest solar variations are a few tenths of a W/m2 and we see what that can do (the LIA and MM)”
Theoretically adding .4 W/m2 CO2 forcing in the last 15 years hasn’t produced any noticeable warming. Everybody but a few deniers are calling that the “pause”, jimmy. It’s a problem for Chicken Little, when people notice that the sky isn’t falling. The pause is killing the cause.
Don M, yes, you can take an El Nino which has a 0.5 C change in a year and show it drowns out CO2 variations too. Proves nothing, but you can do it. Natural variation is even more impressive if you take individual months.
Jim D, you’re arguing that there’s been no 15-year pause because the last decade was warmer than the previous decade – during which temperatures were still increasing for the first half.
You do the math!
My point is that 1965 was already 30 years into a pause, meaning that the decade 1955-1965 was no warmer than the decades 1935-1955, and even despite that, your 30-year mean would have shown no pause whatsoever in 1965.
Why are you trying so hard to argue against the pause anyway, particularly when it’s been all-but acknowledged by a great many climate scientists?
If the pause continues for another few years then you’re going to find it increasingly difficult to wipe the egg off your face.
Otherwise, if the pause comes to an end in a year or two then you’ll probably be gloating – but you’ll then be right for all the wrong reasons – which won’t do your credibility much good.
This is another way to look at the pause in the longer term context. We see that negative perturbations of 0.1 C are common, and this is just another one that seems to have ended. Also note how the La Ninas are getting warmer.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/mean:12/plot/hadcrut4gl/from:1970/trend/plot/hadcrut4gl/from:1970/trend/offset:0.1/plot/hadcrut4gl/from:1970/trend/offset:-0.1
Let’s do the same thing again, once again pretending we’re in 1965:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1910/to:1965/mean:12/plot/hadcrut4gl/from:1910/to:1965/trend/plot/hadcrut4gl/from:1910/to:1965/trend/offset:0.1/plot/hadcrut4gl/from:1910/to:1965/trend/offset:-0.1
We are obviously not talking about one year, or individual months, jimmy. It’s at least 15 years and counting. They call it the pause. They are grasping at any excuse. You can’t average it away. Try to catch up, jimmy. How much longer can you maintain your denial?
phatboy, the difference is that in mine we are now headed back for the mid-point (the monthly data has already been there for 5 months) and the trend is double yours.
Jim D, in 1965 we were already 30 years into a pause – big difference.
You queried earlier whether it’s statistically significant – well, perhaps, or perhaps not, but not for the reasons you cite.
There may or may not be a pause, just as there may or may not have been the warming you think happened beforehand; they’re both as statistically significant, or insignificant, as each other. You either look at the data one way, or you look at it another way. If you contend that the warming was statistically significant, or not, then you have to treat the pause the same way. You can’t have it both ways.
Advancing my heretical hypothesis I’ll go with 50% down (solar minimum) 33% sideways (oceans) and 17 % up (CO2 about where it should be).
Ironically, the UHI effect — by operation of the very thing that contaminates the sampling — serves to reduce the heteroscedasticity of the data; e.g., the continual removal of snow from the tarmac of French airports where official thermometers are located reduces winter albedo compared to the French countryside, making the weather station sites behave more as if it were summer all the time.
From what we now know about Alpine glacial advance and retreat patterns (e.g., Christian Schlüchter pointed out that the retreats 4000 and 2000 years ago were greater than the retreat today), the only good argument against the finding that centennial-scale giant fluctuations unrelated to human activity are all we need to explain all late 20th century warming is the presence of millennial-scale giant fluctuations that are also unrelated to human activity.
What the #$%^ is this unscientific political strategizing of a predetermined conclusion doing in a scientific journal?
And why is the fact that it is present not the focus of vocal outrage by scientists, including the one that hosts this blog?
Can you clarify why it is wrong to acknowledge skepticism, especially as that motivated this study.
This $#!^ does not “acknowledge skepticism” as anything other than a political obstacle on the path to the predetermined conclusion.
What motivated this “study” was the intention to rationalize the party line, and to blatantly advocate for more of the same. That is not science. It is the sort of anti-scientific $#!^ that ought to be restricted to political manifestos and religious tracts.
No, he took one of the skeptic arguments at its word and wrote a paper to demonstrate to those people another way of looking at it without models.
Jim D: No, he took one of the skeptic arguments at its word and wrote a paper to demonstrate to those people another way of looking at it without models.
Without GCMs, not “without models”. His model is one of the family of statistical autoregressive models, with a maximum lag of 125 years.
Actually, it isn’t an autoregressive model with 125 years; it basically uses the multiproxy data and then a scaling assumption to scale the distributions from 64 to 125 years. Scaling could be considered an infinite-order autoregressive model (it’s based on power-law, scale-invariant decorrelations/dependencies, not exponential, scale-dependent ones). Actually, the scaling assumption is pretty minor, since it is only used over a factor of about 2 in scale (64 to 125 years) and is verified by the second-order moment. The only other assumption is for the extreme tails of the distribution of temperature changes: it is bounded between two “black swan” type extremes (power laws with exponents 4 and 6). These are so extreme as to be outside the usual realm of classical statistics. In other words, the test I used is much more demanding than the bell curve or other standard statistics (these all fall off more quickly at the extremes, in an exponential way). If you find it too extreme for your taste, just use it as an upper bound. Even with this, the probabilities are of the order of one in a thousand that the change since 1880 is natural (nearly one in a million with the bell curve, if you prefer).
Shaun
What were the two extreme black swan events?
Surely the one in a thousand chance of it being natural since 1880 can only be true if, in effect, it has never happened before?
Tonyb
My reference to black swans may cause more confusion than clarity. Let me try to explain.
Mandelbrot pointed out in the 1960’s that Levy distributions – which may arise naturally due to their “stability under addition” property – could explain very extreme fluctuations (especially but not only in economics), calling them examples of the “Noah effect” (from the Biblical Flood). Levy distributions generally have power law extremes with exponents less than 2, so that their second moments (variances) don’t exist. In the 1980’s, based on scale invariance and cascade processes (multifractals), the result was generalized to power laws with any exponents. In 1986, I showed empirically (with Schertzer: http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Annales.Geophys.all.pdf) that at least some climate data have extremes with exponent 5. By the 1990’s, power law extremes were a major theme in nonlinear geophysics. In the 2000’s, Nassim Taleb wrote his book popularizing such extremes using the “Black Swan” metaphor and applying it mostly to economic series. In 2013, in my book, I showed that monthly changes in global temperatures also have exponent 5 (Lovejoy, S., D. Schertzer, 2013: The Weather and Climate: Emergent Laws and Multifractal Cascades, 496pp, Cambridge U. Press).
The statistics needed to bound the empirical 125 year global temperature changes have tails with exponents 4 and 6 (the exponent 5 works best…), see the probability distributions in: http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf). There is therefore a theory (scale invariance) justifying such distributions, as well as much empirical data. But even if the theory is rejected, they can still be used as extreme bounds – all that is needed to reject the natural variability hypothesis with high levels of certainty.
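A toy comparison (not Lovejoy’s actual bound) of how slowly a power-law tail with exponent 5 falls off relative to the bell curve: here a Student-t with 5 degrees of freedom stands in for a P(X > x) ~ x^-5 tail, with both distributions scaled to unit standard deviation.

```python
import numpy as np
from scipy import stats

# Toy comparison only: a Student-t with 5 degrees of freedom has a
# power-law survival function ~ x^(-5), i.e. a "fat" tail of the kind
# discussed above, while the Gaussian tail falls off exponentially fast.
nu = 5
t_scale = 1.0 / np.sqrt(nu / (nu - 2))   # rescale t(nu) to unit std

for k in (2, 3, 4, 5):                   # deviation in units of sigma
    p_gauss = stats.norm.sf(k)
    p_fat = stats.t.sf(k / t_scale, df=nu)
    print(f"{k} sigma: Gaussian {p_gauss:.2e}   power-law tail {p_fat:.2e}")
```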
Shaun Lovejoy,
What would your method say about the Medieval Period and the likelihood of its warmth (say, as per Moberg 2005) being “all natural”?
My position has been that the nature of the fluctuations changes fundamentally at scales longer than about 125 years: the macroweather-to-climate transition. Fortunately, macroweather statistics are (just) enough for settling the anthropogenic warming issue. Unfortunately, the answer to your question – the statistics of longer-term climate variability – is really not at all clear, although it seems that the GCMs underestimate it (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/esd-2012-45-typeset.final.pdf).
For a rather biting critique of conventional approaches to these low-frequency (and some other!) issues, see Lovejoy, S., 2014: “A voyage through scales, a missing quadrillion and why the climate is not what you expect”, Climate Dynamics, http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/climate.not.revised.28.7.14.pdf
which was just accepted for publication this morning!
It notably shows that the NOAA paleoclimate site is in error by 10 quadrillion (10^16)!
Hi Shaun, thanks for spending time here, I’m following your papers with great interest.
Shaun Lovejoy,
Thank you. How about if you take the same length of time as you did with the modern period, with the frame taken from within the interval running from the low point at about 900 in Moberg to the high point 200 years later?
Shaun Lovejoy: Actually, it isn’t an autoregressive model with 125 years; it basically uses the multiproxy data and then a scaling assumption to scale the distributions from 64 to 125 years. Scaling could be considered an infinite-order autoregressive model (it’s based on power-law, scale-invariant decorrelations/dependencies, not exponential, scale-dependent ones).
Sorry, my mistake. That’s still models (my main claim) but not what I thought.
Shaun
Compliments on clarifying quadrillions. We look forward to reading your paper when available.
If you have not yet seen their work: for “long”-term climate statistics, Koutsoyiannis et al. track Hurst-Kolmogorov dynamics over nine orders of magnitude, as distinct from the conventional assumptions of randomness in GCMs, e.g.:
Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Koutsoyiannis, D., Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011.
The quadrillions paper was accepted yesterday and is on my site. I am well aware of Koutsoyiannis and have criticized the strong limitations of his climactogram (the requirement that the fluctuation exponent H must be in the range -1<H<0) that make it not useful over most of the range to which he applies it. That’s why the quadrillions paper uses a much superior fluctuation, the Haar fluctuation, which is valid for -1<H<1 and covers the entire 20 orders of magnitude needed.
See for example:
Lovejoy, S., D. Schertzer, I. Tchiguirinskaia, 2013: Further (monofractal) limitations of climactograms, Hydrol. Earth Syst. Sci. Discuss., 10, C3086–C3090, 2013, http://www.hydrol-earth-syst-sci-discuss.net/10/C3181/2013/. Supplement.
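For readers who haven’t met it, a minimal sketch of the Haar fluctuation idea (my own toy version, not Lovejoy’s code, and ignoring calibration conventions): over each interval of length Δt, take the mean of the second half minus the mean of the first half, then look at how the RMS fluctuation scales with Δt.

```python
import numpy as np

def haar_fluctuations(series, lags):
    """Toy Haar fluctuation analysis: for each lag dt, the fluctuation over
    an interval is the mean of its second half minus the mean of its first
    half; return the RMS fluctuation over non-overlapping intervals."""
    x = np.asarray(series, dtype=float)
    out = {}
    for dt in lags:
        half = dt // 2
        fl = []
        for start in range(0, len(x) - dt + 1, dt):
            first = x[start:start + half].mean()
            second = x[start + half:start + dt].mean()
            fl.append(second - first)
        out[dt] = np.sqrt(np.mean(np.square(fl)))
    return out

# Example on a synthetic random-walk-like series (a stand-in for a record);
# the scaling exponent H is the log-log slope of RMS fluctuation vs lag.
rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=4096)) * 0.01
for dt, f in haar_fluctuations(x, lags=[8, 16, 32, 64, 128, 256]).items():
    print(dt, round(f, 4))
```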
Shaun
Thanks for posting your preprint, which I just found, where you so well extend the discussion of Koutsoyiannis et al.
How does Murry Salby’s point that diffusion in ice cores reduces higher-frequency variations impact such long-term climate analysis?
I won’t comment on Salby’s theory except to say that it is contrary to much evidence. In any case, I don’t need ice cores to estimate the centennial scale variability, and that’s the basis for the probability distributions, return times etc.
Shaun
Thanks for your work on the “Haar fluctuation which is valid for -1<H<1"
Re: "macroweather statistics are (just) enough for settling the anthropogenic warming issue"
In light of “unknown unknowns”, that suggests that “macroweather statistics [may NOT be] (just) enough for settling the anthropogenic warming issue”. i.e. magnitude of climate sensitivity, the magnitude of anthropogenic warming, and the confidence we can have in those!
Why does the climate science community ignore the world standard on quantifying uncertainty, especially Type B uncertainty? See:
Evaluation of measurement data — Guide to the expression of uncertainty in measurement, BIPM JCGM 100:2008
David L. Hagen commented
David, I think the data is there; most of it is, however, filtered away as noise. But I think it is clear proof there isn’t a warming problem: what we have is large amounts of energy moving around on its journey from when it is collected on Earth to finally being radiated away to space.
The people making the temp series have made one large mistake (apparently): they’ve never compared aggregated unmodified data to their field (obviously their out-of-band testing isn’t catching the problem, although they would have to either process out-of-band stations to be comparable to their fields, or decode the field to make it comparable to the station in question).
They are all doing something in common because they all have the flawed final results.
Thanks Mi Cro
Re “Filtered away as noise”
WM Briggs politely emphasized: Do not smooth time series, you hockey puck!
Michael E. Mann “used the filter length equal to the length of the signal to be filtered!” etc. etc.
David L. Hagen commented
Every day we get a chance to examine the surface’s response to a large input of solar energy, and there are a lot of places to sample different conditions.
I do get some live places where the weather’s boring and doesn’t change much, maybe that’s why they get it so wrong?
“Most institutions demand unqualified faith; but the institute of science makes skepticism a virtue “. Robert K Merton
Judy sometimes likes to present all perspectives; it doesn’t mean she endorses them. To not consider or look at the article would be rather closed-minded, wouldn’t it?
There are some great-looking graphs and tables one can look at in the pdf, if for no other reason.
What I’m getting at, Shaun, is that if similar results could be obtained from any pre-industrial period at all, then that would cast a very different light, no?
I strongly approve of this statement in the paper. It says nothing about “politics” or “policy,” contrary to JJ. Something like it would have applied to any pure-science dispute – say, continental drift, the existence of atoms, Chomsky’s linguistic theory, radiation hormesis, you name it – where there were competing hypotheses or a bold hypothesis and a skeptical community doubting it. This statement clarifies the stakes in the debate as seen by the investigator, rather than obfuscating in that passive-aggressive voice that makes it so much harder for non-specialists to figure out contextually what is going on in a paper.
Furthermore, Lovejoy’s behavior on the site has been impeccable so far in the face of a less-than-usual but still high level of provocation. Hard to complain about someone accepting the Curry methodological argument, rattling the china in the cupboard of conventional climate science with his pursuit of scaling theory, and engaging constructively on a skeptic-accommodating website. Some people need to take a chill pill.
wow. Is this really Climate Etc.?
Josh, sometimes it’s better not to post. Guilt by association, ya know :-)
Steve
I think the sceptics have been like pussycats with Shaun. The questioning has been robust but no more than that. Whilst we may disagree with Shaun about his methods and levels of certainty we can still respect him.
tonyb
Tony: “Less than usual but still high level of provocation” is consistent with your pussycat description using the Celsius-to-zoology conversion scale. You may have to make a TOBS adjustment, though.
There is no reason to believe that “robust paleo proxy data” that is sufficient to establish accurate global average temperatures is even possible. We cannot really do it with thousands of thermometers so how can we possibly do it with proxies?
“There is no reason to believe that “robust paleo proxy data” that is sufficient to establish accurate global average temperatures is even possible. We cannot really do it with thousands of thermometers so how can we possibly do it with proxies?”
Creatively, with the right motivation.
On this Labor Day, the Denizens prefer little labor over the length of the pause.
What did the stadium wave predict?
“The stadium wave signal predicts that the current pause in global warming could extend into the 2030s,” said Wyatt, the paper’s lead author.
http://judithcurry.com/2013/10/10/the-stadium-wave/
one more thing on “missing” heat…
(day off…idle hands)
Argo pods go down 2000 meters … shouldn’t they be picking up this “deep ocean” heat, at least on its way (via magic) to greater depth?
Yes, and they don’t. This is why there is so much doubt about ‘missing heat’ hiding in the abyss.
===============
Johnsmith etc etc
I asked that very question of Thomas Stocker of the IPCC at a Met Office sponsored climate conference a few months ago. According to Stocker, we don’t have the technology to measure heat at depth, and the calculation of ocean heat in general becomes more problematic once you get too far below the surface.
Tonyb
Hi Tony
A brief digression. The CET August daily max was well down on the July numbers; when the daily min is taken into account, the 2014 summer (JJA) turned out to be below average
http://www.vukcevic.talktalk.net/CET-dMm.htm
(as forecasted).
Vuk
So what will the UK winter be like? I need to know if I am going to be your French house guest for the winter or not
Tonyb
“We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data. The paleo proxy community also seems to be in a rut, with continued reliance on tree rings and other proxies having serious calibration issues.”
From my branch in the labyrinth, where only unlearned observers perch, this makes sense. What say you who know? What path to take?
Judith Curry:
”The key challenge is this: convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting natural variability as a predominant explanation for the overall century scale warming and also the warming in the latter half of the 20th century”
In my opinion, the key problem is how to get rid of the threat of false AGW.
I interpret JC’s topic http://judithcurry.com/2014/08/28/atlantic-vs-pacific-vs-agw as saying that the impact of anthropogenic CO2 emissions on warming during the industrialized era has been minor – maybe even minimal:
”I do regard the emerging realization of the importance of natural variability to be an existential threat to the mainstream theory of climate variations on decadal to century time scales. The mainstream theory views climate change as externally forced, e.g. the CO2 control knob theory. My take is that external forcing explains general variations on very long time scales, and equilibrium differences in planetary climates of relevance to comparative planetology. But it does not explain the dominant variations of climate on decadal to century timescales, which are the time scales of relevance to policy makers and governments that are paying all this money for climate research.”
In my comment http://judithcurry.com/2014/08/16/open-thread-20/#comment-619105 , as conclusion, I have stated:
”As anthropogenic CO2 emissions cannot be accused of the recent global warming, there is no reason to curtail CO2 emissions. Further research must be focused on actions how to produce energy clean and competive enough, and how to adapt life of mankind to natural events of weather and climate.”
Firstly, according to natural laws, all CO2 emissions to the atmosphere, and all absorptions of CO2 from the atmosphere into CO2 sinks, together control the CO2 content in the atmosphere. As the recent anthropogenic CO2 emissions have been only about 4 % of the total CO2 emissions to the atmosphere, the share of anthropogenic CO2 in the recent total increase of CO2 content in the atmosphere has been about 4 % at the most. Thus the anthropogenic share in the recent annual increase of about 2 ppm CO2 in the atmosphere is about 4 %, too, i.e. about 0.08 ppm. This should convince anyone that anthropogenic CO2 emissions do not dominate the increase of CO2 content in the atmosphere.
Secondly, the recent increasing trend of CO2 content in the atmosphere has been dominated by natural warming of the sea surface in the areas where the sea surface CO2 sinks are, which has lessened absorption from the atmosphere into the sea surface. This is what has made even the annual anthropogenic increase of 0.08 ppm CO2 possible; otherwise the anthropogenic increase of CO2 content in the atmosphere would have been minimal, and possibly indistinguishable from zero.
Thirdly, during the last century the trends of CO2 changes in the atmosphere are consequences of temperature changes, and not vice versa. The above-mentioned warming of the sea surface in the areas where the sea surface CO2 sinks are is a consequence of warming during periods dominated by El Niño events; periods dominated by La Niña cool the sea surface areas where the sea surface sinks are.
“missing” heat was created to to rebut the “pause,” was it not?
(relevance to topic)
Yep. You are getting it.
=============
Well, I put a PDF copy of AR5 into my electronic sausage-maker, adding eye of newt, and toe of frog, wool of bat, and tongue of dog, adder’s fork, and blind-worm’s sting, lizard’s leg, and howlet’s wing.
After much grinding and digesting, it spat out 35 years, -20 years, + 35 years. It had an outlier of +1500 years.
Does that mean it is vomit?
8-)
It would be nice if the most vocal CE skeptics would define exactly what they conclude the “Pause” means. A pause in what?
(1) A pause in an increasing step-wise gradient in temperatures?
(2) A pause before a statistically significant natural cycle cooling swing must occur (excluding things like volcanoes)?
(3) A “Pause” between something else? Or maybe the term “Pause” shouldn’t be used at all?
In responding to this question, let’s put it in the time-frame context (and statistically significant modern record anomalies) of the past ~250 years (using a Dr. Curry quote in a UK interview that temperatures have increased during the past ~250 years).
Maybe we should define “climate” as well. And “skeptic.” And “natural.” And just in case Mosher engages, we should probably define “define.”
We need to get Bill Clinton to weigh in here.
Not really; Ross gives a good operational definition.
“Or maybe the term “Pause” shouldn’t be used at all?”
The term I like, because it’s neutral, is “plateau.”
Well…
The climate has a number of cycles that are basically sinusoid oscillations.
http://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Sine_one_period.svg/220px-Sine_one_period.svg.png
The global warming enthusiasts and climate models basically drew a line through the graph at 0 (slope of 1) and said the warming would continue into infinity.
We are near the π/2 (90 degrees) point of the curve, where the slope is 0 (zero), and they don’t look quite as smart (see the sketch below).
I’m not actually sure what is happening but things haven’t changed much for 17 years (the infamous pause). The “pause” means things aren’t changing very much.
Since intelligent scientists’ temperature predictions run everywhere from “it is going to soar moon-ward” to “the temperature will drop 1°C”, I would not bet real money on the future trend at this time.
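To see the tangent-line point in numbers, here is a minimal illustrative sketch (a unit sinusoid standing in for an idealized oscillation, not a climate model): fit a straight line to the early rising segment and compare the extrapolation with the curve near its peak.

```python
import numpy as np

# Unit-amplitude sinusoid standing in for an idealized oscillation.
t = np.linspace(0, np.pi / 2, 200)          # from the zero crossing to the peak
y = np.sin(t)

# Fit a straight line to the early, rising portion only.
early = t < np.pi / 8
slope, intercept = np.polyfit(t[early], y[early], 1)

# Compare the linear extrapolation with the sinusoid near the peak (t = pi/2).
t_peak = np.pi / 2
print(f"fitted early slope          : {slope:.2f}  (close to 1)")
print(f"true slope at the peak      : {np.cos(t_peak):.2f}  (zero)")
print(f"linear extrapolation at peak: {slope * t_peak + intercept:.2f}")
print(f"sinusoid value at peak      : {np.sin(t_peak):.2f}")
```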
McKitrick defines the pause.
definitions are not a problem here.
he defines it operationally and tests.
not a problem.
if you don’t like the definition, define your own and test
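A rough sketch of what “define it operationally and test” can look like in practice. McKitrick’s paper uses the Vogelsang-Franses HAC estimator, which is not available in standard Python libraries, so this sketch substitutes Newey-West HAC standard errors from statsmodels; the file name, column names and lag choice are placeholders for whatever annual series and settings the reader prefers.

```python
# For each candidate start year, fit an OLS trend to annual anomalies and ask
# whether a HAC-robust 95% confidence interval for the slope includes zero.
# Newey-West HAC errors are a stand-in for the Vogelsang-Franses estimator.
# "hadcrut4_annual.csv" and its column names are placeholders.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("hadcrut4_annual.csv")          # columns: year, anomaly (placeholders)
end_year = int(data["year"].max())

for start in range(end_year - 30, end_year - 9):   # candidate start years
    sub = data[data["year"] >= start]
    X = sm.add_constant(sub["year"].to_numpy(dtype=float))
    fit = sm.OLS(sub["anomaly"].to_numpy(dtype=float), X).fit(
        cov_type="HAC", cov_kwds={"maxlags": 4})   # lag choice is illustrative
    lo, hi = fit.conf_int(alpha=0.05)[1]           # CI for the slope term
    verdict = "no significant trend" if lo <= 0.0 <= hi else "significant trend"
    print(f"{start}-{end_year}: trend {fit.params[1]:+.4f} C/yr, "
          f"95% CI [{lo:+.4f}, {hi:+.4f}] -> {verdict}")
```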
So… 19 at the surface and 16-26 in the troposphere…
Ok…
“The hypothesis that the industrial epoch warming was a giant natural fluctuation was rejected with 99.9% confidence.”
99.9% confidence?
Really? This is a joke, right?
Alas, no joke. That scientists can believe such absurd numbers, and get them through peer review no less, speaks volumes about the present state of climate science. The very idea of applying confidence intervals to proxy data is nuts. Confidence intervals measure pure statistical error, which means they assume perfectly accurate measurements. Moreover, the shaky proxy data does not measure global temperatures. That is a whole separate manipulation of the poor data, one that is itself full of error.
JimD, “captd, 2 C per doubling works whether you start in 1950 or 1750, as Lovejoy showed.”
Right, and both dates follow some cooling event, and if you assume colder isn’t “normal” you get a lower sensitivity. Since he is using CO2 as a proxy for everything, a recovery from a cooler period would have a similar ln shape, making it hard to distinguish “forced” from recovery and which “forcing” contributed what. Since we have no clue what ECS might be, or even if it exists, we cannot “exclude” anthropogenic forcing, but we cannot improve its estimate much either. Just another verse in the psalms of climate change.
HOWEVER, 2C from 1750, including ALL potential forcings, is no greater than the 2C which would have been considered low just a few short years ago.
captd, well it becomes more than 3 C if you include a 10-20 year delay which correlates just as well, and I think we know that the response isn’t instantaneous anyway. Also 2/3 of the forcing has occurred since 1950, so most of the fit is in the later period and hardly any from the 1800s and before.
http://www.physics.mcgill.ca/~gang/Anthro.simple.jpeg/Simplified.Anthro.small.forcing.jpg
It says that from what we know about solar and volcanic fluctuations, 0.8 C of warming is unlikely to have been completely caused by them (0.01% probability). Perhaps you disagree?
No, I think he is using a statistical approach, at least I hope so, as there is no way to quantify “what we know” along the lines you suggest.
In fact I disagree strongly. First I am pretty certain that we do not know that it has warmed the 0.8 degrees that you claim, because the statistical methods being used to make this estimate of warming are very crude. Second I am even more certain that we do not understand the sun-climate link, enough to think it quite possible that if this warming has in fact happened then it may well be due entirely to solar influence. Quantify that.
He did quantify that as a 0.2 C standard deviation with various choices of fat tails. It was still very unlikely.
Jim D, There is no way to get a standard deviation out of area averaging, which is how global temperatures are estimated. The 0.2 degrees must be a guess of some sort. Either that or he is taking the grid cell averages to be measurements, which of course they are not.
But when you say “It was still very unlikely” what does “It” refer to? Our knowledge of solar and volcanic influences or statistical variance?
He uses global temperature like everyone. It is a standard deviation in a time series which is a well defined concept.
It may be well defined but it is statistically erroneous. The time series of global temperatures is not a time series of sample measurements; it is a time series of statistics. Thus the confidence intervals of the global temperature series are statistically meaningless. Confidence intervals are defined probabilistically, based on sampling theory, but the time series values are not samples; rather they are themselves averages (of averages). When you average averages there is no basis in statistical theory upon which to calculate a confidence interval.
Would you characterize the pause as statistically meaningless on that basis?
Area averaging is one way. Not the only.
its easy to get a standard deviation.
and easy to test.
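To make “area averaging” and “the standard deviation of a time series” concrete, here is a minimal sketch using a purely synthetic gridded anomaly field and cosine-of-latitude weights; nothing here is real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomaly field on a 36 x 72 lat-lon grid (illustrative only).
n_months, n_lat, n_lon = 240, 36, 72
lats = np.linspace(-87.5, 87.5, n_lat)
field = 0.001 * np.arange(n_months)[:, None, None] \
        + rng.normal(0.0, 0.5, (n_months, n_lat, n_lon))

# Area weighting: each grid cell weighted by the cosine of its latitude.
w = np.cos(np.deg2rad(lats))
w2d = np.repeat(w[:, None], n_lon, axis=1)
global_mean = (field * w2d).sum(axis=(1, 2)) / w2d.sum()

# The "standard deviation of the time series" is then just the spread of the
# resulting global-mean series about its own mean (or about a fitted trend).
print("std of global-mean series:", np.round(global_mean.std(), 3))
```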
Jim D: It says that from what we know about solar and volcanic fluctuations, 0.8 C of warming is unlikely to have been completely caused by them (0.01% probability). Perhaps you disagree?
that “unlikely” is dependent on the outcome from estimating a statistical model that could not estimate a natural cycle with a long period if it is there.
Matthew Marler, that is what the skeptics have to resort to: a natural cycle somewhat proportional to the CO2 forcing, but decidedly not just feedback, only coincidental. What are the odds?
Jim D: What are the odds?
I keep explaining that the odds are not knowable on present evidence. If the models with ca 1000 year periods are reflective of real processes (how else could they exist except by equally unlikely chances?) then the odds are high. Conditional on those models, the current warming is occurring at about the right time (i.e., the right “recurrence” time.) Lovejoy’s result depends in part on the fact that he has omitted any possibility of estimating processes with such long recurrence times.
If there are no such processes, he’s golden; but his analysis does not tell us whether there are any.
Matthew Marler, the question was rhetorical, since the odds against such a coincidence are enormous. How can natural variations mimic the forcing function of an exponentially growing CO2 component without being in some way related to it?
JimD, “How can natural variations mimic the forcing function of an exponentially growing CO2 component without being in some way related to it?”
How come the sensitivity required to make the exponentially growing CO2 component mimic climate is constantly being reduced?
captd, 2 C per doubling works whether you start in 1950 or 1750, as Lovejoy showed. There is a fundamental upward curve that this captures.
Jim D: Matthew Marler, the question was rhetorical, since the odds against such a coincidence are enormous. How can natural variations mimic the forcing function of an exponentially growing CO2 component without being in some way related to it?
The conditional odds are quite reasonable. Given that there is a process that produces the observed oscillation, and given that a low point was the last Little Ice Age, and given that the human industrial revolution has coincided with the recovery from the LIA, the probability of the warming phase coinciding with the exponential rise in CO2 concentration is close to 1. Now, how probable is it that the industrial revolution has taken off dramatically since the end of the Little Ice Age? Who knows? It would require postulates that allowed calculation of the probabilities. But whatever the probability, it did in fact happen that way. Therefore, the conditional probability of the rise in CO2 coinciding with the natural warming is 1.
For some reason, people usually fail to take into account that all probability calculations are conditional probability calculations, and they fail to state the conditions under which the calculations apply. Lovejoy’s calculated p-value (which he inverted to get a confidence statement) is conditional on there being no process that has produced the observed oscillation with a period close to 1,000 years. How likely is it that the observed oscillations occurred without a process driving them? If there is such an oscillatory process, how likely is it to have ended right when CO2 increased above a particular threshold (say, its value in 1880, 1950, or some other)?
There is a name for the “giant natural fluctuation” that Lovejoy rejects with absurdly high confidence. It is called the Little Ice Age. Not calling it by name seems strange. I cannot imagine the statistical magic required to make it disappear, with virtual certainty no less.
Stephen Segrest
It seems to me that “pause” is a marketing term, much like “side effect”
there are no “side effects,” only effects
I would assume the burden of definition might fall on the side which believes it to be a temporary hiatus on the way to the resumption of predicted warming
I’m an agnostic on the issue, just trying to learn
I must say, with all due respect, that the semantic gymnastics that to my ear come mostly from CAGW end of the spectrum, tilt me to the skeptic side
I respect your comment even though I had little trouble understanding it
my humble thanks to our host and all who comment for tolerating me and letting me play
I don’t even bother to comment on SkS even though I read it often – they will just call me a “troll”
I’m a human being, dam* it, not a “troll” (joke) :)
actually. where one to lay eyes on me, most would say that I very much resemble a troll
Pretty hard to tell, it’s so dark under that bridge.
were one to lay eyes – dam* spell check
Throughout Earth’s history most of the many sudden climatic shifts have been credited to volcanic eruptions. And all of such shifts have been downward.
We may not have perfect explanations for all climate shifts but we do know that such shifts are the reason we are here: the existence of life is explained by abrupt climate change.
For the last 100,000 years Earth has mostly been locked in an ice age, punctuated only briefly by periods of warming such as the interglacial that gave birth to our species. Earth has been locked in ice age conditions for more than 80% of the time over the last one million years. Those are all of the facts. Will the hiatus last forever?
I respect Ross McKitrick’s thinking about a method applicable to assessing whether the “hiatus” represents a departure from some temperature trend.
“The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator which is valid as long as the underlying series is trend stationary, which is the case for the data used herein.”
I am reminded of a discussion on Bart Verheggen’s blog in 2008 with “VS”, who at that time validated the lack of departure from a trend in surface temperatures from 1880 to 2008, thus implicating normal variation in temperatures unrelated to purported CO2 causation.
I am further reminded of Tomas Milanovic several years ago making a statement that has since stuck with me: “Never twice the same phase space.” That is, in predicting the future, one cannot assume the future will have the same conditions as one has experienced in the past. Therefore, being able to model the past with your model does not mean the model has applicability to the future.
Simpler observations, like watching what the weather patterns have been during the PDO, ENSO and AMO, may provide a thumbnail sketch of what one might expect for the future. Just don’t count on it.
Where was the scientific community from the 80’s on that has allowed this “man-made global warming” nonsense to become the prevailing scientific wisdom that it now is? Why were the hypothesis and those ridiculous models accepted as “settled science?”
Mann and Hansen are heroes now to most people. CAGW has now been taught as “fact” to two generations of students. The politicians and mainstream media dismiss any skepticism as “denial.”
It is nice to see so much skepticism now, but it seems like the horse is already out of the barn.
It is a real shame that the scientific method and the scientific community failed 30 years ago. The world is wasting a lot of money and threatening economies to solve a problem that does not exist.
daveandrews723 | September 1, 2014 at 3:24 pm | Reply
“Mann and Hansen are heroes now to most people.”
Don’t be silly. The vast majority don’t know Mann and Hansen from Adam and Eve.
“Mann and Hansen are heroes now to most people. CAGW has now been taught as “fact” to two generations of students.”
Are you serious – “most people”? I would wager that Mann is disliked/distrusted even within his own cadre of “climate scientists”. It is pure irony that CAGW has been taught (indoctrinated) to students for two generations and yet none of them actually experienced any warming at all.
I think the problem is that no one is applying the Stadium Wave component to any of the model fits. If this is added, then the natural variability is largely accounted for and thus the pause is explained.
http://imageshack.com/a/img674/3518/rdsvA5.gif
oops, supposed to be nested here:
WebHubTelescope: If this is added, then the natural variability is largely accounted for and thus the pause is explained.
How long will the pause last? Your statement might be true, and your model might be reasonably accurate over the next 3 decades (as you update it, taking ENSO into account, or become able to predict ENSO). How long a pause does your model express?
Marler, The pause actually never existed. All you are seeing is variability that is compensating the underlying trend, just as noise compensates an electrical signal. The key is to reveal the underlying trend, which I and Prof. Sean Lovejoy and a few others know how to do.
The rest of you seem not to understand how to do this, which is unfortunate for you guys.
To WebHubTelescope
Yes, it is actually surprisingly simple to estimate the anthropogenic contribution: using the CO2 forcing as a surrogate for all the anthropogenic forcings turns out to be just as accurate as GCM forecasts one year ahead! In other words, if you give me the CO2 level for any year between 1880 and 2013, I can tell you (with one parameter, the “effective climate sensitivity”) the global mean average temperature to within ±0.109 °C. This is almost exactly the error in GCM forecasts of the temperature one year ahead! (See e.g. Laepple, T., S. Jewson, and K. Coughlin (2008), Interannual temperature predictions using the CMIP3 multi-model ensemble mean, Geophys. Res. Lett., 35, L10701, doi:10.1029/2008GL033576, or Smith, D. M., S. Cusack, A. W. Colman, C. K. Folland, G. R. Harris, and J. M. Murphy (2007), Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model, Science, 317, 796-799.)
The problem with the pause was that the warming over that period was somewhat overpredicted by GCMs; however, using a more empirically based approach, it is nearly perfectly predicted (see: http://www.physics.mcgill.ca/~gang/Anthro.simple.jpeg/Simplified.Anthro.small.forcing.jpg).
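A minimal sketch of the kind of one-parameter fit described above: regress annual global-mean anomalies on the logarithm of CO2 concentration and inspect the residual scatter. This is not Lovejoy’s code; the file name, column names and the 277 ppm preindustrial baseline are illustrative assumptions, and the reader must supply the actual series.

```python
import numpy as np
import pandas as pd

# Placeholder inputs: annual global-mean temperature anomalies and annual-mean
# CO2 concentrations for the same years (e.g. 1880-2013), supplied by the reader.
df = pd.read_csv("temp_and_co2_annual.csv")   # columns: year, anomaly, co2_ppm (placeholders)

# Doublings of CO2 relative to an assumed preindustrial 277 ppm (assumption).
x = np.log2(df["co2_ppm"].to_numpy(dtype=float) / 277.0)
y = df["anomaly"].to_numpy(dtype=float)

# One free slope ("effective climate sensitivity" per CO2 doubling) plus an offset.
sens, offset = np.polyfit(x, y, 1)
resid = y - (sens * x + offset)

print(f"effective sensitivity       : {sens:.2f} C per CO2 doubling")
print(f"residual standard deviation : {resid.std(ddof=2):.3f} C")
```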
Shaun Lovejoy —
And Tamino and Lean and Cowtan and others have done similar models, so that what we are doing is not exactly ground-breaking.
And I agree it is simpler to wrap all the GHG factors into one effective aCO2 factor, as the aCO2 seems to be the leading indicator — others call it the “control knob”.
Yes, others have done similar things, but for other purposes, notably attempting to estimate “equilibrium climate sensitivity” and sometimes “transient climate sensitivity”. My approach was even simpler (“effective climate sensitivity”), but nonetheless necessary in order to estimate natural fluctuations directly.
Sometimes in science the simplest things are done last!
But you are right that the warming since 1880 has been so large that it may still be statistically tested and rejected with confidence even without a very precise separation of natural from anthropogenic variability: my separation of anthropogenic from natural is not the most important point – more important was the estimation of preindustrial probability distributions from multiproxies.
Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”
Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.”
You’re kidding, right? Assume that all warming is CO2 and that natural variation has no global energy dynamics effect, and find – surprise – that all warmth is anthropogenic. The webbly certainly does – and he is the epitome of superficial and inconsequential analysis. So what do we make of this?
Let’s assume it is all anthropogenic. The rate of warming is some 0.07 degrees C/decade. The latter-century warming rate is irrelevant, as natural variation added to the warming, and the current regime seems more likely than not to persist for 20 to 40 years – as seen in the record of Pacific states in the proxy records. In the absence of chaotic instability in the climate system – one might be excused some complacency.
In the presence of chaotic climate instability – it changes not a whit the practical and pragmatic responses.
Still I think it curious that the warming coincides both with the modern solar grand maxima and a 1000 year peak in El Nino frequency and intensity. These are connected in top down modulation of the Pacific state and provide a mechanism for solar amplification.
This from Vance et al 2013 – A Millennial Proxy Record of ENSO and Eastern Australian Rainfall from the Law Dome Ice Core, East Antarctica – http://connection.ebscohost.com/c/articles/85340584/millennial-proxy-record-enso-eastern-australian-rainfall-from-law-dome-ice-core-east-Antarctica
http://watertechbyrie.files.wordpress.com/2014/06/vance2012-antarticalawdomeicecoresaltcontent.jpg
I’d suggest that a millennial warming in the 20th century seems pretty likely – and that the next multi-decadal climate shift is not guaranteed to be to a warmer state.
The data on ENSO as a steady, yet bounded, contributor to natural variability is gleaned from proxy results.
http://imageshack.com/i/ezCEAViPg
We are trying to figure out underlying patterns to ENSO at the Azimuth Project. This is a discussion on using the proxy records:
http://azimuth.mathforge.org/discussion/1451/enso-proxy-records
What on Earth could he mean by steady?
Moy et al (2002) present the record of sedimentation shown below which is strongly influenced by ENSO variability. It is based on the presence of greater and less red sediment in a lake core. More sedimentation is associated with El Niño. It has continuous high resolution coverage over 12,000 years. It shows periods of high and low ENSO activity alternating with a period of about 2,000 years. There was a shift from La Niña dominance to El Niño dominance some 5,000 years ago that was identified by Tsonis (2009) as a chaotic bifurcation – and is associated with the drying of the Sahel. There is a period around 3,500 years ago of high ENSO activity associated with the demise of the Minoan civilisation (Tsonis et al, 2010). The period had ‘red intensity’ (El Nino) in excess of 200. The red intensity for the 97/98 event was 98. It shows ENSO variability considerably in excess of that seen in the modern period.
http://watertechbyrie.files.wordpress.com/2014/06/moys-20022.png
And if by bounded he means by limits not remotely approached in the 20th century – he might be onto something.
BTW – poorly fitting a curve to data using parameter scaling in an unphysical equation is not remotely the same as understanding the phenomenon.
Why go back thousands of years when the operable time period is hundreds of years?
http://imageshack.com/a/img539/5597/CEAViP.gif
This is from the Unified El Nino Proxy records
[1]S. McGregor, A. Timmermann, and O. Timm, “A unified proxy for ENSO and PDO variability since 1650,” Clim. Past, vol. 6, no. 1, pp. 1–17, Jan. 2010.
WebHubTelescope: Why go back thousands of years when the operable time period is hundreds of years?
In order to examine whether the operable time period is hundreds of years. This is basically why, if you expect a period of 60 years, you want at least 180 years worth of data. As Prof Lovejoy explained, this part is signal processing 101. As Rice and Rosenblatt showed in 1988 (I think it was in Biometrics), if amplitude, phase, and period are all unknown and must be estimated from the data, you have a real hard problem, and 3 periods of observation are not likely enough. Autoregressive and heteroskedastic background variation do not make the problem easier.
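A minimal sketch of the estimation problem described above: fit a sinusoid whose amplitude, period, phase and offset are all unknown to a record much shorter than the true period, and inspect the parameter covariance that scipy’s curve_fit returns. The data here are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def cycle(t, amp, period, phase, offset):
    return amp * np.sin(2 * np.pi * t / period + phase) + offset

# 150 years of annual "data": a 1000-year cycle plus noise (synthetic, illustrative).
t = np.arange(150.0)
truth = cycle(t, amp=0.3, period=1000.0, phase=0.5, offset=0.0)
y = truth + rng.normal(0.0, 0.1, t.size)

# Amplitude, period, phase and offset are all free parameters; bounds are loose.
p0 = [0.2, 500.0, 0.0, 0.0]
popt, pcov = curve_fit(cycle, t, y, p0=p0,
                       bounds=([0.0, 50.0, -np.pi, -1.0],
                               [5.0, 5000.0, np.pi, 1.0]))
perr = np.sqrt(np.diag(pcov))

for name, val, err in zip(["amp", "period", "phase", "offset"], popt, perr):
    print(f"{name:7s} = {val:10.2f} +/- {err:.2f}")
```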
Webster, “Why go back 2000 years?”
https://lh4.googleusercontent.com/-F7C4538ir3Q/VAUbfr9hyOI/AAAAAAAALcI/uKTnyeFx_c4/w774-h484-no/2000%2Byears%2Bn.%2Bextra%2Btropics.png
’cause we can.
captd, another way to show that data. Spot the difference.
http://static.skepticalscience.com/pics/Ljungqvist-2010.gif
JimD, You should discuss that with the author, that was his/her splice :)
It was NH extratropics, so the HADCRUT4 line is too. What was your complaint?
About the dueling graphs from captdallas and Joshua, this looks like the original: http://agbjarn.blog.is/users/fa/agbjarn/files/ljungquist-temp-reconstruction-2000-years.pdf
on page 7, that stops at plus 0.4. That additional roughly 0.4 from the SkS graph, how did that get there?
Why go back over the Holocene with a high resolution ENSO proxy? Why go back 1000 years? Because we can – and it is informative.
Ragnaar, they seem to have debated the datasets at SkS. Anyway, I would have used the CRUTEM4 NH land plot that comfortably has the same range for the period after 1900, since the proxy data was NH land.
Shaun Lovejoy | September 1, 2014 at 6:00 pm |
It then follows that if I give you a temperature for any year between 1880 and 2013 you can produce the CO2 level.
Chicken:egg
Try harder, Shaun.
First – independently of the cause and effect – the magnitude of the temperature change is so large that the probability that the trend suddenly started in about 1880 is very low (about one in a thousand). You could always argue that just such a massive natural fluctuation just happened to drive the temperature up, liberating CO2 from the oceans as a consequence.
If your causal chain is correct, the probability is still extremely low, but of course the policy implications would be totally different.
However, we know pretty accurately how much CO2 has been emitted by humans and we even know pretty accurately how much has been taken up by the oceans (about half). It can’t be the other way around (the way it apparently was during the ice ages).
If you accept the strong CO2 forcing/temperature correlation, then the egg has to be the anthropogenic forcing and the chicken has to be the temperature rise. However, the probability of the warming event being natural is the same low value.
Please, Shaun. You have no bloody idea how fast twentieth century warming was compared to the past, because the temporal resolution in paleo temperature series decreases with time. Failing to acknowledge the limitations of paleo temperature proxies gives away warmist cabal members every time. Your published paper using the word “denier” doesn’t speak well to your objectivity either. Just sayin’.
JimD, “Ragnaar, they seem to have debated the datasets at SkS. Anyway, I would have used the CRUTEM4 NH land plot that comfortably has the same range for the period after 1900, since the proxy data was NH land.”
Since Way is a regular at SKS, why not use the new C&W version of Hadcrut4? That version cools the past and the 1850 to 1999 mean should be used to make the splice giving the SKS gang a little boost, but 0.4C? And the new version should include the error range I would think for both data sets.
JimD, “It was NH extratropics, so the HADCRUT4 line is too. What was your complaint?”
You mean other than the proxy and hadcru4 diverging? None of the reconstruction proxies provide reliable information on arctic air temperatures especially the winter warming. The Hadcrut4 comparison would need to be location matched with the proxies AND time of year the proxies represent.
captd, that is why I would prefer CRUTEM4 of those I know about. If they have a 30-90 N CRUTEM that would be even better. CRUTEM4 easily covers the range of the longer red line I showed.
JimD, “captd, that is why I would prefer CRUTEM4 of those I know about. If they have a 30-90 N CRUTEM that would be even better. CRUTEM4 easily covers the range of the longer red line I showed.”
Nope, doesn’t “easily cover” the range. Since data points are very sparse it requires a lot of interpolation to get back to 1880. Nothing wrong with interpolation, but it has to match what it is being compared to.
In the NH the “growing” season and “dormant” season have different trends. The “dormant” or lower-light half of the year has less influence on the less variable, liquid-water-dominated “growing” season. So trying to splice any surface temperature reconstruction to any proxy reconstruction has plenty of issues to be considered. That is why kriging or interpolating over phase changes, both thermodynamic and temporal, is very tricky.
With Hadcrut4 there is the addition of very high Arctic stations that can be out of phase, i.e. Arctic winter warming, where temperature may change by 10 degrees, from -30C to -20C, but much less energy is associated with that anomaly change. 50N-60N could drop by 3C and 60N-90N increase by 6C, yet there is effectively no change in internal energy. So now you have apples, oranges and pears. Did “average” global mean surface temperature increase? Yes. Does it mean anything? Likely not. If you get different results comparing the 60S-60N globe to the 90S-90N globe, you might need to understand why a little bit better before sounding the alarm.
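One crude way to put a number on the “same anomaly, different energy” point is to compare the change in blackbody emission (sigma T^4) for equal temperature changes at different base temperatures. This is only an illustration of the non-equivalence, not a full energy-budget calculation, and the base temperatures chosen are illustrative.

```python
# Crude illustration of why equal temperature anomalies at different base
# temperatures are not energetically equivalent, using blackbody emission
# (sigma * T^4) as a stand-in for a proper energy budget.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission_change(t_base_c, dt):
    t0 = t_base_c + 273.15
    return SIGMA * ((t0 + dt) ** 4 - t0 ** 4)

print("+1 C anomaly at a -30 C base:", round(emission_change(-30.0, 1.0), 2), "W/m^2")
print("+1 C anomaly at a +15 C base:", round(emission_change(15.0, 1.0), 2), "W/m^2")
```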
Shaun Lovejoy: the magnitude of the temperature change is so large that the probability that the trend suddenly started in about 1880 is very low (about one in a thousand).
As I wrote, that probability is conditional on the outcomes of other events whose probabilities are not known. What is the probability of a Little Ice Age? What is the probability of the recovery from a Little Ice Age? What is the probability that the recovery from the Little Ice Age overlaps the human industrial revolution? If there is a process with a period of about 900-1000 years, and if it is reasonably well-estimated by Scafetta, Page and others, then the probability of a Little Ice Age and its recovery occurring about when they did is quite high.
Your probability statement is entirely conditional on your estimated background variability, and if there is a process with such a long period, your method totally misses it. In other words, you had nearly 0 power to detect the statistical signal of a process of great importance, for which other evidence exists; and your probability is conditional on your not finding something because you assumed that it didn’t exist.
In secret emails subsequently made public, Mann, Jones and associates discussed the necessity to somehow correct the appearance of a Medieval Warm Period. If that and the other warm periods existed, at about the times and magnitudes that have been estimated, then the current warm period is occurring approximately “right on time”.
“Maybes” and “Ifs” dominate these probability assessments.
What a laughable assertion. This commenter bases science on “secret emails” and what appears to be a hope for a lucky guess.
WebHubTelescope: This commenter bases science on “secret emails” and what appears to be a hope for a lucky guess.
Shaun Lovejoy based his 99% confidence on a different guess. Is the long period oscillation there in reality, as estimated by Scafetta and others? Is it continuing? Repeating the question of Jim D: what are the odds?
WebHubTelescope: This commenter bases science on “secret emails” and what appears to be a hope for a lucky guess.
No science was based on the secret emails.
WebHubTelescope: and thus the pause is explained.
WebHubTelescope: The pause actually never existed.
For how long will the natural process continue to produce what looks like a “pause” in the surface temperature and troposphere temperature increase (even though energy continues to accumulate in the briny deeps)?
Someone: “Gee whiz, I am looking at data coming in from the Voyager spacecraft, and all I see is noise.”
Me: “Get a model for the noise, and apply it to isolate the signal”
Someone: “Thanks”
Me: “You’re welcome”
WebHubTelescope: Someone: “Gee whiz, I am looking at data coming in from the Voyager spacecraft, and all I see is noise.”
What? The clear contradiction in your writing caused you some sort of fit?
The pause is explained, as it never existed. The underlying warming continued, both by sophisticated time-series analysis (see CSALT model) and also as evidenced by the fact that a significant fraction of the heat sequestered in the ocean continued unabated.
http://imagizer.imageshack.us/a/img854/8775/l7g.gif
Matthew R Marler: a significant fraction of the heat sequestered in the ocean continued unabated.
Do you have any expectation about how long the “apparent pause” will continue (i.e. how long the explanatory mechanism will continue to prevent any rise in surface temperature)? Just curious.
oops, I don’t know how that happened. The quote was from WebHubTelescope.
Shaun Lovejoy | September 1, 2014 at 6:00 pm
I’m curious: does this also allow you to explain what the regional temps were, since warming is not equal globally?
That’s the million dollar question and I’m pretty convinced the answer is yes – at least to the theoretical limits to which regional temperatures can be predicted! I currently have a student working on this and there should be an answer shortly. Stochastic modelling has many advantages over GCMs.
Shaun Lovejoy commented on
Well if you want to see what the actual surface stations measured, I have a lot of data available if you follow the URL in my name.
WebHubTelescope (@WHUT) | September 1, 2014 at 5:06 pm
I’ll accept the pause doesn’t exist, if you accept that there is no warming!
WebHubTelescope: If this is added, then the natural variability is largely accounted for and thus the pause is explained.
How long will the pause last? Your statement might be true, and your model might be reasonably accurate over the next 3 decades (as you update it, taking ENSO into account, or become able to predict ENSO). How long a pause does your model express?
Lovejoy: Using preindustrial multiproxy series and scaling arguments, the probabilities of natural fluctuations at time lags up to 125 years were determined
You can see the problem right away. He begins with a model that is incapable of estimating recurrence times of large natural excursions of the global mean temp of the Holocene Climate Optimum, Minoan Warm Period, Roman Warm Period and Medieval Warm Period. With this model, he would have to reject the null hypotheses that those excursions are independent of anthropogenic CO2 (that is, “natural”). His model is less complete and less informative than Nicola Scafetta’s model, or Dr. Norman Page’s model with its 960 year period. This model is less complete and informative than the model of Beenstock et al. N.B., those are rankings; there is no demonstrably adequate, dependable model.
His model is totally inadequate for the problem: How “natural” is the MWP-like warming since the end of the Little Ice Age?
Ignore the millennial at your perennial.
==============
The beauty of differences is that they are high pass filters, so that 125 year changes are indeed unaffected by millennial scale variability. This is signal processing 101.
Shaun Lovejoy: The beauty of differences is that they are high pass filters, so that 125 year changes are indeed unaffected by millennial scale variability. This is signal processing 101.
That’s what I wrote. The conclusion depends on the assumption that something does not exist, based on a model that could not detect it even if it is there.
If instead there is a natural process generating the apparent oscillations producing the descent into the LIA and the recovery since then, then there is no support for the conclusion.
If you are sure you know the signal, you can estimate a noise process to fit it. If you think you know the noise process, you can estimate a signal that might be in the data. If you know neither the signal nor the noise with certainty, then you can’t conclude much of anything from the statistical outputs, other than the fact that they are all model dependent.
This is signal processing 101. So what? This is at least graduate level statistical time series analysis.
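For readers who want to check the filtering claim quantitatively: the gain of a lag-Δ difference x(t) − x(t−Δ) at frequency f is |1 − e^(−i2πfΔ)| = 2|sin(πfΔ)|. Here is a minimal sketch evaluating that gain at a few periods (the choice of periods is illustrative):

```python
import numpy as np

def difference_gain(period_years, lag_years):
    """Gain of the lag-difference filter x(t) - x(t - lag) at the given period."""
    f = 1.0 / period_years
    return 2.0 * abs(np.sin(np.pi * f * lag_years))

lag = 125.0
for period in (250.0, 500.0, 1000.0, 2000.0):
    print(f"period {period:6.0f} yr: gain of a {lag:.0f}-yr difference = "
          f"{difference_gain(period, lag):.2f}")
```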
Mustn’t forget the physics, LOL. See the Stadium Wave.
Pingback: Hiding the Real Data-Sets Behind the Headlines | Religio-Political Talk (RPT)
‘Climate is ultimately complex. Complexity begs for reductionism. With reductionism, a puzzle is studied by way of its pieces. While this approach illuminates the climate system’s components, climate’s full picture remains elusive. Understanding the pieces does not ensure understanding the collection of pieces. This conundrum motivates our study.
Our research strategy focuses on the collective behavior of a network of climate indices. Networks are everywhere – underpinning diverse systems from the world-wide-web to biological systems, social interactions, and commerce. Networks can transform vast expanses into “small worlds”; a few long-distance links make all the difference between isolated clusters of localized activity and a globally interconnected system with synchronized [1] collective behavior; communication of a signal is tied to the blueprint of connectivity. By viewing climate as a network, one sees the architecture of interaction – a striking simplicity that belies the complexity of its component detail.’ Marcia Wyatt
The relevance of the Stadium Wave is this idea of a collective system. It is not a single data series – even one that is influenced by atmospheric angular momentum. The LOD is not the stadium wave. LOD doesn’t provide a mechanism for decadal changes – but is related to atmospheric circulation changes in the global system.
https://watertechbyrie.files.wordpress.com/2014/06/abarca-del-rio-et-al-2012-interdecadal_oscillations_in_atmospheric_angular_momentum_variations.png
And yes – the pause does seem to be a perfectly natural shift in Earth ocean and atmospheric circulation on multi-decadal scales, seen many times in the paleo records. The fundamental mechanism for this is deterministically chaotic: a bifurcation in a system that has many interacting components. The persistence of past regimes suggests this one may persist for 20 to 40 years. It is a mistake as well to think that the next shift will be to a warmer state.
Rob: Your comments in the past about the climate system being a complex interaction of many subsystems have always made sense to me and seemed compatible with the stadium wave theory, but I never saw you mention it before. I’m new here and may have missed it. Is the stadium wave an idea you’ve always held?
The Aussie saying the LOD is not a correlating factor of the Stadium Wave is a direct snub of Wyatt & Curry’s claim in their paper identifying the Stadium Wave concept.
LOD is indeed an intriguing link to stadium wave
The relevance of the Stadium Wave is this idea of a collective system. It is not a single data series – even one that is influenced by atmospheric angular momentum – and is therefore integrating in a sense.
As shown in Figure 7.
But it is a manifestation of change in the system and not the underlying cause. Contrary to webby’s usual simplistic spin, the LOD is not the stadium wave but one of a collection of pieces by which the whole is apprehended. Webby reduces the whole to a single data series and fails to see the incongruity.
Curry,
Why do you keep taking for granted that there IS an anthropogenic contribution at all to the general ‘global warming’ observed since the 50s (which was basically confined to the 25-year period between 1976/77 and 2001/02)? On what empirical evidence do you base this belief? Where and how do you see the CO2 ‘radiative forcing’ signal on global temps in the latter half of the 20th century?
The null hypothesis says that 100% of the global rise in mean temps from the 70s to the 00s were caused by natural variation (basically, the ‘ocean cycle’). Have the AGW’ers somehow managed to falsify this hypothesis in any way? If so, I’d be very interested to see the observational evidence from the real Earth system responsible for it.
I believe it is because “most” of the CO2 rise is “very likely” (95% confidence) due to the activities of mankind :)
Kristian
Re: “The null hypothesis says that 100% of the global rise in mean temps . . .”
The thesis is distinguished from the null hypothesis. Thus the corresponding “null hypothesis” is that:
“the majority of the global warming since 1950 is due to natural causes.”
Er, no. The null hypothesis that the AGW hypothesis is supposed to falsify is “100% of ‘global warming’ since 1950 is due to natural causes.” “Climate change is ALL natural. Like it’s been for the last 4-4.5 billion years.” That is the hypothesis that the AGW hypothesis claims is not the case since about 1950.
The AGW promoters haven’t even shown that 1% of recent global warming is anthropogenic in origin. So this nonsense about 100% or 50% or more or less than 50% being anthropogenic is simply completely unscientific. Show empirically that there IS a contribution to be observed in the real Earth system AT ALL first. THEN we can talk. Where’s the ‘unnatural’ signal? Outside the models, that is, which are all already based circularly on the assumption that there IS a contribution and that it is large.
Darwall’s lead-in comment is particularly important concerning the new McKitrick paper, which seems about as robust as statistically possible: 16, 19, and 26 years without statistically significant warming at a 95% confidence level. The importance is CMIP5 model falsification.
In 2009, NOAA said 15 years pause would constitute falsification. (BAMS 90: Special Supplement August 2009 starting at page 23)
In 2011, Santer said 17 years. (J. Geophys. Res. 116: D22105 2011)
Following these ‘official’ declarations from keepers of the warming faith themselves, we must now declare the CMIP5 models falsified. Which means tossing out all of AR5 WG1. What an inconvenient truth.
See also Fyfe, Gillet and Zwiers, Overestimated Warming…, Nature Climate Change 3: 767-769 (2013) for another 95% confidence level analysis falsification of CMIP5. They did two comparisons of CMIP5 to HadCRUT4, 1993-2012 and 1998-2012. Figure 1 is the money image. Zwiers was lead author on AR5 attribution, so this was an ‘inside’ paper appearing AFTER the July 2012 cutoff for AR5 consideration.
Or they could just recalculate and state that the period is 19, 23 or 29 years.
They can also play the C&W ‘sea water is water and sea ice is land’ game, but that is going to hurt if the arctic continues to recover.
Revising the falsification period will be difficult for Santer, given the arguments he made.
But Pachauri is already giving speeches arguing for 30 years. That could be even funnier, if the stadium wave idea and Joe on PDO/AMO are even directionally correct.
The NOAA 15 years does have to be adjusted for ENSO though.
Is there any way to put the Kim-bot on a mute button? It posts more comments that make no sense, literally or figuratively, than all other posters put together and makes the whole blog virtually unreadable. Can’t you give it a time out to give the rest of us a break?
PS Mosh – I’m using a phone and still able to spell check; it’s not that hard.
Me
Are you going to take up his very generous offer and get paid for editing his phone generated posts?
Tonyb
D’ya think I bothered to read his error laden posts to identify that he’s offered to pay me to correct them? I have much better things to do and am amply rewarded for them.
Me
I was being sarcastic about the ‘generous’ bit. As far as I recall it was something like two dollars an hour. That’s why it stuck in my mind.
Tonyb
2.50.
good.
take my posts.
spell check.
repost them under your name and people will think you are smart, until
they figure out that you are parasitical.
Mosh
Plus 10
Tonyb
Mosher, you ordered Comeuppance Pie? How would you like that, with or without sprinkles?
Originality is not revealing your sources.
And I’ll never tell you if I picked that up somewhere or not.
Not much need for me anymore. Other people, oops, bots, are making my arguments better than I can anyway.
=================
Go for it Kim, you are on a roll. As famous as Joshua ? Sorry.
I’ve told moshe that once upon a time I had it all figured out but you have to read the blogs. Since then, I’ve forgotten. There was an unusual vacuum tube central to my processing which blew out and a replacement can’t be found. We’ve got people out rummaging through garage sales, but so far, no luck.
====================
I’m still looking, Kim. Maybe the Ruskies have one.
If the Ruskies don’t have one, I know this French guy who might be able to fabricate it – if we can dig up some specs.
Not all about you ‘me’. Judith runs a democratic forum, unlike,
well you know who they are…
Thank goodness for kim-non-pareil. I do like a bit of wit with
my climate science. You not so much, me.
Beth the serf.
Hey, me–you leave kim alone or I’ll sic Mosher’s mobile on you.
We have different world views TF but I have regard fer yr
integrity. Yr comments are duly noted by serfs.
+1 Beth. Tom is another gentleman who can disagree without being disagreeable!
Et tu Peter.
bts
Beth, No gentleman could sic Mosher’s phone on anyone, surely?
Unless they really deserved it.
me | September 1, 2014 at 5:30 pm | Reply
“I’m using a phone and still able to spell check;”
Mark the day. It’s when a cell phone surpassed a human in intelligence.
Next milestone will be when it surpasses a human with a triple digit IQ. That might take a while… ;)
Lovejoy’s ideas might be more convincing if formulated in terms of standard probability and statistics. The claims need much better testing against data, even when it’s low resolution, high variance proxy data. The power law he uses probably does not fit into standard random processes, but the basic ideas certainly do. Otherwise, the odd plot with some hand waving does not offer that much empirical support. Rather a contrast to McKitrick.
‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full
This is how models evolve from slightly different – within the bounds of feasible inputs – initial and boundary conditions.
http://rsta.royalsocietypublishing.org/content/369/1956/4751/F2.large.jpg
There are no unique solutions.
‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full
The comparison of model solutions to observations seems to miss a fundamental point. The solution is one of many possible and is chosen on the basis of expectations about plausible outcomes. It would seem to be the expectations that are incorrect rather than the models as such.
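A minimal sketch of the sensitive dependence described in the Lorenz quote above: integrate the Lorenz-63 equations from two initial states differing by one part in a million and watch the separation grow (standard textbook parameter values; purely illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4001)

# Two initial states differing by 1e-6 in x.
sol_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
sol_b = solve_ivp(lorenz, t_span, [1.0 + 1e-6, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Euclidean separation of the two trajectories over time.
separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
for t_check in (0.0, 10.0, 20.0, 30.0, 40.0):
    i = np.argmin(np.abs(t_eval - t_check))
    print(f"t = {t_check:4.0f}: separation = {separation[i]:.3e}")
```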
Rob
Re: ” expectations that are incorrect rather than the models”
How do you exclude model error? If you cannot, then the models are not scientific – by not being falsifiable. The IPCC lists three sources of error – as distinct from expectations.
cf IPCC’s pause ‘logic’
They omit “non-radiative forcing” and fail to recognize massive Type B systematic error.
Assuming the models are plausibly formulated – the ‘bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms…’
On the issue of Arctic warming (and implicitly Cowtan and Way, 2014) there is this new paper which also discusses Curry (2014).
http://onlinelibrary.wiley.com/doi/10.1002/qj.2422/abstract
As mentioned beforehand (here and elsewhere) we have also placed a series of updates (4) on the analysis in CW2014 here:
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/updates.html
Including seven different reconstructions of global temperature
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html
Judith, I would not bother comparing Cowtan and Way to anything in the way of respectable science. Two co-contributors to Skeptical Science, the most warmist cabal of warmists, put up a method of divining temperatures with a self-admitted (Cowtan) method that treats temperatures from places that are further away as more accurate than places next to each other; that is not science.
Their method is “failure proof ” in that it has never dropped a temperature anywhere.
In all their Arctic work they have never shown a “station” get colder. Why?
They have also shown near 100% correlation [dare I use the 97% meme, heck why not] with any other data sets they have tested.
Inconceivable!
Inigo Montoya: You keep using that word. I do not think it means what you think it means.
Judith, it may be time for a review of Cowtan and Way, in particular with their Arctic warming continuing over the last 2 years [betcha] while the Arctic has refrozen. Time for their alter egos Way and Hansen to come out with Kriging in the North Pacific. Don’t you all know the heat has been hiding there all the time just waiting for our new method to show it.
Don’t be too hard on Cowtan and Way. Remember, as the AMO goes negative and the Arctic ice recovers, theirs is the methodology that will show the most cooling.
No, their methodology forbids cooling.
read it
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf
No matter what data set we throw at the problem, the answer comes out the same.
Hadcrut and Giss get the arctic wrong.
For the above analysis, I passed on AIRS data to C&W.
For the longest time skeptics criticized the methods of hadcrut and GISS.
So now you have two new looks at the data.
Different methods, different data, validation of the methods by looking at previously unused data, validation of the method by looking at alternative instruments (satellites). Multiple satellite datasets, reanalysis.
There is not one bit of data, not one shred of evidence, that Hadcrut and GISS are right. None. zero. nada
Wasn’t “disaster” when it wasn’t warming?
Don’t be too hard on Cowtan and Way. Remember, as the AMO goes negative and the Arctic ice recovers, theirs is the methodology that will show the most cooling. – steven
I suspect that when they publish C&W July it will be level, perhaps with a bit of a jump, because warmth leaked from Siberia into the Arctic and fell off the other series.
As for the AMO, it’s spiking up after the new warming regime shift in 2013, and my hunch is the PDO will rejoin it when July is published in the next weeks.
Will do that today, wife permitting.
Will put up cogent argument to back up my points .
Can you tell me where Cowtan and Way have ever diverged from their (cough, cough) hindcasting checks with other models, and where they have ever shown a negative anomaly in the Arctic?
Note: such must exist, and their non-existence strongly suggests they are not dealing with reality.
However the CW14 data show a trend of approximately 0.03 °C/decade greater than GISTEMP over a period of 16 years.
Is this all they can say, they detected no difference????!!!
shaking in my boots Steven and wondering what in the heck I was worried about if they show no difference.
However the same regional trends also present serious challenges for station homogenization algorithms, which depend on the assumption that climate trends are spatially correlated over moderate distances.
Really.
The Steven Mosher I know stated that this is a fundamental basis of physics, twice and loudly. Now his mates deny this and he rolls over for them.
“well the Arctic is an exception and Robert is such a nice trustful guy.”
Our initial speculation that the principal difference between GISTEMP and CW14 arose from the choice of sea surface temperature dataset was incorrect: [Note: not C and W incorrect algorithms, heavens, no] the majority of the difference is in the Arctic, and arises from differences in the input land temperature data from meteorological stations.
Really?
let’s summarize.
Most of the Arctic is sea, not land.
There are very few meteorological stations in the area with very limited data available.
But if we take a left turn at Albuquerque we can add the temperature from some non-adjacent stations, like Death Valley and the Sahara, which are better proxies for the Arctic than some Greenland-based station which was still too cold when we adjusted it?
and hey presto CW 2014
There is some evidence that the homogenization adjustment algorithm used in the GHCN station data is attempting to eliminate some of the rapid Arctic warming over the study period.
One can be sick without putting the fingers down the throat.
How can one make such a blatantly wrong comment?
Zeke explicitly explains why most GHCN homogenisation results in a warming of the data; you know, it’s called the TOBS adjustment and new-fangled thermometers being introduced, etc.
A few rogue cells are also visible arising from CRUTEM4 stations which require homogenization adjustments due to changes in station equipment, location or operating practices.
Damn those rogue cells heh.
…warming on the Chukchi side of the central Arctic, slower warming off the coast of Greenland. Neither of these features, if real, would be captured by the thermometer records since there are no land-based weather stations in these regions.
So where they claim the warming, there are no land-based stations anywhere to back these claims.
“The close proximity of regions of warming and cooling on both the Eurasian and Alaskan Arctic coasts mean that it is possible for neighboring stations to show very different temperature trends. Automated homogenization could potentially introduce unnecessary adjustments to reconcile these trends.”
Not allowed to have weather now, are we??
How dare neighboring stations show different trends. We need a Dalek to take care of these rogue cells? C3PO? No, CW.
Two stations in the Kara Sea and one in the Beaufort Sea have homogenization adjustments which appear to be inconsistent. They are GMO IM. E. T., OSTROV VIZE, and BARTER ISLAND; labeled 3, 4 and 8.
Exterminate.
The 3 problem stations account for nearly 40% of the difference between GISTEMP and CW14. Other unidentified differences in the station data at locations present in both datasets also contribute nearly 40%. The remaining difference arises from stations present in CRU but absent in GHCN.
All of the difference between GISTEMP and CW14 in the Arctic can be accounted for by differences in the input station data.
So we got rid of the low outliers and hey presto the world is warmer in the Arctic at least.
Why not remove the 3 highest readings, Steven? Counting that as 40%, remove another one and a half times that and see how cold you could make the Arctic. We could call it the Mosher and Angech counter-krig method and be famous as well.
We would like to thank Matthew Menne and Claude Williams for helpful discussions relating to this problem. We are also grateful to Zeke Hausfather for comments and Steve Mosher for help with the AIRS data.
Both of whom would surely disagree with the comment at the end of the paper, since Zeke has explained here how adjustments were necessary over the satellite period [not to the satellite data] to the land-based station records, for TOBS and thermometer changes, resulting in at least a 1.4 degrees F rise in global temps over this time.
“Nonetheless, claims that GHCN adjustments contribute to the warming trend over the satellite era are unfounded.”
Thanks for posting that information here; It’s good to have input :-)
Robert Way, thank you for the links.
I could not download Rupert Darwall, and Ross McKitrick gets into really incomprehensible statistics which throw up a 16–26 year hiatus length as one possibility. Now that the duration of the hiatus/pause is at the center of attention, let us admit that we do not know the future. There are of course various possibilities, and those who wish to advance them must present scientific reasoning to justify their view. Since the hiatus has lasted a goodly number of years by now, any attempt at predicting its future must include a satisfactory explanation of what has happened before. If you are going to say that there is greenhouse warming yet to come, you must first explain why there is none today and why there has been no sign of it in the entire twenty-first century. Greenhouse theory, after all, is part and parcel of the global warming theory that tells us to pay for our carbon emissions to save the world.

The Arrhenius greenhouse theory has been predicting warming, based on the belief that increasing carbon dioxide in the atmosphere will cause warming. That was supposed to be guaranteed by Hansen, who told the Senate in 1988 that he had discovered the greenhouse effect himself. But it is not working like that, which means that the current greenhouse theory must be discarded. The problem, no doubt, is that Hansen did not actually discover the greenhouse effect in 1988 like he said. What he said is that there was a 100 year period of greenhouse warming which proved that the greenhouse effect existed. When the years were counted, at least thirty of them were not greenhouse years, and this nullifies his claim.

It turns out that there is another greenhouse theory that correctly predicts what happens where Arrhenius fails. The alternate is Miskolczi greenhouse theory (MGT). Their difference is that while Arrhenius can handle only one greenhouse gas – carbon dioxide – MGT can handle several that absorb simultaneously in the IR. Carbon dioxide is not even the most important greenhouse gas in the air – water vapor is. There is on average 2 to 3 percent of it in the atmosphere, several hundred times more than carbon dioxide. Carbon dioxide and water vapor are the most important greenhouse gases in the atmosphere, and according to MGT they form a joint optimal absorption window which they control. The optical thickness of this window is fixed at 1.87, determined by Miskolczi himself from first principles. If you now add carbon dioxide to the atmosphere it will start to absorb, just as the Arrhenius theory says. But this will increase the optical thickness. And as soon as this happens, water vapor will begin to diminish, rain out, and the original optical thickness is restored. The introduced carbon dioxide will keep absorbing of course, but thanks to the simultaneous reduction of water vapor the total absorption remains constant and no greenhouse warming is possible. This is why the constant addition of carbon dioxide to the atmosphere is unable to cause any warming today.

This is not the first time it has happened. In 2010 Miskolczi was studying the absorption of IR by atmospheric carbon dioxide over time. He used a NOAA radiosonde database going back to 1948 and discovered that absorption stayed constant for 61 years while carbon dioxide at the same time went up by 21.6 percent. Constant absorption means no warming, and here we have a perfect parallel to the current hiatus from an entirely different time period – no warming while carbon dioxide keeps increasing.
There is one more example of no warming that is closer to us – the eighties and the nineties. This particular instance is not known to most people because it is covered up by fake warming in temperature curves emanating from HadCRUT, GISTEMP, and NCDC. I have known about and talked about this since I wrote my book “What Warming?” two years ago, but nobody has taken notice, much less action, about this outrageous forgery. I have written comments about it, most recently to Anthony a few days ago. You could look it up and read it by clicking on the URL below:
*
http://wattsupwiththat.com/2014/08/21/cause-for-the-pause-32-cause-of-global-warming-hiatus-found-deep-in-the-atlantic-ocean/#comment-1715234
***********************************************************************
So, Hansen, Mann, and a few others had a “Eureka” moment back in the 80’s in terms of the controlling force of CO2. They pitched it. The majority of the scientific community and public bought it hook, line, and sinker. It was quite a marketing campaign. Now it is established “fact” among a clear majority of the news media, politicians, and educational systems.
Where were the responsible scientists back then to challenge the faulty hypothesis? As a layman I appreciate all of the efforts being put forth now (as in this thread) to bring some sanity to the debate, but I fear it may be too little, too late. Just watch and see how the climate march in NYC is received by the media and the politicians later this month.
Fear not. Marches never change minds.
They’ve begged Al Gore not to show up for fear of a blizzard. Gaia’s shocked that they think they can fool her so easily.
=================
Shaun Lovejoy – Thank you for the explanations.
The statistics needed to bound the empirical 125 year global temperature changes have tails with exponents 4 and 6 (the exponent 5 works best…); see the probability distributions in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf. There is therefore a theory (scale invariance) justifying such distributions, as well as much empirical data. But even if the theory is rejected, they can still be used as extreme bounds – all that is needed to reject the natural variability hypothesis with high levels of certainty.
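[Ed. note: to make the tail-bound argument above concrete, here is a minimal sketch of how a fat-tailed exceedance bound with exponents in the 4–6 range might be applied. The scale parameter and the warming figure below are placeholder assumptions for illustration, not values taken from Lovejoy’s paper.]

```python
import numpy as np

# Illustrative sketch only: a power-law (fat-tailed) exceedance bound of the
# kind described above, P(|dT| > x) ~ (x/scale)**(-q) for large x, with tail
# exponents q between 4 and 6. The scale constant and the observed warming
# are placeholder values, NOT numbers from the paper.

def tail_exceedance(x, scale, q):
    """Pareto-type exceedance probability P(|dT| > x) for x >= scale."""
    return (x / scale) ** (-q)

observed_warming = 0.9      # deg C over ~125 years (illustrative)
typical_fluctuation = 0.2   # deg C natural fluctuation scale (assumed)

for q in (4, 5, 6):
    p = tail_exceedance(observed_warming, typical_fluctuation, q)
    print(f"q = {q}: P(natural fluctuation >= {observed_warming} C) ~ {p:.2e}")
```

The point of the bound is that even with very fat tails, a large centennial-scale excursion becomes improbable; the whole argument, of course, stands or falls on the assumed fluctuation scale and exponent.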
Should we distinguish between ‘black swans’ and ‘dragon-kings’? e.g.
https://www.youtube.com/watch?v=vuvbghZuM8U
It is the difference between the inherently unpredictable (black swans) and noisy bifurcations (dragon-kings) that are potentially predictable in both climate and economics. There is perhaps some overlap in Mandelbrot’s ‘grey swans’, which are weakly predictable. The system is chaotic, and chaos occurs on all scales in time and space.
What this has to do with rejecting the natural variability hypothesis is perplexing, and how the past 125 years could be used to bound natural variability seems, frankly, nonsense. Quantified variability over the Holocene documents variability in hydrology and – presumably – temperature considerably outside the limits seen in the 20th century.
Moy et al (2002) present the record of sedimentation shown below, which is strongly influenced by ENSO variability. It is based on the amount of red sediment in a lake core; more sedimentation is associated with El Niño. It has continuous, high-resolution coverage over 12,000 years. It shows periods of high and low ENSO activity alternating on a period of about 2,000 years. There was a shift from La Niña dominance to El Niño dominance some 5,000 years ago that was identified by Tsonis (2009) as a chaotic bifurcation, and it is associated with the drying of the Sahel. There is a period around 3,500 years ago of high ENSO activity associated with the demise of the Minoan civilisation (Tsonis et al, 2010). That period had ‘red intensity’ (El Niño) in excess of 200; the red intensity for the 97/98 event was 98. It shows ENSO variability considerably in excess of that seen in the modern period.
http://watertechbyrie.files.wordpress.com/2014/06/moys-20022.png
It is difficult to imagine what could possibly be meant by Lovejoy – only that it appears to be utterly misguided. Sounding more like the webbly all the time in fact.
‘The statistics needed to bound the empirical 125 year global temperature changes have tails with exponents 4 and 6 (the exponent 5 works best…); see the probability distributions in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf. There is therefore a theory (scale invariance) justifying such distributions, as well as much empirical data. But even if the theory is rejected, they can still be used as extreme bounds – all that is needed to reject the natural variability hypothesis with high levels of certainty.’
This is a quote from Shaun Lovejoy, above.
There seems to be an error in this paper. Lovejoy presents a temperature model:
T_globe(t) = T_anth(t) + T_nat(t) + ε(t)
that should be
T_globe(t) = T_climate(t).
It’s a nonlinear system. Assumptions of separability lead to circular reasoning.
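[Ed. note: for readers wondering what the additive decomposition T_globe = T_anth + T_nat + ε looks like in practice, here is a minimal sketch in which the anthropogenic part is proxied by a regression on log CO2. Treating log CO2 as the surrogate, and the specific synthetic inputs, are assumptions made here for illustration; this is not Lovejoy’s exact procedure.]

```python
import numpy as np

# A minimal sketch of the kind of additive decomposition under discussion,
#   T_globe(t) = T_anth(t) + T_nat(t) + eps(t),
# where the "anthropogenic" part is whatever an OLS regression on log CO2
# explains, and everything else lands in the residual. Inputs are synthetic.

def decompose(temp, co2, co2_preindustrial=277.0):
    """Split a temperature series into a log-CO2 regression component
    and a residual ('natural variability + noise') component."""
    x = np.log(co2 / co2_preindustrial)
    lam, b = np.polyfit(x, temp, 1)   # temp ~ lam * log(CO2/CO2_pre) + b
    t_anth = lam * x + b
    residual = temp - t_anth          # lumps T_nat and eps together
    return t_anth, residual, lam

# Illustrative synthetic inputs (NOT observational data):
years = np.arange(1880, 2015)
co2 = 290.0 * np.exp(0.0018 * (years - 1880))        # smooth fake CO2 rise
temp = 2.0 * np.log(co2 / 277.0) + 0.1 * np.sin(years / 9.0)

t_anth, resid, lam = decompose(temp, co2)
print("implied sensitivity per CO2 doubling:", lam * np.log(2.0))
```

Note that the residual simply collects whatever the regression cannot explain, which is the substance of the objection: the split assumes additivity and separability rather than demonstrating them.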
It looks like Shaun gave us a special link that uses the term “deniers.” Here’s another link.
http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf
From that link:
While no amount of statistics will ever prove that the warming is indeed anthropogenic, it is nevertheless difficult to imagine an alternative.
A lack of imagination would not seem conducive to scientific inquiry.
Detrended Fluctuation Analysis (DFA)
http://www.physionet.org/tutorials/fmnc/node5.html
Shaun turns up just about everywhere I search for “scaling fluctuation analysis.” Is anyone else in climate science using this technique?
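[Ed. note: for readers unfamiliar with the technique named in the PhysioNet tutorial linked above, here is a minimal sketch of first-order Detrended Fluctuation Analysis. The window sizes and the white-noise test signal are illustrative choices, not anything from Lovejoy’s analysis.]

```python
import numpy as np

# A minimal sketch of Detrended Fluctuation Analysis (DFA-1):
# integrate the demeaned series, split the profile into windows of size n,
# remove a linear fit in each window, and record the RMS fluctuation F(n).

def dfa(series, window_sizes):
    """Return the fluctuation function F(n) for each window size n."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated, demeaned series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        segments = profile[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# Usage: the slope of log F(n) vs log n estimates the scaling exponent alpha.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)              # white noise -> alpha ~ 0.5
sizes = np.array([16, 32, 64, 128, 256, 512])
F = dfa(noise, sizes)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
print("estimated scaling exponent alpha:", round(alpha, 2))
```

Fluctuation-analysis methods of this family (DFA, Haar fluctuations) are used by a number of groups studying long-range dependence in climate series, though Lovejoy is certainly one of the most prolific.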
Just an opinion. I do not think it is scientifically correct to categorize papers by their authors’ race, color, creed, or blog habits. They should be evaluated on their own merits, given their arguments, prior citations, and the analytic methods used. And critiques should be couched in an equivalent manner.
For example 1, perhaps Lovejoy did not adequately recognize the limitations of the proxies used. That is worthy of debate. A link to a site that used the term deniers is not. After all, the link could have been to POTUS.
For example 2, whether CW surface kriging is a legitimate statistical method for that part of the year when the Arctic is not all ice/snow is an interesting question, since the summer surface differences arguably violate the underlying kriging assumptions. That they might also post on SkS perhaps shows a certain lack of social judgement, but that is irrelevant to the methods and conclusions of their technical paper.
Such merit questions require digging into facts and methods. Google et al. enable that. My guest posts here, and my next book, endeavor to illustrate this using specifics.
All else is just opinion.
The link Judy included in the main post used the term “deniers.” I didn’t go looking for it.
But I agree, the piece should be judged on its technical merits. Of course, the use of that term in a scientific letter, as in this case, is questionable nonetheless.
Rud
“Just an opinion. I do not think it is scientifically correct to categorize papers by their authors’ race, color, creed, or blog habits. They should be evaluated on their own merits, given their arguments, prior citations, and the analytic methods used. And critiques should be couched in an equivalent manner.”
Absolutely. Thanks for stating that.
“since the summer surface differences arguably violate the underlying kriging assumptions.”
And [I am very curious], from your understanding, what are those assumptions?* Exactly what is being kriged? And the local estimates – are they from point kriging or block kriging? How are they, or how should they be, interpreted?** …a lot of nitty-gritty.
but we are just talking estimation and more will follow.
—-
* People here seem to poke at the assumptions a little and walk away…too mathish, I guess.
** Yeah, poking at a ‘physical boundary’ argument, including pondering how sharp those boundaries are in the context of how the estimation is configured. Not wildly excited about it, though…finite shelf-life and all that.
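[Ed. note: to make the kriging assumptions being poked at here concrete, below is a minimal 1-D ordinary point-kriging sketch. The exponential variogram and the station values are made up for illustration; Cowtan & Way’s actual implementation uses their own covariance model and geometry.]

```python
import numpy as np

# Minimal ordinary point kriging in 1-D: a locally constant (unknown) mean,
# a known variogram, and an unbiasedness constraint on the weights.

def exp_variogram(h, sill=1.0, rng=500.0, nugget=0.0):
    """Exponential semivariogram; parameters are illustrative."""
    return nugget + sill * (1.0 - np.exp(-np.abs(h) / rng))

def ordinary_krige(xs, zs, x0, variogram=exp_variogram):
    """Estimate the field at x0 from samples (xs, zs)."""
    n = len(xs)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = variogram(xs[:, None] - xs[None, :])
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = variogram(xs - x0)
    sol = np.linalg.solve(K, rhs)
    weights = sol[:n]                 # sol[n] is the Lagrange multiplier
    return weights @ zs, weights

# Usage on made-up data (station anomalies along a transect, distances in km):
xs = np.array([0.0, 120.0, 380.0, 700.0, 950.0])
zs = np.array([0.4, 0.6, 0.9, 0.3, 0.1])
estimate, w = ordinary_krige(xs, zs, x0=500.0)
print("kriged estimate at 500 km:", round(estimate, 3), "weights:", np.round(w, 3))
```

The constraint that the weights sum to one encodes the assumption of a locally constant mean and a stationary covariance structure; a sharp ice/open-water boundary in summer is exactly the kind of feature that strains those assumptions.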
I agree with you for the most part, Rud. But what if you see some private e-mails stating that there is something they want to hide or get rid of, and then later (often a few months earlier for privileged folks) the paper, lo and behold, has gotten rid of the pesky event? I would think it is OK to “diss” their motivated “science” at the same time you criticize the methods they used.
Not referring to any particular paper in my above statement. Just a general statement.
Today is an example of why I really like CE. Through Mosher, I learned about Ross McKitrick, went to his webpage, and read some of his papers. McKitrick is an interesting person beyond his views on the “Pause”: he is a person with conservative values advocating bottom-up actions rather than “liberal” top-down, command-and-control policies.
Dr. Ramanathan’s “fast mitigation” (low-level ozone, black carbon, methane, HFCs) is totally consistent with McKitrick’s views.
McKitrick also had the idea of a carbon tax indexed to the temperature rise, which I think would be good, but for some reason he would use only tropical ocean temperatures, not global. I would index it at $10 per tonne for every 0.1 C above the 2000–2010 decadal global average, for example, using only moving decadal averages (since they vary slowly), perhaps updated every 5 years.
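[Ed. note: a toy sketch of the indexing rule the commenter describes, purely to show the arithmetic. The baseline anomaly is a placeholder, and this is the commenter’s illustration, not McKitrick’s actual T3-tax proposal.]

```python
# Indexing rule as described in the comment above: $10/tonne of CO2 for every
# 0.1 C the latest decadal-mean temperature sits above the 2000-2010 mean.

BASELINE_DECADAL_MEAN = 0.45   # deg C anomaly, placeholder for the 2000-2010 mean
RATE_PER_TENTH_DEGREE = 10.0   # dollars per tonne CO2 per 0.1 C above baseline

def indexed_carbon_tax(decadal_mean_anomaly):
    """Tax rate in $/tonne given the latest decadal-mean temperature anomaly."""
    excess = max(0.0, decadal_mean_anomaly - BASELINE_DECADAL_MEAN)
    return RATE_PER_TENTH_DEGREE * (excess / 0.1)

for anomaly in (0.45, 0.55, 0.70):
    print(anomaly, "->", indexed_carbon_tax(anomaly), "$/tonne")
```

Using slowly varying decadal means keeps the rate from jumping around with ENSO noise, which is presumably the appeal of the scheme.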
Jim D – That McKitrick supports a carbon tax is surprising. In his presentations he strongly advocates the view that we should think of GW not as a single problem but as 1,000 problems that could be addressed at the local level. As you might know, I really don’t like “liberal ideological” policies like carbon taxes or cap-and-trade.
I agree. A carbon tax is a good idea, indexing to actual temperature is a winner. But there’s a need to offset this tax with reductions in VAT, sales tax, an