by Judith Curry
On the time scale of a few decades ahead, regional variations in weather patterns and climate will be strongly influenced by natural internal variability. The potential applications of high resolution decadal climate change predictions are described in this CLIVAR doc. Based upon my own interaction with decision makers, I see a need on these time scales that is primarily associated with infrastructure decisions. Sectors that seem particularly interested in predictions on this timescale are city and regional planners, the military, and the financial sector.
It is the combination of this natural variability and forced anthropogenic climate change that is of particular interest. Natural variability dominates regional climate change in many locations. Further, decision makers need to know whether climate is varying because of natural variability, and hence can be expected to reverse at some point, or whether it is changing as the result of irreversible anthropogenic forcing.
So how should we approach this problem? Is there predictability in the climate system on these timescales? If so, how can this predictability be realized and converted into useful predictions?
The targets of interest on the timescale of the next two decades are:
- the evolution of the global average temperature anomaly, for understanding the relative roles of anthropogenic versus natural climate variability/change
- the evolution of regional climate variability to support regional decision making: average temperature and precipitation; extreme events (heat/cold waves, floods, droughts, hurricanes, wildfires, etc).
Possible strategies for making decadal climate predictions on both global and regional scales include:
- global climate model simulations, including dynamical downscaling methods using regional climate models forced by the global climate model simulations.
- statistical forecast methods that combine projections of forcings with known modes of natural internal variability (a minimal sketch of this approach follows the list)
- statistical/dynamical methods that use elements of both of the previous approaches
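As a minimal sketch of the statistical strategy (purely illustrative; the predictor and scenario series are placeholders, not any group's operational inputs):

```python
# Sketch of a statistical decadal forecast: regress a regional anomaly series
# on a forced-response proxy plus indices of natural modes (e.g. AMO, PDO),
# then project forward using scenario values of those predictors.
import numpy as np

def fit_statistical_model(y, forced, modes):
    """y: regional anomalies; forced: forced-response proxy; modes: 2-D array
    whose columns are mode indices (AMO, PDO, ...), all on the same time axis."""
    X = np.column_stack([np.ones_like(forced), forced, modes])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def project(coef, forced_future, modes_future):
    X = np.column_stack([np.ones_like(forced_future), forced_future, modes_future])
    return X @ coef

# Hypothetical usage (all inputs are placeholders):
# coef = fit_statistical_model(obs_anom, forcing_proxy, np.column_stack([amo, pdo]))
# forecast = project(coef, forcing_2011_2030, np.column_stack([amo_scen, pdo_scen]))
```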
Climate model projections from CMIP3
While the CMIP3 20th century simulations used in the AR4 show some average skill on subcontinental scales (e.g. the U.S.), they show little skill on regional scales, and none in many regions (notably the southeastern U.S., a location that I have investigated closely). One strategy that has been used for future projections of regional climate change is to take the CMIP3 21st century simulations and use these fields to force higher resolution regional climate models (referred to as dynamical downscaling). The idea is to nest a regional climate model within the global model, stepping through successively higher resolution grids from the continental scale (say, the U.S.) down to the regional and local scales of interest. An ambitious statistical downscaling effort for the U.S. climate based on the CMIP3 simulations is described here.
The challenge of decadal scale predictability
The IPCC AR4 projected a near term global average temperature increase of 0.2C per decade for the next few decades, and showed that global average surface temperature is insensitive to the choice of emission scenario prior to about 2060. For projections on the time scale 2010-2030, the strategy described above is marginally useful owing to problems that the global models have with the modes of natural internal variability, which dominate the global warming signal on this time scale.
The challenge for prediction on the time scale 2010-2030 is that it is both a boundary value and an initial value problem. Hurrell et al. (2009) describe the challenge:
Efforts to predict the evolution of climate over the next several decades that take into account both forced climate change and natural decadal-scale climate variability are in their infancy. Many formidable challenges exist. For example, climate system predictions on the decadal time scale will require initialization of coupled general circulation models with the best estimates of the current observed state of the atmosphere, oceans, cryosphere, and land surface – a state influenced both by the current phases of modes of natural variability and by the accumulated impacts to date of anthropogenic radiative forcing. However, given imperfect observations and systematic errors in models, the best method of initialization has not yet been established, and it is not known what effect initialization has on climate predictions. It is also not clear what predictions should be attempted or how will they be verified. The brevity of most instrumental records furthermore means that even the basic characteristics and mechanisms of decadal variations in climate are relatively poorly documented and understood. As a consequence, the representation of natural variability arising from the slowly-varying components of the climate system differs considerably among models, so the inherent predictability of the climate system on the decadal time scale is also not well established. Demands will therefore be made on observations, particularly ocean observations, not only to describe the state of the climate system and improve knowledge of the mechanisms that give rise to decadal fluctuations in climate, but also to provide the optimal observations for decadal climate predictions and their verification.
Basis for decadal scale predictability
Excerpts from Hurrell et al.:
a) External forcing
A significant source of predictability on the decadal time scale is associated with radiative forcing changes. Past emissions of greenhouse gases have committed the climate system to future warming as the ocean comes into equilibrium with the altered radiative forcing (Hansen et al. 2005; IPCC 2007). Changes in solar irradiance and volcanic activity in the recent past also can provide some level of decadal predictability as the climate system responds to these forcing changes on decadal scales.
The best possible estimates of future radiative forcing changes are also needed for predictions. Estimates of future emissions of radiatively important pollutants are needed for making predictions, as well as modeling capabilities to accurately simulate both how these pollutants affect the global energy, carbon and sulfur cycles, and how the climate system subsequently responds to that altered forcing. In this regard, future external forcing from greenhouse gases is likely to provide significant regional decadal predictability (e.g., Lee et al. 2006), since the increase of concentrations over the next 30 years is about the same no matter what emission scenario is followed (Hibbard et al. 2007). While man-made aerosols can be washed out of the atmosphere by rain in just a few days, they tend to be concentrated near their sources such as industrial regions, and can affect climate with a very strong regional pattern. Future changes in anthropogenic aerosols, therefore, could have very significant regional climatic impacts on decadal scales. Unpredictable volcanic eruptions can be a significant “wild card” to decadal climate predictions, although techniques to handle this aspect are under consideration. Similarly, only very general features of the 11-year solar cycle can be projected, but could provide some decadal scale predictability. The influence of the stratosphere, by transmitting external forcing signals to the troposphere, might also be a significant source of predictability.
b) Natural internal variability
Regionally, the predictability of SST can be higher than for the global field (not shown), with the highest levels on decadal time scales over the middle to high latitude ocean areas of both hemispheres, especially in regions where the surface layer makes contact with the deeper ocean beneath (Boer and Lambert 2008). A fundamental precept in predictability is the notion that long-lived variations, such as those associated with the PDO or changes in the strength of the Atlantic MOC, can be predicted for a significant fraction of their lifetimes. This is simply a reflection of the fact that the persistence of the variation implies a stable balance that permits the variation to have an extended lifetime. Thus, there is some confidence that naturally occurring climate variations with decadal time scales can, at times, be predictable given an accurate initial state. These times are likely to be when a significant amplitude variation exists. At other times, particularly the nascent phase of variation growth, the predictability of variations is likely to be quite delicate and require a very accurate depiction of the current state of the climate system if there is to be any hope of accurate prediction.
Finally, it should be noted that many climate and biogeochemical variables exhibit long-term persistence that could be exploited using statistical forecasting schemes. Physical damping of high-frequency variability increases the decadal signal to noise ratio and hence the potential predictability on decadal timescales. Simple linear multivariate decadal prediction schemes that exploit the long-term damped persistence of certain physical processes may, in fact, be quite successful (e.g., Lean and Rind 2009), but they rely heavily on long-term data sets to accurately estimate the covariance matrix. With their potential for long records, paleoclimate reconstructions may be of use in estimating such statistical relationships and developing predictive models. For example, Enfield and Cid-Serrano (2006) developed a statistical model for predicting regime shifts in the AMO; similar methodology could be applied for other climatic shifts.
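In their most stripped-down form, the "simple linear multivariate decadal prediction schemes" mentioned above might look something like this damped-persistence sketch (illustrative only, not the cited schemes):

```python
# Sketch of a damped-persistence decadal predictor: damp the current anomaly
# of a slowly varying index toward climatology using its fitted lag-1
# autocorrelation. Real schemes are multivariate and need long records to
# estimate the covariances reliably.
import numpy as np

def damped_persistence_forecast(series, lead):
    """series: annual anomalies of a persistent index (e.g. AMO); lead: years ahead."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
    return x[-1] * r1 ** lead                # damped toward the mean

# e.g. damped_persistence_forecast(amo_index, lead=10)   # amo_index is hypothetical
```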
CMIP5 simulations
In recognition of this need for improved decadal scale predictions, CMIP5 is coordinating decadal hindcast and prediction experiments, which will be used in the IPCC AR5. The following information on the CMIP5 decadal simulations is drawn from the presentation made by James Hurrell at the Fall AGU meeting:
- 10 year integrations with initial dates from 1960-2005
- 1960, 1980, and 2005 integrations extended 30 years
- Ensemble predictions (minimum 3 members)
- Ocean initial conditions should be in some way representative of the observed anomalies or full fields for the start date
- Land, sea-ice and atmosphere initial conditions left to the discretion of individual modeling groups
The CMIP5 simulations are not yet publicly available. Some results from decadal scale simulations can be obtained from
- Latif presentation at WCC (2009)
- NCAR CCSM results
Forecast: 2011-2020
Hoerling et al. (2010, submitted) have conducted an interesting set of experiments that make predictions for the North American climate for 2011-2020:
North American mean surface air temperature and precipitation are predicted for the upcoming 2011-2020 decade. Multiple climate models forced by various plausible scenarios for the 2011-2020 change in ocean surface boundary conditions are first employed in order to estimate the forced response, and its uncertainty, to expected changes in anthropogenic forcing. A full probabilistic decadal forecast is then generated by commingling the statistics of the forced response with those arising from internal decadal sea surface temperature (SST) and sea ice variability. The latter are estimated from a multi-model suite of 20th Century atmospheric climate simulations driven by the observed time history of SST and sea ice variations.
The prediction is characterized by surface warming over the entire continent and precipitation decreases (increases) over the contiguous United States (Canada) relative to 1971-2000 conditions. The signs of these signals are robust across the scenarios and the models employed, though their amplitudes are not. An assessment of the sources of forecast uncertainty reveals comparable sensitivity to the various scenarios of forced SST change, model dependency, internal atmospheric noise, and internal decadal SST variability. Taking these sources of forecast uncertainty into account, predictions for the 2011-2020 decade indicate a 94% and 98% probability for warmer than normal conditions over the U.S. and Canada, respectively, a 99% probability of wet conditions over Canada, and a 75% probability of dry conditions over the U.S.
The key element in these simulations is selecting scenarios for the 2011-2020 sea surface temperatures and sea ice extent:
Three scenarios for the 2011-2020 SST change (relative to 1971-2000) due to anthropogenic GHG emissions are generated (Fig. 1). One uses the 22 model average SST anomalies computed from the CMIP3 simulations (Meehl et al. 2007) subjected to the SRESA1B emissions scenario. The other two are derived from the temporal optimal detection method (Ribes et al. 2010) in which the temporal pattern of the forced response over 1901-2020 is derived from the CMIP3 simulations, but the spatial pattern of change is derived from observations. In particular, the temporal pattern is generated by averaging global mean surface air temperature over the 22 separate CMIP3 models for each year, and further imposing a smoothing constraint as in Ribes et al. (2010). Its structure resembles a linear increasing function during the first half of the 20th century and an exponentially increasing function in later decades. A spatial pattern is computed by regressing observed SST upon the temporal pattern for 1901-2009, and the 2011-2020 anomaly amplitudes are then derived from the temporal optimal for 2011-2020. Given the observational uncertainty in estimating the spatial pattern of the centennial SST trend (e.g., Deser et al. 2010), two datasets, the NOAA Extended Reconstruction version 3b analysis (Smith et al. 2008) and the Hurrell analysis (Hurrell et al. 2008), are used. For Arctic sea ice, we use a single scenario that involves persisting the recent (2007-2009) pattern of observed monthly sea ice concentration. This was a period of record low Arctic sea ice extent and concentration (see Fig. 1 in Kumar et al. 2010).
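Schematically, the regression step described in that excerpt amounts to something like the following (an illustration of the general idea only, not the authors' code; the array shapes and names are assumptions):

```python
# Sketch of building a forced-SST scenario: regress observed SST at each grid
# point on a smoothed temporal pattern of forced response, then scale the
# resulting spatial pattern by the projected 2011-2020 amplitude.
import numpy as np

def forced_sst_scenario(sst_obs, temporal_pattern, amp_2011_2020):
    """sst_obs: (nyears, nlat, nlon) observed SST anomalies, e.g. 1901-2009;
    temporal_pattern: (nyears,) smoothed multi-model forced response;
    amp_2011_2020: scalar amplitude of that pattern for 2011-2020."""
    t = temporal_pattern - temporal_pattern.mean()
    w = t / (t @ t)                                        # regression weights
    spatial = np.tensordot(w, sst_obs - sst_obs.mean(axis=0), axes=1)
    return amp_2011_2020 * spatial                         # (nlat, nlon) SST change
```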
For information about the AMO (including a plot), see here and for the PDO see here.
JC’s assessment of climate model decadal predictions
The simulations that seem most relevant to the problem at hand are the CMIP5 simulations initialized in 2005 and run for 30 years (out to 2035). Based upon my experience with subseasonal and seasonal predictability, it seems that predictability is greatest when the model is initialized in a well established regime. For example, for subseasonal forecasts this depends on where in the cycle of the Madden-Julian oscillation the model is initialized; for seasonal forecasts, it depends on where in the ENSO cycle the model is initialized. The well-known spring (April) ENSO prediction barrier arises because this is the season of transition. If this same general principle holds for decadal forecasts, then 2005 is a good year to initialize in terms of the AMO/NAO/AMOC, since 2005 was a peak (if not the peak) of the current warm phase of the AMO. This means that regional climate features that are sensitive to the AMO should be well represented (e.g. Sahel drought, North Atlantic hurricanes). In terms of the Pacific, 2005 was a transition year for the PDO, so I would hypothesize that the models will have more difficulty simulating the correct evolution of the PDO.
The experimental design of the Hoerling et al. paper is probably preferable out to one decade, since you can take your knowledge of the ocean oscillations, project forward the sea surface temperature and sea ice characteristics, and then see how the atmosphere and land surface respond. How far into the future such an experimental design remains useful depends on when you initialize it. If the model is initialized in 2010, with the warm AMO and cool PDO well established, this strategy should work for another 15 and possibly 20 years, depending on how long we can expect the AMO to remain in its warm phase.
The key challenge of multi-decadal climate forecasting is prediction of the change points (transitions) of the major ocean oscillations. Again, based on my experience with probabilistic seasonal forecasting, the only way I see to do this potentially with any skill is to select the models that do a relatively good job at simulating the key features in hindcast mode, and then select the ensemble members from these models that compare best with observations for the first year or two of the simulation. The rationale for such a selection is that ensemble members that get off to a good start are more likely to be on a good trajectory going forward. I look forward to getting my hands on the CMIP5 simulations.
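As a sketch of what such ensemble screening might look like (illustrative only; whether it actually adds skill is exactly the open question):

```python
# Sketch of screening ensemble members by early agreement with observations:
# rank members by RMSE over the first year or two, keep the best-matching ones.
import numpy as np

def select_members(ensemble, obs, n_keep, window=24):
    """ensemble: (nmembers, nmonths) forecast index (e.g. PDO) per member;
    obs: (nmonths,) observed index; window: verification window in months."""
    err = np.sqrt(((ensemble[:, :window] - obs[:window]) ** 2).mean(axis=1))
    keep = np.argsort(err)[:n_keep]
    return keep, ensemble[keep]
```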
Part II
Roger Pielke Sr. asks whether it is worth the expenditure to improve multi-decadal climate forecast models. The answer he comes up with is “no.”
The subject of Part II will be statistical modeling approaches, including issues related to solar variations, the ocean oscillations, and the interesting new ideas at the border of our ignorance related to magnetic fields, extraterrestrial influences, and some of the other issues raised by participants here.
Judith,
I have one suggestion for your blog. Could you include more plots/images in your posts? Reading all the text without them makes it hard to understand your points and focus on what you are trying to say.
Just my suggestion.
Wind hook, a valid point, but I have a difficult time putting figures in and getting them sized correctly; plus it rapidly eats up space, and there are copyright issues (everyone else does it, but I am not sure about the copyright issues on a blog). I try to provide links to the relevant figures. Let me put a few more links in for figures.
Judith, you ask “So how should we approach this problem? Is there predictability in the climate system on these timescales?”
Simple answers. Don’t approach the problem. There is no predictability in any of the climate models on any future timescale, since none of them have been validated.
Jim Cripwell
With all due respect, none of us have been validated, either, and yet many of us are quite predictable.
Some of us have complete certainty, too. ;)
I am a PhD Mechanical Engineer with 47 years of experience developing computer simulation models whose results have consequences for public safety and human life, an outsider to the climate change research community, and an AGW skeptic. As a skeptic, I very much appreciate the leadership Dr. Curry has provided in engaging the broader scientific community in discussions of the proper and ethical use of current climate change models. I believe the proposed a priori decadal simulations are one of the best ways for climate change modelers to get objective feedback on the difficulties they face in obtaining accurate 50 year simulations, and to bring some sobriety to their enthusiasm for allowing politicians to try to legislate changes in public policy using predictions of unvalidated models.
I don’t see how down-scaling does anything other than consuming resources. The Southeastern US shows a remarkable dependence on ENSO, for example. If the global model doesn’t get ENSO right, how can a down-scaled model of the SE USA with boundary conditions set incorrectly possibly be correct?
I’d make two points about this:
1. There is massive uncertainty in long-term (50/100 year) models, let alone decadal models. What evidence do we have that we can do this at all, in any meaningful, useful way? Isn’t one of the possible decisions that we should refrain completely from trying to make predictions used for policy planning purposes, when those predictions have so little proven skill? Wouldn’t it just be irresponsible – in the same way that climate science has been irresponsible over the last 20 years – to do this?
2. I suspect even the most meaningful predictions would be likely to be used inappropriately. Given that 95% (guess) of the variability over a decade will be due to weather, not climate, the climate predictions over decadal periods – even if they’re accurate – hold little useful information. All that’s likely to happen is that the predictions are misapplied in poor policy (the policy seems to flow from the ‘headlines’ not the details of the predictions). This has been the case here in the UK, where the Met Office’s junk science has been used to justify cutting back on reserves of road grit & salt.
I know it will be anathema to you, but it may be that the current state of climate science has little or nothing to offer policy makers. A false sense that we know what is coming may leave us much worse prepared for real weather than just planning based on prior practical experience.
…it may be that the current state of climate science has little or nothing to offer policy makers.
That’s my take. I’m all for climate scientists working on the big problems of understanding and forecasting climate, but until they have a proven track record for accurate climate forecasts (not tweaking models until the hindcasting comes out right) I question the usefulness of providing forecasts to decisionmakers much beyond “it’s probably going to get warmer.”
Meta-question: At what point do we trust infant sciences for forecasts upon which we will make huge, expensive decisions at the global level?
Meta-question 2: How likely are climate scientists to overestimate the accuracy and importance of their work?
My sentiments exactly, based on 47 years of experience developing and using computer simulation models where answers used by government employee decision makers (NASA) are critical to public safety and risk of human life.
Ceri Reid writes ” This has been the case here in the UK, where the Met Office’s junk science has been used to justify cutting back on reserves of road grit & salt.”
There is also speculation that the Met Office’s junk science was responsible for the recent debacle at Heathrow. 5 inches of snow in any Canadian airport, or, say, O’Hare in Chicago, would be a minor problem; certainly not enough to close any of our airports down at all. With warning from the 2009/10 winter, why did Heathrow not have more snow-clearing equipment? I suspect the Met Office forecasts, based on the false idea that adding CO2 to the atmosphere causes global temperatures to rise catastrophically, are at least partially responsible for what happened recently at Heathrow.
Yes, the idea of a Magic Thermostat seems to have embedded itself in the bureaucratic planners’ minds.
“but it may be that the current state of climate science has little or nothing to offer policy makers. A false sense that we know what is coming may leave us much worse prepared for real weather than just planning based on prior practical experience.” In a nutshell, thanks.
Judy – You state, “The key challenge of multi-decadal climate forecasting is prediction of the change points (transitions) of the major ocean oscillations. Again, based on my experience with probabilistic seasonal forecasting, the only way I see to do this potentially with any skill is to select the models that do a relatively good job at simulating the key features in hindcast mode, and then select the ensemble members from these models that compare best with observations for the first year or two of the simulation.”
This raises the issue of model selection, along with the question as to the importance of initial conditions in determining the accuracy of model output. For very long term forecasts, boundary conditions appear to be more important, but as one shortens the timescale, at what point do initial conditions become critical?
As for model selection, the criteria appear to be somewhat nebulous, and there still appear to be circumstances in which even the best model fails to outperform (and may even underperform) the mean of multiple ensembles. I referenced the following in an earlier thread, but it may be relevant here:
Multi-Model Ensembles
Fred, initial conditions are definitely important for the 10-30 year simulations. There is a lot of debate about how to deal with a multi-model ensemble and weight the individual models.
They also underperform the time-tested best prediction system: “Tomorrow will be similar to today.”
Sounds like one hellava project! I would recommend a statistics watchdog to double check error estimates prior to advising policy makers, thorough review of paleo-reconstruction error estimates, validity of proxy selection and selected proxy end points and grab a couple of those chaos math whizzes. :)
Get the computational fluid dynamics community involved. Their experience modeling chaotic systems is probably the best that is available.
As there was just a recent paper which highlighted the inability of any of the current climate models to reproduce the appropriate climate fluctuations on any time scale, it is difficult for me to see the logic of this effort.
As the UK Met Office’s recent experience illustrates, it is better to be humble and admit our limitations rather than to raise unjustified expectations.
Dr Curry:
There is no possibility of using GCMs to predict the next 20 years. This is demonstrated by the last 10 years: no GCM predicted that there would be effective stasis in mean global temperature over the last 10 years.
The best that could be done to predict global climate before 2030 is to assume the established pattern will continue. And this pattern consists of several cycles of which two are dominant in the recent record.
One apparent cycle seems to have a length of ~900 years. It gave us
the Roman Warm Period (RWP)
then the Dark Age Cool Period (DACP)
then the Medieval Warm Period (MWP)
then the Little Ice Age (LIA)
then the Present Warm Period (PWP).
The other major apparent cycle has a length of ~60 years. It gave us alternating 30-year periods of negation and enhancement of the warming from the LIA and, therefore, there was
cooling or no warming before 1910
warming from ~1910 to ~1940
cooling or no warming from ~1940 to ~1970
warming from ~1970 to ~2000
cooling or no warming after ~2000.
If this pattern continues then
cooling or no warming will continue until ~2030 when warming will resume towards temperatures of the peak of the MWP
or, alternatively,
cooling or no warming will continue until cooling towards temperatures of the LIA will initiate before 2030.
Richard
Richard, I am sure you have seen this graph, which puts paid to the meme of “no models predicted such and such a period of “effective stasis in mean global temperature” over the “last 10 years””
Since there has been no effective stasis in temperature over the last 10 years or don’t you look at the various reports of global temperature?
Notice in the graph that some models do predict periods of “even” cooling of ten years or so.
And have you even looked at the results of all published models, since you are categorically saying “no models”
Sorry, I forgot to post my cite, thanks.
http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/
bobdroege:
No, I have not seen that and I have no intention of looking at it. But I would consider any valid information you provide.
Some years ago I posted information on that site similar to the information I have published here. That web site responded by not refuting my point but publishing personal lies, smears and insults about me while banning me from making further posts and making no announcement of the ban.
So, I have good reason to reject anything on that web site because I know they post assertions that have no validity of any kind and refuse to post corrections.
If you have real information then please link to a real source and not to a propagandist slime pit. I would then be pleased to consider the information.
Richard
But you said all models, not all models except those from that slime pit site. So your bias is showing as well as your lack of scepticism.
If you were treated poorly at that site, perhaps this one will provide information about what the models have predicted.
http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php
And how about the fact that it is still warming, ie not stasis in the current temperature trend?
bobdredge:
Thank you for that link, but I do not see a model that predicted the stasis of the last decade, which can be seen here
http://www.cru.uea.ac.uk/cru/data/temperature/
Perhaps you would state the model which accurately predicted it that can be found at the link. Please note that I am talking about a model prediction which used the correct emissions scenario and NOT a model hindcast.
Please note that I have no “bias” but I do have extreme skepticism of unfounded and unsubstantiated assertions that The End Is Nigh.
It is NOT “still warming”. There has been no statistically significant (at 95% confidence) warming for the last 15 years. So, be grateful that fears of warming are so far unfounded.
Happy Christmas.
Richard
> Please note that I have no “bias”
Thanks for the morning guffaw, Richard.
Nice mispelling there Dick,
And another meme about the “It is NOT “still warming”. There has been no statistically significant (at 95% confidence) warming for the last 15 years. So, be grateful that fears of warming are so far unfounded.”
It is still warming, but if you cherry-pick data sets and time frames, you can make that statement.
However, it does not work for all time frames and data sets.
And it only means there is a greater than 5% chance that the warming observed on the data set you mentioned is due to natural variation or data error, rather than a trend due to a forcing.
But you should know all that, after all you are a published climate researcher, n’est-ce pas?
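The trend test at issue here, in its simplest form, is just an OLS slope with a 95% confidence interval; a minimal sketch (which ignores serial correlation and therefore understates the uncertainty):

```python
# Sketch of the simplest version of the "statistically significant at 95%"
# trend test: OLS slope and confidence interval. Serial correlation is
# ignored, so the true interval is wider than this.
import numpy as np

def trend_with_ci(years, anoms, z=1.96):
    x = np.asarray(years, float)
    y = np.asarray(anoms, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (len(x) - 2) / ((x - x.mean()) ** 2).sum())
    return slope, (slope - z * se, slope + z * se)   # deg C per year
```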
bobdroege:
So,
(a) you assert that some model predicted the last decade of global temperature stasis in advance
but
(b) you are either unable or unwilling to cite a model which made that prediction.
Are you sure you are not yet another persona of Derecho64?
Richard
Richard,
So,
(A) I did not assert that a model predicted the nonexistent stasis of global temperature in the last decade or decade and a half.
and
(B) I did provide a cite that shows that there are model outputs that show stable or declining temperatures for near decadal periods of time.
And calling me a sockpuppet is just rude, or is that the way you behave when you can’t make an argument based on scientific facts?
bobdroege:
You adopt the usual AGW-promoter tactic of making an assertion and when proven wrong then denying that you made the claim while appending an enflaming insult.
Anybody can scroll up and see that you wrote:
“Since there has been no effective stasis in temperature over the last 10 years or don’t you look at the various reports of global temperature?
Notice in the graph that some models do predict periods of “even” cooling of ten years or so.”
Both those assertions are falsehoods, and I called you on it.
Also, I did not call you a “sockpuppet”, and I wonder why you think I would wish to insult sockpuppets.
Richard
Richard, check out Easterling, D. R., and M. F. Wehner (2009), Is the climate warming or cooling?, Geophys. Res. Lett., 36, L08706, doi:10.1029/2009GL037810.
I’d be inclined to remove access to climate model output for a while for scientists working on these kinds of problems until they learn some basic experimental design.
Here the question is: “are negative trends unusual in recent surface temperatures?” Determining the null and the appropriate tests warrants at least a few paras discussing the underlying process etc, I would have thought.
Instead the authors use GCM simulations as the null, “containing no anthropogenic or natural forcing factors that have been detrended due to model drift in these experiments.” [my emphasis]. The significance of the observed temperature trends are tested against the distributions derived from this “control”.
By this stage I am lost as to what is being compared against. It might well have been better to simply compare against a normal distribution (at least the criticisms of the assumptions would be more obvious) or perhaps an appropriate ARIMA model.
Further, other GCMs (I trust they were detrended too) are used as surrogates of the way climate might develop over the next decade, and the Abstract claims: “We show that the climate over the 21st century can and likely will produce periods of a decade or two where the globally averaged surface air temperature shows no trend or even slight cooling in the presence of longer-term warming.”
Well, of course they don’t; they only show that climate simulations from GCMs exhibit this behaviour (as the paper’s final few paras acknowledge).
In this sense the paper says more about GCMs and their range of response than anything about the real climate.
As I said at the beginning it would have been a better paper if they had stuck to what is observed and tested against simpler models to make their point.
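One version of the simpler comparison suggested here would test the observed decadal trend against a fitted red-noise (AR(1)) null; a minimal sketch, illustrative only:

```python
# Sketch of testing whether a flat or negative decadal trend is unusual under
# an AR(1) (red-noise) null fitted to the observed series, rather than under
# de-trended GCM control runs.
import numpy as np

def decadal_trend_pvalue(anoms, nsim=10000, seed=0):
    """anoms: annual global-mean anomalies; the last 10 values are the test decade."""
    rng = np.random.default_rng(seed)
    x = np.asarray(anoms, float)
    obs_trend = np.polyfit(np.arange(10), x[-10:], 1)[0]
    d = x - x.mean()
    r1 = np.corrcoef(d[:-1], d[1:])[0, 1]      # lag-1 autocorrelation
    sd = d.std(ddof=1)
    innov = sd * np.sqrt(1.0 - r1 ** 2)        # AR(1) innovation std
    count = 0
    for _ in range(nsim):
        noise = np.empty(10)
        noise[0] = rng.normal(0.0, sd)
        for t in range(1, 10):
            noise[t] = r1 * noise[t - 1] + rng.normal(0.0, innov)
        if np.polyfit(np.arange(10), noise, 1)[0] <= obs_trend:
            count += 1
    return count / nsim   # fraction of red-noise decades with a trend at least as low
```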
So, HAS, write a rebuttal paper in GRL and illustrate how Easterling and Wehner went so horribly wrong. Care to give it a shot?
Derecho64 http://judithcurry.com/2010/12/23/scenarios-2010-2030-part-i/#comment-25359
Quite often in the course of a proof one comes to a step that simply warrants writing “obvious” alongside it, although this doesn’t necessarily mean that everyone “gets it” immediately.
I’m reminded of many, many years ago sitting in a lecture (appropriately, I think, on partial differential equations) where the lecturer wrote “obvious” alongside a step, and someone asked “why?”
The lecturer sat for some time and contemplated the matter, and then left the room only to return 20 minutes later to declare “it was obvious” and continue with the proof.
HAS
Don’t worry. D64’s response to everything is “do the science over from scratch or don’t bother commentating”.
He has yet to share his own, no doubt impressive, resource of publications rebutting Lindzen, Spencer, McIntyre & McKitrick etc. But only because revelation of his secret identity may cause mainstream media scandal, governments to topple etc.
Thanks AnyColour, but if D64 understands my last comment he won’t be back for more.
Of course if he is we’ll just have to send him back to his room for another 20 minutes :)
And did any of those 55 models accurately predict the last ten years before it actually occurred?
Your graph has so much overlap and different colours and stuff it almost looks as though it was produced by the Hockey Team to hide something.
I’m just a layperson, but looking at the graph, it appears to me the models predicted 2000 to 2009, would be the warmest decade in the instrument record, and I believe I’ve read 2000 to 2009 was the warmest decade in the instrument record.
JCH:
The Earth has been warming from the Little Ice Age for centuries so of course the most recent decade is the warmest of recent decades. But, importantly, the global temperature did not rise over the last decade while the models predicted that it would. So, the models were wrong (again).
Happy Christmas.
Richard
My question was badly phrased. it should have read:
‘And did any single one of those 55 models accurately predict the temperature profile for the last ten years before it actually occurred?’
That the models all show a similar general trend is no big deal… I could have predicted that with squared paper and a pencil by continuing pre-existing trends (as RSC has pointed out) to about the same level of accuracy. But climate models are supposed to bring new levels of sophistication and accuracy – otherwise why spend billions of bucks on them.
I was interested to see if any one of the 55 actually got it right. And I guess the answer is no. Somehow I’m not surprised.
Again, I’m just a layperson, but it also appears they predicted 1990 to 1999 would be the warmest decade in the instrument record up until that point in time, which I believe I have read it was, and it appears they are predicting 2010 to 2019 will be the warmest decade in the instrument record up to that point in time.
Whether any one of the 55 got it “right” on a year-to-year basis, or however you want to describe your expectation, seems to me an unusual expectation, but I’m unqualified to know. I would find it shocking if they could do that with skill, or even come close to it. If one did, I think they would conclude it was dumb luck.
Maybe the experts can chime in on whether or not your expectation outweighs predicting the climate would produce two record setting decades in a row.
One way to look at it is….If there are 10 horses running in a race, and 10 people each tip a different horse, inevitably one of them will be right.
One way to look at life is that there are 10 decades in the century, and inevitably you’ll die in one of them.
Philosophy can be so reassuring at times.
‘Whether any one of the 55 got it “right” on a year-to-year basis, or however you want to describe your expectation, seems to me an unusual expectation’
In which case, given that squared paper and a pencil could do just as good a job, then why have we spent billions on these irrelevant models?
Because the climate is of utmost importance to a society.
In the late 1800’s a scientist said mankind’s combustion of fossil fuels would warm the earth because CO2 is a greenhouse gas, and increasing its presence in the atmosphere would increase warming. This would have major consequences to society. Arrhenius, I believe, thought many of these consequences would be good.
So one thing they did was to make computer models to find out if he was right.
The thing I like about Vaughan Pratt is he never strays too far away from talking about Arrhenius. That’s who you are arguing with (Arrhenius, the guy with the squared paper and the pencil, not Pratt, and certainly not the modelers), and no offense intended, but I do not think you’re doing even a small fraction as well with your arguments as you seem to think.
Arrhenius said that temperature would warm up by between 1.5C and 2.0C if the CO2 doubled. He also said a lot of very useful things in my own special subject (physical chemistry) and some rather less helpful things in politics/social policy.
I’m quite happy to believe that Svante got it roughly right with his squared paper and ruler. I have no beef with the guy. He’s probably on the right lines.
But that doesn’t answer my question. Why have we employed vast resources on climate models that tell us no more than SA did in 1906? Job creation scheme? Look busy makedo work? Resthome for otherwise redundant supercomputers? What have we got for our money that is more useful than Sventybaby told us 104 years ago?
Coz I really cannot see that they have been any use to humanity at all.
PS And thanks for reminding me that Arrhenius thought that a warmer world would also be a better world overall.
I’ve yet to see any reason to believe that he wasn’t right in this regard as well.
why have we spent billions on these irrelevant models?
Actually I’d be interested to know what the actual modelling budget has been so far. 0.1 billion? 1 billion? 10 billion? 100 billion? 1 trillion? 47 trillion (a number I’ve been seeing a lot these days). Anyone have a reasonable estimate?
But I wouldn’t call models irrelevant. Models are typically developed in order to understand a process better. Sometimes the model has a closed form, like F = ma, sometimes not because the system is too complex to reduce to a mere formula.
Complex models are typically trained, tested, and used. Training uses some known data to establish parameters. Testing uses additional known data to assess the quality of the model; if not good enough then more training is in order, and so on to reasonable convergence. Uses are many, here are some.
One obvious use is for prediction. If the model incorporates the principles of the underlying processes sufficiently well, one can expect excellent predictions. For example the parabolic trajectory of a ball in flight permits prediction of where it will go. This simple model is based on a solid understanding of the underlying physics. We even know that this model does not scale, since at larger scales the parabola turns out to have really been an ellipse, or a hyperbola if beyond escape velocity.
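(As a trivial sketch of that example, with drag and spin neglected:)

```python
# Sketch of the ball-in-flight example: with the parabolic model and known
# launch speed and angle, the landing range follows directly (no drag, no spin,
# launch and landing at the same height).
from math import sin, radians

def landing_range(speed, angle_deg, g=9.81):
    """Horizontal distance for a projectile launched from ground level."""
    return speed ** 2 * sin(2 * radians(angle_deg)) / g

# e.g. landing_range(30.0, 45.0) -> about 91.7 m
```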
Another use is to develop insights into the model’s behavior. Such insights can in turn be used to inform the science. They can also be incorporated into the model so as to simplify them, or make them more efficient, or more accurate, etc. Sometimes they even lead to a closed form formula applicable to some part of the system.
Why bother with theory? One answer is that applied scientists can do their work better when they have better theoretical insights into the processes their theory is applied to.
Another is that theories that were developed for one set of circumstances can suffer what a programmer would call “theory rot” as circumstances change. For example the energy of a particle of mass m and velocity v is ½mv² for all v’s in the experience of 18th century physicists, but in the 20th century Einstein taught us to increase ½ to 1 as v approached c, a situation encountered in modern linear accelerators, or if we converted m to pure energy.
In the case of climate, much of the theory in recent decades is predicated on CO2 as the immediate agent of change. But if we reach a tipping point where the permafrost starts to melt in earnest, the emphasis may switch to methane as it’s released from the bogs and tundra. Theory has to be adaptable that way.
Some theories get “hot,” i.e. become fashionable, for a while. Someone gets a terrific insight into cloud formation, or Hadley cell circulation, or carbon sequestration, and all attention is focused on that and ensuing insights until that trail runs cold and the scientists turn their attention to other theoretical topics.
Yet another reason for developing theories is purely for the joy of scientific curiosity. This is the main reason I do theory. Amazingly people even paid me for several decades to enjoy myself in this way, until I retired. I continue to do theory today, which I enjoy sufficiently to do it for free now.
Latimer and I went around on this – he seems to want a large number (spread over the globe and over at least 20 years) to satisfy his desire to deride anything that doesn’t meet his definition of “utility”; the facts are that climate modeling in and of itself is an extremely modest-cost activity. A major climate model center, like NCAR, spends $<10 million/year on its climate model. See my last comment to Latimer at http://judithcurry.com/2010/12/18/climate-model-verification-and-validation-part-ii/#comment-24332
You’re not getting it. The world has been warming NATURALLY since the 400+ year-long LIA ended in about 1860. At about .6°/century. So it is no trick to say that each decade will be warmer than the preceding ones.
What the models must do to show “skill” is show ADDITIONAL warming, in parallel with surging CO2 emissions since 1940. This they have utterly failed to do.
What has been naturally warming it?
JCH,
You admit you are not an expert. Why do you then stick your comments in? They are signs of misunderstanding the entire issue. The temperature has varied as much or more many times over the last 10,000 years and reached as high or higher temperatures, the last time being the warm period about 1,000 years ago. Whose SUVs caused those, do you think? Lack of understanding is no basis for choosing a cause (“what else could it be”). Natural variations happen, and we do not yet fully understand the causes, but we know large ocean storage (and currents) and solar cycles are fully capable of causing the variations.
Actually, this isn’t a “what else could it be” situation. There are several lines of evidence that show that increased greenhouse is the cause of most of the warming that we are experiencing right now. What you are doing is looking at past changes and attributing “other” natural causes without explanation. For instance, past causes would not show a cooling stratosphere or rising tropopause. Different changes in the atmosphere call for different explanations. Also, different temperature symptoms do the same. Like, warming is happening in the winter faster than summer, and warming is happening at night faster than the day. Both point to an enhanced greenhouse and not other natural factors such as the sun or ocean oscillations. Then there is the ocean itself, which the warming patterns, by observing the advection, suggest the changes are caused by enhanced greenhouse and the overall winter sea pressure changes point to greenhouse gases and human related aerosols.
When you begin to look at these symptoms and then look at what we are doing to directly change the planet, confidence grows as to how this attribution becomes stronger. We know we are causing the level of CO2 to rise and O2 to decrease by direct measurement, we know this level is increasing by fossil fuel burning by observing changes in the isotope levels in carbon found in plants and corals, and we know through direct measurement that the greenhouse is enhanced at wavelengths of different man-made greenhouse gases. You may find different causes for different symptoms, but you will only find one reason to explain them all happening at once. We know we are trapping more energy. We know why. We’ve observed several symptoms of that happening concurrently. We have yet to find any evidence that there are mitigating factors, and we know from looking at the past, the climate is sensitive to changes and has positive feedback factors. There really is not one single piece of evidence that will ever point to the fact that current warming is occurring because of humans and this change will cause suffering. If you are waiting for that, you won’t ever find it, unless you need the evidence to knock on your front door, in which in the case of AGW, it will be too late due to the impossibility of preventing the carbon cycle. We can’t stick the Earth in a bottle and exclude every unknown variable. This is why all science is built upon a wall of evidence as opposed to scientism, which is built upon truths and falsifications. All scientists agree that we exist, all nature is the same throughout, and we can use symbols that stand for things in nature. Besides these, all facts are subject to change. Anyone doubting that CO2 and other GHG’s are changing the planet is very welcome to come up with a more reasonable cause, but they must provide the reasons. Until then, if we are to ever move on from this quagmire of Popperisms and unknowns and “assertions!” conversation, it is best we agree on certain facts as we know them. And those facts are, at the moment, the world is warming, we are causing it, it’s going to get worse… so…how warm and how soon.
Cue the mêlée.
This they have utterly failed to do.
Bryan, could you be a little clearer about which step in the following reasoning “utterly fails?”
1. We are currently adding 30 gigatonnes of CO2 a year to the atmosphere.
2. Nature draws down around 50-60% of our contribution, leaving exactly the amount needed to account for the 2 ppm annual increase in CO2 as extremely carefully measured at Mauna Loa.
3. When CO2 increases at that rate, the Arrhenius law assuming a climate sensitivity of a mere 1.9 °C per doubling of CO2 (much less than anything Arrhenius imagined in his 1896 paper) very exactly accounts for the rise of 0.54 °C between 1970 and now. That’s actually a rise of 0.54/0.4 = 1.35 °C per century.
Unless you can poke a hole in one of these three steps, or find some other heating mechanism besides CO2 that can raise the temperature at 1.35 °C per century, that looks to me like pretty conclusive proof that CO2 has to be the cause of such an extreme rate of warming.
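For concreteness, a minimal check of the step-3 arithmetic (the CO2 values below are round, illustrative numbers, not the exact series used above):

```python
# Warming implied by the logarithmic (Arrhenius) relation dT = S * log2(C/C0)
# with an assumed transient sensitivity of 1.9 degC per doubling of CO2.
from math import log2

S = 1.9                          # assumed sensitivity, degC per doubling
c_1970, c_now = 325.0, 390.0     # approximate Mauna Loa CO2, ppm
dT = S * log2(c_now / c_1970)
print(f"implied warming since 1970: {dT:.2f} degC")   # roughly 0.5 degC
```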
Incidentally why did you settle for 0.6 °C per century when you could have significantly reduced it by using the anomaly of −0.26 in 1875 giving you 0.66/1.35 = 0.49 °C per century?
Neat sleight of hand VP. but in this case it weakens rather than strengthens your rhetoric.
Arrhenius may have said lots of things in his 1896 paper, but as a good scientist, he thought a little more and came up with his revised ideas in 1907.
Would you like to remind us what his predictions were in that 1907 paper?
Neat sleight of hand VP. but in this case it weakens rather than strengthens your rhetoric.
While I don’t understand what you consider to be “sleight of hand” in my question, since everything was completely open with nothing hidden, I agree about both the weakening and the rhetoric.
With regard to the latter, World English Dictionary defines:
rhetorical question
n
a question to which no answer is required: used esp for dramatic effect. An example is Who knows? (with the implication, Nobody knows).
I knew Bryan wouldn’t know the answer and indeed my question was only rhetorical. So far you’ve given no indication that you know either.
I also agree with you that the evidence I advanced only weakens the case for global warming, at least for those still in need of evidence. A widely distributed article by McClatchy-Tribune’s Michael A. Memoli in today’s papers explains why Hawaii’s new governor, Neil Abercrombie, will be wasting his time if he follows through on his stated plan “to use his new post to counter conspiracy theorists who continue to allege that the president was not born in the United States.”
Memoli writes, “While [Abercrombie’s] goal may be to support Obama, experts who study political extremism say the actual impact of additional evidence is to perpetuate the conspiracy. They say people who embrace such conspiracies are guided by suspicion, and therefore view any contrary evidence as part of the conspiracy.”
The science may not be settled, but people’s positions here certainly are. As I wrote earlier, “anything less than … a long-running and massive global conspiracy … leaves too many questions unanswered. How else can you explain the many scientists, insurance companies, world governments, and reputable media like the Economist and the New York Times who insist that global warming is happening? How could they not all be in cahoots with each other?”
Would you like to remind us what his predictions were in that 1907 paper?
I’m afraid I’m not familiar with that paper. If you meant his 1906 paper, there he estimated that, taking water vapor feedback into account, climate sensitivity would be 2.1 °C per doubling of CO2.
104 years later, how well did his prediction pan out? Well, throughout that period we’ve measured both temperature and CO2, the latter with remarkably high accuracy since 1958, so we can look and see if he was right.
After taking into account the other climate impactors besides CO2, and taking what Arrhenius was talking about to be transient climate response with a 10 year lag from CO2 to temperature, the actual value (which automatically takes the feedbacks into account) is straightforwardly computed as 2.2. (To incorporate a delay of d years when fitting, simply replace the year 1790 in Hofmann’s formula with 1790+d.) With a delay of 8 years it is 2.10, making the point that the time taken for the surface to register the CO2 increase makes a difference.
(For what it’s worth, my personal belief based on the last century’s data is that the surface temperature lags the CO2 by approximately the doubling period for the CO2 added beyond the natural level. This appears to be around 30 years, for which the transient climate response so defined would be 3.2. One must ask which definition of climate sensitivity is meant before agreeing or disagreeing with any estimate of “it.” There is no such thing as the climate sensitivity.)
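A minimal sketch of the lagged-fit idea only (not the actual fitting procedure described above, and not Hofmann’s formula; the input series are whatever the reader supplies):

```python
# Sketch of estimating a transient response with a d-year lag: regress the
# temperature anomaly on log2 of the CO2 concentration d years earlier.
import numpy as np

def lagged_sensitivity(temp, co2, d, c0=280.0):
    """temp: annual temperature anomalies; co2: CO2 (ppm) on the same years;
    d: assumed lag in years (d >= 1) between CO2 and surface response."""
    temp = np.asarray(temp, float)
    co2 = np.asarray(co2, float)
    forcing = np.log2(co2[:-d] / c0)   # CO2 d years earlier
    response = temp[d:]                # temperature d years later
    return np.polyfit(forcing, response, 1)[0]   # degC per CO2 doubling
```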
Given that Arrhenius lacked the benefit of a century of global warming data, it is astonishing that he could predict this so accurately. But even Lubos Motl admits “that I am impressed by [Arrhenius’s 1896] paper. It is as technical and as detailed as the fourth IPCC report except that it was written by 1 man instead of 2500 people and it was written 111 years earlier.”
Arrhenius also thought global warming would be beneficial. Given that Stockholm in January would have averaged around −5 °C then, and Northern Sweden can get down to −30, this point of view is quite understandable. Had his university been in the tropics he might have been more concerned.
Randomengineer: V. Pratt claims this [that CO2 is the sole climate impactor]
I suppose I should be used by now to people putting words in my mouth, it seems to be part of the m.o. on this blog. I can understand randomengineer not reading my many posts about the impact on climate of the Atlantic Multidecadal Oscillation, solar cycles, El Nino events and episodes, volcanoes, and human aerosols and other GHGs as he seems to have a reading problem. What I can’t understand is what I said that made him think CO2 was the only climate impactor.
Name anyone on this blog who has ever said this. It’s nothing but a long-running strawman argument that deniers can’t bear to part with.
you’re not addressing the underlying question of how natural warming and cooling occurs in the first place.
More of the same. Either randomengineer is pretending people aren’t addressing this question or he has some sort of reading problem. It’s been addressed over and over and over again.
Even if our understanding of the natural causes is not perfect, it would have to be far worse than any serious scientist thinks it is in order to be able to account for global warming purely with natural causes.
Some climate deniers recognize this, others like randomengineer and Bryan don’t. Deniers are all over the place on what parts of the science they decide to reject.
You can’t claim CO2 as the culprit from 1800 – present but claim that 1450 to 1800 was caused by something else entirely. That’s just handwaving.
Again you’re putting words in my mouth. I’ve never claimed anything about what impacted climate 1450-1800. You’re the one who’s handwaving. Temperatures have been fluctuating for billions of years, so what else is new? You can find much hotter times in the past than the MWP, and much colder than the LIA. No one is claiming that automobile exhausts caused those extremely hot periods, yet they happened. I don’t claim to know why they happened, so don’t say I do.
What I’m certain you can’t find is any time in the 5,000 years leading up to 1900 when the global temperature rose 0.9 °C in one single century. Here’s the HADCRUT3 record (with 10-year smoothing for clarity) showing this between 1910 and now; in fact that’s 0.9 °C in the 90 years from 1910 to 2000, exactly 1 °C rise per century. The first half of that rise is about 2/3 natural and 1/3 CO2, the natural component being the upswing of the AMO. Then the AMO swings down, offsetting the more strongly rising CO2. Then they both swing up together, this time splitting the rise half and half between them. The AMO, for your information, is a natural and obviously major impactor of climate, more than twice the impact of the most violent volcanoes of the whole century. We have a pretty good understanding today of the natural climate impactors, and none of them will be competitive with CO2 going forward in the present century.
Will you please try to pay more attention, randomengineer. It is really tedious having to repeat all this over and over again and at the end of it have you act as though it had never been said.
Your sleight of hand was this throwaway phrase
‘much less than anything Arrhenius imagined in his 1896 paper’
which I am sure that you put in to draw attention to just how reasonable you wanted us to think that you were being. There is absolutely no other purpose to this gratuitous remark since later on you are using the results of his later paper (unattributed), where ‘much less than anything he imagined’ no longer applies. Once caught in your little trick, your rhetoric is much weakened.
I have learnt the very hard way that it is impossible to get a straight or accurate answer from a Climatologist on any question. I fear your manoeuvre is another example of that unfortunate tendency.
Professor Pratt
Your (earlier) graph showing steady temp rise and the cite of Hofmann was claiming purely anthropogenic signal. Your only cite re natural was for 65 year oscillation of AMO which you contend is masking things, hence the necessity of your graph periodicity nulling AMO. Claim whatever you like post hoc, but your intention was clear enough.
What I am saying here is that by doing this you are effectively ignoring the MWP and the LIA and now claiming you don’t know… and *I* am handwaving? Please.
What I said was — and I will repeat — that if you can’t show how CO2 changed in order to cause the MWP and the LIA then you can’t make any claim about the past 200 years. CO2 is the primary driver or it isn’t. It isn’t the primary driver only when convenient.
Now, show us how CO2 made the MWP and LIA happen, or admit that claims of anthropogenic signal post 1790 are simply convenient correlation. Bear in mind that YOU are the one who claimed cause. I didn’t.
Oh, and lastly, nice job re calling me a denier. I suppose if by that you mean that I am denying that some crank graph from a retired has-been is meaningful, then YES, denier will work just fine. Don’t like the term “has-been”? Don’t call me names either. Geez.
randomengineer, no one said MWP and LIA were caused by CO2 changes. Consensus on LIA is that it was solar, and I suspect MWP was too.
Vaughan Pratt:
You say;
“I also agree with you that the evidence I advanced only weakens the case for global warming, at least for those still in need of evidence.”
Yes. And “those still in need of evidence” are scientists.
Activists and politicians do not need evidence which is why we got into this mess.
Richard
> Unless you can poke a hole in one of these three steps, or find some other heating mechanism besides CO2 that can raise the temperature at 1.35 °C per century, that looks to me like pretty conclusive proof that CO2 has to be the cause of such an extreme rate of warming.
You’re confusing things. Brian isn’t jumping to a conclusion.
Brian there is saying that there’s natural warming, that the earth has warmed and cooled by itself without humans. He’s saying that the warming since the LIA appears to be natural. To counter this you can’t jump to conclusions re CO2 because you’re not addressing the underlying question of how natural warming and cooling occurs in the first place.
Brian is correct to ask this. Before you can start slinging CO2 about as an answer, you need to show how the LIA occurred in the first place, how the earth cooled from the MWP to the LIA without magically losing CO2. You can’t invoke CO2 as the sole temperature regulation agent unless you can also prove that the LIA was caused by CO2 magically disappearing. You can’t claim CO2 as the culprit from 1800 – present but claim that 1450 to 1800 was caused by something else entirely. That’s just handwaving.
Live by the CO2, die by the CO2, I say.
If CO2 is the *only* agent responsible, you have to show that the LIA was caused by a lack of it. You haven’t done that. Until you do, I’m viewing your arguments as being substantially the same quality as those of the Iron Sun thread bomber guy (albeit with a somewhat saner writing style.)
Show me where anyone says CO2 is the *sole* climate impactor.
D64
V Pratt claims this. Meanwhile I have come to regard you as a counterproductive one trick pony thread flamer with little to say. This will be the final time I reply to you. Go bother Latimer.
I guess being skeptical of “skeptics” is bad form.
Tough.
Thanks Randomengineer :-(
I’m not sure I deserve quite such a cruel and unusual sentence for an unspecified crime.
Just in case anyone’s interested, my New Year’s Resolution is already that I shall try very hard not to feed any trolls that I happen to come across. But will stick to civilised discussions with the grownups.
Lost another interlocutor D64?
Seems people have you sussed.
We know CO2 isn’t the only climate impactor, but the IPCC story that natural variation was the climate change driver before 1945, then CO2 took over until 2003, then natural variation took over again, is just a bit too far-fetched for most people with more than a few brain cells to rub together.
Probably why you still believe it.
DR64,
Ask and it shall be given….of course you will ignore it.
http://co2now.org/
Our fellow poster, Dr. Lacis, has been oddly quiet to your question, but here is what he thinks:
http://www.reportingclimatescience.com/this-issue/atmosphere-and-surface/andrew-lacis-explains-how-the-co2-thermostat-works.html
That you will pretend this is not an answer to you is a given, but pearls before swine and all of that.
Nowhere does anyone say that CO2, and only CO2, has an impact on the climate.
Thanks for performing genocide on strawmen. Again.
This is known as Argument From Ignorance. It is a logical fallacy. The particular form being used here is:
If a proposition has not been disproven, then it cannot be considered false and must therefore be considered true.
http://en.wikipedia.org/wiki/Argument_from_ignorance
Latimer
Sorry… but you’re the one who has been responding to him/her/it when he/she/it taunts you.
Interestingly I’m not on your side, as it were: I’m an AGW believer who reckons man to be running an open ended experiment that probably won’t end well. I figure the radiative physics people are largely correct. I think the only worthwhile “green” tech we have is nuclear. I’d like to see more of this.
My point is merely that I think you and I could share a pint and not come to blows.
***
> This is known as Argument From Ignorance. It is a logical fallacy.
Jim
Vaughan Pratt’s argument is that man and CO2 are the driver of climate since 1800 and he’s been making this argument for a number of threads. He’s invoking Arrhenius and the population study stuff from David Hofmann as his basis.
I don’t agree with the professor. The same natural signal that gave us the MWP and the LIA ought to be able to have an effect since 1800. Given that the temp hasn’t reached MWP proportions yet, it seems premature to make claims re “unprecedented” as well.
Brian H is essentially making this same argument.
The question to V Pratt then is the same thing: if CO2 is the driver from 1800 to date then how did it drive from 900 – 1800? If CO2 is the driver at all then you have to be able to show how it creates the LIA.
The idea that CO2 drives now but not then seems *mighty* convenient, eh?
> Interestingly I’m not on your side, as it were: I’m an AGW believer who reckons man to be running an open ended experiment that probably won’t end well.
Well, the entire universe is a bit of an open ended experiment, no?
Oops, my bad, I replied to randomengineer too soon (see second half), sorry about that.
Notice in the graph that some models do predict periods of “even” cooling of ten years or so.
And some models don’t. In fact, they’re all over. It appears that the operating principle here is to have an ensemble of SWAGs, whereupon you pick the one line that accidentally looks closest to what happens, followed by a proclamation that the fat lady has just hit high C.
It’s amazing to think that anyone could be skeptical of that.
I agree with Richard Courtney. But assuming you or someone else embarks on this ambitious, albeit dubious, task, will you insist on the safeguards outlined in many of your previous threads, such as a protocolized approach designed in advance with statisticians (from all political perspectives), public code, and audit trails, just to mention some?
At the risk of anticipating comments in this thread that may never materialize, I see it as notable that Dr. Curry is specifically addressing her descriptions to the acknowledged need to improve on the ability of current models, singly or in aggregate, to make short term and/or regional predictions. Their current inadequacy in that regard is generally recognized.
The questions, then, relate to downscaling, initialization, model choice, and other variables. I was particularly intrigued by her suggestion that certain internal climate modes such as ENSO, PDO, etc., can best be modeled during “well established” as opposed to transitional states, as a means of reducing uncertainties. I wonder whether there are data on this for predictions over longer than seasonal intervals – certainly one would want that for the longer-cycle changes such as PDO or AMO – although from her commentary, it appears there may not be.
Beyond the technical challenges to model development, there is another facet to this topic that also merits some attention. Despite some carping from extreme factions, it is now generally acknowledged that potential adverse consequences of climate change cannot be adequately addressed by either mitigation alone (e.g., curtailment of CO2 emissions) or adaptation alone (e.g., engineering solutions, changes in agriculture, water conservation, etc.). It seems to me that the relevance of the points Dr. Curry raises should be seen in this context. What happens on a regional scale over the next 1-3 decades harbors important implications for adaptation, but these relatively short term consequences and their variation from one region to another does less to affect the magnitude of mitigative strategies, which ultimately must address long term, multigenerational global effects. To the extent that climate change is adverse on the latter scale, the question of “when” and “where” diminishes in importance to the extent that the answers will be “eventually” and “everywhere”.
It was not my point here to argue the magnitude of long term climate consequences, those arguments already having been made in excess, but rather to point out that the relative importance of adaptation and mitigation will likely vary according to the length of time at issue and the size and heterogeneity of the area at risk. In particular, adaptive and mitigative measures undertaken today will realize their potential over intervals of vastly different length. As an example, drought, water shortages, and forest fires in the Southwest U.S. are critical targets for adaptive measures, but it is possible to argue that long term reductions in warming will be needed for adequate control of these hazards.
‘it is now generally acknowledged that potential adverse consequences of climate change cannot be adequately addressed by either mitigation alone (e.g., curtailment of CO2 emissions) or adaptation alone (e.g., engineering solutions, changes in agriculture, water conservation, etc.)’
Who is doing the ‘general acknowledging’ that you write about. Where can I read such general acknowledgment?
Is there a ‘consensus’? If so, among whom? And on what basis – given that the problem is very uncertain, how has anyone determined whether those strategies would be adequate, inadequate or just plain not needed at all?
Latimer – I made my above comment with some trepidation, because the topic of this post and thread is model improvement in relation to short term, regional climate predictions, and I was afraid we would get diverted into policy discussions. For that reason, I’ll be reluctant to continue along this line.
However, there is now a substantial history to the mitigation/adaptation concept. Originally, most climate scientists rejected adaptation as a subterfuge designed to avoid controlling greenhouse gas emissions. Over time, however, they concluded that the magnitude of the problem was great enough for practical mitigative measures to be insufficient to forestall plausibly serious harms from continued warming. The result was an acknowledgment of the need for combined mitigation/adaptation strategies, and the IPCC WG3 analyses address adaptation in some detail.
For a taste of the voluminous writing on the topic, Google is informative, at Climate Change Mitigation and Adaptation
I’ll probably prefer to return to Judy Curry’s focus on model improvement unless important new evidence on mitigation/adaptation choices is offered.
With the greatest respect, if you don’t want to go discuss an issue in depth and aren’t prepared to justify your position against some rigorous challenges, then I suggest that you don’t raise it at all.
You put this question on the table, not anyone else. And any trepidation was not apparent in your very direct language. ‘It is generally acknowledged…’ which begs the question ‘by whom?’
Readers can visit the link I cited to make their own judgments about the level at which the need for combined mitigation/adaptation strategies is recognized. I’ll be content with that, and will try to focus on some of the salient challenges facing attempts to render climate models more accurate when downscaled to regional predictions over relatively short time horizons.
Fred – In spite of your expressed desire to redirect the conversation, I think it should be pointed out that several of your points should be discussed with regard to policy implications. And if I think about it for a while, I can probably find implications for the models as well.
The first is your apparent automatic assumption that the consequences of climate change will be entirely negative. Without banging the drum about where that idea came from, there’s considerable evidence that there would also be major positive effects. And that the claimed negative consequences would be less than has been assumed.
The second point is the automatic assumption of the direction of the climate change. I won’t make a case here for the opposite of that assumption, but as a matter of public policy it would be criminally negligent to dismiss it as impossible. Case in point – the present stories in at least sceptic circles re: the UK Met Office and the realities and results of “weather” for the last 3 winters. I suspect the alarmist blogs have a different take (I haven’t checked yet). But the reality is that life has gotten really sucky for a lot of people because of this particular automatic (knee-jerk?) assumption and its resultant policies.
The third point is that you also apparently assumed that the wild fires in the Southwest are a direct result of GW/CC. This is a case where ignorance reigns. Those fires have little or nothing to do with GW/CC, but are the result of the fire suppression policies through most of the last century. Those same policies are still being used in many places, although the Southwest states have gotten smarter about it over the last few years. Those policies also have other and even more far reaching effects than the fires, some of which are also being falsely blamed on GW/CC. If you want an education on this subject, go talk to the hot shot crews, the wildlife biologists and others who do the actual field work. Don’t bother with the environmental organizations – they can’t even get a good count on the wildlife when it’s staring them in the face.
There’s more but I think that’s sufficient for now.
Hey then!
This looks to be great fodder for another discussion, if Dr. Curry hasn’t already posted a page for Mitigation vs. Adaptation, then it seems about due for one (saying this because more than once I’ve begun to post, only to find Climate Etc. has a new topic page on exactly the subject I’d been pondering one step ahead of me); if not already up, then perhaps time for a new topic is indicated.
Excellent topic to discuss, too; up to and including inferred assumption number one.
(Re-hashing alleged assumption two seems silly at this late date and overdone. What direction? Surely there’s some strawmanish putting words into Fred Moolten’s mouth on that one, but even if not, aren’t we well-past such a simpleminded idea as a single monolithic direction? “CRIMINALLY NEGLIGENT?!?!” Drama queen much, Mr. Owen?)
As for assumption the third, it seems quite clear that Fred Moolten has attributed the regional conditions from his specific example cited to clarify coherently with Dr. Curry’s baseline assumptions, i.e. that natural variability plays at least a significant role in events; to excoriate the selection of that particular example because it also has significant other inputs simply obfuscates without also illuminating, and without the substantiating documentation of the sort a decent skeptic might ask for and, upon being provided with it, thank his correspondent for, review with deliberate consideration, and amend his views or seek further clarification as warranted. “Ask a hot shot crew?” Could you not even go so far as Fred and Google a link or two?
So, to frame the proposed Mitigation vs. Adaptation vs. Positive Benefits topic:
Are the much-vaunted but generally hollow and vaguely unsubstantiable claims of benefits of accidental, uncontrolled, unplanned and unconsented Climate (fill in whatever PC term least offends your sensibilities, one suggests ‘Neutralization’ ;) ) even remotely likely to be true, proportionate, accessible, or measurable, or does this remain as meretricious a red herring as it has been up to now in the available literature, readily falsifiable from not just current but also well-established older reputable peer reviewed literature and plausible new experiments that no one seems to have troubled to attempt before making these spurious handwavings?
Just to frame the question as fairly and in as balanced a manner as possible, from a sceptical point of view.
Have at, good gentles, so soon as that new topic is opened by Judith’s kind graces.
well, this was going to be a key element of the forthcoming (and much delayed) thread on decision making under uncertainty Part II (it’s one of four posts I’m hoping to finish before Jan 5), will see how it goes
@Bart –
Just a couple points to ponder –
How many people on either side of the CC debate have you seen seriously propose the “possibility” that temps could drop into another LIA? There are some, but not many.
Read “Climate Crash” by John Cox. Good book for the first 11 chapters, then it goes downhill quickly. He makes the case for a LIA well – and then concludes without any evidence or discussion that we’re all gonna fry. Huh???
Second point – UK – Met office predicts warm winter, doesn’t advertise it publicly, then denies it made any prediction at all even though that prediction was used by the bureaucrats to cut back on winter supplies. Then the snow comes – and the cold – and the energy assistance funds run out (in December?). The roads are clogged, industry is stopped, the economy takes a dive – and people start to die – particularly old people, pensioners, the sick, etc because they can’t afford the energy prices. All because da gubmint (read: bureaucrats) prepared for the projected tropical paradise rather than for winter.
Can’t happen?? Don’t be silly – it IS happening – right now.
Criminal negligence? You bet.
On whose part? The bureaucrats for failing to do their job? Or the Met Office for being incompetent and/or making bad assumptions based on preconceptual science?
Does it matter? People still die.
Third – why do you need a link for a hot shot crew? Nearly every State has them – sometimes dozens of them. Contact your State DCNR or equivalent to find them.
Hmm – do you know what a hotshot crew is? My bad for not defining, but it’s one of those automatic expressions that I normally don’t even think about. Hot shot crew = an underpaid, underappreciated seasonal crew dedicated to fighting wild fires (includes smoke jumping), doing controlled (or prescribed) burns and attacking other dirty, nasty related jobs that need doing. Tough puppies.
As for assumption the third, that was neither an assumption nor a question. To repeat – those fires have little or nothing to do with GW/CC, but are the result of the fire suppression policies through most of the last century. Do you remember the Yellowstone 1988 fires? Smokey Bear sucks swamp water – and is directly responsible for millions of acres of wildfire devastation. Not my judgment, but that of others who have to deal with the fires. I only get to walk through them – five of them in 2006, only one in 2010.
To be a bit more explicit, Smokey’s “stop forest fires” policy delayed them, while fuel built up. When they happened they killed tall trees that would have otherwise survived, and scorched the soil deep enough to sterilize it, which doesn’t happen in normal fires.
So Smokey is a covert agent dedicated to killing forests. Or maybe just message-drunk stupid, like ICS believers. (ICS has been selected as the next Warmist acronym by the denizens of smalldeadanimals: Irritable Climate Syndrome.)
> Or maybe just message-drunk stupid, like ICS believers.
That can work both ways. Were I a betting man I’d exploit it.
smalldeadanimals – good site
Hmm – shoulda thought of this before – but the same situation wrt wildfires is building up in all those “wilderness areas” that the environmental organizations keep pushing for. Massive amounts of fuel, no maintenance, no prescribed burns. Watch for it.
Jim Owen:
You ask;
“How many people on either side of the CC debate have you seen seriously propose the “possibility” that temps could drop into another LIA?”
I explain that possibility above in my first post in this thread.
Richard
Thank you. I missed that.
Fred, since mitigation is of such interest to you, you must be gratified to have a perfect counter-example unfolding in Britain,day by day. I’m not an expert in mitigation, so I’d be interested in what you think can be learned from it.
Fred, my apologies –
for “mitigation”, read “adaptation” – I was getting my “-ations” mixed up.
Actually, it’s a perfect example of non-adaptation because of reliance on (future) mitigation — for which they are committed to paying through the nose (and ears, and eyes, and every other available orifice.)
From your link Fred:
For people today, already feeling the impacts of past inaction in reducing greenhouse gas emissions, adaptation is not altogether passive, rather it is an active adjustment in response to new stimuli.
What effects exactly are people today feeling? Is being snowed in really an effect of global warming? That is to imply cold is a result of warming. How cold does it have to get for how long before cold isn’t an effect of warming?
Jim – There are many examples but I’ll illustrate with one. The most lethal component of a hurricane is the storm surge. The height of the surge is increased if the surge is based on a higher starting sea level. Although exact quantitation is difficult, one can reasonably estimate that over recent decades, higher sea levels resulting from global warming have resulted in tens of thousands of excess deaths in Bangladesh, Myanmar, and elsewhere.
For all impacts, past, current, and (potentially) future, you might review AR4 WG2, with its dozens of relevant references. If you believe other salient references have been omitted, these too can be cited.
I believe we often tend to see events through a “first world centric” perspective. Although climate change harbors potential adversity for the affluent, developed nations as well as the others, that danger is more immediate among the less fortunate of the world’s population.
Fred Moolten:
You assert:
” The most lethal component of a hurricane is the storm surge.”
No!
The most lethal component of a hurricane is lack of appropriate infrastructure and evacuation capability.
Miami is rich and has them. Haiti is poor and cannot afford to have them.
If you really want to reduce lethality of hurricanes then work to reduce poverty because that is the most effective and the cheapest way to do it.
There is no “reasonable” estimate of increased deaths from rising sea levels. Indeed, your citation of Bangladesh is silly: it has increased both its size and its resistance to ocean storm surges by river delta deposition (and it continues to grow from the deposition).
If this is the best you can do to argue that there are already “impacts” from AGW then your case fails.
Richard
Fred this surely is the worst post you have made.
Assertions, claims that supposition is evidence etc
The sentence regarding Bangladesh and Myanmar is straight out of WWF/Greenpeace-type pamphlets, has absolutely NOTHING to do with evidence, and resembles science about as well as an elephant resembles an oak tree.
Baa H
Don’t discourage Fred too much.
The more he posts his unevidenced assertions, the more the uncommitted will see that, when challenged, the Emperors of AGW are but scantily clad.
However much their toadies have agreed with them that they are bedecked in Imperial Purple and Master of the Universe arrogance.
Fred – how much difference does a few millimeters of sea level really make? I don’t believe it currently has a material effect on the storm surge. You must realize that SL has been increasing since the LIA. The SLR due to AGW is millimeters, if any, and the AGW portion can’t be characterized and differentiated from the normal SLR, which, in fact, has lessened in rate. Also, it has been pointed out by others that aquifer pumping has added to Sea Level Rise. So you are very far from showing storm surge has increased from AGW.
Latimer, I was getting all set to say something similar, but you beat me to it.
I think it is important to get the policy question to be answered very well specified before deciding what technique is appropriate to answering it. This requires some more detail than the level included in the post.
From a political (small “p”) point of view in the first instance we are probably interested in the likelihood of temperatures rising above a certain level, and for this we will probably be quite satisfied with a prediction with confidence limits. I note in passing that we probably would be satisfied with estimates at a limited number of points on the globe.
From that we interest in mitigation strategies (if it looks like a problem) would follow and with that sensitivity to controllable factors such as forcings (and the significance of these).
If there is no real confidence in the forecasts then we surely would want to know what to look (out) for as early predictors that things are not going well.
I touched on this whole general issue a couple of weeks ago (see
http://judithcurry.com/2010/12/14/co2-no-feedback-sensitivity-part-ii/#comment-21798) and from that it should be clear I’m waiting for Scenarios 2010-2030 Part 2 . An important point I made in that comment is that from a political stand point quantifying uncertainties realistically is important.
Sorry should have said “prediction with quite large confidence limits” in para 2, and delete “we” in para 3.
Still a bit early to blame the Xmas cheer.
HAS – I don’t disagree with your overall perspective, but an important consideration relates to the pace at which strategies, once implemented, exert their effects. This relates more to mitigation than adaptation. In essence, feasible approaches toward curtailment of CO2 and other greenhouse gases, if begun today, will only manifest discernible benefits over the course of many decades, and there is no realistically useful criterion for judging their performance on an early basis, given the types of internal climate variations as well as other forcings (e.g., volcanic eruptions, solar changes, aerosol pollution control policies) that can encumber the analysis. Conversely, waiting until confidence limits shrink will forfeit the opportunity to avoid potentially substantial harms that become inevitable in the interim. In other words, there is no substitute for making decisions in the face of uncertainty. They can be modified later, but choosing not to decide now is itself a decision of great consequence.
In reading Dr. Curry’s post above, including the passages she quotes, I perceive a strong sense of the general direction in which events will proceed if a business-as-usual scenario is adopted, along with a recognition of the need for better quantitation. The subject of this post and thread is the quantitation, but the trend itself is predictable enough to warrant some rather critical decisions now of both an adaptive and mitigative nature.
I cited the Southwest U.S. as one example of a region threatened by the consequences of unmitigated warming – more drought, more forest fires, less available water, and so forth. Most models do a poor job quantifying the future nature of those trends, but the direction is rather well defined. It would be interesting to discuss other regions in the same light.
Yep, potential effectiveness of mitigation is definitely part of the mix.
My main thought was that if you specify the nature of the decisions you need to make, you probably start to exclude GCMs from consideration (in particular unable to quantify uncertainty well in predictions and therefore difficult to hold a political consensus in democracies).
Fred, if you practiced medicine in the same hyper-precautionary as you predict climate, the entire medical establishment would collapse its overindulgence.
Sorry………..mode………….of
> … more drought, more forest fires, less available water, and so forth<
Fred, you sound like the Demtel man (buy today and you get 50% off – he never says 50% off what). More drought, more forest fires, less water than when?
As you may grasp, the “more, less” scary-bear rhetoric simply doesn’t cut it. This is the exact reason for Judith C raising this thread.
Mitigation of what? You perhaps have historical examples of the devastation wreaked by Global Warming to hand? I and many others have bucketloads of cooling-caused catastrophes. (Real ones, continental and global in scope, decades in duration.)
While this blog post may be informative, I didn’t see the use of reading too far.
For those on the lowest end of decision making, I believe future possibilities (high probability or not) matter very little. In the real world we already combat issues such as flooding, mudslides, water shortages, and coastal erosion, etc. If only we had the funds to build the road that will not be impaired by all 100 year floods, buy out the owner of vulnerable beach front property, assure the farmer he’ll have enough water for his crops, or keep development off vulnerable hillsides…..
I don’t know who the decision makers are that you’re talking to Judith, but I suspect they are not overly concerned about many of the infrastructure problems we currently face. In my town, road projects move forward despite the fact they are not what WE KNOW is necessary. Why? Because there is not enough money to do the job correctly. So what do they do? As much as they can, lest the funds be lost.
Good planning needs good information AND MONEY. We already have much of the information to make good plans. A major problem is….. we can’t/don’t follow the plans.
I’ll have to give Mr. Craven some credit, because (although anonymously – and I’ve explained that before) I will for this record, as a scientist and citizen, relay my concerns and thoughts … and say….
I really don’t care about or believe in climate model forecasts 50 years from now (be it global or regional/local). We don’t deal too well with the current effects of, and man’s response to, climate that are already evident. We need to focus on today’s problems. The decision makers need to have the guts to say you can’t build that house on the beach, steep hillside or floodplain. While you’re at it, instead of spending millions on minimizing CSOs, perhaps direct a percentage of those funds to building sewers and water treatment plants in the (for pc police) developing world.
Academicians can help society find answers to problems, but as far as this climate thing goes, imo, big time humility, and a megadose of reality is in order.
Get Real.
Ditto! and…
Long range “Climate Models” are like Olympic Marathon Champions who are yet to be born. Before the kid can crawl s/he has to be born. Models that “predict” 50 to 100 years hence must first do 10 to 20 years right, but before even that great accomplishment they must do 1 to 2 years right, and before even that they must do 3 to 6 months right. But what is the current situation; where are we today? We’re at the 24 to 72 hour mark, usually; not always, just usually. We often talk like new expecting parents. Dreaming about dreams.
Global “anything” needs to wait for “local”, “regional”, “continental”, and “hemispheric”. It’s not just a matter of timespan, it’s also about space. We have, in our excitement, put the cart before the ox and we’re not going anywhere. (Indeed, many observers seem to be laughing and yelling that we’re going backwards.)
I expect there will be a lot of caution in decadal predictions, and a typical outcome might be a forecast of 0.2 C rise in 10 years plus or minus 0.2 degrees, which is the natural variability. This would be very hard for politicians to act on. In some ways this is a more difficult forecast to act on than the century forecasts, because the signal barely emerges from the noise on this time scale. The noise is hard to predict, but a key part of this project is hoping that the noise is in some way deterministic from the initial state, where both ocean circulation and solar activity have to be considered as part of the forecast problem. Interesting challenge for sure.
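To put the signal-to-noise point in concrete terms, here is a rough Monte Carlo sketch (not part of the comment above) of how often a forced trend of 0.2 °C per decade is masked over a single decade; the 0.1 °C interannual noise level is an assumed, purely illustrative number.

```python
# Rough estimate of how often a 0.2 C/decade forced trend produces a zero or
# negative fitted trend over ten years, given assumed interannual noise.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(10)
forced_trend = 0.02   # C per year, i.e. 0.2 C per decade
noise_sd = 0.1        # assumed interannual variability in C (illustrative only)

n_trials = 50_000
series = forced_trend * years + rng.normal(0.0, noise_sd, size=(n_trials, years.size))
slopes = np.polyfit(years, series.T, 1)[0]   # fitted slope of each simulated decade
print(f"Decades with zero or negative fitted trend: {(slopes <= 0).mean():.1%}")
```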
Joe Sixpack might reasonably ask why anybody should do anything at all – and especially not with his hard-earned cash (taxes) for such a small rise in temperature.
He could reasonably point out that the 0.6K rise that others declare has already happened has not had any measurable adverse consequences. Absent any further hard data, he will be very reluctant to more good money after bad.
Apologies…should read ‘to throw good money after bad’.
And by hard data I mean demonstrable observations of bad things actually happening , unquestionably caused by a change in average global temperature, not output from models.
You mean horrors like the greening of the Sahel? Invasion of pristine desert ecologies by disruptive grasses and grazers and acacia trees and predators?
;)
http://news.nationalgeographic.com/news/2009/07/090731-green-sahara_2.html
Oh, I see I forgot a few de rigeur adjectives:
“pristine, fragile, delicate, desert ecologies” …
An analogy is the beginning of weather forecasting, where improved techniques and data gathering were developed after the first few failed attempts. You only learn by trying, and within a couple of decades decadal prediction could be a mature practical field.
So why should we spend trillions of dollars to implement decisions based on an infant “science”? In 30 years the science will, as you say, be a lot more mature – but only if the predilection for ASSUMING that prior assumptions/knowledge are final and certain is abandoned. If the physical or biological sciences had developed the way climate science has re: the rejection of anything deemed “sceptical”, we’d still be using the Bohr atom and DNA would still be waiting for discovery somewhere in the future.
gcapologist had a point – use the money to build infrastructure NOW that will serve regardless of what the climate does.
You are entitled to your opinion. Mine is that decadal planning is something that needs to be done for infrastructure (dams, reservoirs, levees), and this would be a valuable effort to improve the basis for those plans.
How so? Did you know there are bridges, canals, etc in the UK that are over 300 years old? Those engineers didn’t believe in slim margins of safety – they built for the long haul. How about those Roman roads from when the Republic was in charge? But then, an engineer who had a bridge or building collapse back then was subject to the death penalty if anyone died. Today’s construction is an entirely different matter – roads last 5-8 years, bridges maybe 30 – because the governments that pay for them consistently cheap out. GW/CC isn’t a real factor – money is. Unless you can find a factor that I’ve missed. And I WILL admit that possibility :-)
Dam/reservoir design “might” benefit if you expect massively more water than at present. But I can show you a dam in California that was built like that. It’s never been even half full – until this week.
Note also that levees only channel the water so it’ll flood other places downstream that never flooded before the levees were built. I grew up with levees – the Corps of Engineers still hasn’t learned. I think they’re still using the 1920’s construction manual.
@Jim D
If you are today designing dams/reservoirs/levees that aren’t already capable of withstanding a change in global temperature of 0.2C, then you are a very very very dangerous engineer.
And if your planning horizon for the life of those structures is as short as ten years, then you really shouldn’t be allowed to practice with anything more substantial than Lego.
I’ll leave it for other better qualified than me to discuss the safety factors and error margins built into civil engineering constructions as a matter of course.
Suffice it to say that I have just returned from my morning walk along the Wey Navigation (canal). It was originally opened in 1653 and (with regular maintenance and occasional improvements) it is substantially unchanged since then. It has withstood 350 years of the worst that British weather can throw at it with grace and aplomb. I doubt that a 20 year weather forecast will change that.
Link if anyone’s interested in a wee bit of English heritage
http://www.weyriver.co.uk/theriver/wey_nav_1.htm
As with so much in climate “science”, the issue of adaptation is treated as if present arrangements are axiomatically inadequate, and that this needs to be remedied at public expense. I can see no justification for either of these assumptions.
> As with so much in climate “science”, the issue of adaptation is treated as if present arrangements are axiomatically inadequate, and that this needs to be remedied at public expense. I can see no justification for either of these assumptions.
Yes, it’s unfortunate people can’t see the 1 tonne of CO2 emanating from their car’s tailpipe after burning 100 gallons of petrol. If it looked more like an oil spill they might be talked into doing something about it.
Also unfortunate is that they can’t see the exponential growth of the world’s population. Their street seems unchanged: same number of people on it as when they moved there. Had they seen it going from “lots of room” to “overcrowded” they might be talked into attending a protest meeting at their local town hall.
Fact is that the population went from 600 million to 6.7 billion in three centuries. In the same period the CO2 emitted per capita increased from about 0.5 tonnes per year, of which 0.3 tonnes was from breath exhaled by that caput, to 4.5 tonnes per year (but still only 0.3 tonnes from breathing). (Most humans today don’t consume anywhere near 100 gallons of petrol in a year, in fact most don’t even own a car. Where are you in that big picture, and does that exert any influence on your position on global warming? No? Really?)
Combining those two exponential increases, in population and per capita fuel consumption, we’re now emitting more than 100 times the CO2 we were in 1700.
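The arithmetic behind that factor of 100 is easy to check from the figures quoted above; a quick back-of-envelope sketch, using nothing beyond those numbers:

```python
# Back-of-envelope check of the ~100x increase in annual CO2 emissions since 1700,
# using only the figures quoted in the comment above.
population_1700 = 0.6e9   # people ("600 million")
population_now  = 6.7e9   # people ("6.7 billion")
per_capita_1700 = 0.5     # tonnes CO2 per person per year
per_capita_now  = 4.5     # tonnes CO2 per person per year

ratio = (population_now * per_capita_now) / (population_1700 * per_capita_1700)
print(f"Annual emissions now vs. 1700: roughly {ratio:.0f}x")   # ~100x
```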
But because the CO2 is invisible, the universal response is, “What, me worry?”
Back when we were emitting a mere 10% of the CO2 we’re emitting today, Arrhenius pointed out that the CO2 would work like a warming blanket. Lots more have done so since. So just as with 9/11, we can’t say we weren’t warned.
The blanket metaphor brings to mind the following scenario, between two people I’ll call Bryan and Myrtle (any connection with people living or dead is entirely coincidental). They’re in bed trying to get to sleep; the window is open but it isn’t helping.
Myrtle: Why’d you just go and put on another blanket?
Bryan: I feel more secure with them.
Myrtle: But there’s a bloody heat wave outside. We need fewer blankets, not more.
Bryan: Blankets don’t change the temperature, Myrtle, nature does. It was cold before and it will be cold again soon.
Myrtle: Bryan, that was last winter. Now it’s summer and there’s a heat wave on.
Bryan: Right but it will be cold again soon. (Puts on another blanket.)
Myrtle: Next winter is months away. And stop putting on those blankets, they only make it hotter.
Bryan: No they don’t Myrtle, blankets don’t raise the temperature. Nature controls temperature. Just wait, the temperature will go down again.
Myrtle (grabbing a sheet and heading for the living room sofa): You can fry in hell for all I care.
I note that all you really say is ‘But there’s a lot of CO2 – and lots more than there used to be’. And lots of parables.
But no actual answer to TomFP’s simple and very understandable point.
You did not demonstrate anything about why you feel
‘present arrangements are axiomatically inadequate, and that this needs to be remedied at public expense’
Just a lot of handwaving.
As I said elsewhere, it is impossible to get a straight answer from a Climatologist. Here is yet another example.
The challenge to the “catastrophism” needs to be harder: Prove that increased CO2 is not beneficial.
Chiefio’s suggestion that plants have once again driven down CO2 to (their) starvation levels and are awaiting a new injection (from a flood basalt or the like) is worth serious consideration. Maybe we can help them out with our small increments, though it’s very doubtful we’ll be able to bring it back up to the 2,000 ppm optimum.
Well, one thing is certain, too cold is really bad.
Thousands of dead starfish that littered a beach near Charleston last weekend are the first signs of what might become a disastrous winter for coastal sea life. They died because water was chilled to a lethal temperature by frigid weather earlier this month.
With coastal waters already hovering near critical lows, biologists worry there might be a mass die-off of shrimp, sea trout and red drum as the season turns cold again.
William Gay, owner of Port Royal Seafood, said he has heard Beaufort crab trappers talk about dead shrimp showing up in their crab pots, but said the cold water hasn’t yet affected his business.
S.C. Department of Natural Resources biologists also heard reports of stunned red drum and sea trout.
Though Beaufort County is only about 50 miles south of the starfish die-off, water temperatures have been a bit warmer, and the extra warmth has helped.
Let’s hear a cheer for warming!
http://www.islandpacket.com/2010/12/26/1490923/cold-weather-endangers-sea-creatures.html
@willard
Well, I did say you could just ignore it.
Or – you can “induce” anything you want from it.
Your choice.
> Just a lot of handwaving.
Speak for yourself.
> As I said elsewhere, it is impossible to get a straight answer from a Climatologist.
No matter how many times you say this, it doesn’t make it true. You just refuse to listen.
I concur – Latimer wants everything put in front of him with no effort on his part. Even then he’s very unlikely to actually try. Quite lazy on his part.
Please present a recent example where you have presented a straight answer to a straight question.
What is it with you guys that means you are completely incapable of making a logical argument?
One last try. @Vaughan
The straight question is ‘why do you think that it will not be possible to adapt to any climate change, and that we therefore need to spend lots of public money?’
You managed to begin to answer the first part (sort of). You said … there will be a lot more CO2 (100 times more emitted than in 1700). You made no attempt at all to show that the effects will be considerable and/or bad. You made no attempt at all to say why, even if they were considerable and bad, adaptation would not be possible. And finally you said nothing whatsoever about the sources of funding for any other strategies that you may or may not wish us to adopt.
So on a four part question, a generous marker might give you 1.5/10. On three of the four points that a solid answer would have covered you wrote nothing at all and the argument on the first was undeveloped.
Now do you understand why I do not fall for all your handwaving. There are several steps in the argument that you do not address. Assuming that all you have to do is say ‘Carbon Dioxide’ as if it is some magic spell that allows all and any other arguments to be ignored is wishful and sloppy thinking on your part and contributes greatly to the lack of credibility of those who call themselves ‘climatologists’.
Please try to answer the question fully. Otherwise my observation about straight answers will stand.
It would be interesting to know what “axiomatically inadequate” refers to in TomFP’s “very understandable point”. It would also be interesting to know how TomFP can justify his assumption that “public expenses” will be lesser if nothing gets done right now. The public is already subsidizing oil. And also potatoes, D64.
I would not bet my money on Greenspanian economical mindframing right now. I would not even bet others’.
The government shouldn’t be subsidizing energy at all; not coal, not oil, not nat gas, not wind, and not solar. Only nuclear should get some government help. I do see it as a special case as it has much energy to offer and a lot of political flak. Also, the fuel can’t get into the wrong hands even though that cow is a ways out the door already.
A remarkable postulate:
> The government shouldn’t be subsidizing energy at all [unless we have a special case with much energy to offer and a lot of political flak].
Consistency is a habit of small minds. You know as well as I do that one can’t formulate a set of rules and NEVER break them.
Jim, I agree with what you’re saying, but you need to get that quote down right:
“A foolish consistency is the hobgoblin of small minds.”
It’s inappropriate, forced consistency which is the problem.
You apparently don’t understand that doubling the CO2 doesn’t double the warming. Your analogy is false and therefore unconvincing.
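The point that doubling CO2 does not double the warming is usually expressed through the standard logarithmic forcing approximation, ΔF ≈ 5.35 ln(C/C0) W/m² (Myhre et al. 1998); a minimal sketch of that approximation, offered as an illustration rather than anything either commenter wrote:

```python
# Standard logarithmic approximation for CO2 radiative forcing (Myhre et al. 1998).
# Each doubling of concentration adds a roughly constant ~3.7 W/m^2 of forcing,
# rather than doubling the forcing itself.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate forcing in W/m^2 relative to a 280 ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 560, 1120):
    print(f"{c:>5} ppm: {co2_forcing(c):5.2f} W/m^2")
# 280 -> 560 ppm adds ~3.7 W/m^2; 560 -> 1120 ppm adds another ~3.7,
# not the proportional increase a linear-in-concentration relation would give.
```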
Apropos of nothing, did you perchance know that those who work the coal mines very often live in CO2 levels that exceed 3500 ppm?
You also fail to understand that if there are X barrels of oil and Y tons of coal available to be extracted, then that’s exactly how many barrels of oil and tons of coal WILL eventually be extracted – and burned. And if you fail to burn some of it, someone else will take care of that little detail for you. Think China, India, Brazil, etc. You can be cold if you like, but others are not constrained by your delusions.
You don’t have to like that. But you WILL live with it because it WILL happen, regardless of your feelings/illusions.
Damn engineers – just can’t leave a guy his fantasy world, can they?
BTW, you’re talking about my aunt there. :-)
> [D]id you perchance know that those who work the coal mines very often live in CO2 levels that exceed 3500 ppm?
A remarkable question. No, I did not. What should I infer from this bit of trivia?
You could just accept it as trivia.
Or you could learn that high CO2 levels are not inimical to life as such.
Or you could ignore it.
I certainly hope that CO2 levels are not inimical to life, as such or not.
This bit of trivia only shows that miners can mine at CO2 levels of 3500 ppm. It does not tell us how much time miners can endure that. It does not tell us how to transpose this bit of trivia into the climate debate.
Are we supposed to induce that if miners can mine at 3500 ppm, so could the earthlings if the Earth came to pack up as much ppms as these mines overall?
Willard and others:
Other human data: Signs of intoxication have been produced by a 30-minute exposure at 50,000 ppm [Aero 1953], and a few minutes exposure at 70,000 to 100,000 ppm produces unconsciousness [Flury and Zernik 1931]. It has been reported that submarine personnel exposed continuously at 30,000 ppm were only slightly affected, provided the oxygen content of the air was maintained at normal concentrations [Schaefer 1951]. It has been reported that 100,000 ppm is the atmospheric concentration immediately dangerous to life [AIHA 1971] and that exposure to 100,000 ppm for only a few minutes can cause loss of consciousness [Hunter 1975].
Would it be fair to induce that this bit of trivia offers the other bits of trivia that “we can’t feel the warming of a fraction of a celsius” and that “CO2 is a very, very tiny fraction of the stuff that there is on Earth”?
Addenda posted there:
http://judithcurry.com/2010/12/23/scenarios-2010-2030-part-i/#comment-25730
> You apparently don’t understand that doubling the CO2 doesn’t double the warming. Your analogy is false and therefore unconvincing.
Or maybe he does, but felt such a detail would be irrelevant in an *analogy*, or perhaps even difficult to express in a simple *analogy* for complex behaviour intended to explain a broader point unaffected by details like that. What with it being an *analogy* after all. Feel free to mentally insert “blanket with approximately half the effective insulating properties as the previous blanket” if it makes you feel better. Nice and succinct that, eh? And makes a drastic difference to the *analogy* yes? Oh no, wait. It doesn’t.
> Apropos of nothing, did you perchance know that those who work the coal mines very often live in CO2 levels that exceed 3500 ppm?
Apropos of nothing indeed. I’m glad you pointed that out first.
> You also fail to understand that if there are X barrels of oil and Y tons of coal available to be extracted, then that’s exactly how many barrels of oil and tons of coal WILL eventually be extracted – and burned. And if you fail to burn some of it, someone else will take care of that little detail for you. Think China, India, Brazil, etc. You can be cold if you like, but others are not constrained by your delusions.
Which basically boils down to a worldview that is both selfish and nihilistic. Where were you when measures were put in place to counter acid rain?
To paraphrase:
“What you fail to understand is, if there are X tons of lead in the earth, that’s exactly how many tons WILL eventually be extracted – and put into household paint. Or flushed directly into our waterways.”
Now, absent some effort to curb this behaviour you’re probably right – unregulated, unconcerned for the environment and unfettered by rational scientific advice, there is simply no incentive *not* to extract it all. And the more delusional people there are claiming there’s nothing to worry about (such as yourself) the more likely you are to get your wish. You claim this is something that is not understood – but it is precisely because it *is* understood that there is such focus on getting externalities recognised by the global economy, to prevent the otherwise inevitable from happening and give the markets the necessary incentive to focus on other energy sources.
Me, I find your wish a strange and spiteful thing to wish for.
ROFLMAO!!
Putting words in my mouth, eh!
I didn’t “wish” for anything. I stated a fact. If you’re naive enough, you can disagree with said fact. But allow me to ask a question – do you really believe that the Chinese are gonna give up burning coal? Or using oil? Or didn’t you know that they’re building 130 coal plants every year on a 10 year plan? Or maybe you didn’t know that a country (China) that didn’t have but about five miles of paved road at the turn of the century now has 50,000 miles of paved road, is building its SECOND beltway around Beijing and is building 5,000 miles more every year? And just what do you think they’re gonna do with all that infrastructure? Sit there and admire it, maybe?
Then there’s India, which is trying very hard to catch up with China. And then there’s Brazil – and Indonesia and several of the African countries.
And what do you think they’re gonna feed all that infrastructure with? And when do you think they’re gonna stop feeding it?
The answer to that last question is – when they run out of “food” for it. “Food”, of course, being gas, oil and coal.
Yes – I do understand. But you apparently don’t. The developing countries have no/zero/zip/nada incentive to enter into binding agreements which will fetter their development and keep their people in poverty. Your belief in catastrophic warming is and will continue to be unconvincing to them, because they know that failing to develop their capabilities in ways that will enable them to survive will simply ensure their non-survival. As I said, you may not like it. But they don’t care what you like or what you believe.
I told y’all in the beginning – I don’t play the BS games. You might want to start believing that.
A few words are missing:
> […] that this bit of trivia offers [the same kind of argument than]
PS: Thanks for the conceptual science, JCH!
In answer to various responses above, I also wonder how decadal forecasts can be used. Infrastructure should be based on century forecasts. Perhaps a customer for decadal forecasts would be agriculture planning, but I don’t know how or whether anyone does that or needs ten years lead time. Maybe only investors will gain by gambling based on it. Or, if we see a trend towards increasing forest fires or decreased snow at ski resorts, that might be planned for.
My reading of the CLIVAR project suggests that they are hoping to use decadal forecasts as an early warning system, i.e. for coming floods, droughts, etc.
Jim – the one piece of “infrastructure” that ALWAYS serves a community in adversity is wealth. “Action” invariably costs money, and in addition leaves wealth-impairing legislation festering on statute books long after the lunacy that begat it has been silently disavowed.
I got a letter from the electricity people today, telling me in unctuous tones how my electricity will in future cost more, to subsidise “renewables”. This is robbery, and I hold everyone who has ever advocated “action on global warming” responsible.
TomFP:
Yes! I applaud your every word.
These truths need to be shouted so they can be heard above the loud lies of the AGW-alarmists.
Mitigation is pointless.
Adaptation is cheap.
Wealth reduces pollution.
Subsidies for ‘renewables’ are immoral injustice.
Richard
http://www.standpointmag.co.uk/node/3639/full
(h/t BH)
“Madrid’s Universidad Rey Juan Carlos has estimated that the market distortions needed to create one green job destroyed two jobs in other sectors. Since 2000, each green-sector job has cost €570,000, with wind-industry jobs costing €1m.”
You still a technical editor for CoalTrans International, Richard?
Derecho64:
As I said when you raised that irrelevance last time, the answer to that is in my entry on the ‘Denizens’ thread.
Are you still an anonymous ignoramus, Derecho64?
Happy Christmas?
Richard
I may be anonymous, but I’m no ignoramus.
Any evidence to back up yet another unsubstantiated assertion from you?
I’m pretty sure that the grownups herein are capable of putting whatever biases exist in perspective.
Yup. The adults can.
Not sure about the kiddies and badly behaved spoilt toddlers though
> Not sure about the kiddies and badly behaved spoilt toddlers though.
Early warning sign of Alzheimer’s: you find yourself forgetting that the kiddies of today are the adults of tomorrow.
Some of the ‘kiddies and badly behaved spoilt toddlers’ posting here have considerable chronological ages.
It was their mental ages I was referring to.
Any evidence that the ones acting childishly are actually adults? They sounded to me like they were in their early teens. Three-quarters of the You-tube comments seem to be in that category. They’ll grow out of it.
Actually, decadal forecasting is at the same stage seasonal forecasting was two decades ago. If you live in the UK, you may not feel very optimistic about such prospects based on the seasonal forecast analogy http://www.bbc.co.uk/blogs/opensecrets/2010/12/met_office_seasonal_forecasts.html , but what is going on at UKMO does not seem to be state of the art. I’m working on a seasonal forecast post.
Dr Curry:
You say;
“decadal forecasting is at the same stage seasonal forecasting was two decades ago.”
Yes! And that is the important point.
The politicians have been funding climate science to the tune of $billions in each year of those two decades. And what have they got?
Answer, nothing of any use.
So, how much longer will politicians keep pouring money into climate science? Climate scientists should be afraid, very afraid.
Happy Christmas.
Richard
Given that even arch-modeller Andy Lacis can’t provide any examples of real successes of the last thirty years of big efforts, big money and big boasts about their abilities, I do have to wonder what the world would lose if all climate models were immediately unfunded starting Jan 1 2011.
And the more I inquire, the more the answer seems to be ‘nothing’. They have shown zero forecasting ability and zero other utility. In the UK our Met Office has already junked its ‘medium term’ effort as being too difficult – after a series of embarrassing misforecasts and growing public anger that they have been sold an expensive pig in a very expensive poke.
Surely it is time to put these efforts in a decent grave, halt the misery of those who nurture and tend to them and save the resources so squandered. They have achieved nothing at all.
(For Lacis’s remarks, see http://judithcurry.com/2010/12/20/understanding-conservative-religious-resistance-to-climate-science/#comment-24821 and subsequent)
> Surely it is time to put these efforts in a decent grave, halt the misery of those who nurture and tend to them and save the resources so squandered. They have achieved nothing at all.
Sorry, had to step away for a minute. Are we on to the George C. Marshall Institute now? I’m with you on that.
No idea. I am a Brit so your reference has no resonance with me. Please explain.
Reminder: though this blog is US based, the readership is wider and certainly comprises most of The Loyal Colonies as well as The Rebels from Washington.
References to purely NA based things without any explanation are unlikely to be understood. It is, so we are told, *Global* warming after all.
Given that even arch-modeller Andy Lacis can’t provide any examples of real successes of the last thirty years of big efforts, big money and big boasts about their abilities,
At least the denier story is consistent there. Deniers find it convenient to claim that thirty years ago climate scientists’ models predicted global cooling. That way they can then claim that the unprecedented run-up in global warming shows that climate models have been a complete and utter failure.
Unlike most of the denier dogma, which appears to be largely based on the deniers’ theory that scientists know nothing, there’s a certain appealing consistency in the way those two parts of denier dogma mesh together.
Unfortunately for that story, predictions of global cooling were never more than the sort of fringe science that the media loves to blow out of all proportion. That’s why all the examples of global cooling predictions come from the popular press, based on the sort of questionable research one can always find examples of.
The theory that scientists know nothing has been proven to the satisfaction of the deniers. As far as they’re concerned, that’s good enough for them. It’s an internally consistent theory as long as you don’t look too closely at it. Looking closely at theories is a weakness that only scientists succumb to; it makes things so much harder, no wonder they know nothing.
Deniers find it convenient to claim that thirty years ago climate scientists’ models predicted global cooling.
Don’t need to “claim” anything. That happens to be true – and I have some of the documentation. But don’t take my word for it – go to the NYT archives and look around 1973-1975.
As for scientists “knowing nothing” – they generally know a great deal – in depth. And very little on a broader scale. There are, of course, exceptions, but they’re rare. Part of my job description for about 30 years was to provide scientists with the “systems” knowledge they needed to do their jobs effectively.
You really shouldn’t get so exercised about all this. It’s bad for your blood pressure. :-)
Milivoje Vukcevic made a very interesting point about the impact of Russian summer on UK winter. He argued that a negative feedback from Russia would reduce winter temperatures.
Although Milivoje was arguing that this would tend to even out the temperature, thereby mitigating global warming, he did not consider the possibility of overshoot. If as MV says the extra-hot Russian summer can cause a cool European winter, what if it caused an extra-cool winter? What might that look like?
Might we be looking at the start of a ping-pong match between Europe and Russia where each volley takes six months, with the negative feedback reversing the sign of the temperature each six months?
That could result in an increasing oscillation in which every Russian summer is hotter than the last, and every European winter is colder than the last.
That would be really neat to see. I do hope the deniers get their way and prevent people from stopping this. (I have no intellectual investment in policy, only in the science.) It would be a really cool climate experiment, literally on the UK side.
If Russia isn’t completely fried by the summer of 2013 they might find themselves competing with the US to take over a completely ice-locked UK the following winter. Maybe France too by then.
Trouble is, the SH polar front jet is having an excursion as well, bringing coldening to the SH midlatitudes, which is more consistent with stratospheric coupling, i.e. the smaller than usual SH ozone hole this year. NAM and SAM, what a pair.
Sorry forgot link.
http://squall.sfsu.edu/gif/jetstream_sohem_00.gif
Dr. Prat
I was looking forward to the GW, since I think it would be only moderate and beneficial.
If there is such a scenario as described above, then I would disagree about Siberia being warmed in the summer months by CO2. High insolation would increase the radiative effect, but not via CO2, which is there at a very low level (as any global CO2 distribution map will show), but via CH4 (methane), which as a GHG is about 20 times more effective than CO2.
There is a positive feedback here too: progressively higher summer temperatures (I think initiated by a natural warming cycle) would release more and more methane, of which there are huge amounts in the Siberian permafrost. This may make CO2’s role far less important.
would release more and more methane, of which there are huge amounts in the Siberian permafrost. This may make CO2’s role far less important.
Exactly right. I said as much in the eighth paragraph of this comment concerning how tipping points might shift the research interest from CO2 to methane.
But that’s for the time scale of modern warming, which is measured in decades. For paleoclimatology it is in millennia, much longer than the lifetime of methane, which degrades to CO2 in a matter of decades.
For that reason CO2 is the big control knob for geologists. Because of that, and because methane hasn’t figured much so far in modern warming, there is a tendency to jump to the conclusion that CO2 must be the big control knob for modern warming too.
Serious permafrost melting will disabuse people of that idea.
Extra info:
http://www.nsf.gov/news/news_summ.jsp?cntn_id=116532&org=NSF&from=news
In the area, the CO2 concentration is about 370 ppm.
From the link above: in some areas the CH4 summer concentration is more than 250x the background level of 1.85 ppm, which would make it ~450 ppm.
Methane is a 20x (the link quotes 30x) more potent GHG than CO2, so the total summer CH4 radiative effect would be 25-35 times that of CO2 in the areas concerned.
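For what it is worth, here is a quick back-of-the-envelope check of the arithmetic above, taking the quoted figures (a 1.85 ppm CH4 background, a 250x local enhancement, 370 ppm CO2, and a 20-30x potency factor) entirely at face value; this is only a sketch of the commenter's reasoning, not a radiative-transfer calculation.

```python
# Rough check of the CH4-vs-CO2 arithmetic quoted above.
# All inputs are the figures quoted in the comment, not measured or modelled values.

ch4_background_ppm = 1.85      # quoted background CH4 concentration
ch4_enhancement = 250          # "more than 250x background" in the areas concerned
co2_ppm = 370                  # quoted local CO2 concentration
potency_factors = (20, 30)     # quoted CH4/CO2 potency range

ch4_local_ppm = ch4_background_ppm * ch4_enhancement   # ~462 ppm, i.e. the "~450 ppm" above

for potency in potency_factors:
    ratio = ch4_local_ppm * potency / co2_ppm
    print(f"potency {potency}x -> CH4 effect ~{ratio:.0f}x that of CO2")
# Prints ~25x and ~38x, close to the "25-35 times" range stated above.
```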
Reading about the 300-year-old bridges, buildings and roads engineered a long time ago prompts my first attempt at commenting on Dr. Judith Curry’s blog. Being a bit of a history buff, and not having any expertise in engineering nor in gabbing about esoteric subjects with fancy words, let me point out some generally forgotten facts. An engineer of the 16th or 17th century was mostly a so-called philosopher with a wide education in numerous fields, from historic climate patterns to current weather, as well as multilingual (usually Latin, French and German), with an interest in building structures. He (there are no known “famous” female scholars) would have read Descartes, for example, and would have known about Kepler and Aristotelian physical metaphysics. They would have understood that the MWP was followed by a period of cooler climes. They lived in it. Building a road or a cathedral could and did last decades, partly because of weather and funding. A generally shorter lifespan dictated that information was passed on from father to son. The old-fashioned “internet” called mail was widely used between philosophers, artists and physicians of the healing arts in various countries, kingdoms and fiefdoms.
The point of all this history lesson? Just because we had 30 years of warming and now may have entered into 30 years of cooling before warming starts again, let us not forget that this has all happened before. Myopically believing that our times are different because of this or that does not change the past.
A fundamental precept in predictability is the notion that long-lived variations, such as those associated with the PDO or changes in the strength of the Atlantic MOC, can be predicted for a significant fraction of their lifetimes.
My (nonconformist) observations of oceanic oscillations show that the nature of these oscillations is misunderstood. Due to de-trended calculations, the real and fundamental meaning of a possible long-term rise or fall in the underlying cause is lost. Deprived of the cause, the oceanic oscillations would revert to the default state of ‘no oscillation’.
The fundamental error of treating oscillations in their de-trended form is more than clear in this set of graphs:
http://www.vukcevic.talktalk.net/NPG.htm
All the underlying causes have been on a rising slope since the 1860s, coinciding with the general trend of the global temperature rise. Further, there are short periods of fall towards the ‘default state’ which can clearly be identified with well-known lows in the global temperatures. Such a trough, according to this analysis, is currently evident, which leads to the conclusion that another significant dip in global temperatures can be expected with high probability.
Vukcevic, old buddy!,
Ocean density changes my man! The deeper you go the more dense and pressure of the material for energy to try and penetrate. In shallow water, the material under the water absorbs heat but the deeper you go, the more pressure. Cannot forget also the rotation of the planet and the rotation of the sun. These make short wave energy due to NOT being stationary.
By the way…MERRY CHRISTMAS!!!
Joe, you persist in posting these scientifically illiterate perorations.
Water is incompressible. It does not get more dense as pressure increases. Energy moves at the same slow rate (conductively) at depth as it does on the surface.
Your announcements sound like quotations from Alice in Wonderland. Except that those were clever satires.
I take it you have never worked with an airless paint sprayer.
I take it you have never worked with an airless paint sprayer.
I was going to guess an unventilated garage, but paint fumes would explain it too.
People have died creating the footings under the Brooklyn Bridge due to pressure density changes before they realized what it was that was killing them.
Earth to JL: The bends don’t result from density changes in your precious bodily fluids. You need to read this. Or the Wikipedia article.
Do not air bubbles get compressed?
People have died creating the footings under the Brooklyn Bridge due to pressure density changes before they realized what it was that was killing them.
The behaviour of gasses dissolved in water is not relevant. Density is gm/cc. That does not change under pressure. Temperature affects water density, but not pressure. There are some exotic “ices” which can form under high-pressure regimes, though.
“Temperature affects water density, but not pressure.”
Read and understand this line you have said.
The more depth into the ocean, the more pressure and the more density from colder depths.
Two different actions are happening but are generating a more complex situation and interaction.
This is why we have a poor understanding of the atmosphere actions as there are many actions happening at the same time.
Yet our mindset of thinking can only focus on single actions and forgetting the complexity of multi-action events. “Individual Areas of Laws of Science” in a multi-complex situation.
OK, now I’m gonna get brutal.
JL, you’re full of it, and ignorant as a post. Look up “compressibility of water” in a physics text. It is so minuscule that for all practical and almost all scientific purposes it is incompressible. “The low compressibility of water means that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume.” (Wiki)
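As a side note, the 1.8% figure quoted from Wikipedia can be reproduced with a one-line calculation, assuming a typical isothermal compressibility of water of about 4.6e-10 1/Pa (my assumed value, not from the comment) held constant with depth:

```python
# Sanity check of the ~1.8% volume decrease at 4 km depth quoted above.

kappa = 4.6e-10    # assumed isothermal compressibility of water, 1/Pa
delta_p = 40e6     # pressure at ~4 km depth, Pa (40 MPa, as quoted)

fractional_volume_change = kappa * delta_p
print(f"Volume decrease: {fractional_volume_change:.1%}")   # ~1.8%
```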
And don’t come back with any of your usual cr*P about the fundamental errors in Science that you know better than.
To borrow a very apropos Brit expression, Sod off and get educated. You’re making me embarrassed to be a Canuck.
Where does centrifugal force fit into this “wiki” science you are preaching? Planet is rotating. Like a centrifuge.
NOT INCLUDED IN CURRENT SCIENCE!
Rotating compared with which static background?
I think you really should study some basic stuff about relative motion. Most school science books do this.
Alternatively, rather than waving ‘ROTATION’ everywhere you can find an opportunity, perhaps you might write a short piece on why you think it is important. And with rather more detail than just SHOUTING SLOGANS at us.
To say ‘it’s not taken into account’ is insufficient. You also need to show why you think that it matters rather than merely asserting that it is.
The floor is yours.
Latimer,
This is quite a massive area now.
Literally reprogramming your head to what has been hammered as LAWS and facts are still theories grown upon.
The shape of our planet has different energies working from equator to poles at different variances to generate the balance of rotational flow.
Cloudcover and storms DO NOT CROSS THE EQUATOR, due to the different force generations.
Centrifugal force is strongest at the equator generating the bulge of gases and is weaker towards the poles due to two dimensional rotation. Current LAWS state that relativity is 9.8m/sec/sec. The planet rotated faster in the past and the relativity HAD to be 10.3m/sec/sec due to centrifugal force exerting faster with the speed of rotation. Salt in the oceans compensated by more dense and holding more salt.
Um.
Still not clear to me what you mean. And I think that you mean ‘the acceleration due to gravity’ is 9.8 m/sec/sec. Not sure where the idea of relativity comes into your ‘model’.
On what basis do you say that this number (however called) ‘had to be bigger in the past’? What evidence can you present for your assertion?
And I’m afraid that without further explanation in terms that I can understand (A level Physics will do),
‘The shape of our planet has different energies working from equator to poles at different variances to generate the balance of rotational flow’ is merely gibberish to me.
This is not a consequence of my ‘programming’, but of your inability to describe what you mean.
Sorry Latimer!
When you get too far ahead of science, it is extremely hard to go back to the beginning, where most of the rest of society currently is.
I doubt if I explained it at another approach, you would understand it any better.
Possibly if you understood a centrifuge and the different speeds to change material densities.
Our planet is in a slow centrifugal speed compared to a planet rotating much faster.
I’m not trying to dodge the question…just figure out a better explanation.
Yep. I have a Masters in Chemistry. We use a lot of centrifuges for separating things. I am familiar with the principle.
And therefore….??? What do you think I am missing??
PS – less patronising answers are more likely to be understood than more patronising ones. Your last post was towards the latter end of the spectrum.
Latimer, quit toying with the poor thing! Did you torture insects and small animals as a child?
Besides, you’re just encouraging more floods of self-important bumpf, and my scroll-by finger is already getting cramps.
Latimer,
That explains a great deal on your hyper focus in one area to the exclusion of all others and why the current science LAWS are absolute.
No matter even if you understood what I was getting to, you would NEVER believe in it as you have prejudice.
OK Joe
We’ll call it a day there, I think.
You’ve had plenty of opportunity to explain what your theories are and haven’t managed to do much more than burble without any real point.
Ciao.
I’ve clicked on your graphs before; I don’t quite understand them or the gateways, and would appreciate clarification. The way I view it, the PDO is in its cold phase and the AMO is still in its warm phase; sometime in the next 10-20 years both the PDO and AMO will be in their cold phase, which would mean a cool dip in temperatures, maybe not as cool as previous dips owing to AGW. That is my take on it.
Hi Judith. I appreciate your tempered approach. It certainly appears that cosmic rays play a role in the weather and climate. That in and of itself probably wouldn’t limit the higher max temperature due to higher concentrations of CO2. A separate cloud mechanism (or some other mechanism) would have to come into play to moderate the climate response to increased energy in the troposphere due to higher CO2. Some have argued that such a mechanism can’t exist because the climate sensitivity has been determined from historical data. Do you agree with that assessment or do you see room for some yet to be delineated mechanism that could damp the climate’s response to higher CO2, making the global temperature less than would be expected from radiative and lapse rate considerations?
Andy Lacis acknowledged that the oceanic cycles are important in understanding a possible temporary cooling of surface temperature, but when I asked him how much of the warming the oceanic cycles might have been responsible for in their positive phase from 1976-2003 he didn’t reply.
In fact, I’ve been asking this question of AGW proponents for several years, and not one of them has answered the question. They change the subject, accuse me of being a flat earth denier, obfuscate the issue or simply ignore the question.
It is simple inescapable logic. If natural variation can reduce temperatures, it can raise them too. If it was responsible for some of the warming, then sensitivity is lower than estimated.
I’d be interested in Judith’s take on this.
It is simple inescapable logic. If natural variation can reduce temperatures, it can raise them too.
Consider the claims that natural processes can “mask” a warming signal and later amplify it (e.g. AMO.) I think nobody is claiming natural processes do nothing. What did I miss in your argument?
That, as well as “later amplify it”, they might have earlier amplified it. The PDO and AMO were both strongly positive during the 1976-2003 period. How much of the warming attributed to CO2 by the IPCC were they responsible for?
Tallbloke,
So far as I am able to find, the “Global Average Temperature” range is very narrow, a maximum 10C swing, but the range of anomalies for regions is quite wide.
The usual chirruping of crickets. No-one wants to answer this question from the warmist side.
I can see why too.
Didn’t M. Latif, an AGW scientist, give an estimated range?
Dr. Curry
I am a bit slow in writing explanations; it is a work in progress, but I may eventually need a bit of help with it.
‘Gateway’ is a symbolic name; it can open and close in sequence, generating oscillations, but if it stays fully open, partially open or not open at all, the long-term oscillations will disappear.
The way I view it, the PDO is in cold phase, AMO still in warm phase, say sometime in next 10-20 years, both PDO and AMO will be in cold phase…
If I were a competent scientist I would challenge that, but I will just suggest: they may or may not, or they may just bobble up and down with no significant longer oscillating period discernible, as was the case with both the PDO and AMO for some time prior to 1900, when global temperatures and larger oscillations took off.
I always look to the CETs for the long term variability reference:
http://www.vukcevic.talktalk.net/CET-GMF.htm (ignore the green line, here just coincidental); following the sudden recovery after 1720, oscillations are slow to build up (an integration process) in either intensity or length, culminating in the 20th century’s large deviations. I suspect that if the AMO were available it would show a similar trend. The source of the 1670-1700 plunge in the temperature data is identified as the same one as for the de-trended N. Atlantic Gateway (last of the graphs on the NPG webpage).
MC&HNY to all.
Judith
Late last year, the WCRP sent out a survey called the WCRP Community-wide Consultation on Model Evaluation and Improvement.
The survey asked 6 questions. Respondents were to send their answers electronically to Anna Pirani (apiraniATprinceton.edu) by 19th Sep 2009.
I have the questionnaire on my USB key but the PDF format won’t let me cut and paste. (I’d be glad to email it to you if you wish.)
The survey was sent to various groups, including:
NWP and seasonal forecasting centres
World Climate Modelling Centres
WGCM and associated MIPS
CLIVAR modelling groups
CLIVAR Regional and monsoonal panels
WCRP taskforce on regional climate downscaling
and various others.
This was a “bottom up” survey about the key deficiencies of regional and global NWP and climate models.
Is it worth trying to acquire the results of this survey?
Interesting, I haven’t heard about this before; I would be interested in following up on it. I am working on a post, “Whither Climate Models?”; I have some material, but this would definitely be interesting and relevant.
Those who are trying to predict weather/climate on a decadal timescale with the underlying assumption that the atmosphere will keep warming due to AGHGs, and that this warmer atmosphere will in turn warm the oceans, are committed to failure.
Baa,
You are soooooooooooo correct!!!
Atmosphere gases cannot penetrate the oceans density and the deeper you go, the more pressure is exerted.
Many are not familiar with CLIVAR, who they are and what they do.
This is a group established in 1995 with a 15-year tenure. Their goal is to improve local and regional climate predictions.
They are equipped with the latest and best, including numerous ocean-going vessels and aircraft.
They consist of 13 panels as part of a scientific steering group reporting to the project office.
This http://www.clivar.org/organization/organization.php link is a good start.
Baa Humbug, thanks for this clarification, I’ll add the link to the main post.
Judith,
It gives me such a good feeling when I get another breakthrough…
Where is the shortwave radiation from the sun produced and where is the long wave radiation produced?
The corona produces shortwave radiation due to the speed of the rotational spin, and the longwave radiation is produced from the sun’s core.
From my own research, we are currently in a fully activated Ice Age.
Oceans are the key to understanding the reflectivity and the triggers for how the oceans are currently cooling. You need to understand how water is capable of survival and has generated an extremely narrow window of temperature range to try and stay as a mass. CO2 is also inhibiting the full penetration of solar energy to the planet’s surface.
We are measuring the atmospheric temperature above the ground. No one is measuring the temperature penetration into the ground in the same areas to get an understanding of regional solar penetration.
It is hard to read this post without becoming angry. The premise seems to be that we have solved the AGW modeling problem on the global, long term scale, so we might as well focus down onto regions and decades, to help the local policy makers as well. We can use statistical analysis to get past these pesky, short term local natural variations, because after all it is really all about AGW. This is simply ridiculous, a paradigm of AGW hubris. I would defund the modeling altogether.
> I would defund the modeling altogether.
Of course you would. Inconvenient truths and all that.
The problem is that there are no truths in the modeling. The modeling is an inexhaustible source of scary speculations. It is worse than waste, it is crying fire in a crowded theater. It has to stop. What we need is to understand why climate changes, not an endless stream of model-driven AGW horror stories. I would fund actual science, that is, trying to explain nature.
You will never be able to understand why the climate changes without models.
Derech064 writes “You will never be able to understand why the climate changes without models.”
On the contrary, the only way we will ever find out why climate changes is with measured data; something we can rely on, instead of the completely useless output of non-validated models.
Ironically, a lot of “skepticism” revolves around claims that “measured data” is wrong. To make sense of observations, you need a model.
You’ve been reading your book again haven’t you? Bless.
How would you know? You refuse to read – you just write on blogs.
What’s that saying about silence, speaking, doubt, and foolishness again?
The book reviews for your favourite book make that point. As do the extracts kindly put in here by those with cash to spare.
But F = G m1 m2 / r^2 requires neither supercomputers to run it nor hordes of sloppy programmers to write it, yet it is capable of producing useful predictions about all sorts of phenomena. It is qualitatively different from ‘climate models’.
That you fail to grasp this basic point and rely on schoolboy semantics to make glib points merely shows how shallow your understanding really is. Even if you have read the big book you’re so pleased with.
D64
“Ironically, a lot of “skepticism” revolves around claims that “measured data” is wrong. To make sense of observations, you need a model.”
Actually, to make sense of measured data you need a RELIABLE, properly verified model. Otherwise what you get is a confused mess.
We have been doing science for 400 years without models. But I am specifically referring to the AGW models. Them we can do without. They have outlived their usefulness, if they ever had any.
> We have been doing science for 400 years without models.
That would be news to Galileo, Copernicus, Kepler, Newton, Einstein and so on.
I am talking about huge computer models, which none of these pioneering folks you list had or needed. Apparently you mean something else by “model.” No wonder we can’t understand you. In any case my point is that these huge models have passed the point of diminishing returns, as it were. They are diverting money and talent away from the important scientific questions, which have to do with natural variability and predictability.
Ignore D064. He is giggling with teenage glee because his favourite big book has pointed out that in some abstract philosophical sense the statement E = mc^2 is also a ‘model’ of the universe.
And having only recently discovered this clever bit of gee whizz semantics he wishes to show off his newfound knowledge at every occasion – relevant or not. Having no sense of proportion he cannot see the wood for the trees…best left in his own little world.
I find his comments to be generally cogent, albeit irritating as we are on opposite sides.
The term model is in fact also used to mean a cognitive construct in phil sci, but that was not my meaning so he basically changed the subject on me. On this blog model almost always means computer model.
Regarding the list it is just what we need: big thoughts from new directions.
Do you have evidence that climate models are “diverting money and talent away”? I suspect you hate models (based on no knowledge of them AFAICT) and that’s what your opinion is based on.
As you haven’t been able to show that climate models are any good at all at anything at all, it is axiomatic that money is being diverted away from something more useful to pay for them.
Not so sure about talent…………
I could point you to studies of model performance, but I won’t bother, since you don’t read them.
See my challenge to you at 11:49.
OK
Here’s the challenge I set to Andrew Lacis – who managed a very lacklustre and unconvincing reply.
In your case, I’d just like you to summarise – in a sentence or two only – exactly what you think each ‘proof point’ shows before you quote a reference. Just a list of papers will not constitute a proof point.
As I am sure you are aware I am trying to discover exactly what value the taxpayers have received for their substantial multibillion dollar investment in these models. As you are intimately familiar with them, who better to cast the most flattering light possible on their achievements?
Here’s my question exactly as posed to Andy Lacis:
‘Please provide 5- 10 recent ‘proof points’ which you would draw to our attention as demonstrations that your sophisticated climate models are actually modelling the Earth’s climate accurately. And that the general populace of citizens and policy makers should therefore give any weight at all to their output in decision making.
That they all agree with each other to a greater or lesser degree is NOT a proof point of anything other than having a similar design. That they all predict that global temperatures will rise as a result of CO2 is NOT a proof point. Arrhenius managed that in 1906.
Please phrase them as you would in a popular science magazine like Scientific American or BBC Pubs. Focus.
Example
Model x (run in 1998) accurately predicted the global temperature record of the period 2000-2010 to within 10% = Proof Point 1.
and so on…
You can choose any proof points you like to show how well the models work. But I’m sure you have a string of great examples easily to hand and will be spoilt for choice. Please pick the ones with the most impact and ‘wow’ factor.
Why would anyone bother to tell you to read up on anything? You’ll make a million excuses as to why you can’t.
You’re not interested in learning anything.
So you have no proof points at all that you are willing to share on this blog for the purpose and utility of climate modelling?
Whatever you may or may not think of my own abilities, recall if you will that this is a public forum and many others than you and I read and post here.
An even wider readership just ‘lurk’ (unhappy term for it), read and inwardly digest. They form their opinions more quietly than you and I, but they form them nonetheless.
I’m sure that, like me, they will be disappointed to discover that one of the chief and most vocal proponents of climate models and how essential they are can find absolutely nothing to say in their favour when openly invited to pick *any evidence at all* to show how good they are.
Perhaps they will conclude that the evidence just doesn’t exist at all – that if it does you don’t understand it well enough to be confident in publishing and discussing it – or that you don’t have any real grasp of the subject and have just been blowing smoke.
My best guess is that it’s a combination of the first and the third. Given that Andy Lacis – who is about as close to a real bona fide climate modeller as you can get (see his CV) – couldn’t come up with much, I’m sure that it is thin pickings anyway. But maybe you are not quite the mysterious expert that you like to portray yourself as either.
And to dismiss a golden opportunity to shine because you don’t happen to like the character of the questioner is the behaviour of a petulant spoilt toddler rather than that of a mature and rational scientist.
But I could be wrong. The floor is still yours.
See this earlier thread: specifically the section on Recent developments.
Thanks Judith.
But interesting though your link is about future improvements, I was hoping that Derech064 could provide a list more suitable to persuade ‘an educated Joe Sixpack’ that anything worthwhile has already been achieved…after 25 years and at least a billion dollars of ‘investment’.
‘Free beer tomorrow’ is a good marketing ploy for a new pub. But not when the punters have been disappointed by its non-appearance for two decades and more.
You do seem to make circular arguments when it comes to the current effectiveness of climate models. You must agree that there are not currently any models that are effective for policy making decisions regarding climate change.
I am not saying that further investment in climate model development is a bad idea; just that the investment to date has not yielded a useful result. Future investment needs to be more carefully managed if taxpayer funds are used.
“Current effectiveness” is dependent on how that’s defined.
My question regarding effectiveness was straightforward and simple: do you agree that “there are not currently any models that are effective for policy making decisions regarding climate change”?
The key point being that policy decisions need to be made for specific regions/nation states, and there are no models that effectively predict temperature/precipitation at that level.
They all made the same mistake by not including that the planet is rotating. They also did not understand the energy of centrifugal force.
What I have been trying to point out, to me, is obvious. There are two aspects of physics; theoretical and experimental. They are not separate entities. They are one ball of wax; opposite sides of the same coin. Each supports the other.
Currently, from what I can make out, there are many, many ideas as to how the atmosphere works. One can call them models, or hypotheses, or theories, or whatever. It does not matter; they exist. What we don’t know is which of these many ideas are closest to what is actually happening in the atmosphere.
We will make little progress by going on trying to debate which of these ideas is closest to reality. The only way forward is to get more and more observed data, so that we can determine which idea is, in fact, correct.
If you want to see this concept in action, look no further than the collider at CERN. Theoretical physicists have been contemplating such things as the God Particle; now they may get the observed data to see how right, or wrong, they were.
We seem to be in the same position with climate science. All sorts of ideas, models, hypotheses, theories, etc, and not enough observed data to determine which are correct. That is the point I am trying to make.
You have not factored in mistakes in physics.
300 year old absolute LAWS that disintegrate when time of the past actions of the planet are included.
Law of relativity at objects falling at 9.8 m/sec/sec do not apply as the planet was rotating faster.
If you really cared you would be advocating something useful for the modeling centres, say demolition and planting trees or housing the homeless.
You’ll do away with climate models only after overcoming the strenuous objections of the United States military, who have enjoyed a close and fruitful relationship with NOAA and NASA, just two examples, for a very long time.
From U.S. Global Change Research Program (USGCRP) :
Climate changes are underway in the United States and are projected to grow. Climate-related changes are already observed in the United States and its coastal waters. These include increases in heavy downpours, rising temperature and sea level, rapidly retreating glaciers, thawing permafrost, lengthening growing seasons, lengthening ice-free seasons in the ocean and on lakes and rivers, earlier snowmelt, and alterations in river flows. These changes are projected to grow.
I found this, and an alarming number of other related web sites from the CLIVAR link in the post. Why are there so many organizations on climate change and who is footing the bill for all of them? But I digress.
Are the listed “problems” on this web site due to climate change really happening? I don’t think so. Was the growing season longer this year? Where’s the proof?
For example, here is a report on the first day a bare tree was sighted. There is no pattern here. Of course, the author cautions that this isn’t evidence that climate change isn’t having an impact. But I’m not buying that.
http://www.naturescalendar.org.uk/NR/rdonlyres/83C1B4FD-891F-405F-A4EF-580D7B262479/0/climatevariabilityautumn08.pdf
10 years isn’t long enough to determine a trend.
OK, so do you have any evidence to present?
Google “first flowering climate change” and the like.
OK, I’m not finding anything definitive for first flowering. It’s either not up to date, no error bars, or a lot of cautions that past data may need to be re-interpreted. Do you have something not behind a pay wall that is a solid study?
I’m tired of all these claims but no proof.
Did you go looking for any?
See the above entry – yes I did.
Yep, ten years is too short a period but getting close. I think it was Tamino who figured 14 years was the minimum for determining a statistically significant trend. With the PDO shift, a roughly neutral trend should be apparent pretty soon. Should the AMO shift, the synchronization of it with the PDO should produce a negative trend eventually.
https://pantherfile.uwm.edu/aatsonis/www/2007GL030288.pdf
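For readers wondering what a "statistically significant trend" means operationally here, a minimal sketch is below. It fits an ordinary least-squares line to a short series of annual anomalies and reports the p-value of the slope; the data are synthetic placeholders, and with real, autocorrelated temperature data the uncertainty is larger, which is one reason short records rarely clear the 95% bar.

```python
# Minimal sketch of testing a short temperature record for a significant trend.
# The anomalies below are synthetic, for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1995, 2010)                                   # a 15-year window
anoms = 0.012 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

fit = stats.linregress(years, anoms)
print(f"trend = {fit.slope * 10:+.3f} C/decade, p-value = {fit.pvalue:.2f}")
# The trend is "significant at the 95% level" only if the p-value is below 0.05.
```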
Ten years ago a “trend” was determined in 5 years. Then it changed to 8 years. Then 15 or 30, depending on who was defining the terms. It’s been a moving target.
USGCRP is the multi-agency USA climate change research program. Funded at almost $2 billion per year, it represents about half of the world’s climate research efforts, dwarfing the efforts of other countries. It is very pro-AGW, as dictated by the US government endorsement of AGW in 1992. These people control the climate science funding.
On the other hand, Americans spent $7.1 billion on potato chips just in 2009.
Let people spend their own money and they won’t waste it on irrelevant fripperies the way governments do…
Just pointing out that the scale of the aghastness on Mr. Wojick’s part needs to be put into perspective.
You come across as someone with very strong libertarian (as it’s called here in the US) tendencies, Mr. Alder. Interesting.
Whatever.
Potato chips are irrelevant, Dere. I am talking about climate research funding and what is wrong with it. You do have a knack for changing the subject when it does not suit you. By the way it is Dr. Wojick if you want to use prefixes. My Ph.D. is in the logic of science. Yours? Oh and David Wojick is my real name.
Potato chips are quite relevant – if we Americans are willing to spend ~3X as much on them as we do on some definition of “climate science”, then your complaints that the money is misspent ring rather hollow.
‘Course, from someone with a background in “freeing” industry from environmental regulation, I would expect you to opine that *any* government spending on just about *any* aspect of the environment would be a waste.
Oh, and I don’t care much for titles nor does the use of real names impress.
Why not fund Climate Science by donation then you would know how much people wanted it or what they thought it is worth
I think all government functions should be financed by donations as well – we’ll see if the American citizenry is willing to pony up ~$2,500 per capita to fund the military budget.
Potato chips are quite relevant – if we Americans are willing to spend ~3X as much on them as we do on some definition of “climate science”,
The cost/benefit ratio is far better for potato chips. At least potato chips give us something to munch on while we read about how we’re all gonna fry and become crispy critters.
we’ll see if the American citizenry is willing to pony up ~$2,500 per capita to fund the military budget.
They might not be. But at least it would be “their” choice that then makes them slaves. Following the Church of AGW dogma would simply lead to slavery by a different route that would eliminate their choice in the matter.
Given a choice, I’ll take choice over dogma.
Thanks for the “slavery” rhetorical flourish. Let me guess – taxes are theft, aren’t they?
Sometimes they’re necessary – to build infrastructure or to “provide for the common defense”.
They’re theft when da gubmint is taking from one person to give to another. That’s not a legitimate function of government.
As I figured…
Predictions of species going extinct due to a few hundred PPM CO2, a catastrophic result, are based on what? Do we have a 100 year long experiment where the global temp went up 3 C? I don’t think so. So just like the warming itself, we are to trust scientists “intuition.” Even without a lot of mutation, the native genetic variability might be enough for most species to adapt to such a slow change.
One of the tells that climate science is out of control is when atmospheric scientists make predictions about biological systems they know nothing about.
I’m just a layperson, but looking at the graph, it appears to me the models predicted 2000 to 2009 would be the warmest decade in the instrument record, and I believe I’ve read 2000 to 2009 was the warmest decade in the instrument record.
JCH:
You made the same post as this above and I there gave you this answer.
Richard
JCH:
The Earth has been warming from the Little Ice Age for centuries so of course the most recent decade is the warmest of recent decades. But, importantly, the global temperature did not rise over the last decade while the models predicted that it would. So, the models were wrong (again).
Happy Christmas.
Richard
So from your post we can conclude that cold causes warmth, right, because that is what you are saying.
In reality, we know pretty much what caused the Little Ice Age, maybe Ben Franklin’s discussion of sunsets might clue you in.
And just because you keep saying global temperature didn’t rise over the last decade doesn’t make it true, but keep repeating the falsehoods.
bobdroege:
Withdraw your unfounded assertion that I “keep repeating the falsehoods”. Apologise for that lie.
I said that there has been no rise in mean global temperature for a decade. I could have said that there has been no change (rise or fall) in mean global temperature at 95% statistical confidence for the last 15 years.
Are you another of the multiple personas of Derecho64?
Richard
Roger Harrabin (BBC): Do you agree that from 1995 to the present there has been no statistically-significant global warming?
Phil Jones: Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.
As I have commented elsewhere, somehow the climatologists have managed to insert the assumption that the “confidence levels” used by soft squishy quasi-sciences like psychology are appropriate. The “95%” level is used in no hard sciences, especially physics, of which “climatology” claims (risibly) to be a sub-topic.
Confidence levels for rejecting the H0 are set high to offset the normal and inevitable confirmation biases of investigators, the filtering of published results such that only “positive” studies appear, data selectivity and snooping, etc., etc. One chance in 20 that some or all of those are responsible for a result is far too high, especially in a field with so little hard data to work with, and with such huge incentives to conform to the well-heeled and very domineering consensus.
In a word, a 95% level is junk.
Brian H:
Yes, I agree. However, 95% confidence is the large degree of acceptance that climatologists have agreed among themselves that they accept. So, I avoid any possibility of rational dispute if I accept their assumption of required confidence when considering their data.
In this case, the issue was whether there was or was not a discernible change in temperature over the last decade. There was not.
Richard
You cede too much. They “agree” on that level because it’s so easy to game. It’s worthless.
Brian H:
Point taken.
Richard
A summer with Low Sea Ice Extent is again followed by a winter with High Arctic Ocean Effect Snow. This is 50’s Science, right on the money. Ewing and Donn did predict this would happen. This high Snow Cover Extent will have a cooling effect. I am really concerned with the state of climate science. This entire thread does not mention Albedo even once. It is snowing like crazy right now because Arctic Ice melted a lot this past summer.
As more ice melts it will snow even more. That will raise the Albedo of Earth and cool it off. This is the temperature stabilizing mechanism that Earth uses and you climate scientists do not recognize it.
You can look on NOAA’s website yourself and see that years with low Sea Ice Extent are followed by winters with fierce snows.
Does the NH currently have a high snow cover extent?
Peak snow cover has trended up a bit.
http://www.climate4you.com/SnowCover.htm#Northern%20hemisphere%20weekly%20snow%20cover%20since%201966
Sure doesn’t look like a trend upwards in this graph:
http://climate.rutgers.edu/snowcover/chart_anom.php?ui_set=0&ui_region=nhland&ui_month=11
But see this:
There have been a number of indications that January 2008 has been an exceptional month for winter weather in not only North America, but the entire Northern Hemisphere.
We’ve had anecdotal evidence of odd weather in the form of wire reports from Saudi Arabia, Iraq, and China where record setting cold and snow has been felt with intensity not seen for 30-100 years, depending on the region.
From our remote sensing groups, we have reports of significant negative anomalies in both the RSS and UAH global satellite data for the lower troposphere. Then there’s NOAA’s announcement that January 2008 was below 20th century averages, plus news that Arctic sea ice has quickly recovered from the record low extent of Summer 2007. Finally, there’s the massive La Nina said to be the driver of all this, but which may be a harbinger of a more permanent phase shift according to veteran forecaster Joe Bastardi.
Now to add to this, we have images and reports from NOAA and Rutgers University of large anomalies of snow cover extent for the northern hemisphere in January 2008.
http://wattsupwiththat.com/2008/02/09/jan08-northern-hemisphere-snow-cover-largest-since-1966/
Here’s a new one:
Here is the image from the AQUA satellite; as you can see, except for a small part in the southwest, snow is everywhere.
http://wattsupwiththat.com/2010/12/24/snowfall-a-very-rare-and-exciting-event/#more-30161
Since when are the British Isles indicative of the entire NH?
More Wattsian cherry-picking. Look at the graph I presented earlier – by season, 2008 wasn’t remarkable.
It’s odd that we can see a couple of recent higher highs in this chart:
http://www.climate4you.com/SnowCover.htm#Northern%20hemisphere%20weekly%20snow%20cover%20since%201966
but not the one of “anomalies” that you linked D64. It’s funny how data gets morphed when it is converted to “anomalies.”
You seem to ask a lot of questions but supply little in the way of supporting data.
It will raise the albedo at high latitudes during the winter when there isn’t much sunlight to reflect in the first place. At 50 degrees N, slightly south of London, TOA insolation on 12/21 is about 86 W/m2 compared to the annual average of 285 W/m2. At 55 N, TOA insolation is only 53.5 W/m2. Snow cover would have to last through February at least in areas that aren’t normally covered with snow to have much of an effect.
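Those insolation figures are easy to reproduce with the standard daily-mean top-of-atmosphere formula; the sketch below uses a fixed solar constant and ignores the Earth-Sun distance variation, so it comes out a few W/m2 below the values quoted above.

```python
# Daily-mean TOA insolation at the winter solstice for the latitudes quoted above.
# Fixed solar constant; no Earth-Sun distance correction.

import numpy as np

S0 = 1361.0  # solar constant, W/m2

def daily_mean_toa(lat_deg, declination_deg):
    lat, dec = np.radians(lat_deg), np.radians(declination_deg)
    # hour angle of sunrise/sunset, clipped to handle polar day and night
    h0 = np.arccos(np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0))
    return (S0 / np.pi) * (h0 * np.sin(lat) * np.sin(dec)
                           + np.cos(lat) * np.cos(dec) * np.sin(h0))

for lat in (50, 55):
    print(f"{lat}N on Dec 21: ~{daily_mean_toa(lat, -23.44):.0f} W/m2")
# ~83 and ~52 W/m2, in line with the ~86 and ~53.5 W/m2 quoted above.
```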
When the Arctic is frozen, ice retreats and we warm.
When the Arctic is thawed, ice advances and we cool.
It is this simple. Many other factors have an influence, but the stabilizing effect of ice and water is the primary thing that keeps our temperature in the range that earth has had for the past 10,000 years.
Dr. Curry: One thing that puzzles me is that several article posts ago I posted a long comment about intrinsic unpredictability, due to nonlinear dynamics (or chaos). You said you agreed, but I see nothing about the likelihood of unpredictability in the present post.
If anything the assumption seems to be that local climate is somehow more predictable than global climate. But AGW modelers usually discount nonlinear dynamics on the grounds that they are looking at gross global changes. More CO2 must mean more heat energy, they claim. How that all plays out locally is usually regarded as intractable and irrelevant. Now suddenly things seem to have reversed.
The assumption seems to be that of course we can help these poor local people, by telling them how much it is going to rain, or not rain, or snow, or not snow, or how hot it is going to be, or not, etc., so they can take the proper steps to prepare for AGW.
I see this whole movement as nothing but the environmentalists recognizing that all politics is local. Since global scares have not worked they now intend to try local scares.
I see this whole movement as nothing but the environmentalists recognizing that all politics is local. Since global scares have not worked they now intend to try local scares.
Surprisingly (for an AGW believer) I sorta agree with the tenor of this even if I don’t buy into the conclusion re who the bad guy is.
The argument for tighter gridding and better prediction is premised on disproportionate regional impact where it’s assumed that civil engineers are imbeciles who are incapable of building structures that may withstand weather events.
The argument that civil engineers need to be better prepared for potential drought is just stupid; e.g. say you live in Tucson and there’s a drought: you make plans assuming that this isn’t going to change soon. OK, so let’s say you make a $250 million mistake when it turns out the weather changes and the drought lets up for 30 years. So? After the 30 years and the NEXT drought kicks in, you’re already prepared. You didn’t need a multi billion dollar government program nor 50,000 bureaucrats to do this.
In that case you have to ask what’s left. Well, you’ve argued that it’s fairly evil enviros and my take is that when you have 50,000 employees working on a “problem” it doesn’t serve them to SOLVE the problem. The spectre of evil enviros need not apply. The Iron Law of Bureaucracy is enough.
http://www.jerrypournelle.com/archives2/archives2mail/mail408.html#Iron
I perceive the enviros as along for the ride, annoyingly giggling and cackling amongst themselves — but they’re just passengers, not the driver.
Well spoken, Random. The only thing I’d like to point out is that these “enviros” are not just passengers. Many, many have made their way into the corridors of power. Some “enviros” ARE setting the agenda. (see for example the many coalition govts in Europe and the recent minority Australian govt.)
So if not so much “drivers”, I’d say they’re riding “shotgun”
See the commentary here about the (current :-) ) Chairman of the disastrously performing UK Met Office:
http://bishophill.squarespace.com/blog/2009/9/22/should-we-believe-anything-the-met-office-says.html
Robert Napier is certainly no sitter on the sidelines. But I fear that after a spate of embarrassments his tenure will not be a long one.
There’s no doubt that the enviros are busying themselves schmoozing as many bureaucrats as possible whilst simultaneously claiming their poorly disguised misanthropy to be a form of self-flagellation. In the US however note that what recent inroads were made by the republicans were by and large a repudiation of the social conservative focus and instead a quorum on fiscal conservatives. I reckon it’s the fiscal focus that will win the day.
Much of the underlying tea party angst was fueled by notions of irresponsible and overweening government; e.g. there were a number of calls for the abolishment of the EPA. While it’s unlikely that this will come to pass this does indicate a groundswell of opinion against bureaucrats effectively shaping law in the form of untouchable, unelected government agencies. I’m not a tea partier by any stretch but I do agree with their overall point: law is supposed to be created by the legislative branch, not ex-greenpeace advocates sneaking in the back door at the EPA.
(Many downplay the role of the tea party and belittle their events as racist redneck central, but I’m convinced that this will backfire so long as the more extreme right wing evangelicals are kept out of the control loop.)
So in answer to your point, YES, many enviros have made their way into the halls of power, and I think this will actually be good in the longer term in that the more egregious regulatory attempts will result in a political slapdown and much needed house cleaning. Having prominent targets makes this easier. But either way a reckoning is coming to the EPA and other agencies. The tea party is only the warning shot over the bow.
(OK, so I’m an optimist. Sue me.)
And yes as an AGW believer I am all over the notion of fiscal responsibility and effectively neutering greenpeace etc for all time. There is no room for their disgusting misanthropy, which only fuels misplaced skepticism anyway.
You seem to believe that there’s minimal interaction between government agencies, the science academies and the environmental organizations. You might find Chris Horner’s book “Red Hot Lies” interesting and informative. Especially Chapter 3. He’s definitely a sceptic.
Hi Judy – I have posted a comment on your excellent post – see
http://pielkeclimatesci.wordpress.com/2010/12/24/comment-on-judy-currys-post-scenarios-2010-2030-part-i/
Best wishes for the Holidays!
Roger Sr.
Hi Roger, I read your post, excellent comments.
David Stockwell (Niche Modeling) observes Bill Illis fitting the RSS temperature from 1979 to 2010 using the tropical NINO3.4 region, the Atlantic AMO, the Kuroshio Anomaly in the Pacific, with residual warming attributed to CO2. See:
Sensitivity to CO2 ex oceans
From that, Illis projects a 0.55 deg C rise in temperature for a doubling of CO2, in contrast to the IPCC’s 3 deg C rise.
This follows “Sherwood Idso and his (1998) natural experiments with a sensitivity of 0.15C/Watt or 0.5C for doubling (multiply by three to convert sensitivity in degrees C/Watt to degrees C per CO2 doubling).”
Idso, Illis, etc. show a major paradigm disconnect, with strongly different testable predictions: 0.5C vs the IPCC’s 3C per doubling of CO2.
http://landshape.org/enm/sensitivity-to-co2-ex-oceans
http://landshape.org/enm/sensitivity-to-co2-ex-oceans/www.mitosyfraudes.org/idso98.pdf
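For readers unfamiliar with this kind of analysis, the sketch below shows the general shape of such a regression: fit temperature anomalies against ocean indices plus ln(CO2) and read the CO2 coefficient as a sensitivity. The series are synthetic placeholders, not Illis's data or method. Note also that converting 0.15 C per W/m2 to a per-doubling figure uses the canonical ~3.7 W/m2 forcing per doubling (0.15 x 3.7 ≈ 0.55 C), which appears to be what the "multiply by three" in the quote refers to.

```python
# Hedged sketch of regressing temperature on ocean indices plus ln(CO2).
# All series below are synthetic placeholders, NOT Bill Illis's actual data.

import numpy as np

rng = np.random.default_rng(1)
n = 384                                          # ~1979-2010, monthly
nino34 = rng.normal(0.0, 1.0, n)                 # stand-in for the NINO3.4 index
amo = rng.normal(0.0, 0.2, n)                    # stand-in for the AMO index
co2 = 337.0 + 1.7 * np.arange(n) / 12.0          # rough CO2 ramp, ppm
temp = (0.1 * nino34 + 0.3 * amo                 # synthetic "observed" anomalies
        + 0.8 * np.log(co2 / 337.0) + rng.normal(0.0, 0.1, n))

X = np.column_stack([np.ones(n), nino34, amo, np.log(co2)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
sensitivity = coef[3] * np.log(2.0)              # deg C per doubling of CO2
print(f"fitted sensitivity ~{sensitivity:.2f} C per doubling")
```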
Judith,
I know you go by graph, charts and math for a determination on warming and cooling.
I go by “actual physical evidence” in understanding the three waters we have on this planet. The physical evidence of mechanical rotation and the reproduction of the energies these create as well as historical evidence.
This way all options are open to understanding how this planet operates and the objectives of what the planet is doing.
As you are currently understanding, the models are currently failing badly and the theories are collapsing. Basic physics and thermodynamics have made grave errors in basic understanding in science. Without including planetary rotation, you are missing a vast amount of connected science that can go back to the birth of this planet. The changes in chemistry are enormous.
I just spotted this newly published paper by Jim Hansen: Global Surface Temperature Change
http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal.pdf
You do have to view anything Hansen says with a critical eye. That is true for anyone, but especially him with his well-known political activism. For example, he stated China is building more alternative energy than any other country.
http://wattsupwiththat.com/2010/12/24/jim-and-bills-excellent-misadventure/
What he fails to mention is that China is building about 2.5 coal plants per week. That’s 130 per year, and 1300 over the 10 year period China plans to build them.
http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=389×2432942
What he said is correct: China is building more alternative energy than any other country. What you posted was also correct as to their building many coal-fired plants… how does that make what he said previously incorrect?
He made it sound like China is moving towards “green” energy. It may be building more “green” energy sources than any other country due to its size, but it is also building more coal plants. I see Hansen’s characterization as “spin.” I didn’t say he was incorrect or lying, I said you have to evaluate his statements with a critical eye. Look at the omissions. So what was your point?
Let’s look at China’s actual energy mix. It is anything but “green” and it is very deceptive for Hansen to even mention it. China’s energy mix is overwhelmingly “black.” I might also note, it is easy for China to make grand statements about reducing energy usage and using more non-fossil sources, but walking that walk is a far cry from talking the talk.
Coal supplied the vast majority (71 percent) of China’s total energy consumption of 85 quadrillion British thermal units (Btu) in 2008. Oil is the second-largest source, accounting for 19 percent of the country’s total energy consumption. While China has made an effort to diversify its energy supplies, hydroelectric sources (6 percent), natural gas (3 percent), nuclear power (1 percent), and other renewables (0.2 percent) account for relatively small amounts of China’s energy consumption mix. EIA envisages coal’s share of the total energy mix will fall to 62 percent by 2035 due to anticipated increased efficiencies and China’s goal to reduce its carbon intensity or carbon emissions per unit of GDP by at least 40 percent from 2005 levels by 2020. However, despite the anticipated efficiency gains, the absolute coal consumption should nearly double to 112 quadrillion Btu accompanying robust economic growth. China also recently announced plans to reduce its energy intensity levels (energy consumed per unit of GDP) by 31 percent from 2010 to 2020 and increase non-fossil fuel energy consumption to 15 percent of the energy mix in the same time period.
http://www.eia.doe.gov/cabs/China/Background.html
China is doing what is smart for China. They have a very well reasoned plan economically. They have a great many long-term concerns, but what they are doing is, in a word, impressive.
Here are more up-to-date numbers for China’s energy mix:
For 2010:
Coal 70% (About the same as 2008)
Oil (Not quoted in this article)
Nuclear 1.2% up from 1% from 2008
Nat gas 4% up 1% from 2008
http://www.amchamchina.org/article/5681
When I went to the link you posted I found some interesting information.
1st- that China is going to increase their “nuclear based” electricity generation over the next 10 years by a couple of orders of magnitude. That is smart and impressive.
2nd- That China does not seem immune from the inefficient, overly bureaucratic approval processes that make “smart nuclear” far more expensive than necessary in the USA.
I’m skeptical that they will be able to get much energy from wind and solar. Nuclear, for them as well as for us, is the best non-fossil-fuel alternative. I tend to agree with a previous poster that in all likelihood all the coal will be dug up and burned, especially if we have the mini-ice age that appears to be on the doorstep. We have had 4 colder winters in a row. Won’t take many more to make it climate. :)
And, of course, the “40%” figure is the usual risible handwaving that passes for gubmint commitments world-wide these days. There is not a snowball’s chance that even half of that will be achieved, short of something that Nordhaus terms a “low-cost backstop”, an as-yet-unknown low-cost, low-carbon, efficient energy source discovery.
What IS certain is that the coal plants will get built. And Australia will make billions selling fuel for them. Etc.
Which is all excellent. We must break this CO2 famine! Fortunately, economics is collaborating.
When I read Hansen’s paper, I found it to be an unreliable analysis due to the very high number of assumptions and adjustments to temperature data.
What does your analysis show?
Provide it, and your code and data as well. Thanks!
That question does not make sense. I did not state that I performed a different analysis. I stated that I read Hansen’s paper. I did not review any model code, only the assumptions and adjustments that were stated in his paper. Did you read it?
I’m familiar with the GISS analysis and methodology; I figured you had done a better analysis without Hansen’s “unreliable” assumptions and adjustments. My mistake.
Here’s a nice analysis:
If the extra heat in data measured on land is applied to the period 1900-2010 – just to get a rough idea of the possible impact – using the 35-40% land-area weighting that HadCRUT uses, we get global extra heat of +0.34 to +0.39 K added to the overall warming of the Earth, related to the extra heat occurring when measuring from cities, airports, etc.
0.34-0.39 K is roughly half the supposed global warming 1900-2010, but in this context we cannot claim to have quantitative precision, obviously. But the rough estimate of 0.34-0.39 K suggests that the impact of “extra heat” that cannot be detected by satellites plays an important role when trying to estimate global temperature trends.
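For what it is worth, the arithmetic behind those quoted numbers is easy to reproduce; a minimal sketch using only the figures in the quote (the roughly 1 K land-only figure is implied by them, not stated):

```python
# Reproduce the quoted rough estimate: a land-only "extra heat" figure weighted
# by the land fraction gives the quoted global contribution.
land_fraction = (0.35, 0.40)        # 35-40% land weighting, per the quote
global_extra = (0.34, 0.39)         # K, the quoted global impact

implied_land_extra = [g / f for g, f in zip(global_extra, land_fraction)]
print([round(x, 2) for x in implied_land_extra])   # -> roughly 0.97 K in both cases
```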
The problem of “extra heat” in land temperatures (likely to be UHI and more) is exacerbated by GISS because they extrapolate the ground-based land temperature measurements over the oceans instead of using real ocean data:
And then there is this:
This is a very significant result. It suggests the possibility that there has been essentially no warming in the U.S. since the 1970s.
Also, note that the highest population class actually exhibits slightly more warming than that seen in the CRUTem3 dataset. This provides additional confidence that the effects demonstrated here are real.
Finally, the next graph shows the difference between the lowest population density class results seen in the first graph above. This provides a better idea of which years contribute to the large difference in warming trends.
Taken together, I believe these results provide powerful and direct evidence that the GHCN data still has a substantial spurious warming component, at least for the period (since 1973) and region (U.S.) addressed here.
There is a clear need for new, independent analyses of the global temperature data…the raw data, that is. As I have mentioned before, we need independent groups doing new and independent global temperature analyses — not international committees of Nobel laureates passing down opinions on tablets of stone.
But, as always, the analysis presented above is meant more for stimulating thought and discussion, and does not equal a peer-reviewed paper. Caveat emptor.
http://wattsupwiththat.com/2010/03/16/spencer-direct-evidence-that-most-u-s-warming-since-1973-could-be-spurious/
http://wattsupwiththat.com/2010/12/16/uah-and-uhi/
I don’t take anything from WUWT seriously since Watts still can’t figure out what a baseline is.
D64 – Get a clue already. Watts just posted these, he didn’t write them. Your defense here is completely lame.
UHI won’t erase AGW. Sorry.
Show your proof.
Look at the SST data. No cities in the ocean, last I checked.
How does that show a significant difference from the h0 of warming from the last mini-ice age?
UHI is part of AGW, possibly/probably greater than CO2-driven warming.
Define what you mean by the “mini Ice Age”. Be specific.
Oh, and I guess you’ve abandoned your “AGW is all UHI” argument, since you’ve shifted over to discussing the alleged “LIA”.
You answer my question first.
What’s an “h0 of warming”?
That’s the null hypothesis. You can throw in UHI if you like.
In true believer land, the auditor is required to start an identical company, run it, and then show what the results should be. Analysis is not allowed. Only belief.
In “skeptic” land, there are no independent analyses – there’s just one piece of code and one piece of data that everyone runs the code on. The “skeptics” learn lots that way!
You just have to read these bits to realize that this is junk science:
“We present evidence here that the urban warming has little effect on our standard global temperature analysis.
However, in Appendix A we carry out an even more rigorous test. We show there that there are a sufficient number of stations located in “pitch black” regions, i.e., regions with brightness below the satellite’s detectability limit
(∼1 mW m⁻² sr⁻¹ mm⁻¹), to allow global analysis with only the stations in pitch black regions defining long-term trends.”
“Station location in the meteorological data records is provided with a resolution of 0.01 degrees of latitude and longitude, corresponding to a distance of about 1 km. This resolution is useful for investigating urban effects on regional atmospheric temperature.”
That is simply not true; the margin of error in position is quite often several kilometers to tens of kilometers (or occasionally even more). Apparently Hansen et al. don’t have the slightest idea how bad the metadata are in the GHCN dataset. I have checked the Swedish “black” and “pitch-black” sites where I am familiar with local conditions, and they are all badly misplaced, e.g. one is in the middle of a large lake 20 km from the municipal airport (Jokkmokk) where it is really situated. Actually you don’t need local knowledge to realize how unreliable the metadata are. Just select a few airport sites, check the positions in Google Earth, and see how many of the positions are actually within airport limits (though the classification into airports/non-airports is often wrong too).
Where’s your analysis, analogous to Hansen’s, that has these claimed errors corrected? I’m sure the climate science community would be very interested in it.
‘Course, it’s whole lot easier to take a potshot here and there than doing actual work.
You are correct in that it is easier to find errors in others work than it is to do the original basic research.
However, that point is meaningless to the discussion. Hansen’s recent paper needs to be evaluated based on what was written. When reading that paper, (IMO); any reasonable person would recognize that the conclusions in the paper are based upon numberous “assumptions” about the data, and numberous other “adjustments” to actually recorded data.
I make no conclusions about Hansen overall, simply that this paper seems to point out far more doubts than certainties.
They are not claimed errors. They are errors, period.
And by the way, in what other science would the last digit automatically be presumed to be significant?
And here is a brief analysis of the metadata of the 19 GHCN stations in Sweden:
Two of them have zero nightlight:
64502142000 JOKKMOKK 66.63 19.65 264 313R -9HIFOLA-9x-9WOODED TUNDRA A 0
64502456001 KREUZBURG SWEDEN 60.00 18.20 621 19R -9HIFOCO25x-9COOL MIXED A 0
The position for Jokkmokk is way off. The position given is in the middle of a large lake, so it is not strange that there are no nightlights. The actual weather station is at Jokkmokk airfield at 66.49 N, 20.17 E, 25 km to the SE, on the other side of Jokkmokk town. By the way the vegetation in the area is taiga (coniferous forest), not wooded tundra.
The case of Kreuzburg is even odder. There is no such place in Sweden. The altitude is absurd; the closest mountains that high are some 300 km to the northwest. The actual altitude at 60.00, 18.20 is about 40 meters. The position is in the middle of a large forest with no houses nearby, thus explaining the zero nightlights. There is a Kreuzburg (Romanian name Teliu) near Brasov in Romania that is at approximately the right altitude. Perhaps that’s the place? The station number seems to indicate that the station might actually be Films Kyrkby, a small village at 60.23 N, 17.90 E, i.e. ca 30 km NW. Whether the actual weather record is for Films Kyrkby or Kreuzburg I don’t know. The difference in altitude would probably mean that the figures would not be obviously absurd.
So those are the two “dead black” sites in Sweden, both obvious errors. So let us take a look at how valid Hansen’s claim of 0.01-degree accuracy is at the other Swedish sites:
64502080000 KARESUANDO 68.45 22.50 327 371R -9FLFOno-9x-9WOODED TUNDRA B 12
I don’t know exactly where the weather station in Karesuando is, but it is definitely not in the position given, which is not even in Sweden, but rather about a kilometre inside Finland.
64502128001 STENSELE 65.10 17.20 327 380R -9HIFOLA-9x-9MAIN TAIGA B 12
The position given is in the middle of Rackojaure lake c. 4 km NE of Stensele village.
64502183001 LULEA FLYGPLATS SWEDEN 65.60 22.10 17 23S 42FLxxCO 1A 3MAIN TAIGA C 71
The indicated position is about 6 kilometres north of Lulea airport (flygplats=airport), inside Luleå town and in the middle of Luleaelv river
64502196000 HAPARANDA 65.83 24.15 6 5R -9FLxxCO 3x-9COASTAL EDGES C 41
The position given is also in the middle of a River (Torneaelv), but otherwise it seems likely to be within a kilometre of the correct position.
64502226000 OSTERSUND/FRO 63.18 14.50 370 317S 14HIxxLA-9A 8MAIN TAIGA B 22
The weather station is at Oestersund/Froesoen airport. The indicated position is on a golf course about a kilometre south of the airport.
64502361001 HARNOSAND SWEDEN 62.60 18.00 8 18S 19HIxxCO 5x-9WATER A 7
I don’t know exactly where the Haernoesand weather station is either, but is very unlikely to be in the indicated position on overgrown former farmland on Haernoe island, 4 km SE of Haernoesand.
64502418000 KARLSTAD FLYG 59.37 13.47 55 55U 51HIxxLA-9A 1WATER C 38
Also an airport. This position is pretty good. It is in a suburb about 500 meters NE of the airport
64502439001 OREBO SWEDEN 59.30 15.20 33 42U 171HIxxLA-9x-9MAIN TAIGA C 95
Also pretty good, probably within a kilometre of the correct position. Not in a taiga area though. Spelling is wrong, should be Oerebro.
64502458000 UPPSALA 59.88 17.60 41 29U 157HIxxno-9A 2COOL MIXED C 80
Another airport. The position given is in a suburb of Uppsala town about 1.5 kilometers south of the airport.
64502464000 STOCKHOLM 59.33 18.05 52 13U 1357FLxxCO10x-9WATER C 120
This is the site at the old observatory in the middle of the city, which incidentally has a continuous temperature series since 1756. The position given is about 1.5 km SSW of the true one.
64502512000 GOTEBORG/SAVE 57.78 11.88 53 41U 691FLxxCO 7A 2WATER C 36
Also an airport. An excellent position, only about 300 m NW of the actual weather station. Spelling should be Goeteborg/Saeve
64502512001 TORSLANDA 57.72 11.78 3 11U 691FLxxCO 1A 3WATER C 37
Classed as an aiport but isn’t. Torslanda airport closed in 1977. The position given is in a suburb about a kilometre north of the old airport.
64502550000 JONKOPING FLY 57.77 14.08 232 183U 131HIxxno-9A 5COOL MIXED C 24
Also an airport. A good position within a few hundred meters from the airport. Spelling should be Joenkoeping.
64502576001 VASTERVIK SWEDEN 57.80 16.60 9 7S 21FLxxCO 3x-9WATER C 36
I don’t know exactly where the weather station in Vaestervik is, but is not likely to be very close to the position given.
64502590000 VISBY AIRPORT SWEDEN 57.67 18.35 47 18S 20FLxxCO 1x-9WATER B 17
This is NOT classed as an airport, despite “airport” being part of the name, and it most certainly is an airport. The position is excellent, being the only one except Goteborg/Saeve to be within the actual airport area.
64502620001 HALMSTAD SWEDEN 56.70 12.90 64 37U 50HIxxCO 5x-9WARM CROPS C 37
This also is not classed as an airport, though it is one. The position is also badly off, being in mixed forest/farmland about 5 km east of the airport.
64502627001 LUND SWEDEN 55.70 13.20 73 40U 55HIxxCO 8x-9WARM CROPS C 41
Also an old station with a record going back to 1753, and now in the center of a major town. The position is within a kilometre of the correct one.
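For anyone who wants to repeat this kind of check without eyeballing Google Earth, a minimal sketch of the distance calculation (standard haversine formula; the Jokkmokk coordinates are the ones given above, and the result is only as good as the positions fed in):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Jokkmokk: GHCN-listed position versus the actual airfield position given above.
listed = (66.63, 19.65)
airfield = (66.49, 20.17)
print(round(haversine_km(*listed, *airfield), 1), "km offset")  # roughly 28 km, i.e. way off
```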
Instead of a list, provide a map. Then include the various stations, the alleged GHCN errors and how Hansen’s analysis treats those stations. Then you can provide a better analysis, and correct Hansen.
A long list on a blog isn’t the way to do it.
You really are a try-hard, D64. tty has just listed the coordinates to show that the stations are not where they purport to be.
You don’t have to believe tty, in which case YOU can log the coordinates into google earth and check for yourself.
If the stations are not where they are supposed to be, then their classification by nightlights may be erroneous.
If tty presented graphs, you’d have wanted the metadata. He presents metadata, you want graphs.
Give it up, it’s becoming tiresome.
A link to the analysis would suffice.
If random blogger X wants to overturn some aspect of the science, she’s gonna have to work at it.
Whilst you sit on your proverbial, sniping at any and every sceptical post, eh?
I just checked 2 of the coordinates in tty’s post and (s)he is correct.
If you have an issue with the accuracy of her/his post, state it instead of just sniping.
Like I said, your posts are becoming sooooo tiresome.
p.s.
You wanted….
A map… Google Earth is freely available. If everybody posted every map, chart and graph mentioned, this blog would become a dog’s breakfast.
Include the various stations… (s)he did, with names and coordinates. Open your eyes.
The alleged GHCN errors… (s)he did, by showing us the coordinates to be in the middle of a large lake, for instance. NIGHTLIGHTS, geddit?
How Hansen treats those stations… NIGHTLIGHTS, geddit?
Like I said, try-hard.
Unless tty is willing to show more work and illustrate the impact, it’s just sniping. Why should anyone else but tty do the work? They’re making the claim; back it up with something more substantial than just a list.
BTW, looks to me that the problem is with GHCN, not Hansen’s analysis.
Derecho64:
With a typical demonstration of your idiocy, you assert:
“Unless tty is willing to show more work and illustrate the impact, it’s just sniping. Why should anyone else but tty do the work?”
No!
tty has shown a fault exists. That is all anybody has to do for the criticised work to be rejected.
Similarly, if somebody points out that the builder of a house has used doors and windows that do not fill the larger holes in the walls then that is sufficient for the builder’s work to be rejected.
The critic is not required to correct the work or to do substitute work:
THE PRODUCER OF THE SUBSTANDARD WORK IS REQUIRED TO PUT IT RIGHT.
And you admit the work is substandard when you say;
“BTW, looks to me that the problem is with GHCN, not Hansen’s analysis.”
So what? If a house falls down because its builder chose to use faulty bricks then that does not change the builder’s responsibility for the standard of the builder’s product.
Derecho64, all your posts are silly and several are offensive so they seem to be intended to disrupt sensible discussion. Please stop spoiling this blog with your posts until you can find something sensible to contribute.
Richard
Richard, thank goodness Kepler or Einstein didn’t take your approach with respect to Copernicus and Newton (respectively).
Claiming that errors exist is easy; proving that they do, and that they matter, can be a lot harder.
Derecho64:
I asked you to try to present sensible posts, but you have ignored that and replied in your usual manner.
YOU agreed there was an error when you wrote;
“BTW, looks to me that the problem is with GHCN, not Hansen’s analysis.”
So, the point at issue is not whether there is an error but who has the responsibility to correct it. As I explained, that responsibility is with Hansen.
And Einstein DID use my “approach”. Google Michelson and Morley.
I repeat my request that you try to present sensible posts because you are disrupting sensible discussion.
Richard
I agree … but it’s “numerous”… unless you’re being jocular.
D64 – The climate scientists are the ones claiming warming and/or catastrophic warming. The burden of proof is on them. All the skeptics have to do is find and illuminate the many weak spots in the science.
Only the most inane “skeptics” claim that there’s been no warming.
The climate science community has determined that it’s anthropogenic; the “skeptic” community, if it disagrees with that, needs to come up with a non-anthropogenic hypothesis that explains the observations better. I’ve not seen a coherent one.
That’s because of semantics and choice of words/language leading to misunderstandings.
I believe the planet has warmed, then it cooled, then it warmed, etc., like it always has… always. So where are we in relation to temperatures at the beginning of the industrial revolution? I honestly don’t know. I understand and accept that there are people/organizations out there, e.g. GISS, CRU, etc., who claim a warming since the IR of something less than one full degree C.
However, as a born sceptic, when I think about the advocacy practiced by these people, the averaging of thermometer readings that don’t cover the globe properly in a statistical sense, the regular adjustments made to their past published work (proving past work was not correct, and leading me to suspect present work may also not be correct), the claim that thermometers read over 50 years ago give data in tenths of degrees (not possible), and current data presented to up to thousandths of degrees, all in all it makes me suspicious of their claims.
So I don’t know if the planet is warmer now than in say 1850.
The skeptics do not have to produce a “coherent” hypothesis. They only have to highlight the weaknesses in AGW. It is the climate scientists who have to produce and prove a coherent hypothesis … and they have not succeeded.
Like I told Richard, thank goodness Kepler or Einstein didn’t take your approach with respect to Copernicus and Newton (respectively).
Seems to me a great many “skeptics” just want to take potshots and not do any work. Pity.
Tough sh*t. That’s just the way it is. You have to prove your theory … not just giggle among yourselves at how clever you are. Meaningless pontifications like ‘The climate science community has determined that it is anthropogenic’ have all the probative qualities of the defence attorney saying:
‘My client, Fingers the Safecracker, swears on his mother’s life that he had nothing to do with the peter job down the back of the Bailey. I don’t care how many witnesses you’ve got, he’s a good boy.’
But I guess you can always fall back on your government salary to console you in the darker and colder nights.
I note too that you avoided Jim’s remarks by irrelevant diversions to Kepler and Einstein. You really must find a different strategy. This one is worn out and unfit for purpose.
I’d like to see “skeptics” prove *their* theory – that is, if they could settle on one that has better explanatory value. AFAICT, their ideas are all over the place – “It’s warming”, “It’s cooling”, “It’s all natural variability” , “it’s all the sun”, “It’s all GCRs”, “It’s clouds”, “It’s all instrument error”, “It’s all UHI”…
An endless stream of incoherent and mutually contradictory potshots.
Sceptics do not have to prove anything. Alarmists do.
The alarmist theory must be proof against any and all potshots to be shown to be ‘right’.
As soon as it fails a single test, it is no better than any of the other theories in town… it is *wrong*.
And the more that Alarmists fail to acknowledge this fundamental truth and avoid putting their theories, data, models and so on up to rigorous scrutiny from outside the ‘climate science community’, the more they look like they have something to hide.
Truly confident scientists who had done the work to the highest standards, and who were keen to persuade the world that their findings were robust and rigorous, would not need to behave like shysters.
They would be ‘loud and proud’ and happy to show all comers that they were right. Alarmists in general adopt the exact opposite strategy… secrecy, obfuscation, ad hom attacks, scare tactics, ‘consensus’… anything to avoid scrutiny and/or make it as difficult as possible for any outsider to follow their work.
‘Course, “skeptics” such as yourself would have to do a lot of reading and learning to be able to ask the right questions. Going on and on about how much snow you have at your location ain’t the way to do it.
Very simple question that you will continue to refuse to answer:
Have any climate models demonstrated predictive power beyond a simple Arrhenius’ Law calculation?
If so which and when.
You ducked this one last time… even a real modeller like Lacis could do nothing much better than wave his hands around. I predict that you will duck it again, as you have nothing constructive to say.
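For concreteness, the sort of calculation I mean by “a simple Arrhenius’ Law calculation” can be sketched in a few lines (the logarithmic forcing fit ΔF ≈ 5.35·ln(C/C0) W/m2 is the standard simplified expression; the sensitivity value below is purely illustrative, not a claim):

```python
import math

def arrhenius_style_warming(c_ppm, c0_ppm, sensitivity_per_doubling=3.0):
    """Simple logarithmic estimate of equilibrium warming from CO2 alone.

    Uses the standard simplified fit dF = 5.35 * ln(C/C0) W/m^2; the default
    3.0 C per doubling is illustrative only, not an endorsed value.
    """
    forcing = 5.35 * math.log(c_ppm / c0_ppm)        # W/m^2
    per_doubling = 5.35 * math.log(2.0)              # ~3.7 W/m^2
    return sensitivity_per_doubling * forcing / per_doubling

# Example: 280 -> 560 ppm gives 3.0 C by construction; 280 -> 390 ppm gives ~1.4 C.
print(round(arrhenius_style_warming(560, 280), 2), "C")
print(round(arrhenius_style_warming(390, 280), 2), "C")
```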
Latimer, take a look at Climate Models and their Evaluation. And, of course, the papers referenced therein.
We are having a really, really lousy solar cycle. The worst in at least 100 years. Sunspots have gone back to bouncing off of zero.
In the next 10 years we will know beyond doubt how much of an impact the solar cycle and other astronomical factors have on ‘climate’.
No point in arguing something that will become self-evident.
Yes, we’re having a very quiet sun lately. Yet it continues to warm.
dr64,
Perhaps one of your many problems could be confusion about what skeptics do.
We have won: we have pointed out that current theory of global climate disruption fails.
That is our job.
Deal with these points and then maybe we can speculate together about what is actually going on.
“Skeptics” don’t *do* much of anything except ignorantly spread FUD.
No D64, it’s a joint effort. AGW spreads the fear; Skepticism points up the uncertainty and expresses the doubt.
It’s the “skeptics” who spread the fear – “We’ll all be living in caves with animal hides because the alarmists will destroy our civilized lifestyles because of their Marxist one-world-government commie dogma”. “We don’t know it’s warming because Hansen fiddles the data because he’s a librul activist”.
@D64
Thanks. I read the paper you referred me to, which is actually a part of the IPCC report.
And I wasn’t too surprised to find that the answer to my question
‘Have any climate models demonstrated predictive power beyond a simple Arrhenius’ Law calculation?’
still seems to be ‘No’.
And that even when the evaluation of the success or otherwise of their field is done by a bunch of climate modellers themselves. Even they – not likely to be the most independent reviewers it’s possible to imagine – can come up with nothing much better than saying that ‘the average of all the models gets to roughly the same level of prediction as Arrhenius’
which really isn’t very good at all.
I can perhaps understand an individual not being able to see the strengths and weaknesses of the whole field (if indeed there are any strengths), but when the combined ‘wisdom’ of the IPCC can’t come up with any either, it really is time to turn out the lights and start singing ‘Thanks for the Memories’ on this complete waste of money and energy.
Latimer, don’t pass off the “It’s from the IPCC, ergo it’s garbage” nonsense any more. The IPCC reports are assessments – which means the meat is in the referenced papers. If you don’t look at those, then you’re not getting the substance.
And climate modelers know full well the shortcomings of their models. They’re quite critical.
Can’t quite see where the ‘It’s from the IPCC, therefore it’s garbage’ bit came into my entirely reasonable observation that the people least likely to be critical of the utility of ‘climate models’ are the modellers themselves. They are, after all, paid to do it.
They didn’t ask the Chief Accountant of Enron to give his ‘independent’ assessment of the state of the finances of that company either.
Out here in reality land, we have an idea that things called ‘Conflicts of Interest’ aren’t generally a good idea and are to be discouraged. One day this idea will also come to Climatology and academic pal-review. I guarantee that you will find this professionally uncomfortable.
Exercises such as writing on this blog could be useful training exercises for you, but you’d have to get used to the idea of ‘persuading’ people, rather than belittling and lambasting them.
Do64 –
In my first post on this blog I wrote words to this effect: “I’m a sceptic, a barbarian, and an infidel. You’re the believer. It’s up to YOU to prove to me that your cause is true and just. Convince me.”
You’re not convincing me. You apparently fail to understand some of the bases of science, as expressed in this small quote:
A scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. If the predictions agree with the observations, the theory survives that test, though it can never be proved to be correct. On the other hand, if the observations disagree with the predictions, one has to discard or modify the theory. (At least that is what is supposed to happen. In practice, people often question the accuracy of the observations and the reliability and moral character of those making the observations.)
From – “The Universe in a Nutshell” by Stephen Hawking
Do you argue that Hawking is NOT a scientist – or that his ethics are questionable, or that he fails to know what constitutes science? If so, then you have nothing useful to say. If not, you need to read that and apply the lessons to your own attitude toward science in general.
What I see very often in your posts is directly related to this essay:
http://web.mac.com/sinfonia1/Global_Warming_Politics/A_Hot_Topic_Blog/Entries/2008/8/19_Cognitive_Dissonance.html
It has two parts – you might want to go on to the second part as well.
I’m skeptical of the “skeptics” – how can they make me believe their stuff? First off, of course, they’d have to agree on what it is they do believe – AFAICT, their beliefs are all over the place. Like I said some time back.
This may well end up in the wrong place. Tough.
@Derecho64 | December 27, 2010 at 5:52 pm |
I’m skeptical of the “skeptics” – how can they make me believe their stuff? First off, of course, they’d have to agree on what it is they do believe – AFAICT, their beliefs are all over the place. Like I said some time back.
I didn’t say you shouldn’t be sceptical of the sceptics. In fact, I think it’s required that somebody be so. What I said was “don’t argue the unarguable”. The function of a sceptic is specifically to punch holes in the theory. If they can do that then the theory requires either modification or replacement. It’s exactly the same function as working IV&V. Which I did a lot of and REALLY pissed off both the Hubble and Shuttle management for some years.
Conversely, if you can find the holes in the sceptic arguments then you’re performing a necessary service. But if you’re just arguing for the sake of argument then you’re a troll. Not my call.
As for what sceptics believe – I was once told pretty much what you said – that what I had to say wasn’t within shooting distance of “mainstream sceptical belief”. My answer was that scepticism, as science itself should be, is a free-range critter and that his expectation was laughable. You have stated a small part of the range of “possible” factors that may eventually be found to be pertinent to climate. Or not. But dismissing ANY of them out of hand is not good science.
And assuming that what you believe you know now is all there is, and/or that only data/ideas/etc. that support your preferred conclusions should be admissible, is called “preconceptual science”. I won’t bother telling you what I think about that concept. That could get me banned.
I’d just like to see the “skeptics” present a testable theory that explains the observed changes in the climate system (and others) that requires no anthropogenic components whatsoever. So far, I’ve seen nothing.
Einstein didn’t just say Newton was wrong – he provided a theory that satisfied the obs. Newton didn’t just say Aristotle was wrong – he provided a theory that satisfied the (then-known) obs.
Einstein didn’t just say Newton was wrong – he provided a theory that satisfied the obs. Newton didn’t just say Aristotle was wrong – he provided a theory that satisfied the (then-known) obs.
In reverse order, Einstein/Newton never even considered disproving Newton/Aristotle. That was neither their purpose nor their method. They both started from first principles and followed a logical path given the knowledge available at the time and their own mental processes.
I’d just like to see the “skeptics” present a testable theory that explains the observed changes in the climate system (and others) that requires no anthropogenic components whatsoever. So far, I’ve seen nothing.
Now for the important part. I might like that too. But it’s not possible, because there IS an anthropogenic component. I breathe, as do you and every other living person on the planet. I eat cooked food, as do you and nearly every other living person on the planet. I drive a car, as do you and many other people on the planet. I heat my house, operate a computer, light my house, etc., as do you and many other people on the planet. All of that generates CO2. I think we can agree on that. But then we part company, because I don’t believe the human component is as great, or has as great an effect, as is claimed. In part because, as an engineer who understands feedback mechanisms, I have no belief whatever that the total planetary climate feedback is more than minimally positive. Nor do I believe the feedback can even be determined at this time with the present knowledge and computational capability. And I don’t believe that the proper approach has yet been seriously taken. Specifically, that feedback is a dynamic (if not chaotic) time- and space-variant function. I don’t even understand why that’s not obvious to others.
Second point is that all that CO2 that we probably agree about is the result of combustion, which is also a source of energy, which in turn produces UHI (and other heat sources) that are apparently still being claimed to be “insignificant”. Bullfeathers.
There are other heat sources – and other CO2 sources – that I seriously suspect are not included in the models, or not considered “significant”.
I think that may be sufficient to put me somewhere outside the bounds of “mainstream sceptical belief” (if there is such an animal).
Understand this – much of “climate science” IS based on physics, and is logical, and MOST sceptics agree with parts of it. Some agree with most of it. But MOST, if not all, sceptics do not agree with the certainty claimed, nor with the solutions proposed, nor with the politicization, nor with the blatant perversions of the scientific process.
So, tell me, do you believe that if a sceptic “did” come up with the kind of theory you propose, that they could get it published? Accepted?
I hope you’re not that naive.
What is preconceptual science?
@willard | December 27, 2010 at 8:20 pm |
What is preconceptual science?
Simple definition – deciding what the results will be prior to running the experiment or the calculations. And then excluding all evidence that fails to support the predetermined conclusion.
Or as I said before:
Assuming that what you believe you know now is all there is and/or that only data/ideas/etc. that support your preferred conclusions should be admissible is called “preconceptual science”. I won’t bother telling you what I think about that concept. That could get me banned.
It is one of the many types of scientific misdemeanors. Unless it is done for profit or to elicit funding, in which case it can be termed “fraud”. In any case, unless one is extremely lucky and the evidence actually does support ones hypothesis, it is highly probable that it’ll come back to bite in an inconvenient place. There are several convenient examples immediately available, all of which I will ignore for the present. :-)
Defining science as a “decision” sounds strange. In any case, here is Einstein’s eulogy of Newton:
http://www.pbs.org/wgbh/nova/newton/einstein.html
Here is an interesting excerpt:
> The last step in the development of the program of the field theory was the general theory of relativity. Quantitatively it made little modification in Newton’s theory, but qualitatively a deep-seated one.
Has it already been decided that Einstein was not disproving Newton yet, or should we revise that claim?
;-)
@willard –
I didn’t say either that Newton didn’t replace Aristotle or that Einstein didn’t replace Newton. I said that neither of them started with that as their specific initial intent.
In fact, Newtonian mechanics is still taught for engineering because the differences are less than insignificant for that purpose. “You”, in fact, still use Newtonian mechanics – unless perhaps you’re in a very specialized field.
Comparing climate science to Kepler is such an ignorant argument on so many levels.
Pot shots is exactly what happened to Einstein. Of course, Einstein was smart enough to include some falsifiable elements in his theory. Sorry to say, there isn’t much of that in Climate Science. At any rate, the pots were shot, fell over, and Einstein’s theory was left standing.
Time is on our side, D64; all you have to look forward to is a Sandwich Board.
Is the Sandwich Board before or after the Burger Flipper?
Sandwich Board is the favorite – no aptitude test.
But in the bizarre universe of true belief, you are just picking things apart and since you have not offered a replacement, you are an ignorant denialist tool.
I posted this incorrectly in the wrong spot
Can you PLEASE define what you mean by a denialist?
If I believe that humans are affecting the environment, but I do not believe that current climate models successfully demonstrate that a warmer world is necessarily bad in the long term, or that the rate of warming will be per the IPCC’s assessment, and I believe that the USA should not implement a “cap and trade” scheme that will not help the climate but would add to the cost of government… is that being a denialist, or simply reasonable?
In respect of UHI, I suspect from the points of rebuttal made in the paper that it is in part a response to a number of recent critiques, e.g. De Laat & Maurellis (2006), various by Pielke Sr and various by McKitrick et al., although none of these are cited in the paper. (It does seem the criticisms are much less well cited than those involved in the development of the product, but given the nature of the paper this might be expected, and perhaps the criticisms aren’t there in the literature.)
Anyhow anyone with an interest in this aspect of the Hansen paper (but from the grey literature) could look at “A Critical Review of Global Surface Temperature Data Products” (2010) Ross McKitrick http://ssrn.com/abstract=1653928
I agree that there is probably not enough data to say for sure whether planetary temperatures have risen since 1850, particularly considering that coverage of the oceans is extremely bad before the satellite era.
On the other hand I feel sure that there has been a temperature rise on land in Northern Europe since 1850. This is supported both by the instrumental record that is probably good enough that far back, and also by biological data (phenology, treeline, animal distribution changes) and modest glacier retreat. But I am far from sure that it is warmer in Northern Europe now than it was during the MWP or 1720-1740 (in the Uppsala temperature record that goes back to 1722 the warmest year is still 1724, despite extensive urbanization).
Judith,
It is very interesting that in your post you mentioned the difference in prediction skill between the SE USA and the SW USA. I do not know if there is research explaining such regional heterogeneity. Tentatively, I guess it might be related to the different weights of local feedbacks, stationary teleconnection patterns, and transient weather regimes in different regions. Possibly a similar difference in predictability also exists between the future change of Southern Europe and that of the Asian monsoon areas.
I agree with you that an appropriate choice of the initial conditions is very important; it is possible that the right choice of lateral boundary conditions is similarly important for regional climate prediction.
Happy Holiday.
I am another who, like some above, is prepared to accept that there has ‘very likely’ been some warming in the 20th century, but would argue that the data are so terrible (as Roger Pielke Sr has shown in the essay cited above) that it is almost fatuous to go on about differences of the order of a tenth of a per cent.
Having said that, I would also join in wondering why we need to have better models to deal with local or regional climatic developments, when, if that (trying to discover what is likely to hit us in the next twenty years or so) were the object, every region would very likely have its own priority in what to look out for.
As I have said in earlier posts, where I live (SE Australia) the top three priorities about what we are likely to experience are rainfall, rainfall and rainfall — and SE Australia is a very big place, so that perhaps twenty or thirty areas within it would have rather different worries about the nature of the pattern.
Why on earth would we go to look hard at T? The orthodox answer is that T drives everything because CO2 drives T. This still seems highly ‘unsettled science’ to me — and Roger Pielke Sr says something to that effect in his essay.
Judith,
Best wishes for the holiday.
Some effort on statistical approaches to improving local decadal forecasts should be welcomed in my view. My comments here relate to the (part of the) proposal related to investing time improving local forecasts from dynamic models.
While a small effort on upgrading numerical methods in GCMs, such as dynamic LGR (local grid refinement), might well be justified, the sort of large-scale attack on regional-scale decadal prediction using dynamic modeling which you describe here seems to be far too early, given the level of understanding of the climate system and the maturity of the GCMs – even if one were to accept the argument for the utility of such predictions. Additionally, there are mathematical reasons why model-based meteorological forecasts suffer rapidly decreasing predictive accuracy after 5 days or so. It is not clear to me why long-term meteorological model forecasts must be subject to chaotic instability while short-term climate forecasts are somehow rendered immune.
It is worth examining the history of development of dynamic simulators in other disciplines. In every instance that I am familiar with, the application of improved numerical methods – dynamic local grid refinement, flexible gridding, local options on type of solution routine and boundary condition characterizations – were all introduced to allow improved local characterization and reduced numerical error only AFTER certainty had been gained in the completeness and aptness of the governing equations to be applied.
The assumption underlying this present proposal is that, if infinite computing capacity were available, such that there were no restrictions on grid definition and time-step sizing, then the models would be able to more accurately match global observations. This is not just an unsafe assumption; it is a patently false assumption at this time. The models cannot obtain even a coarse match to key observational data in hindcast, and therefore cannot be expected to provide sensible boundary conditions to support a local grid refinement scheme in any useful way. It is an exercise in futility for any model until it has been through a rigorous V&V sufficiency test. For any scientist who seriously doubts this statement, I would suggest a careful consideration of the implications of Figure 9.3 in AR4 WG1 Chapter 9, which shows the development in ALL of the CMIP models of a 2 W/m2 error in the CHANGE in outgoing SW over the critical 1980 to 2000 period. http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
There are numerous other examples, but one bullet should be enough for a dead horse.
Paul – I agree with your conclusion that current models are inadequate for useful regional and/or short term projections, but not with your inference that it would be futile to seek improvements over the next few years. I believe that to be unduly pessimistic for a few reasons.
First, weather is chaotic enough, as you note, but climate is less so. As Judith Curry and others have pointed out, predictions involving years or even one or two decades are more sensitive to initial conditions (i.e., more vulnerable to chaotic elements) than those involving longer intervals, but they are far less subject to the type of variations operating on very short timescales of days or weeks than are weather forecasts.
Second, the models you referenced in AR4 did not exhibit 2W/m^2 errors overall. Rather, some of the models did not incorporate volcanic forcing into their assessments, and so failed to match the profound cooling due to the aerosols emitted by Mt. Pinatubo in 1991. Models that included volcanism performed better.
None of this implies model skills yet sufficient for regional downscaling, but the models are, I believe, less distant from that goal than you suggest.
Hi Fred,
Your point about Pinatubo is perfectly valid, but that is not what I was referring to. If you look again at Figure 3, you will see that there is a notable trend of decreasing OSR. (One can ignore the spike associated with Pinatubo.) Hatzianastassiou (2004) found a strong decrease in reflected solar radiation over this period:
QUOTE The decrease of OSR [outgoing solar radiation] by 2.3Wm-2 over the 14-year period 1984–1997, is very important and needs to be further examined in detail. The decreasing trend in global OSR can be also seen in Fig. 5c, where the mean global planetary albedo, Rp, is found to have decreased by 0.6% from January 1984 through December 1997. ENDQUOTE
This was attributed to a decrease in cloudiness in the tropical and subtropical regions. (see http://scienceofdoom.com/2010/07/04/the-earths-energy-budget-part-four-albedo/ for excerpt.)
A similar result was found by Wielicki (2002), one of the lead authors of AR4 WG1 Ch 9. The point about Figure 3 is that it demonstrates that NONE of the models matched this decrease in albedo – leading to a cumulative error of ca 2 W/m2 in all of the models. For energy balance this error in the models is compensated for in large part by overattributing GHG effect over this critical period.
To be clear, I was not suggesting that “it would be futile to seek improvement” in the models over the next few years – only that it would be futile to launch a major attack on regional decadal prediction when the models are still a long way from offering a broad-brush match to observational data. Moving to improved numerical methods will not resolve major issues related to the governing science. I would prefer to see a major effort on V&V and attribution studies FIRST.
For agriculture and forestry, local models matter a whole lot. It’s not the change in the average temperature so much as the change in weather patterns and timing of weather events. In the Northeast, the winters are getting shorter, and spring warming occurs much earlier. Many plants have been leafing out sooner in response to the early warmth – and have been getting zapped by relatively mild frost events in May. Falls are much longer and warmer, and some plants are actually starting to leaf out in late fall after normal leaf drop. Some plants even have flower buds in December. It’s crazy.
As one not directly involved in agriculture or forestry, can you give me some practical examples of how these models might be of use, assuming they could be accurately constructed? And what would be done differently by having these rather than better short-term (monthly?) weather forecasts?
I’m interested in real practical doable things.
Though only a hobby farmer myself, I can give you some examples.
Almost all farmers in Australia would like to have advance warnings (years, not months) of impending El Niños or La Niñas.
Cattle graziers could make stock-level decisions in a timely manner, as opposed to making emergency reductions, for example.
Growers could change the crops they grow. Machinery is very expensive; farmers in an area changing from periods of wet to dry could exchange machinery with farmers in areas changing from dry to wet. These initiatives need more than just a few weeks of forecasts.
Precipitation is the most important factor for a farmer, and seasonal forecasts are the most valuable; however, advance forecasts of two or more years would also be very important.
Just a skinny outline Latimer but not too far off the mark.
Since climate time line horizons have traditionally been in excess of 30 years, I have a question:
Is it even possible to make meaningful 20 & 30 year climate predictions?
hunter,
The answer is no.
For many reasons.
Most have to do with how current science is not educated enough to understand how this planet operates and the constant changes it incurs.
Current science still really does not understand what an Ice Age is for, or the indicators of one, even though the timing says we should be very close to having one.
The last Mini-Ice Age set the clock back, but current planet growth brought it all back again.
Do you believe the climate scientists will have credibility by next spring?
With the governments paying for these studies, yes. But with the regular population?
Hurrell et al:
While man-made aerosols can be washed out of the atmosphere by rain in just a few days, they tend to be concentrated near their sources such as industrial regions, and can affect climate with a very strong regional pattern. Future changes in anthropogenic aerosols, therefore, could have very significant regional climatic impacts on decadal scales.
Has this ever been observed?
According to this map of sulfate aerosol concentrations, we should have definitely seen a strong cooling over Eastern Asia since about the 70s (especially if we are to believe the IPCC -0.5 W/m2 global forcing for the direct aerosol effect). However, nothing of the like appears in the GISS Surface Temperature Analysis for this period.
Could anyone explain why modelers are so confident about the strong anthropogenic aerosol cooling effect?
Thanks,
Mikel
I would recommend getting a time series of temp anomalies and then focus on the geographic area of interest.
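A minimal sketch of that workflow (assuming you have already saved a station’s or region’s monthly anomaly table, e.g. from a GISS station-data page like the one linked below, as a CSV with a ‘Year’ column and one column per month; the file name, column names and missing-value flag are assumptions about your download, not a documented format):

```python
import numpy as np
import pandas as pd

# Hypothetical file: a monthly anomaly table saved by hand, with a 'Year'
# column, one column per month, and 999.9 marking missing values.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
monthly = (pd.read_csv("station_monthly.csv")
             .set_index("Year")[months]
             .replace(999.9, np.nan))

annual = monthly.mean(axis=1).dropna()      # annual means, ignoring missing months
recent = annual.loc[1970:]                  # focus on the period of interest
slope, _ = np.polyfit(recent.index.values, recent.values, 1)
print(f"Trend since 1970: {slope * 10:+.2f} degC per decade")
```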
How does that differ from the 1980-2010 map generated in my second link?
In any case, the anomaly series for, say, Shanghai, shows an equally remarkable absence of any cooling: http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=205583620001&data_set=1&num_neighbors=1
With everybody admitting that aerosols are the source of greatest uncertainty in the attribution and sensitivity calculations, I’m quite surprised by the apparent lack of direct observational verification.
It is far from true that everybody thinks, much less admits, that aerosols are the greatest source of uncertainty. Natural variability is the greatest source by far. Aerosols are only a significant source of uncertainty in AGW, and this is because they are needed to get the observed oscillations without natural variability. Aerosols are an AGW trick.
Never be surprised by the lack of observational verification anywhere in Climatology.
That is not the way that this particular ‘science’ is done and is positively eschewed by all good modellers. We wouldn’t want any facts to get in the way of their theories of Thermageddon would we?
Mikel – There are informative observational data on aerosols, including those arising from volcanic eruptions such as Mt. Pinatubo in 1991, as well as data on aerosol optical depth over various regions and time intervals. However, aerosols typically comprise a mixture of warming and cooling components, with the balance differing by region. Industrial effluents are dominated by cooling moieties such as sulfates, whereas in many parts of Asia, warming components such as black carbon offset or even outweigh the cooling properties. A good review of some of these concepts is presented by
Ramanathan and Feng
While it is true that negative aerosol forcing can only be estimated within a relatively broad range, it would require aerosols to exert a net positive effect globally to account for the observed long term temperature changes without taking into account the estimated positive forcings from CO2 and other greenhouse gases.
Thank you Fred but that paper does not address the issue I was speaking about nor does it say anywhere that the effect of aerosols in Asia is as you describe.
R&F clearly state that the net effect of “anthropogenic brown clouds” is one of cooling (~-0.4 W/m2 globally). With anthro aerosols being so localized, one should see a much stronger effect over the most contaminated regions, perhaps -5 W/m2 or more, and its cooling effect should be evident in the instrumental record. It is not.
Mikel – The paper does emphasize the combined warming/cooling effects of aerosols, with the balance depending on constituents. The atmospheric brown clouds that are abundant over Asia contain a high level of black carbon warming components. For more details on the observational measurements, see some of the Ramanathan references.
Let’s correct that first link: http://commons.wikimedia.org/wiki/File:Gocart_sulfate_optical_thickness.png
Fred Moolton:
“While it is true that negative aerosol forcing can only be estimated within a relatively broad range, it would require aerosols to exert a net positive effect globally to account for the observed long term temperature changes without taking into account the estimated positive forcings from CO2 and other greenhouse gases.”
This statement is perfectly true. However, I can see no signs that the modeling community has made a serious attempt to constrain the “relatively broad range” by using all of the data available.
All of the models, when run in hindcast, show a reasonable historical match to mean global temperature. This is achieved inter alia by using the degree of freedom offered by the (conveniently unknown) variation in tropospheric aerosols to offset model warming by increasing GHGs. See Kiehl 2007. Models with high feedback/sensitivity require more (reflective) aerosol addition than models with low feedback/sensitivity. However, the validity of a model’s history-match is – or rather should be – dependent not just on its match to average surface temperature but on its ability to match all of the other key observations simultaneously.
During the satellite era we have good estimates of the trends in outgoing SW and LW, if not their absolute magnitude. I state with some caution that, in terms of estimating the magnitude of the GH effect to first order, it does not matter whether the albedo change is due to changes in reflective aerosols or changes in cloud cover. It is, however, of supreme importance that each model should be constrained to match these observed SW data. At the present time, not even the lowest-sensitivity models get close to matching the observed reduction in outgoing SW over the critical decades of the 80s and 90s. This leads to an underestimate of SW heating by 2 W/m2 and a compensatory overestimate of LW heating by GHG.
I cannot help feeling that if all of the models were forced to match the decrease in OSR and the increase in OLR over this critical period, then it would bound not only the acceptable level of variation of tropospheric aerosols, but more importantly the maximum sensitivity that can validly be attributed to CO2.
Such reverse-engineering for parameter estimation (or to impose constraints on stochastic realisations) is common practice in many other disciplines, and I am surprised that it has not been done – or at least not made public – by the climate science community.
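To illustrate the kind of reverse-engineering I have in mind, here is a toy sketch (entirely schematic: a two-parameter energy-balance “model” whose sensitivity and shortwave/aerosol term are fit jointly against a temperature series and an observed SW series; every array below is synthetic placeholder data, not an observation):

```python
import numpy as np
from scipy.optimize import least_squares

# Entirely schematic: two parameters fit against two observed series at once,
# so the SW record constrains the aerosol/albedo term and hence the sensitivity.
# All arrays below are synthetic placeholders, not real observations.
years = np.arange(1984, 1998)
t = years - years[0]

f_ghg = 0.035 * t                          # toy GHG forcing ramp, W/m2
obs_sw_forcing = 0.16 * t                  # toy absorbed-SW increase (decreasing OSR), W/m2
rng = np.random.default_rng(0)
obs_temp = 0.45 * (f_ghg + obs_sw_forcing) + 0.02 * rng.normal(size=t.size)

ramp = t / t[-1]                           # prescribed shape of the model's SW (albedo/aerosol) term

def residuals(params):
    lam, sw_scale = params                 # sensitivity (K per W/m2) and SW-term magnitude (W/m2)
    f_sw_model = sw_scale * ramp
    t_model = lam * (f_ghg + f_sw_model)
    return np.concatenate([
        t_model - obs_temp,                # constraint 1: match the temperature record
        f_sw_model - obs_sw_forcing,       # constraint 2: match the observed SW change as well
    ])

fit = least_squares(residuals, x0=[1.0, 1.0])
print("fitted sensitivity (K per W/m2):", round(fit.x[0], 2))
print("fitted SW-term magnitude (W/m2):", round(fit.x[1], 2))
```

Dropping the second set of residuals and re-fitting shows how much freedom the temperature record alone leaves for trading the SW term against sensitivity, which is exactly the degree of freedom I am complaining about.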
Aerosol forcing is uncertain, but not infinitely so. Lidar-based and other observational data provide constraints under which models must operate. See also the Ramanathan and Feng reference I cited in response to Mikel above.
Regarding AR4 WG1 Fig. 9.3 – from 1984-1993, the models slightly overestimated heating (by underestimating outgoing flux), and from 1993-1999, they underestimated it. On average, the net difference was much less than 2 W/m^2.
You are correct that the models did not project the observed trend. The implications for CO2 climate sensitivity are unclear because of the short interval and the possibility that the decreasing OSR was a positive feedback response to GHG forcing. There is no evidence I’m aware of that an increase in reflective aerosols (Pinatubo aside) was a plausible explanation (see Ramanathan and Feng), and so a reduction in cloud albedo seems much more probable – some GCMs predict this as a feedback on GHGs. I certainly agree that intradecadal model projections are subject to significant errors. I don’t feel qualified to judge whether this requires all other improvements to await V&V studies as opposed to working on these aspects contemporaneously. Doing both doesn’t sound unreasonable to me.
I need to correct an error in my comment above, in the statement:
“There is no evidence I’m aware of that an increase in reflective aerosols (Pinatubo aside) was a plausible explanation (see Ramanathan and Feng), and so a reduction in cloud albedo seems much more probable – some GCMs predict this as a feedback on GHGs.”
I should have referred to a reduction rather than an increase in reflective aerosols. In fact, a slight reduction due to pollution controls is not implausible, although the Ramanathan/Feng paper does not suggest a major change, and reduced cloud cover remains a better explanation for most of the discrepancy.
Fred,
Thanks for the considered response. I agree with most of what you say. I was not claiming a mean error of 2 W/m2. But I don’t think it is a good thing to belittle the magnitude or importance of the error. It is similar in magnitude to the entire forcing attributed to CO2. An observed trend of 0.23 W/m2 per year (Hatzianastassiou) reduction in OSR is modeled as an order of magnitude less by even the “best” models, leading to a cumulative error of ca 2 W/m2 over the critical heating period. As you are aware, the time integral of the difference gives us a clearer idea of the energy imbalance between the models and reality. This error in SW has to be compensated for by excess “retention” of LW – or overestimation of direct GHG effects.
We could probably have an entertaining conversation about whether the albedo reduction is due to a reduction in reflective aerosols or in cloud cover in the tropical and sub-tropical region, and, if the latter, whether it is a temperature-dependent feedback or partly controlled by solar effects. However, that is not the main point of my discussion here. My main point is that this is one of several gross errors that are apparent in the history matches of the models. A well-designed V&V programme would insist on “extended Goodness-of-Fit” tests with pre-set standards against the key observed time-series vectors. (Simple GoF tests do not work for time series, as other disciplines have discovered.) Only after each model has passed such a test should it be considered as “possibly credible”. I just feel that it is too early to launch a major new initiative on decadal forecasting without doing basic tests on the validity – at a high level – of the existing CMIP models. Models which do not pass necessary-condition tests should be ruthlessly binned.
As long as the models remain formally unvalidated, which is unfortunately still the case (see the recent posts on model verification and validation, as well as their comments), all of these nice scenarios will remain highly questionable.
In a quite recent paper, N. Scafetta (http://scienceandpublicpolicy.org/images/stories/papers/originals/climate_change_cause.pdf) proposed a very simple but illuminating exercise. He took the GISTEMP data starting in 1880 and applied a +60-year lag shift together with a +0.28 °C temperature offset (a short code sketch of this shift-and-compare step follows the list of findings below). Here are his major findings:
1) There is an almost perfect fit between the (shifted) 1880-1940 period and the (original) 1940-2000 period.
2) This further confirms the existence of a 60-year natural cycle driving climate change.
2a) In particular, note the almost perfect correspondence between the warming trends of the 1910-1940 and 1970-2000 periods (which the models fail to reproduce: the warming trend they calculate for 1910-1940 is 2.5 times lower than the measured one).
2b) There is also a good match between the 1880-1910 and 1940-1970 cooling trends (which the models are totally unable to reproduce).
2c) There is also a fair similarity (even if it is too early to conclude) between the 1940-1970 cooling trend and what we have observed since roughly 2000.
3) This finding strongly contradicts the IPCC’s claim that 100% of the warming observed since 1970 can only be explained by anthropogenic emissions.
4) The figure also suggests a slight cooling starting in 2002-2003, which may last until 2030-2040.
5) If the general pattern observed since 1880 (an overall warming trend of +0.05 °C per decade) continues, it is very likely that global warming by 2100 won’t exceed 0.5 °C, which is four times lower than the IPCC’s most optimistic forecast.
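For readers who want to try the shift-and-compare step themselves, here is a minimal sketch of it. The 60-year lag and 0.28 °C offset come from the description above, while the synthetic series at the end is only a placeholder for the actual GISTEMP annual anomalies:

```python
import numpy as np

def compare_shifted(years, anomaly, lag_years=60, offset_c=0.28):
    """Correlate T(t) + offset against T(t + lag) over the overlapping span."""
    years = np.asarray(years)
    anomaly = np.asarray(anomaly, dtype=float)
    shifted_years = years + lag_years
    common = np.intersect1d(shifted_years, years)      # overlapping years
    early = anomaly[np.isin(shifted_years, common)] + offset_c
    late = anomaly[np.isin(years, common)]
    return np.corrcoef(early, late)[0, 1]

# Placeholder data: a linear trend plus a 60-year oscillation, 1880-2000.
yrs = np.arange(1880, 2001)
temps = 0.005 * (yrs - 1880) + 0.1 * np.sin(2 * np.pi * (yrs - 1880) / 60)
r = compare_shifted(yrs, temps)
print(f"correlation, shifted 1880-1940 vs original 1940-2000: {r:.2f}")
```

With the real GISTEMP annual anomalies substituted for the placeholder series, this reproduces the comparison described in points 1) to 2c) above.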
Note that Scafetta is not the first to highlight such recurring climate patterns (with a period of roughly 60 years), mainly attributed to ocean oscillations, and especially to the AMO and PDO switching between positive and negative modes:
– Schlesinger & Ramankutty (1994)
– Klyashtorin & Lyubushin (2003) [http://www.biokurs.de/treibhaus/180CO2/Fuel_Consumption_and_Global_dT-1.pdf]
– William Gray (March 2009) [http://tropical.atmos.colostate.edu/Includes/Documents/Publications/gray2009.pdf]
– Akasofu
– Latif during WCC3, as mentioned by Dr Curry in her thread [http://www.wmo.int/wcc3/sessionsdb/documents/PS3_Latif.pdf]. Latif also notes in his presentation that “model biases are large” and that there are “Errors of several degrees C in some regions”, reaching the conclusion that “We need a better understanding of the mechanisms of decadal variability”.
– James Murphy (responsible for climate forecasts at the UK Met Office) also confirmed that the oceans are a decisive factor for decadal variability and that “model error a key source of uncertainty in projections, and some types of regional climate variability not well captured”.
But Scafetta is obviously the first to propose such a crystal-clear analysis and forecast.
CONCLUSION:
1) No need for a PhD, or for powerful computers and GCMs, to issue an accurate climate forecast. It is even probable that Scafetta’s small exercise already provides a much more accurate forecast than any of the scenarios discussed above.
2) The models’ main weakness so far is their inability to implement and reproduce (natural) decadal variability. That, of course, constitutes a key issue and the main improvement track for the coming years.
Did the same thing a year ago on the WFT site (http://www.woodfortrees.org/). No GCM involved. It was an enlightening exercise, and it has been interesting to see it confirmed since then.
Judith,
I find it interesting that science has never recorded any object hitting the Sun, and holds that all sunspots MUST be produced by the Sun itself, even though our solar system is travelling at 300 km/sec and the Sun is our biggest target.
Scientists generate graphs and look for patterns so that they can build a “history” of what the Sun was doing in the past from sunspots.
Our current science believes any object will melt or disintegrate long before hitting the Sun’s corona. But at a speed of 300 km/sec, the radiation would barely warm the object before deep penetration would occur, and the object would, depending on size, disintegrate from friction as it travels towards the core of the Sun. This would generate a hole in the corona (a sunspot).
OK. Lets assume your theory has some merit. Easy way for you to show that you’re at least in the right ballpark…
Please show the sums. Start with a body the size of a spacecraft in an orbit similar to the Earth’s and show how long it would take to get to the surface of the Sun. And what the cumulative radiation upon it would be.
Latimer,
It is not that easy.
Our technology could not generate the speed required to simulate an object from outside our solar system that is essentially at rest; the 300 km/sec motion of our solar system carries quite a lot of energy.
The craft would also have to be at least 1/20 the size of our planet to have the same impact size deep in the corona.
Large masses and high speed.
OK
Try it with a body of whichever size you like. You must have done so to confidently make your assertions earlier.
‘But at a speed of 300 km/sec, the radiation would barely warm the object before deep penetration would occur, and the object would, depending on size, disintegrate from friction as it travels towards the core of the Sun’
Because otherwise, how can you know this to be true (or even plausible)?
A bullet shot into a blast furnace?
But even that has frictional slowdown of the bullet from the atmosphere.
From what distance?
You say
‘the radiation would barely warm the object before deep penetration would occur’
And though 300 km/sec is fast by our standards, if you have to travel 300,000 km you’ll be exposed to the radiation for around 16 minutes, even if you are not deflected into a far longer spiral orbit.
So again, how do you know that the radiation exposure wouldn’t be enough to melt and then vaporise the object? The Sun is very big and very hot!
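For what it’s worth, the “sums” can at least be started on the back of an envelope. The sketch below uses round-number values for the Sun and assumes a constant 300 km/sec approach, purely to illustrate the exposure times and fluxes involved:

```python
from math import pi

# Round-number solar values and an assumed constant approach speed,
# for illustration only.
L_SUN = 3.8e26      # solar luminosity, W (approximate)
R_SUN = 7.0e8       # solar radius, m (approximate)
AU = 1.496e11       # astronomical unit, m
SPEED = 3.0e5       # assumed constant speed, m/s (300 km/sec)

def flux(r_m):
    """Radiant flux (W/m^2) at distance r_m from the Sun's centre."""
    return L_SUN / (4 * pi * r_m**2)

# Time to cover the final 300,000 km at constant speed
# (about 17 minutes, matching the figure quoted above):
print(f"final 300,000 km: {3.0e8 / SPEED / 60:.1f} minutes")

# Flux at Earth's distance versus just above the photosphere:
print(f"flux at 1 AU:      {flux(AU):.0f} W/m^2")
print(f"flux at 1.1 R_sun: {flux(1.1 * R_SUN):.2e} W/m^2")
```

Whether that exposure would vaporise a particular body then depends on its size and composition, which is the part still to be shown.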
The Feedback Parameters in the models are tweaked to match historic data.
That fits the description of a curve fit and not the description of a model.
Climate models are dynamic models, not mere curve-fitting exercises.
Glib and superficial – hardly a surprise.
Please explain what you mean to Joe Sixpack. If there is any meaning at all.
This is a good place to start.
Taken from the link:
“Forecasts of climate change are inevitably uncertain. Even the degree of uncertainty is uncertain, a problem that stems from the fact that these climate models do not necessarily span the full range of known climate system behavior.”
Tell us something the climate modeling community doesn’t already know.
Yet policy makers are told to base policy on this.
Policy makers make policy (duh); whether or not they use climate model results is up to them. Indeed, it’s policymakers who are pushing climate models, hard, into areas that some in the community feel they aren’t ready to tackle. Local, decade-scale predictions, for example.
That’s because the modellers keep schtum about how inadequate their models are; probably a funding issue, i.e. there would be none if they were honest.
Do a Google Scholar search on “climate model validation” papers in the last 10 years. It returns 237 hits. Doesn’t sound like climate modelers are keeping quiet to me.
Fair comment D64. But on closer examination I found that 232 of them are by Gavin or Andy.
Being cute isn’t your forte.
Thanks, you say the nicest things.
Self-validation studies are very predictable.
A simple question.
Do any of the current climate models replicate known cycles like the ENSO or the AMO?
If they do not, there is no point in looking at their tactical forecasts.
Derecho64
Have we met before, Boulder Dec 2006?
Doubtful.
But not certain ?
Not a bit like you, D64!
How would I know you?
starting to work on Part II to this thread, hope to have it posted by tomorrow
Fred Moolten: The paper does emphasize the combined warming/cooling effects of aerosols, with the balance depending on constituents. The atmospheric brown clouds that are abundant over Asia contain a high level of black-carbon warming components.
Could you please point me to a link that shows specifically what you are asserting? And, while you are at it, could you show me some observational evidence of any strong cooling in any region distinctly affected by sulfate tropospheric aerosols? (Bear in mind that a global cooling of ~0.5 W/m^2 provoked by a local/regional effect must, by logic, be particularly evident over those regions.)
I don’t deny that the paradox I am pointing out has some explanation. I’m just asking for some convincing evidence.
Thanks,
Mikel
Here’s a link to some of the original observations – Atmospheric Brown Clouds – it’s unfortunately behind a paywall. Here is a more comprehensive analysis – JGR Ramanathan
Fred, I really appreciate your contribution. But your links do not address the issue I was talking about: lack of instrumental verification of the purported strong cooling effect of aerosols.
From the abstract in your first link: “The sum of the two climate forcing terms—the net aerosol forcing effect—is thought to be negative”, and then they go on to explain that in some Asian regions aerosols also exert a lower-atmospheric (tropospheric) warming effect.
The second link is very interesting but, again, Ramanathan et al. calculate regional negative aerosol forcings at the surface of -10 W/m^2, and megacity hotspot forcings of -20 to -60 W/m^2: an order of magnitude larger than the GHG forcing. How can this not be observable in the instrumental record?
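One piece of arithmetic worth keeping alongside those numbers: a large regional surface forcing corresponds to a much smaller global-mean figure once it is area-weighted. The 5% area fraction below is my own assumption, chosen only to illustrate the scaling:

```python
# Back-of-envelope conversion of a regional surface forcing into a
# global-mean equivalent. The area fraction is an assumed value for
# illustration; the -10 W/m^2 figure is the regional number quoted above.
regional_forcing = -10.0   # W/m^2 at the surface over the affected region
area_fraction = 0.05       # assumed: region covers ~5% of the globe

global_mean = regional_forcing * area_fraction
print(f"global-mean equivalent: {global_mean:+.1f} W/m^2")
# -> about -0.5 W/m^2: a large local forcing can map onto a modest
#    global-mean number, while local detectability remains the question
#    raised above.
```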
Mikel Mariñelarena:
Your point is good. And the arm-waving in response to it is all you will get.
The aerosol input to the climate models is a fiddle to force the models to agree with past climate changes indicated by changes to mean global temperature.
But inputting the cooling to match the spatial distribution causes the models to indicate regions of surface warming and surface cooling that have no relationship to observed reality. I reported this over a decade ago.
ref.
Courtney, RS, “An Assessment of Validation Experiments Conducted on Computer Models of Global Climate Using the General Circulation Model (GCM) of the UK’s Hadley Centre”, Energy & Environment, vol. 10, no. 5 (1999)
Importantly, that paper reports that this finding demonstrates the aerosol cooling was not the cause (at least, it was not the sole cause) of the failure of the model. However, the aerosol excuse is used by each climate model to hide the fact that no climate model emulates the climate system which exists in reality.
Aerosol cooling as adopted in the models is a myth: it has no existence in reality.
Richard
NOAA told us that 2010 set a record: the third-lowest sea ice extent since they started keeping track with satellites.
I said, in that case, according to my Climate Theory, this would be a record cold and snow year, until the Arctic Freezes over again. When Arctic Ocean water is exposed, it snows and gets cold.
NOAA predicted a Normal or Warmer Winter with Normal or below Normal Precipitation.
My prediction was right and NOAA’s prediction was wrong.
NOAA’s theory says you can melt all the Arctic Ice and keep getting warmer without limit.
Ewing and Donn Climate Theory says that when you melt Arctic Ice and expose Arctic Water to the Atmosphere, it snows and gets cooler. You can look at NOAA’s data. Every record Low Sea Ice Extent year has a record cold and snow winter until the Arctic freezes over.
Ewing and Donn are right and NOAA is very wrong.
NOAA predictions from only two months ago are wrong and Ewing and Donn predictions from more than 50 years ago are right.
Consensus Climate Theory melts Arctic Ocean Ice and continues to warm the Earth without Limit. That can not happen!
When Arctic Water is exposed to the Atmosphere, It will snow without Limit, until the Arctic Freezes over again.
This is what keeps the Earth from getting much warmer. Consensus Climate Theory and Climate Models are wrong.
With all the talk about what controls the temperature of the Earth, look at the data. Every time it gets warm, it then gets cold. The past ten thousand years has been in a tight temperature band, compared to all of the time before. This is because of the current situation of the Arctic Ocean and the properties of Ice and Water and Water Vapor.
Nothing on Earth is more stabilizing for temperature than the properties of Ice and Water and Water Vapor and Clouds and Rain and Snow. You can throw away your trace gases or you can double them and it will still snow more when Arctic Water is exposed and it will snow less when Arctic water is frozen and not available. All of you who do not know this are very wrong.
Would any part of this post be necessary if any of the initial predictions laid out in the AR reports, supplied by the UN IPCC, had even come close to the truth?
I think instead, what we would be experiencing, would be parties held in honor of Michael Mann, Phil Jones, James Hansen and Al Gore.
But climate continues to act in ways almost exactly opposite to the predictions made by the current climate models in use.
I think it would be more prudent to let any scholar in this field examine the data and models that created those predictions so we might better understand how the theory and predictions of global warming have gone so terribly wrong.
Do you think it would be possible instead that we make available all the raw data and the global climate models used by the IPCC? We could then find out, with full transparency, exactly what went wrong with all the predictions.
Then maybe we could better inform the public and the governments that rule over us, and maybe we could accurately predict climate change and man’s involvement.
Don’t you agree?