by Judith Curry
The dueling climate null hypothesis papers by myself and Kevin Trenberth are now online.
Recall we originally discussed the background for these dueling papers on the previous thread, Null hypothesis discussion thread.
From the WIREs Climate Science press release:
The Human Cause of Climate Change: Where Does the Burden of Proof Lie?
Dr Kevin Trenberth Advocates Reversing the ‘Null Hypothesis’
The debate may largely be drawn along political lines, but the human role in climate change remains one of the most controversial questions in 21st century science. Writing in WIREs Climate Change Dr Kevin Trenberth, from the National Center for Atmospheric Research, argues that the evidence for anthropogenic climate change is now so clear that the burden of proof should lie with research which seeks to disprove the human role.
In response to Trenberth’s argument, a second review, by Dr Judith Curry, focuses on the concept of a ‘null hypothesis’, the default position that is taken when research is carried out. Currently the null hypothesis for climate change attribution research is that humans have no influence.
“Humans are changing our climate. There is no doubt whatsoever,” said Trenberth. “Questions remain as to the extent of our collective contribution, but it is clear that the effects are not small and have emerged from the noise of natural variability. So why does the science community continue to do attribution studies and assume that humans have no influence as a null hypothesis?”
To show precedent for his position Trenberth cites the 2007 report by the Intergovernmental Panel on Climate Change which states that global warming is “unequivocal”, and is “very likely” due to human activities.
Trenberth also focused on climate attribution studies which claim the lack of a human component, and suggested that the assumptions distort results in the direction of finding no human influence, resulting in misleading statements about the causes of climate change that can serve to grossly underestimate the role of humans in climate events.
“Scientists must challenge misconceptions in the difference between weather and climate while attribution studies must include a human component,” concluded Trenberth. “The question should no longer be is there a human component, but what is it?”
In a second paper Dr Judith Curry, from the Georgia Institute of Technology, questions this position, but argues that the discussion on the null hypothesis serves to highlight fuzziness surrounding the many hypotheses related to dangerous climate change.
“Regarding attribution studies, rather than trying to reject either hypothesis regardless of which is the null, there should be a debate over the significance of anthropogenic warming relative to forced and unforced natural climate variability,” said Curry.
Curry also suggested that the desire to reverse the null hypothesis may have the goal of seeking to marginalise the climate sceptic movement, a vocal group who have challenged the scientific orthodoxy on climate change.
“The proponents of reversing the null hypothesis should be careful of what they wish for,” concluded Curry. “One consequence may be that the scientific focus, and therefore funding, would also reverse to attempting to disprove dangerous anthropogenic climate change, which has been a position of many sceptics.”
“I doubt Trenberth’s suggestion will find much support in the scientific community,” said Professor Myles Allen from Oxford University, “but Curry’s counter proposal to abandon hypothesis tests is worse. We still have plenty of interesting hypotheses to test: did human influence on climate increase the risk of this event at all? Did it increase it by more than a factor of two?”
Trenberth, K., “Attribution of climate variations and trends to human influences and natural variability”, WIREs Climate Change, Wiley-Blackwell, November 2011, DOI: 10.1002/wcc.142, http://doi.wiley.com/10.1002/wcc.142
Curry, J., “Nullifying the climate null hypothesis”, WIREs Climate Change, Wiley-Blackwell, November 2011, DOI: 10.1002/wcc.141, http://doi.wiley.com/10.1002/wcc.141
Allen, M., “In defense of the traditional null hypothesis: remarks on the Trenberth and Curry WIREs opinion articles”, WIREs Climate Change, Wiley-Blackwell, 2011, DOI: 10.1002/wcc.145, http://doi.wiley.com/10.1002/wcc.145
JC comments: Read the papers, they aren’t lengthy.
Trenberth’s arguments weren’t unexpected, given his previous essay on this that was published by the AMS. Actually, I don’t see much in Trenberth’s essay that is about the null hypothesis; rather, it focuses on attribution of extreme events. Allen doubts that Trenberth’s suggestion for reversing the null hypothesis will find much support, and I have to agree.
The more interesting null hypothesis debate is arguably between my position and Allen’s. Whereas I argue for nullifying the climate null hypothesis as it relates to attribution, Allen argues for preserving the climate null hypothesis.
However, Allen completely misinterprets my argument regarding the null hypothesis. In the abstract, he states “Judith Curry’s counter proposal to abandon hypothesis tests as useless is worse still.” I did not recommend abandoning hypothesis tests. As discussed in my essay, you can test a hypothesis without using a statistical null hypothesis test. My statement about the null hypothesis is made in the context specifically of attribution arguments; it is not a general proposal to abandon hypothesis tests.
As stated in my paper, climate attribution hypotheses are particularly ill-suited for null hypothesis testing. Consider the following hypothesis, H1 (from the IPCC attribution statement):
H1: “Most [>50%] of the observed increase in global average temperatures since the mid-20th century is due to the observed increase in anthropogenic greenhouse gas concentrations.”
Is Trenberth’s statement “there is no human influence on climate” useful as a null hypothesis in the context of H1? It is not, since the statement is generally accepted as trivially false and its falsification lends no support to H1. A more logical null hypothesis for H1 might be:
H0: Less than half of the observed increase in global average temperatures since the mid-20th century is due to the observed increase in anthropogenic greenhouse gas concentrations.
Attempting to formulate a null hypothesis for the IPCC’s attribution statement reveals the illogical nature of H1. The issue regarding the human influence on climate is not a binary yes-no issue, whereby humans influence climate or they do not. The key issue is the importance of the anthropogenic influence on climate relative to the background natural climate variability (both forced and unforced). The binary nature of H0 and H1 implies that the distinction between 51% of the warming attributable to humans versus 49% is somehow significant, meaningful, or important.
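The objection to the binary framing can be made concrete with a minimal sketch of a one-sided test of the attribution fraction f, where H0 is "less than half of the warming is anthropogenic." The estimate and standard error below are invented for illustration; they come from neither paper.

```python
import math

# Illustrative one-sided test of the attribution fraction f.
# H0: f <= 0.5 ("less than half of the observed warming is anthropogenic")
# H1: f  > 0.5 (the IPCC's "most")
# The estimate and its standard error are invented for illustration only.
f_hat = 0.8    # hypothetical estimated anthropogenic fraction
se    = 0.15   # hypothetical standard error of that estimate

z = (f_hat - 0.5) / se                       # distance from the H0 boundary
p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail P(Z >= z)

print(f"z = {z:.2f}, one-sided p = {p_value:.4f}")
# Rejecting H0 establishes only that f > 0.5; it says nothing about whether
# a 51%-vs-49% split is physically meaningful, which is the objection above.
```

Note that the test machinery works fine mechanically; the complaint is about what rejecting H0 does and does not tell you.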
Allen defends the use of ‘most’ in the following way: There is nothing imprecise about ‘most’: it means more than half. As it happens, this wording was introduced to replace the (vaguer but more evocative) phrase ‘contributed substantially’ in a nice example of the IPCC review process making its conclusions both more specific and less emotive. As Curry observes, an infinite number of statements could have been made, ranging from ‘it is extremely likely that the anthropogenic increase in greenhouse gases has caused some warming’ (not very informative, since an infinitesimally small warming is of no policy relevance) to ‘it is about as likely as not that greenhouse-gas-induced warming exceeds the total observed warming’ (which indicates the size of the greenhouse signal, but understates our confidence in attribution). Far from being a ‘poor choice’, in Curry’s words, ‘most’ was chosen for precisely the reasons she advocates: large enough to be policy relevant, while small enough for the null hypothesis ‘not most’ to be rejected at an informative confidence level.
It is useful to refer back to my recent response to the reply to my uncertainty monster paper, where the issue surrounding ‘most’ was discussed in some detail. In this response, I also criticized the IPCC for the lack of traceability regarding ‘most’ and ‘very likely.’ Allen provides a rather astonishing description of the sausage making that went into ‘most’. Quoting from my uncertainty monster reply:
Whereas X et al. disagree with our statement, the IAC Review of the IPCC seems to share our concern: “In the Committee’s view, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty.” Assigning a ‘very likely’ likelihood to the imprecise ‘most’ is not an appropriate way to characterize uncertainty.
I continue to point out the problems of posing the attribution hypotheses and/or conclusion in the form of H1. Understanding this point is a matter of basic logic; you don’t need to understand much about climate science to see the problems with formulating H1 in this way, not to mention the difficulty of formulating a sensible null hypothesis that is not trivially true.
This ambiguity in the attribution of warming to anthropogenic factors becomes magnified substantially in the fractional attribution of individual extreme events to anthropogenic forcing, which is the topic of main interest to Trenberth and Allen.
I’m pleased that WIREs is making these papers publicly available. WIREs has issued a press release, but I will be surprised if this exchange generates much media interest. Nevertheless, this rather arcane debate over the null hypothesis has important implications for the framing of the IPCC’s attribution arguments, and I hope that this exchange will stimulate debate and discussion on this topic within the IPCC community.
Yes, “Arcane” is the word. Though I like your notion that Trenberth should be careful in pressing for this change in the null hypothesis, as now studies might be explicitly designed to disprove CAGW. That would be a refreshing change.
Thanks, Professor Curry!
You have demonstrated the patience of Job.
And Job had fewer persecutors. ;)
Trenberth and travesty just seems to go together, don’t they?
Made me laugh, Skeptic. The guy certainly has stones though.
pokerguy, you (very likely) have a great sense of humor.
I’m wondering if there’s a built-in assumption that some “fraction” of something like warming can be attributed to a specific cause, when we’re dealing with a very complex non-linear system displaying spatio-temporal chaos? Is this valid? Should there be a weighting for the hypothesis that the attribution hypothesis doesn’t make sense in the circumstance?
I’m with you on that AK, although the null hypothesis should be that the attribution hypothesis is 50% meaningful
You can always ask the hypothetical question of what would have happened otherwise, and figure the “fraction” that didn’t happen. But in reality, this is unknowable.
I agree; especially with regard to extreme weather events, the fractional attribution problem is ill-posed.
Maybe I’m missing something, but I understood the claim to be that anthropogenic factors cause temperature rise, which in turn causes consequences. In this case, in principle, the attribution is a percentage of temperature rise that wouldn’t have otherwise happened. Extreme weather events are a consequence of temperature, according to this model.
Are you saying that there’s an alternate model whereby temperature is bypassed as the central variable, and anthropogenic factors cause extreme weather effects directly without manifesting themselves as temperature rise first? If this is the case, how do you figure feedback into the picture? That sounds like an impossibly difficult way to sort anything out.
The fact is that “temperature”, by which we mean some sort of global average temperature, doesn’t do anything. It’s a myth.
What actually does something is the specific temperature at a specific place and time, in combination with all the other specific temperatures. This isn’t the same thing, although many people simplify it in their minds (and papers).
For instance, consider a with/without comparison in which the average temperature rises 3 degrees with an increase in CO2, while without the increase in CO2 the average temperature would have risen by 1 degree, but its distribution would have changed, in turn causing crop failures and storms in unlikely places. How do we define attribution?
Trenberth has no actual physical evidence proving any more than a 5% contribution from humans to the 2.037 ppmv/year average increase in atmospheric CO2 over the past 10 years.
Trenberth has no actual physical evidence demonstrating how much of the insulating effect from increased CO2 is merely replacing the insulating effect from clouds, with no net change to the overall insulating capacity of the atmosphere.
Trenberth still relates the effect from CO2 based on 100 ppmv causing an increase of 0.6°C, but does not subtract the 0.5°C of natural warming as recovery from the LIA that has nothing to do with CO2 emissions, therefore producing an effect six times too high for increased CO2.
Trenberth is not aware that CO2 is not increasing at an accelerated rate as predicted by Hansen but at a near-linear rate averaging 2.037 ppmv/year, so by 2100 the concentration will not be as predicted by the IPCC per scenario A1 but will merely reach a level of 573.11 ppmv.
This holds only if the current rate of CO2 increase is maintained, and even that may not happen, as the rate appears to be slowing down, with the average rate for the past 5 years being lower than the rate for the past ten years.
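The linear extrapolation behind these numbers is easy to reproduce. The sketch below uses the figures quoted in the comment (a 2008 Mauna Loa annual mean near 385.57 ppmv and the 2.037 ppmv/yr average rate); it simply reproduces the arithmetic and does not endorse the assumption that growth stays linear.

```python
# Reproduce the linear CO2 extrapolation quoted above, using the
# commenter's own figures (not independently verified here).
base_year = 2008
base_ppmv = 385.57   # annual-mean concentration quoted in the thread
rate = 2.037         # ppmv per year, the 10-year average quoted above

ppmv_2100 = base_ppmv + rate * (2100 - base_year)
print(f"Linear extrapolation to 2100: {ppmv_2100:.2f} ppmv")
# ~573 ppmv, close to the 573.11 figure in the comment, versus the much
# higher end-of-century concentrations in IPCC scenarios that assume
# accelerating (roughly exponential) emissions growth.
```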
In Trenberth’s 1997 energy balance the OLR is 235, the solar flux is 342, and the reflected solar (albedo) is 107, leaving zero energy left over to create fossil fuels.
In his 2008 version Trenberth reduces the reflected solar from 107 to 101.9 and increases the OLR from 235 to 238.5, leaving the solar flux similar at 341.3, which leaves 0.9 watts/m^2 over to create the fossil fuels so we can have CO2 emissions from them.
According to the climate models, the increase in CO2 from 363.47 ppmv in 1997 to 385.57 ppmv in 2008, when Trenberth revised his energy balance, should result in a change of forcing of 0.3158 watts/m^2; yet from 1997 to 2008 Trenberth changed his value for the reflected solar by 5.1 watts/m^2, changed the OLR by 3.5 watts/m^2, and even added 0.9 watts/m^2 for energy to create fossil fuels that wasn’t there in 1997. Remarkably, Trenberth uses this energy balance diagram as input for models that show just 0.3158 watts/m^2 of forcing change over a period in which Trenberth has adjusted his numbers by more than ten times that amount.
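The 0.3158 watts/m^2 figure is consistent with the standard simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) (the widely used Myhre et al. 1998 fit); a quick check, using the concentrations quoted in the comment:

```python
import math

# Standard simplified CO2 forcing: dF = 5.35 * ln(C / C0) in watts/m^2
# (Myhre et al. 1998 fit, widely used in simple climate calculations).
C0 = 363.47  # ppmv in 1997, as quoted in the comment
C  = 385.57  # ppmv in 2008, as quoted in the comment

dF = 5.35 * math.log(C / C0)
print(f"Forcing change 1997-2008: {dF:.4f} watts/m^2")  # ~0.3158
# This ~0.32 watts/m^2 forcing change is indeed much smaller than the
# several-watts/m^2 revisions to individual terms of the energy-balance
# diagram, which is the comparison the comment is drawing.
```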
Back in the day, when science was not questioned because scientists asked questions (instead of, as in the case of AGW, science just supplying answers without even a question being asked), data, physical measurement, and strict adherence to the scientific method were what drove science, and there were no political or ideological influences biasing what was done.
This whole AGW thing is so far removed from proper science that it doesn’t even qualify for application of the null hypothesis or any other form of scientific validation.
Norm, I would rewrite your first sentence to read
“Trenberth has no actual physical evidence to prove anything”.
Surely this must be the first time in the annals of science since the time of Galileo and Newton that a claim has been made that the “science is settled” with absolutely no observed data whatsoever to support the assertion.
So you are saying that we haven’t measured the spectral properties of CO2?
The issue is not with the accuracy of CO2 measurements, but with clouds, convection, and latent heat. I understand the cloud uncertainty to be so great that even the sign is unknown, let alone the amplitude. We could know the CO2 spectra to 0.01% and still not be able to determine whether H0 is true or false, or to what probability, because of cloud uncertainty.
E.g., using highly accurate line-by-line evaluation of all greenhouse gases, Ferenc Miskolczi finds no detectable CO2 contribution to the global optical depth over 61 years, though there is a very significant H2O impact.
Yet the published atmospheric humidity measurements he uses are so controversial that few are willing to even address his findings.
The uncertainties in global climate models vs empirical observations are equally great, spanning a range of up to 10 °C by 2100!
The measurement uncertainties alone are so great that Nigel Fox of NPL is proposing an in situ satellite calibration method that would improve the uncertainties ten fold. This could reduce the time needed to reliably detect anthropogenic influence from >30 years to ~10 years. See
NPL in space
We need to seriously address the full range of these experimental and modeling uncertainties.
Even Judith’s H0 model needs further probability quantification.
So you are saying the claim that there is no physical evidence here is false?
Please read Norm’s statements – not that there is no evidence, but that: “Trenberth has no actual physical evidence proving any more than a 5% contribution from humans”
Compare Judith’s H0 with null = not greater than 50%.
Andrew, spectral properties are not observational evidence. The argument for AGW is theoretical, and spectral properties play a central role in that argument. Once such an argument is formulated it is then time to look for observational evidence, and there is very little, quite the contrary.
The spectral properties of CO2 are well known and don’t need to be measured. The measured radiative spectrum of the Earth’s thermal radiation demonstrates that the spectrum closely approximates that of a black body, with the absolute temperature of the radiating part of the Earth’s surface being measured.
This is something on the order of 300 K for the tropics and 250 K at the poles, and everything in between.
The radiation spectrum for temperatures between 300 K and 250 K only includes a single absorption window for CO2, centred on a 14.77 micron wavelength (677 cm^-1 wavenumber).
Arrhenius in his 1896 paper states that there were no measurements made for wavelengths above 9.5 microns, so the effect that Arrhenius assumed to be attributable to CO2 (which he called carbonic acid) was actually strictly the effect from water vapour, falsifying the effect from CO2 claimed in the paper. (This was later noted by Angstrom, who identified the actual spectral peaks associated with CO2.)
The 1981 Hansen et al. paper based models on CO2 having an absorptive effect from 7 to 14 microns, but since the maximum window of absorption only ranges from 12.5 to 17 microns at the absolute outer limits, the 7 to 12.5 micron portion of the Hansen claim is beyond the absorption limits of CO2.
The climate models are based on a relationship using Hansen’s modification of Arrhenius’ 1896 assumption, and neither is valid because both wrongly attribute an effect to CO2 that is beyond the range where this actually occurs.
For some information on cloud variability see Nigel Calder on The trouble with clouds, especially his graph on Cloud Anomalies
Differing satellites can report the cloud anomalies as +1% positive or -1% negative for the same year – or worse.
Until we can quantify not just the sign but also the magnitude of the cloud anomalies, we should have little confidence in models.
Norm, your first sentence is wrong; anthropogenic emissions are responsible for 100% of the observed rise in atmospheric CO2. For details, see Cawley, “On the Atmospheric Residence Time of Anthropogenically Sourced Carbon Dioxide”, Energy and Fuels, Articles ASAP; the URL is http://dx.doi.org/10.1021/ef200914u
Given you have made a grave error on a very basic issue, perhaps you need to review your understanding of the more advanced issues addressed in your comment.
Are the CO2 emissions also responsible for the decadal change in the atmospheric Ar/N2 ratio observed by Keeling?
Actually it is you that has made the error. I have done the analysis and all that you have done is to quote a single paper.
If you check the MLO site http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo and look at the recent monthly CO2 data,
you will see that the measurement responds instantly to change, depicting the seasonal variation due to the seasonal uptake by plants in the larger Northern Hemisphere temperate landmass.
If CO2 emissions from fossil fuels were the primary driver of the observed increase in CO2, the drop in emissions from 31915.9 Mt in 2008 to 31338.8 Mt in 2009, followed by the rapid increase to 33158.4 Mt in 2010, would have influenced this curve in some noticeable fashion.
As this is clearly not the case, with the seasonally adjusted trend showing no change, it is not possible for CO2 emissions from fossil fuels to be the prime source of the observed increase, and therefore your contention that this is the case is simply false!
Do your homework before you make unfounded comments!
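Whether a one-year emissions dip of that size could even show up at Mauna Loa can be checked with a back-of-envelope calculation. The conversion factor (~7.81 Gt CO2 per ppmv of atmosphere) and the airborne fraction (~0.45) below are standard approximations assumed for the sketch, not figures from the thread:

```python
# Back-of-envelope: how big a Mauna Loa signal would the 2008->2009
# emissions dip produce? Conversion (~7.81 Gt CO2 per ppmv) and airborne
# fraction (~0.45) are standard approximations, assumed here.
GT_PER_PPMV = 7.81
AIRBORNE_FRACTION = 0.45

dip_gt = (31915.9 - 31338.8) / 1000.0   # Mt -> Gt CO2, figures from the thread
dip_ppmv = dip_gt / GT_PER_PPMV * AIRBORNE_FRACTION

print(f"Expected growth-rate change: {dip_ppmv:.3f} ppmv")
# ~0.03 ppmv, far below the roughly +/-0.5 ppmv interannual scatter (ENSO,
# volcanoes) in the Mauna Loa growth rate, so a dip of this size would not
# be "noticeable" in the curve either way; its absence decides nothing.
```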
Norm, you really do not understand how the carbon cycle works. Your desperation in making certain that the fossil fuel industry and we consumers do not play a role in the atmospheric CO2 increase is comical in its futility.
Why is it that, whenever confronted with facts, all you can answer back with is silly comments like this?
No further comment is necessary
You are only making assertions, not facts. CO2 released into the atmosphere by fossil-fuel combustion remains detectable as an excess concentration for centuries. This arises due to its slow diffusion into deep sequestering layers as it cannot replace the CO2 in the naturally established steady-state carbon cycle.
This following post describes my masterful derivation of the diffusion limited adjustment time of CO2 :
Dikran: disagreement is not error. Cawley is speculating.
No, the Cawley paper contains a proof that man is 100% responsible for the observed rise in atmospheric CO2, assuming that the carbon cycle obeys the principle of conservation of mass, which seems pretty reasonable to me. The error bars on the data used to make the mass balance argument are nowhere near large enough to cast any doubt on the conclusion.
The mass balance approach assumes that there are no long-term changes in natural sources and sinks. http://retiredresearcher.wordpress.com/
No, it doesn’t; read the paper. The only assumption made by the mass balance argument is that the annual rise in CO2 is the difference between total annual emissions and total annual uptake. It doesn’t make any assumption regarding the activity of natural sources or sinks; it doesn’t even assume we know the magnitudes of the natural fluxes.
Exactly, and that was his mistake. He did not consider that in the input-output = accumulation mass balance, the natural input and output rates are changing. Again, for more details: http://retiredresearcher.wordpress.com/
fhhaynie, the web page you link to seems to be primarily concerned with isotopic analysis, which isn’t part of the mass balance argument. The input-output = accumulation argument is valid whether natural fluxes are constant or changing. Anyone capable of operating a bank account ought to understand why.
You don’t seem to understand the concept of the mass balance model, especially since you say the isotope ratio has nothing to do with it. That change in the isotope ratio is the only real evidence that fossil fuel burning is contributing to the accumulation of atmospheric CO2.
fhhaynie, READ THE PAPER, you will find that isotopic analysis is not required in the mass balance argument.
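The mass balance argument being debated here can be written out in a few lines. The flux numbers below are round illustrative values (ppmv/yr equivalents, roughly 2000s-era magnitudes) chosen to show the logic; they are not taken from the Cawley paper:

```python
# The mass-balance argument, with round illustrative numbers
# (ppmv/yr equivalents; not figures from the Cawley paper).
anthropogenic_emissions = 4.0   # ppmv/yr equivalent released by humans
observed_rise = 2.0             # ppmv/yr measured increase in the atmosphere

# Conservation of mass: rise = emissions + natural_sources - natural_sinks
# => natural_sources - natural_sinks = rise - emissions
net_natural_flux = observed_rise - anthropogenic_emissions

print(f"Net natural flux: {net_natural_flux:+.1f} ppmv/yr")
# Negative: nature is a net SINK. This conclusion holds whatever the
# (unknown, possibly changing) gross natural fluxes are, which is why the
# argument needs no isotopic data: nature cannot be the net source of the
# rise while it is absorbing more carbon than it emits.
```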
Brilliant analogy !!!
Residence time is a freshly minted coin circulating until it gets handed off as change. When a different coin is exchanged, the person has lost track of its lineage, but that does not matter as long as an equivalent coin takes its place. It doesn’t matter if the coin had identifying markers or not, as the excess coins are still circulating.
Adjustment time is a coin cycling through the system long enough that it gets decommissioned or gets lost or destroyed. It then permanently gets removed from the system.
The coin in the financial transaction system is equivalent to the CO2 molecule in the carbon cycle.
Now compare the residence time of a coin with the adjustment time. You will find that a typical residence time is measured in weeks, but the adjustment time is years. This is exactly the same rationale as a CO2 residence time of years versus a CO2 adjustment time of centuries.
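The two timescales in the coin analogy can be separated with a toy calculation. The stock and flux values below are invented to make the contrast visible, not calibrated carbon-cycle numbers:

```python
# Toy illustration of residence time vs adjustment time (coin analogy).
# Stock and rates are invented for contrast, not carbon-cycle values.
stock = 800.0           # coins circulating in the "atmosphere" pool
gross_exchange = 160.0  # coins/yr swapped with other pools (fast, two-way)
net_removal = 8.0       # coins/yr permanently decommissioned (slow, one-way)

residence_time = stock / gross_exchange  # how long one coin stays before a swap
adjustment_time = stock / net_removal    # how long an EXCESS of coins persists

print(f"residence time:  {residence_time:.0f} yr")
print(f"adjustment time: {adjustment_time:.0f} yr")
# 5 yr vs 100 yr: individual molecules cycle out quickly through the fast
# two-way exchange, yet a bulk excess decays only at the slow net-removal
# rate, which is the distinction the short-residence-time argument misses.
```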
These kinds of studies have been done on circulation of money, look up the http://www.wheresgeorge.com/ project. I was able to use this data to generate a model for travel patterns of people here:
Perhaps not remarkably, this data shows the same fat-tail statistics that the CO2 does. This is all based on standard random-walk and dispersion arguments that the skeptics are pretty much clueless about. I decided to get into the climate science arena because I think I have lots of ideas that I can contribute in terms of modeling.
WebHubTelescope, oddly enough the paper concerned uses a coin-based analogy to explain why a short residence time is a red-herring.
Is that the Cawley paper? I can’t get access to that, but does that mean that I guessed correctly on how they are using coins as an analogy?
Norm is feeling guilty by his association with Big Oil so of course he is trying to marginalize the contribution of humans to the CO2 increase. 5% is ridiculously low-balled.
I agree with the marsupial that we are 100% responsible for this increase, the vast majority of it coming from FF and a fraction from positive-feedback outgassing as the oceans trend warmer. The latter we are responsible for via indirection, as without FF CO2 induced warming this would not have happened.
Never before has CO2 concentration changed this dramatically over this short a time frame.
We can probably attribute this to conservation measures brought on by the reality of peak oil. The acceleration of oil production maxed out in the late 1960s. Once the marketplace discovered this was unsustainable (the USA hitting peak in 1970 and the oil embargo at the same time), all the efficiencies were put into place and energy waste was reduced, so that we have been in a much-less-than-exponential increase regime since that time. Hansen probably should have read Limits to Growth or some of King Hubbert’s work.
BTW, quoting 2.037 ppm per year indicates that you are so very proud of the Mauna Loa team’s precision in measurements.
According to news reports, anthro CO2 emissions jumped a whopping 6% last year. Where the atmospheric increase comes from is unknown, a matter of speculation.
The first adder is China.
Consider another adder. Crude oil production has largely plateaued and is probably on the way down, obscured by the classification of oil into an “All Liquids” category. Well, it turns out that All Liquids includes all the other liquid fuels such as biofuels, coal-to-liquids, natural gas liquids, and low-grade or heavy oil. These all have very low Energy Return on Energy Invested (EROEI) numbers, so that they have a multiplier effect on actual fossil fuel usage. To some degree we may have lost control of liquids accounting, and we are seeing more from coal and natural gas to make up for it.
On top of that, the actual increase was about 5.7% compared to 5.2% the year before, which doesn’t make it seem like that much of a “whopping” increase:
P.S. One can also use Girma math and the increase translates to only 0.00022%. He’s your guy, skeptics :)
Web, you have missed the point. Peak oil is not the issue here.
Yes, I realize the real issue is rampant out-of-control skepticism driven by cherry picked data and illogical arguments.
Scientists should ignore anything that “doesn’t even qualify for application of the null hypothesis”?
Well, that’s a new one.
I don’t see how it helps, Norm, to simply blurt out a long list of easily-verified untrue assertions.
There are a wide range of mistakes and imprecisions in climate science. To correct them, you do your research and publish the results, thus making your contribution to the consensus in the field.
Hanging around on blogs sharing your mistaken opinions isn’t a very good use of your time.
It’s part of his job security, as Norm works in the oil biz. Oil production data has always been about misinformation. It takes strong governments, like the UK and Norway, to enforce accurate reporting. The USA reporting is handled locally and then lorded over by greedy energy consultants at the national level. Norm has learned well from Big Oil.
Oh yes, I forgot that corporations are now considered people, so they are also now protected from ad hominem attacks.
I endorse Kermit.
Cut the ad hominem “big oil” and address the substance of the statements.
Ad hominem means “to the man” or “to the person”, and last time I checked Big Oil was not a person. Oops, the Supreme Court, Mitt Romney, and others seem to disagree. Poor little Mr. Big Oil, sorry to hurt your feelings.
The issue is not “hurt feelings”, but the logical fallacy of argument by rhetorical appeal to issues other than objective scientific facts relative to stated hypotheses, and theories. In this case you insinuate that his facts are wrong or his arguments fail because he works for “Big Oil.”
Say I work on improving combustion efficiency, alternative fuel extraction, solar power, or solar fuel production. How could any of these invalidate my reference to facts, or my addressing the logic of hypotheses under the scientific method?
Thus address the facts vs hypotheses, not the employment relationships.
Norm is appealing to rhetoric, not me. He has admitted to the fact that “physical data is what is making these assertions not me!”
That is the most pathetic excuse for a scientific analysis that I have heard for a long time.
I do not have job security because I do not have a job. I am semi-retired and still do a bit of consulting, but most of this is voluntary, helping solve seismic interpretation problems that are outside the range of experience of young geophysicists.
Big Oil is one of the greatest beneficiaries of this global warming fraud because all forms of Kyoto inspired energy sources are so much more expensive than oil it has allowed the oil price and profits to be at record high levels.
The problem with telescopes and the likely reason that you have chosen this as a moniker is that telescopes have an extremely narrow field of view!
What a great smokescreen, Norm. Oil price is at record high levels because it is a scarce resource and nothing can take its place for certain tasks, such as jet fuel. This has absolutely nothing to do with AGW and everything to do with the free-market.
Perhaps you are not a believer in the free-market?
I know it hurts every time you read how much petroleum and natural gas is really underground, and how many orders of magnitude you are off, but if you would just admit you are wrong and move on, instead of doing this sad Paul Ehrlich impersonation, you would be much happier.
Hunter, you show your ignorance as there is a difference between the ultimately recoverable resources (URR) and the original oil in place (OOIP). You first have to understand that you can’t get every last drop out of the ground because of the law of diminishing returns. This means that URR is typically around 40% of OOIP for crude oil. That is when the classical reservoirs are abandoned, as it costs too much in extra energy to get that last bit of oil out.
Now we are at the point that all the unconventional reserves, such as tar sands and shale oil, immediately start at the 40% level, since we have to use a lot of energy to even begin to extract the underlying oil efficiently. All that extra natural gas is going to get used to extract and process the low-grade oil.
Your baseless assertion that I am “orders of magnitude off” is really what should alert the readers to your ignorance of the real problem.
I do not have opinions; I just have over 40 years of working with physical data, and the physical data is what is making these assertions, not me!
In other words, the devil made you do it?
Norm, you are not getting off the hook that easy. The problem with you skeptics is that you tend to run off at the mouth and indiscriminately argue topics with the equivalent of a firehose.
You started your comment out with an assertion not of global warming but of the wrong attribution of atmospheric CO2 concentration increase to non-man-made causes. This was your lead sentence:
You actually said that humans have no role in the CO2 increase, or at most a 5% role. The implication is that we changed the level from 280 ppm to 285 ppm over the last 150 years and that natural causes account for the rest, according to your baseless assertions.
And you have the gall to say that “physical data is what is making these assertions not me!”. Hang it up, Norm.
The null hypothesis says: 1. “There is no human influence on climate”; 2. since AGW “entered into conventional understanding”, 3. “there is no need to provide evidence for AGW”, and 4. the burden of proof therefore lies with the sceptics. OK folks,
fine, no problem. Let’s do it then! Why back down? Sharpen the arrows and finish them off!
How? It’s all transparently explained in ISBN 978-3-86805-604-4 as a reference. No simulations, models, assumptions, guesswork, uncertainties, probabilities…
…the evidence is given, the burden of proof is discharged,
transparency for everyone is ensured… the approach is correct and
cannot be falsified, just ignored, as all AGW proponents do. Their
method is not to answer emails; none of them are willing to respond.
The AGW method is: if sceptics are nearing, they convert into submarines and immediately hide under water (Tauchstation, a submerged dive station) until the danger has passed…
Oh my god… and they call themselves “scientists”…
Yes, the priests always say it is you who have to demonstrate that god does not exist. But I don’t remember whether the reason is that it is “very likely”, or “unequivocal”, or what.
Hypotheses are not assumptions, and you can pick whatever hypotheses you want. And the null hypothesis is not the default that is taken to apply if it is not disproved. You never accept the null hypothesis, you only ever fail to reject it.
You pick the null and the alternative in such a way that the alternative hypothesis is the thing you want to prove, the new, positive statement you want to have accepted as true, and the null is simply its opposite. This is because science operates by falsification. You prove hypotheses to be false. If you disprove the null hypothesis, that proves the alternative. Failing to disprove the null achieves nothing.
Any sceptic wishing to disprove global warming would naturally start with global warming as the null, and then seek evidence forcing its rejection. There’s nothing whatever wrong with doing that.
So if Trenberth really wants to make global warming the null, I’m quite happy for him to do so. It means that he will never be able to conclude that it is true. I’m a bit surprised that a scientist of his experience wouldn’t understand something so basic to the philosophy of science, but there you go. It seems to me the suggestion was intended more for its rhetorical effect than its scientific merit.
Rather than a simple hypothesis testing problem, it would be better to reformulate it as an estimation problem, where you estimate the confidence intervals for the various climate contributions. Whether you estimate the value to be zero or non-zero, the logic is the same. Parameter estimation is implicitly still hypothesis testing – you are mapping out what ranges of hypotheses would pass or fail a test. The same basic misunderstanding can also occur with parameter estimation (it’s reliant on your statistical model of the parameter space being correct) but it’s usually less of a problem.
I was in good agreement with your (otherwise excellent) post, until it got to this bit:
“So if Trenberth really wants to make global warming the null, I’m quite happy for him to do so. It means that he will never be able to conclude that it is true.”
Which is clearly incorrect, because, as you say, “science operates by falsification.” Rejecting the null doesn’t allow you to conclude that the alternative hypothesis is true, as the null hypothesis may be rejected incorrectly. You can never conclude a hypothesis is true, only continually test it against reality and see if it survives.
The point you make about choosing the null to be the opposite of what you want to argue for is a very good one, and one the skeptics frequently flout when they argue that the non-statistically significant decadal trend means something. They are arguing for the null hypothesis used to establish a non-zero trend, when THEIR null hypothesis should be that warming has continued at the previous rate.
I am not a statistician, but yes, it seems reasonable the null can be the opposite of your argument or claim as in #1 and #2 below:
# 1. I claim I can taste the difference between Coke and Pepsi.
Null hypothesis: I can’t taste the difference.
Hypothesis: I can taste the difference.
# 2. You claim I can’t taste the difference between Coke and Pepsi.
Null hypothesis: I can taste the difference.
Hypothesis: I can’t taste the difference.
Of course these claims meet the requirements to be statistically tested. The attribution null being discussed here does not.
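For a claim like #1, the test is straightforward to carry out. A minimal sketch in Python, with hypothetical trial counts invented for illustration: under the null hypothesis of pure guessing, the one-sided p-value for k correct calls out of n is the binomial tail probability.

```python
from math import comb

def guess_p_value(n, k):
    """One-sided p-value: probability of k or more correct calls
    out of n under the null hypothesis of guessing (p = 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Hypothetical taste test: 15 correct identifications in 20 trials.
p = guess_p_value(20, 15)
print(round(p, 4))  # ≈ 0.0207, so the null "I can't taste the
                    # difference" is rejected at the 5% level
```

Note that failing to reject here (say, 11 correct out of 20) would not establish the null of claim #2; it would only show the data are too weak to decide, which is the significance/power distinction discussed further down the thread.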
It doesn’t matter if they ever settle the science question. Reversing the null settles the policy question. That’s the only thing anybody cares about.
There was a thread on Trenberth’s attempt to change the null hypothesis here before. His belief is now what it was then – that if skeptics cannot now disprove CAGW, then governments should begin decarbonizing the global economy. He is using the term “null hypothesis” in its political context, shifting the burden of proof for policy purposes, but hoping to give his politics a scientific gloss with the term.
As a practical matter, where the burden of proof actually lies (as opposed to where it ought to lie) depends on who the jury is. If the jury is the IPCC, or the climate science consensus community, Trenberth is just describing the way things already actually are, and arguing that that reality should be recognized formally. The only real change such a shift would effect would be to lend scientific legitimacy to the political position he and the rest of the consensus have already adopted.
But the jury that really matters is of course the electorate, and more precisely the US electorate. And in that court, the null hypothesis and the burden of proof will remain where it properly belongs, no matter how the term is redefined by the Trenberths of the world. They can “redefine what the peer reviewed literature is,” they can shift the null hypothesis/burden of proof in their own little world, they can change global warming to climate change, and skeptic to denier. But it is all to no avail. Decarbonization is a dead issue…unless something disastrous happens in November 2012.
“if Trenberth really wants to make global warming the null, I’m quite happy for him to do so. It means that he will never be able to conclude that it is true. I’m a bit surprised that a scientist of his experience wouldn’t understand something so basic to the philosophy of science”
Trenberth understands, and you have not understood his statements.
The main issue is whether or not there is enough evidence to reject a null hypothesis. This means that a science research goal is not going to be to ‘conclude that something is true’: it is going to be to show that there is insufficient evidence to reject a particular null hypothesis. That is the logic of science i.e. falsification.
On the basis of the evolution of climate science research and knowledge over the years, Trenberth is describing a situation where there is sufficient scientific evidence to reject the traditional null hypothesis.
Actually he is not!
The satellite measured OLR for 1997 was 232watts/m^2 when Trenberth’s energy balance diagram shows 235watts/m^2 and the satellite measured data shows OLR at 232.8watts/m^2 in 2004 when Trenberth’s 2008 revision based on 2004 shows 238.5watts/m^2 for OLR. (the measured OLR for 2008 was 233.0 watts/m^2 for completeness).
Proper scientific research on climate would have used the actual measured data for the OLR and used this to determine the albedo effect. Instead, Trenberth assumed an albedo and used this to determine an OLR value to balance the albedo against the solar flux, then used that value to create a value for the influence of CO2. The result is highly complex nonsense derived from suitably complex equations fed into an even more complex model, producing results that say we should be warming at an accelerated rate while the Earth continues to cool.
There is sufficient scientific evidence, but since this refutes AGW, Trenberth is simply not using it.
Nullius in Verba –
Yes. I know for certain that this was at one time taught in science, philosophy and psychology classes. Everybody except perhaps arts majors understands this. To paraphrase, what Trenberth seems to be asking for is an alternative hypothesis of “humans have no influence”. But this is what researchers believe to be true in their heart of hearts, and what would confirm climate scientists’ most quietly hopeful speculations. Is this what Trenberth is really asking for? HELL NO! GaryM, I believe, pointed straight to a plausible answer: it is so that politicians who do not understand science might reach a desirable and scientifically unjustified conclusion.
Only climate change in the N. Atlantic is governed by a law of nature, and it suggests rapid cooling in the decade to come.
Prof. Curry, the problem of the null hypothesis is not the principal difficulty here; the major problem lies in a failure to explicitly state the alternative hypothesis. For example, your comments regarding the recent decadal trend being approximately flat: these comments are open to misinterpretation, as they are too vague to be testable. Would you agree with this as a statement of that hypothesis (we can then explore a suitable null hypothesis)?
H1 the rate of warming 2001-present is less than the long term rate of warming (1979-present)
I would be interested in your response, both in terms of that particular issue, and also in exploring the question of null hypotheses.
What is the null hypothesis to test Trenberth’s hypothesis that there is enough heat from global warming to make the Earth’s rivers bleed red and it is hiding and just waiting to spring from the depths of the ocean like a phoenix to devour us all?
FWIW, I think that Trenberth’s point has been completely missed here. His talk about reversing the null hypothesis applies only to the question of whether AGW exists, not to the strength of the attribution. This seems reasonable to me; there is plenty of basic physics (and paleoclimate data) indicating that CO2 is a GHG and that adding more of it should lead to warming. Thus it is reasonable that the iconoclasts who seek to overturn the current climatological paradigm bear the burden of proving that wrong. Essentially there is so much prior information (theory) in favour of AGW that it is reasonable for it to be the working (default) hypothesis.
However, that doesn’t mean that the null is reversed on EVERY question relating to AGW. So I would have thought he would be happy with the H1 and H0 that Prof. Curry suggests for that particular question.
Say I believe that gravity doesn’t exist (the Earth sucks). Where does the null hypothesis lie there? Is the onus on mainstream science to prove that gravity exists? No, it is rightfully part of the current scientific paradigm. Trenberth seems to think that AGW is sufficiently well established that it is part of the current scientific paradigm, and that normal science is now refining and elaborating the paradigm. I think he is right to think that.
Phil Jones said there has been no significant global warming since 1995.
No, he said the trend was not statistically significant. If you read the rest of the interview you will find that he explains that you would not expect such a short-term trend to be statistically significant, even if AGW were continuing uninterrupted. Those with a good grasp of statistics will know that this is because the test has very little statistical power (the probability of rejecting the null hypothesis when it actually is false). He also says that you should look at long-term trends, where one should expect to be able to reach statistical significance if H1 is true.
Jones said no such thing. He was reported to have said that, but he didn’t; don’t believe everything you read on blogs or in newspapers.
And, there has been more cooling since then, right?
In 2008, an October snowfall in the UK was the earliest in 74 years. According to NOAA, October 2009 US temperatures were the third coldest in 115 years, and more snow fell than ever recorded for the month of October. In 2009 Germany had its coldest October in recorded history. Siberia had its coldest winters in history in 2009-2010. The elderly in the UK burned books in the winter of 2010 to keep warm. The Northern Hemisphere had the largest snow coverage ever recorded in February 2010. The coldest day ever experienced in New Zealand was in July 2011. With rare October snow hitting Germany, we see the same rarity in New England, and New York has just experienced its largest October snowfall since the Civil War.
“The elderly in the UK burned books in the winter of 2010 to keep warm”
Funny, Wagathon, how you continue your previous comment as if there had been no intervening response.
The correct response when someone demonstrates that you are wrong is to gracefully accept your error. Instead you have made a comment that merely further demonstrates that you don’t understand the source of your error. Show me a non-statistically-significant trend over a period long enough for the statistical power of the test to be 95% (so the test is equally balanced), and you will have demonstrated that you understand the statistical issue and proven me wrong, and I will happily admit it. The ball is in your court; let’s see your return of service.
Richard Lindzen observed that “there has been no statistically significant net global warming for the last fourteen years.” The only thing important about Phil Jones is that he concedes the point. It is irrelevant whether you concede the point. You can disagree with anything you want even if it is nuts to do so.
“’from 1995 to the present there has been no statistically-significant global warming’30. Jones also noted that it has been cooling since 2002, but that this trend was too short to be statistically significant.” ~Dr. David Evans
“Important admission. Leading member of the climate establishment, Dr. Phil Jones, again: the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 ‘are similar and not statistically significantly different from each other’”54. ~Dr. David Evans
No period is long enough for you.
Give a period that is long enough for you.
True. Take a stand and then admit it later.
“The reality is that the temperature and other data has become unfavorable to their climate theory, so they hide behind complexity and authority instead of simply telling you what is going on.” ~David Evans
@Kermit any trend over a timescale long enough to have a statistical power of 95% is fine for me. Go look up what statistical power is and you will understand that my request is perfectly reasonable.
@Wagathon Your inability to return the ball in-court is noted. Your attempt to deflect the discussion elsewhere is also noted and I am not inclined to fall for it, sorry.
Which is why the boffins of Japan compared climatology to the ancient science of astrology.
Your continued inability to demonstrate a non-significant trend over a period long enough to have useful statistical power is noted. (Hint: statistical power is the probability of rejecting the null hypothesis when it actually is false.)
The real story is finding the trend that global warming alarmists tried to hide. These ‘trends’ were known as the Little Ice Age and the Medieval Warm Period. Fyi– they’ve been rediscovered. Please make a note of it.
Another failed attempt at evasion, 40-love!
What exactly do you mean by the term “trend” when dealing with the global temperature time series?
Fit a linear model (preferably with ARMA(1,1) noise, as the noise process is autocorrelated); the trend is the slope of that linear model (i.e. the coefficient of the linear term of the model).
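A minimal numerical sketch of that definition, using plain OLS for brevity (an ARMA(1,1) noise model would inflate the standard error, so this understates the uncertainty; the 30-point synthetic series is invented for illustration):

```python
import numpy as np

def ols_trend(y):
    """Return the OLS slope, its standard error, and the t-statistic
    for the linear term of a trend fit to the series y."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)   # highest degree first
    resid = y - (slope * x + intercept)
    s2 = resid @ resid / (len(y) - 2)        # residual variance
    se = np.sqrt(s2 / ((x - x.mean()) @ (x - x.mean())))
    return slope, se, slope / se

# Synthetic series: trend of 2 per step plus unit-variance noise.
rng = np.random.default_rng(0)
slope, se, t = ols_trend(2 * np.arange(30) + rng.normal(0, 1, 30))
print(slope, t)  # slope close to 2, |t| large, so significant
```

The "trend" under discussion in the thread is then just `slope`, and the significance test is whether `t` exceeds the appropriate Student-t critical value.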
How well does that model fit the global temperature time series? Just how significant is the trend in ARIMA models of the global temperature series (and over what period – I’m presuming you are arguing it emerges at some point after AGHG concentrations become significant)? And was ARMA(1,1) noise the model Jones was really talking about in his comments?
HAS, on short time scales the trends are not statistically significant; we all know that. The point is that the test for statistical significance over such a short time span does not have useful statistical power – there is insufficient data to be able to reject the null hypothesis even when it is false. So the fact that the trend is not significant is essentially meaningless. This is why I have been asking for a test with statistical *power* of 95%, because then the lack of significance would mean something: if the null hypothesis were false, the test would be likely to show that it was false.
If a hypothesis test fails to reject the null hypothesis, there are essentially two explanations: (i) it wasn’t rejected because it is true, or (ii) the null hypothesis is false but there isn’t enough data to determine with high confidence that it is false. In situation (ii) there isn’t enough data to make *any* claim about the trend, because there isn’t enough data to detect the (lack of) signal in the noise. Scenario (ii) can be ruled out if the statistical power of the test (the probability of rejecting the null hypothesis when it is false) is high (e.g. 95%). It appears that few really understand statistical significance, and fewer still understand statistical power – however, when you are arguing for the null hypothesis it is power that matters, not significance.
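The significance/power asymmetry is easy to see in a Monte Carlo sketch. The numbers below are invented for illustration (a trend of 0.017 °C/yr against 0.1 °C interannual noise, roughly the magnitudes discussed in this thread), using plain OLS rather than the ARMA(1,1) model mentioned above:

```python
import numpy as np

def power_of_trend_test(n_years, trend=0.017, noise_sd=0.1,
                        n_sims=2000, seed=0):
    """Fraction of simulated series with a real trend in which the OLS
    slope comes out statistically significant at the two-sided 5% level."""
    # two-sided 5% critical values of Student's t with n-2 df,
    # hard-coded to keep the sketch dependency-free
    tcrit = {10: 2.306, 30: 2.048}[n_years]
    rng = np.random.default_rng(seed)
    x = np.arange(n_years)
    sxx = ((x - x.mean()) ** 2).sum()
    hits = 0
    for _ in range(n_sims):
        y = trend * x + rng.normal(0, noise_sd, n_years)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        se = np.sqrt(resid @ resid / (n_years - 2) / sxx)
        hits += abs(slope / se) > tcrit
    return hits / n_sims

print(power_of_trend_test(10))  # low power: a non-significant decade
                                # tells you very little
print(power_of_trend_test(30))  # high power: non-significance here
                                # would actually mean something
```

Under these assumed magnitudes, the 10-year test misses the real trend most of the time, while the 30-year test almost never does; that is exactly why a non-significant decadal trend is weak evidence for the null.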
“Statistically significant trend” covers a multitude of sins.
No amount of statistical power analysis will help you if you aren’t clear about what model you are proposing to test your data against. Nor will you know what Ho you have rejected or failed to reject.
What are you asking for here? A flat line with slope of 0 containing a million data points satisfies your criterion of “non statistically significant trend over a period long enough for the statistical power of the test to be 95%” but I can’t see how it’s relevant to the discussion.
Forgive me for saying so, but when I read your posts, I get a sense that you have read a lot about statistics but don’t entirely understand them. Now this would be a fine thing, but you act like such a bully too.
HAS, I gave a perfectly satisfactory definition of the model. It is the same one that climatologists generally use when assessing trends. The ball is in your court: demonstrate that there is statistically significant evidence for a slowdown in warming, if you can.
Brad wrote “what are you asking for here? A flat line with slope of 0 containing a million data points ”
Hyperbole; it doesn’t need anything like a million data points. I am just pointing out that you need to do an analysis of the power of the test before you can claim that the lack of statistical significance is meaningful. I suspect that the minimum trend length is about 17 years or so, from what I have seen.
“Forgive me for saying so, but when I read your posts, I get a sense that you have read a lot about statistics but don’t entirely understand them. ”
A mere ad hominem. I am not unduly concerned that you question my understanding of statistics, but it would be more productive if you showed some concern for the statistical question, rather than for the person posing the question. ;o)
“Now this would be a fine thing, but you act like such a bully too.”
So asking for a proper statistical analysis, including statistical power, and pointing out evasive responses is bullying? Perhaps you should ask yourself why, given that I have explained how to prove me wrong, nobody has actually tried to do so and are blustering instead (including yourself).
Dikran “I gave a perfectly satisfactory definition of the model. ”
In fact you have probably made use of at least three models in the above thread: a straight linear model with no ARIMA features when discussing Jones, ARMA(1,1) when pushed by me, and some more complex model involving CO2 concentrations when discussing whether the CO2 effect is or is not significant.
HAS, the posts I made on the subject of CO2 had precisely nothing to do with the statistical significance of trends, and I think you know that perfectly well; I was just pointing out somebody’s incorrect statement regarding CO2. Yes, many climatologists do use OLS trends; I prefer it to be done a bit more rigorously, with ARMA(1,1) noise. However, if it pleases you, perform the power analysis with either model at your discretion and let’s see the results. That would be far more convincing than continued bluster that merely highlights the fact that we both know the decadal trends under discussion are too short to have meaningful statistical power.
You don’t understand my point at all, obtuse though it may have been. Trends that represent physical processes don’t exist in isolation. They are artifacts of that physical process. You cannot begin to discuss the trends until you have postulated the form of that physical process. Only then can you start to discuss whether they are significant, how they change etc etc and the power of any statistical tests you are seeking to use in that analysis.
So criticising someone for their failure to deal with the statistical power of their test of a trend requires that you specify the model you are asserting for the underlying process. It is quite possible to reject a hypothesis quite significantly and powerfully with only a limited number of observations (“the temperatures all lie on a straight line” only takes three), but you might argue with the appropriateness of my proposed model (which is where I came in on all this).
HAS, before investigating the cause of some phenomenon, you first need to establish that there is significant evidence that the phenomenon actually exists in the first place. Trends are purely statistical entities that describe the effect of a physical process. Thus you can use statistical models to discuss trends without specifying the physical process, because you are only making statements about statistical entities, not causal processes. If you want to draw conclusions about what causes the trends, then you need physical models. The question under discussion is whether warming has slowed or not over the last decade; the cause of that warming is not specified, it may be CO2 or it may be something else.
You say: “Thus you can use statistical models to discuss trends without specifying the physical process, because you are only making statements about statistical entities, not causal processes.”
This is just wrong. You cannot use a statistical model without some assumption about the underlying physical process. You have to have some basis for assuming white noise, or AR1, or ARMA, or 1/f, or whatever, and that basis is derived from the physical process. For example, choosing a climate time-series statistical model for a hypothesis test that violates the first law of thermodynamics would be pretty foolish.
The assumption of ARMA noise is an assumption about the STATISTICAL DISTRIBUTION of the noise process; it is not directly an assumption regarding the physical process itself. The essential point I have been making is that regardless of the model you use, you cannot ignore the (lack of) statistical power of the test. A failure to reject the null hypothesis is not strong evidence that the null hypothesis is true unless the test has good statistical power. Whenever I bring this point up in discussions of decadal trends, for some reason people will discuss ANYTHING but statistical power; perhaps we should be discussing why skeptics are not computing the statistical power of the test to provide the support they need to claim that the cooling trend is not just an artifact of the noise.
I have no issues with your comment on statistical power. I am however confounded whenever people make your argument that the assumption of the correct model for climate testing is statistical only and is done independent of the physical process. This is copacetic for stock price movements, but not for a physical process. To develop a statistical model without considering whether it violates a physical rule is foolish. How do you even know to model it as a stationary process if you don’t look at the physical process? Are you seriously claiming that physics is irrelevant for the proper model selection?
No, obviously one wouldn’t use a model that conflicts with prior knowledge. However, in assessing whether a trend exists or not, we are not developing a model, especially not a model to be used for predictive purposes; we are just determining whether the slope of a linear trend has changed, nothing more. Of course, if we were building a model for predictive purposes, or to help us understand the physical processes, then expert knowledge of the physical system would be incorporated in the design of the model. Note that if on such a short time scale the coefficient of the linear term is not statistically significant, that would suggest there isn’t enough data to fit a more complex model anyway.
Gavin Cawley (are you also Dikran Marsupial?)
It would be helpful if you reflected on two things:
First the term “statistical model” and exactly what one is doing when testing data against one (including looking at the power of any statistical tests).
Second, the difficult process of finding the trend in a sine curve (just by way of example).
Are you really going to attempt to parse between ” no statistically significant” and “no significant global warming”?
Good luck with that.
Read the interview with Jones at the BBC and you will see that the discussion was purely about statistical significance. However, it was not reported that way in the blogosphere and the media; I wonder why?
It is possible for a word to have two different meanings in the English language: in one use it means important, and in the other it means likely not due to chance. Rearranging words in a sentence can change their meaning.
That wasn’t too hard.
Yes, which is precisely why transforming a statement about “statistical significance” into one about “significance” is potentially deeply misleading. Phil Jones did not say that there “had been no significant warming” he was misquoted from a discussion of the statistical significance of the trend. They are indeed not the same thing at all.
you must have no clue about stats to say that.
And that’s OK, just not the pretence otherwise.
Indeed, it would require good luck to get across to those not interested in learning it that a statement that a trend is not statistically significant is very different from a statement that something isn’t significant, or didn’t occur in a significant amount.
I am OK with stats.
Let us go with a generally acceptable definition of “statistically significant”
“a result is called statistically significant if it is unlikely to have occurred by chance”
And then with significant…let us think in terms of significant change:
for a nice survey and summation of determining significance out of noisy data.
Which relies on statistical techniques to test for significance.
And of course we look to the way back machine and find out what Jones actually said:
“He also agreed that there had been two periods which experienced similar warming, from 1910 to 1940 and from 1975 to 1998, but said these could be explained by natural phenomena whereas more recent warming could not.
He further admitted that in the last 15 years there had been no ‘statistically significant’ warming, although he argued this was a blip rather than the long-term trend.
And he said that the debate over whether the world could have been even warmer than now during the medieval period, when there is evidence of high temperatures in northern countries, was far from settled.”
so reinterpret Jones all you want, and by all means use circular reasoning and parsing all you need about what he said, if it helps you to continue……avoiding…… a serious discussion on this. And by all means continue pretending that skeptics are ignorant and not informed and have nothing but visions of Koch Brothers money in their heads.
Change that to 1994, or any other year prior, and what does he say?
You’ve chosen a period too short to allow for statistical significance in order to deliver some confirmation bias. It isn’t just incompetent, it’s transparent.
And you use “Dr David Evans” as your authority? A guy who has published nothing in 24 years, has never published anything related to climate science, and has never worked as a scientist in a relevant field. Why don’t you rely on primary and authoritative sources? Do they not deliver the confirmation bias you are seeking?
Vince, just quoting your guy, Phil Jones.
Hi, Dikran, you say:
—–“Trenberth seems to think that AGW is sufficiently well established that it is part of the current scientific paradigm, and that normal science is now refinining and elaborating the paradigm. I think he is right to think that.”
The current science paradigm has lost its credibility; please see the highly celebrated computer-simulated Millennium TAR and SRES forecasts of 2000 by 40 AGW institutes, showing the underlying power of CO2 heating the globe exponentially (feedback of newly generated CO2 increasing the forcing)…
Nothing of the flat temp plateau we are on was predicted; not a single institute predicted the flat plateau… with all their computerized power…
How can normal science refine the AGW paradigm when temps stay flat and the CO2 footprint expires in 6 more years (17 in total, Trenberth)?
He is not right, because he knows nothing of orbital forcing, which is the key to understanding the temp plateau of the 21st century.
Joachim, sorry, saying that the paradigm has “lost credibility” is rhetoric, not science. Proving the paradigm wrong, that would be science.
If you think that the flat temp plateau was not predicted, then that is rather ironic on a thread concerned with the null hypothesis, given that it has not been demonstrated that the plateau is the result of a slowdown in warming rather than a continued trend disguised by noise. Try performing a statistical significance test and you will find that the evidence for the rate having slowed is not statistically significant.
I also suggest you read the paper by Easterling and Wehner (2009) (Google Scholar will find it easily), which shows that similar slowdowns have occurred before in the OBSERVATIONS as well as in the output of individual model runs. The reason nobody predicted it is that climatologists are interested in the forced component of climate change, not in the short-term variability (although decadal-scale projections are getting to the point of being worth considering). So you are flat wrong on that point.
This is all basic statistics 101.
Hi, Dikran, you say:
1.—-“Joachim, sorry, saying that the paradigm has “lost credibility” is rhetoric, not science. Proving the paradigm wrong, that would be science.”
It’s already done. If it were only by saying, you would be right! See reference ISBN 978-3-86805-604-4, all transparently and clearly presented; otherwise I would not make such statements….. it would be time-wasting, I have better things to do….. further:
2—–“If you think that the flat temp plateau was not predicted then that is rather ironic on a thread concerned with the null hypothesis, given that it has not been demonstrated that the plateau is the result of a slowdown in warming rather than a continued trend disguised by noise.”
…. well, it is demonstrated, another of your inventions, please
see reference in point one….. further:
3—- ” which shows that similar slowdowns have occurred before in the OBSERVATIONS as well as in the output of individual model runs….
….. This is trivial, there are flat temp times in GISS and HadCRUT,
this remark is below quality…
4.—–” The reason nobody predicted [the flat temp plateau] is because climatologists are interested in the forced component of climate change, not in the short term variability. So you are flat wrong on that point.”
What you are saying is that the IPCC TAR, the celebrated millennium achievement of climate science, did NOT take SHORT TERM VARIABILITY into account in their models, because they are not interested, and therefore this is not included in their predictions??? Why should the TAR forecast regarding the decadal scale not take this short term into account because they are not interested…? Please prove this…….
5.—–“This is all basic statistics…..”: Deriving assumptions (AGW) from the statistical temp upward trend will show trendlines that continue to go up.
….But, as proven in the quoted reference, the year 2000 is a clear tipping point, tipping into a flat plateau, which, after 2045, will start to go downward. It is useless to look into statistics to prove this fresh tipping point, which is possible with statistics only after the 30-year WMO time period (shall we hold still and suffer 30 years of AGW nonsense?).
Therefore, uncovering the underlying MECHANISM of climate change has to be done without statistics. In the quoted reference, the word “statistics” is not even mentioned! Forget statistics, it will not get you to the underlying climate change mechanism….
“…. well, it is demonstrated, another of your inventions, please”
O.K. Give me a page reference where a statistical test has been performed that shows that the recent decadal trend has a slope that is statistically significantly different to the 30 year trend.
The booklet is new, a little more than one year old (still in German). Next summer in English…. it’s tedious to do the translation; everything has been said before, nothing new, therefore the fun is missing, therefore slow…..
The point “Statistics”: I derive nothing from statistics; the word statistics is not even included in the text, because the explanation of the underlying climate mechanism does not need any statistics. It needs a sharp pencil and quiet surroundings to concentrate….. not dull feeding of simulation models….. and dull feeding of statistics…..
Fine, the one who insists on having his statistics can make some himself, applying the given underlying mechanism and trend…. No problem; statistics is a purely secondary aspect which can also be delivered, adding a few more pages to the book…. I thought about it….. well, if there is demand…..? Since everything is self-explanatory, statistics will reinforce the argument, but the argument is in itself very strong, and putting some statistics on it does not lead to further additional insight….
In which case, you haven’t addressed that point as you had implied with “see reference in point one….. further:”. Do let me know when you can.
First, we are not on a flat plateau; establishing that would require sufficient data to achieve statistical significance.
Second, basing your argument on an assumption about what the next 6 years’ data will be, before we observe it, is slightly presumptuous.
Orbital forcing has been part of climatology for as long as I’ve known of the existence of the field, although I’m not sure how it could give us a plateau over the last 10 years, even if said plateau actually existed. Further explanation would be nice.
I think you are confused about three things: the role of null hypotheses; the role of onus of proof; and the relationship between the statement “AGW exists” and either of the two previously mentioned concepts.
“Hi[s] talk about reversing the null hypothesis is only on the question that AGW exists, not on the strength of the attribution.”
Not so. Trenberth’s a die-hard CAGW activist. Consider his comment that “Questions remain as to the extent of our collective contribution, but it is clear that the effects are not small and have emerged from the noise of natural variability.”
In progressive speak, “not small” is code for “catastrophic.”
In this case, I think he’s trying to sidle up to “most” in IPCC parlance. As in not small ==> greater than 50% ==> most.
If you do fuzzy math without your glasses on, you can do that.
It’s not like there was no such thing as global warming until humans arrived on the scene. The real question should be: if we have global cooling for the next three to seven decades, as has been predicted by some, will it be humans that are the cause of it?
“Some” have also said the Earth is flat. That doesn’t mean “the real question” should be whether they’re right.
Some claims are just ignorant.
Unfortunately corruption, abuse, conflict of interest, bias and superstition in science are not new. There is no accountability either.
The taxpayers are paying for science authoritarians to lie to them. The boffins of Japan compared climatology to the study of the ancient science of astrology.
“[The IPCC’s] conclusion that from now on atmospheric temperatures are likely to show a continuous, monotonic increase, should be perceived as an improvable hypothesis.” ~Kanya Kusano
“We should be cautious, IPCC’s theory that atmospheric temperature has risen since 2000 in correspondence with CO2 is nothing but a hypothesis.” ~Shunichi Akasofu
“Before anyone noticed, this [AGW] hypothesis has been substituted for truth… The opinion that great disaster will really happen must be broken.” (Ibid.)
“The fact is that the `null hypothesis’ of global warming has never been rejected: That natural climate variability can explain everything we see in the climate system.” ~Dr. Roy Spencer
Good example. Akasofu is a perfect example of one making claims based on ignorance.
And “That natural climate variability can explain everything we see in the climate system” is a crock of you-know-what (Spencer isn’t Japanese, by the way).
You are simply expressing your faith in the preaching of Al Gore. Akasofu and Spencer are scientists and place their faith in the scientific method.
Right, so Spencer is *not* a creationist, now?
As I have documented here, the IAC clearly does *not* share your concerns regarding that D&A-motivated phrase. In fact, that part of the IAC report that you quote was quite specifically aimed at the truly vague WG2, as can easily be checked by anyone who actually bothers to read what they said.
As Allen says, “most” simply means more than 50%, and is not at all “imprecise” in the sense that the IAC was using.
Apart from the undefined meaning of “most” in AR4 (which was subsequently clarified by the IPCC), the range 50.1-95% is rather imprecise in the context of attribution.
What the IAC said is what the IAC said. In the absence of any traceability of their arguments and discussion, why do you think you know what IAC meant?
The precision in question is quite obviously the precision with which the range of values that can be considered plausible given current knowledge is specified, not the width of the interval itself.
If I have a 100-sided dice and I say “most rolls will give a value of 50 or more” then that is a statement of exactly the same form. Would you say that 50.1-95% was imprecise by most people’s standards? 50% happens to be exactly the correct answer!
Note prof. curry has edited her comment so my comment above is now a non-sequitur, but it wasn’t when I posted it.
But how many times have you thrown the die to make sure that it isn’t loaded?
All die are loaded; they are not completely symmetrical, so the probability of it coming down on each of its faces will never be precisely equal. Like many hypothesis tests, when testing a coin or die for unbiasedness we are using a null hypothesis that we know from the outset to be false, so if we roll the die enough we will eventually reject the null hypothesis. This is one of the known problems with frequentist hypothesis testing. We test for significant evidence that a difference exists, not for evidence that a significant difference exists. Normally we actually want the latter. However, thanks for suggesting an exam question to set! ;o)
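[The point that a point-null of exact fairness will always be rejected given enough rolls can be made numerical. A rough normal-approximation estimate (my own illustrative calculation, not from the thread) of how many rolls are needed before a tiny bias δ on one face of a six-sided die reaches the 5% significance threshold:]

```python
import math

def rolls_to_detect(p0, delta, z=1.96):
    """Approximate number of rolls for a one-face bias of size delta
    (true probability p0 + delta) to reach the z threshold, using the
    normal approximation to the binomial: the bias becomes detectable
    once delta * sqrt(n) exceeds z * sqrt(p*(1-p))."""
    p = p0 + delta
    return math.ceil((z / delta) ** 2 * p * (1 - p))

# A die whose "6" face comes up with probability 1/6 + 0.001:
print(rolls_to_detect(1 / 6, 0.001))    # roughly half a million rolls
# Halve the bias and roughly four times as many rolls are needed:
print(rolls_to_detect(1 / 6, 0.0005))
```

[So the null is false for every real die, but for a well-made one you would need hundreds of thousands of rolls to demonstrate it, which is exactly the "significant evidence of a difference" versus "evidence of a significant difference" distinction.]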
I’m guessing you’re not a native English speaker. 1 die, many dice.
I could roll the die x number of times, and say with y% confidence that “most rolls will give a value of 50 or more” , based on the appropriate math. But what if I had done my test over several days, how do I account for the plausible notion that somebody snuck into my lab and switched the die partway through?
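[The “roll the die x times and claim y% confidence” calculation above can be sketched with a simple normal-approximation (Wald) lower confidence bound on the proportion of rolls scoring 50 or more. The roll counts are invented for illustration, and the sketch assumes nobody swapped the die partway through:]

```python
import math

def wald_lower_bound(k, n, z=1.96):
    """Lower end of the approximate 95% (Wald) confidence interval for the
    true proportion, given k successes in n trials."""
    p = k / n
    return p - z * math.sqrt(p * (1 - p) / n)

# 560 of 1000 rolls came up >= 50: the lower bound exceeds 0.5, so the
# claim "most rolls give 50 or more" is supported at this confidence:
print(wald_lower_bound(560, 1000))   # ~0.529
# 520 of 1000 is not enough to support the claim:
print(wald_lower_bound(520, 1000))   # ~0.489
```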
Actually, I am a native speaker of English, but the eboard on my ntbook isn’t tha easy to use, so tere is an oasional mssing charater or wo in my osts.
Also sometimes I type slightly faster than I think and just get it wrong ;o)
But isn’t the weakness in comparing dice to the climate that with the climate, you only have one “throw”?
No, it is perfectly reasonable to make probabilistic statements regarding the outcome of single events. The climate is rather more complicated than a die so the calculations are not as trivial, but the principles are the same.
If you can get beyond that logical hurdle, going from a single particle to an ensemble of particles, you are on your way to understanding statistical mechanics.
But if Vegas were to set 3 to 1 odds on an upcoming football game because they expect ‘X’ to be the decisive factor in the outcome – and the team still wins, but because of factor ‘Y’ – is Vegas “correct”? Or just lucky?
It depends what the question was. Often in probability/statistics the skill is in stating the hypothesis so that the result of the test is a direct answer to the question you want to pose. If you ask an ambiguous question, the answer will be similarly ambiguous. In your example, you didn’t actually specify the aim of the exercise.
The point I was trying to make (and I’m not a statistician) is that the observed outcome of one-time, non-linear events (e.g. the behaviour of temperatures over the next 100 years, economic predictions, technological breakthroughs, football games) is not necessarily a confirmation of the prediction upon which it was based.
Can you assign a probability to the impact a new tax might have on the economy? Can you confirm it? Are any events not “predictable”?
Jim, in science you can NEVER confirm an hypothesis by observation, you can only falsify (Popper). Rejecting the null hypothesis does not confirm the alternative hypothesis. Popper would say that observations can “corroborate” an hypothesis, but never prove it. This is true regardless of how many observations you have.
“Can you assign a probability to the impact a new tax might have on the economy?”
Not a single probability no, but you could assign a probability distribution over the measure of the effect on the economy. This could be simplified for communication to the non-statistician by making a statement of the form “it is highly likely that inflation will rise above 3%”. Which is basically exactly the sort of statement that the IPCC have been using, the probability that something will lie in a particular range.
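[The inflation example can be made concrete with a toy calculation. The distribution and its parameters here are entirely invented for illustration: if the forecast effect on inflation were modelled as Normal(3.5%, 0.5%), the probability of exceeding 3% is exactly the kind of statement described.]

```python
import math

def prob_above(threshold, mu, sigma):
    """P(X > threshold) for X ~ Normal(mu, sigma), via the error function."""
    z = (threshold - mu) / (sigma * math.sqrt(2))
    return 0.5 * (1 - math.erf(z))

# Hypothetical forecast: inflation effect ~ Normal(mean 3.5%, sd 0.5%).
print(prob_above(3.0, 3.5, 0.5))   # ~0.84, "likely" in IPCC terminology
```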
“Can you confirm it?”
No, hypotheses relating to causal relationships in the real world can’t be proven/confirmed, only disproven/falsified. You can however measure the quality of the prediction using information theory.
” Are any events not “predictable”?”
Yes, weather for instance is unpredictable beyond a couple of days because it is chaotic (deterministic, but very sensitive to initial conditions). Climate on the other hand (the long term statistical behaviour of weather) is probably not chaotic, and hence is predictable.
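[The sensitivity to initial conditions mentioned above is easy to demonstrate with a toy chaotic system. The logistic map at r = 4 is a textbook illustration, not a weather model: two trajectories starting a billionth apart become completely uncorrelated within a few dozen steps.]

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a standard chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)   # perturb the initial condition

diffs = [abs(x - y) for x, y in zip(a, b)]
print(max(diffs[:10]))    # still tiny: the error grows step by step
print(max(diffs[30:]))    # order one: all predictability is lost
```

[Long-run statistics of the map (the analogue of “climate”) are nonetheless stable even though individual trajectories (the analogue of “weather”) are not.]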
Popper states: “…. a statement asserting the existence of a trend at a certain time and place would be a singular historical statement, not a universal law. .. while we may base scientific predictions on laws, we cannot (as every cautious statistician knows) base them merely on the existence of trends. A trend… which has persisted for hundreds or even thousands of years may change within a decade, or even more rapidly than that…..There is little doubt that the habit of confusing trends with laws, together with the intuitive observation of trends …. inspired the central doctrines of evolutionism and historicism.”
Unlike engineering, which bases its models on data that can be replicated by any laboratory, and whose model outcomes can be replicated by testing, climate predictions are not testable or replicable. You can’t “run” the next hundred years twice to corroborate an outcome.
I think the IAC here was talking about WG2 because they wrote it in the section addressing WG2, using examples exclusively drawn from WG2. If they had meant this criticism to apply to these sort of one-sided probability statements which are ubiquitous in WG1, it is inconceivable to me that they would not have pointed this out directly, as these are some of the most prominent and influential statements that the IPCC made.
The alternative interpretation, that they wrote it in the wrong section, using the wrong examples (which do suffer from a genuine failing), trusting that people like you would manage to interpret their true meaning correctly, hardly passes the sniff test.
“trusting that people like you would manage to interpret their true meaning correctly”
I certainly wouldn’t.
The quote you’ve used is taken from a section titled ‘Working Group II’. There is also a section titled ‘Working Group I’ where no equivalent point is made so it’s clear where their argument was targeted.
If you can, I suggest you check with one of the IAC report authors whether or not the quote is relevant to your argument about late 20thC attribution. From my reading it isn’t. The preceding paragraph introduces what they are talking about:
Many of the 71 conclusions in the ‘Current Knowledge About Future Impacts’ section of the Working Group II Summary for Policymakers are imprecise statements made without reference to the time period under consideration or to a climate scenario under which the conclusions would be true.
Obviously the WG1 statement in question does have an explicitly referenced time period and the climate scenario is the historical record of global change. The IAC’s main concern in this WGII section was that some statements appeared to be irrefutable because there was no clear indication when certain things were predicted to happen and under what circumstances. Again, this is not the case for a statement attributing most of the warming in the second half of the 20th Century to a certain factor.
Agreed that the issue with WG II is far more egregious in this regard. But high confidence (very likely) in an ambiguous and imprecise “most” is arguably not a good way to pose a hypothesis or a conclusion. If you don’t know whether it is 50.1% or 95%, I would argue that there is a substantial level of uncertainty in the understanding of attribution. Hiding this uncertainty with a very likely confidence level is misleading. The WG I attribution statement seems to me to be a clear example of the IAC’s concern: “In the Committee’s view, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty.”
The statement does not hide the uncertainty in any way; it very clearly states that >50% and ≤95% is the range of uncertainty (according to the guidance notes). You may say that this level of uncertainty is high, but that does not mean that this high level of uncertainty has not been precisely stated.
Not to worry. The IPCC is about to invoke a 99-100% certainty for future warming. Perhaps AR5 will yield 110%.
Dikran’s right. 50.1-95% is a precise range. It’s not a well constrained range but it is precisely stated. What the IAC were talking about were phrases such as ‘negatively affected’, ‘some future impacts’, ‘pose challenges’ (these are all taken from a WG2 quote used by the IAC to illustrate their concern) with probabilities attached – the problem being that they are so vague future evaluations could find them true no matter what happens in the future. The fact that some people are attempting to question the anthropogenic GHG attribution range suggests that the statement contains precision within the terms the IAC were discussing.
I’m struggling to understand your viewpoint here, so I have a few questions. Would you be happier with a statement like ‘>0.25ºC of the observed 0.5ºC warming since the mid-20th Century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations’? On a hunch, I’m wondering if your problem is with the lack of numbers.
Were you happy with the TAR’s conclusion which was the same except with a likely rather than very likely.
Would you be happier with a statement like ‘60-80% of observed warming since the mid-20th Century is likely due to the observed increase in anthropogenic greenhouse gas concentrations’ (assuming that’s justified obviously)? If so, why would that be better?
As an aside, the form of statement used by the IPCC is similar in form to the PAC bounds (Probably Approximately Correct) studied in machine learning, where it is impossible to produce a useful hard bound on the error rate of a learner over all possible distributions, so instead the aim is to produce a bound of the form “with high probability X the generalisation error of the learner is less than Y”, where X and Y are both probabilities. This is at the very mathematical end of machine learning, so they wouldn’t be interested in statements of this form if they were ambiguous.
That was a little evasive.
It’s absolutely clear that the critique you have been levelling at WG1 actually relates to matters in WGII. Accountability and traceability of your arguments would be enhanced by open acknowledgement of this.
“If you don’t know whether it is 50.1% or 95%”
Do you actually not know what “> 50%” means? Because it certainly does NOT mean either “= 50.1%” or “= 95%” or anything like that.
Read again what you just wrote above.
Here is a simpler way to look at it. They didn’t mean 51%.
Saying “most” allows you to imply 75 and defend 51.
If you’re suggesting they were too conservative, or perhaps lacking in ambition, when constructing the conclusion you may have a point.
Then again, this is about informing policymakers. Would policy be any better informed if they concluded with a tighter attribution range and lower confidence? or even the same confidence? What extra value would be conferred to policymakers in the knowledge that, say, 70-80% of recent warming was due to anthropogenic GHGs, rather than a 51-95% range? It seems to me the key takeaway point is that anthropogenic changes to GHG concentrations are the largest component in recent global climate changes. Anything more than that is window dressing in this context.
So what you are saying is that a range of 51%-95% is close enough for government work? That’s how they do their budgeting in Greece and a lot of other places. Get my takeaway point, Paul?
Paul S., well, you said “…constructing the conclusion…”; wink, wink, nudge, nudge… need I say more.
The null hypothesis is best stated such that the magnitude of its acceptance or rejection can and should influence policy. Public policy is what is at stake, after all, and it is asking the questions: is the world warming? If so, to what degree? Is man >90% likely to be driving climate change? The response to each of these questions is contingent upon the prior question being true; any falsification of a prior question renders the next questions moot. We could have yes; then some, modest, or lots; and yes. This would still not necessarily lead to mitigation policies, because the question “in a warming world, will we be better off?” has not been addressed. Maybe a better question overall would be: can we adapt to a dynamical system? That allows for cooling, tipping points, and a more comprehensive approach to sustainable human activity.
I think Allen is wrong when he says there is nothing imprecise about the word “most”. While technically, most can simply mean >50%, many people, perhaps even “most” people, consider “most” to usually mean something much greater than 50%. It all depends on the context in which it is used. In the context of the IPCC attribution statement, as a lay person, I would take it to mean something much greater than 50%, e.g., 80-90%, upon first reading.
Do you suppose the ambiguity was deliberate?
Hard to say. Allen and Annan seem to think it is a very precise term that can only mean >50%, but if you asked a bunch of people or check a number of definitions, I think you will see it is indeed ambiguous and imprecise.
So, on the assumption that the null hypothesis that Trenberth wants to employ is well and usefully defined, to make it interesting it needs to be of the form
“There is clear evidence that the accumulation of anthropogenic CO2 is causing climate change and as time passes that climate change will become significantly more dangerous to humanity than the climate without anthropogenic CO2.”
The good thing about a null hypothesis is that there should be much evidence that supports its predictions and no demonstrable counter evidence, if Richard Feynman is to be believed.
Newtonian mechanics used to be the null hypothesis, then replaced by Relativity and Quantum.
So let’s examine the null hypothesis
Himalayan glaciers gone by 2035 voodoo
A discredited hockey stick
Flatlining global temperature since 2000
Conspiracy implied to keep sceptical papers from being published.
Missing tropospheric hotspot
Cern cloud chamber experiment indicates atomic particles increase nucleation so Svensmark Cosmic ray theory now has laboratory evidence.
Antarctic not warming overall and ice increasing.
Sea level rise slowing
Hurricane energy not rising and showing no correlation with CO2.
Historical evidence for MWP
Dendrochronology can only supply one tree to reflect global temperature;>)
Kilimanjaro snows lost because of deforestation.
There seems to be nothing to fear but fear itself.
Null Hypothesis FAIL 0/10
Do not forget that Lysenko’s redefinition of the null hypothesis led to the deaths of tens of millions and the imprisonment, torture and murder of scientists who were skeptics.
I am no scientist to comment, but just as an observer I can see that CO2 is affecting climates around the world, and I believe it’s just part of the whole problem. As we get more urbanized, with the huge populations of China, India and other developing countries moving from villages to cities, there will be an ever increasing need for everything: food, wood, clothes, coal (electricity), plastic etc., and most of it is either fossil or non-renewable. Only 0.2% of paper is recycled in these developing countries. We as a species are failing to protect, and balance is difficult to maintain once it’s tipped to one side. With 7 billion more people coming in the next 50 years, we have to look at solutions to protect our beautiful planet.
That is some eyesight you must have jay “I can see that CO2 is affecting climates around the world…”
How many climates do you see that are affected?
Where I live, Wisconsin, USA, the winters are colder, the summers shorter, and not much in between. I’m all for global warming, please send it our way!
And, the elderly were burning books in the UK last winter to stay warm.
It is a strange world where someone is questioned for not being parochial on global matters. Perhaps jay reads as well as writes?
Trenberth’s earlier contribution on this topic was just confused rhetoric, in which he contradicted himself. In this follow up article he hasn’t done much better, but has introduced the idea of a Bayesian prior to give some vestige of respectability to his argument.
In essence, rather than just saying we need to shift the null (which is rubbish; as others have noted, the null can be anything), he is now saying any statistical inference should be performed on the basis that we have prior information on the probability that humans are causing some of the warming, and that information should be included in any experiments (presumably now regardless of the null being tested).
Of course the problem with priors is that they need to be determined independently of the information used in the subsequent experiment. Because of this need for independence one basically doesn’t get the claimed benefits Trenberth seeks in the balance of the article. In the end you can only use information once, whether you are a Bayesian or otherwise, not twice as Trenberth wants.
He should know this (and probably does, which just makes it all the more reprehensible).
I wonder if those academics that support Trenberth will be going back to their universities to tell their students that what they told them before about the definition and use of the null hypothesis is nonsense and they now need to think of it turned through 180?
Climate Science really is amazing. Its practitioners seem to lack any sense of logic. Look, the issue is not what we make the null hypothesis. Why? Because that makes no difference to the observations, their significance, or the competing alternatives to the theory we are considering. None.
It makes no difference. You still have to prove that catastrophic man made warming is likely or is not likely. You still have to do proof regarding feedbacks.
All that Trenberth is doing is begging the question, yet again. He is just saying: we must start by assuming I am right. But he is not giving any additional reason.
In exactly the same way, people use Pascal’s Wager to justify measures which the evidence does not justify taking. That too, called the precautionary principle, is another silly exercise in begging the question.
If all these people would just focus their energies on understanding the climate, instead of finding crazy ingenious ways of reinventing centuries old logical fallacies, we might make a bit more progress in “climate science”, a field which seems more deserving of inverted commas with everything one reads.
Are we facing a climate catastrophe caused by CO2 from human enterprise?
That is in no way ‘begging the question’.
The null hypothesis simply sets out the variable to be tested. Either formulation would be acceptable.
The proposal is that we should regard CAGW as proven and accept it until it is positively refuted, rather than as an hypothesis which is on test and not accepted until established.
However, this proposal is unaccompanied by any new argument or evidence that the hypothesis is correct. It therefore amounts to a proposal that we should accept it.
This has real public policy implications, and this is why it’s a case of petitio principii. What is being proposed is that we should stop thinking and get on with dropping CO2 emissions, without doing further investigation to show that this is either necessary or sensible or effective.
Because after all, its the null hypothesis isn’t it?
Idiots. Absolute idiots.
For another discussion on this, see
The greenhouse effect is said to cause 2.4 watts per square meter of warming globally.
There are 5.1×10^8 square kilometers on earth.
2.4 watts per square meter is equal to 2.4 million watts [2.4 x 10^6 watts] per square kilometer.
So in total 1.224 x 10^15 watts, i.e. 1.224 x 10^15 joules per second, or roughly
1.0 x 10^20 joules per day.
1.0 x 10^20 joules added per day to the earth is claimed to add 33 C
to earth’s average temperature.
The energy of a gallon of gasoline, if chemically reacted with oxygen, is
1.3×10^8 joules, so 7.6 x 10^11 gallons of gasoline produce the same amount of heat. So if 7.6 billion people were each to consume 100 gallons of gasoline per day, it would create the same amount of heat. It doesn’t matter whether the gasoline is put in a car or poured on the ground and lit on fire, though if used in a car the energy is more evenly distributed, compared to being poured on one spot and lit on fire.
Now, there aren’t 7.6 billion people, nor on average do they burn 100 gallons each; but the earth’s average was −19 °C, and if there were that many people using that amount of gasoline daily, then the temperature would apparently rise to a similar temperature as we currently have, due to this one source of heat. It is the same energy per day as the entire greenhouse effect.
Though it would only warm the entire planet if 7.6 billion people were driving cars or lighting gasoline across the entire globe in a uniform manner. If they mostly drove and/or lit fires in urban areas, then the urban areas would get warmer and uninhabited areas would get much colder. So you would have blazing hot cities and frozen rural areas.
Since urban areas are only a small percent of the earth’s surface, somewhere around 1%, one could use a smaller number of people, say 1/10th the number and 1/10th the gasoline consumption: 0.76 billion people and 10 gallons per day. Or 3.8 billion people and 2 gallons per day.
But let’s set aside burning gasoline and consider how much heat humans make by simply continuing to live. Humans generate about 100 watts, sitting and standing quietly.
Humans on Monday apparently reached the magical and lucky number of 7 billion souls. I missed the celebrations; busy doing something, I guess.
So that is 7 billion joules per second.
The greenhouse effect is doing 1.224 x 10^15 joules per second.
Compare 7.0 x 10^9 to 1.224 x 10^15. So roughly 6 orders of difference.
If we just consider urban areas then it’s roughly 4 orders of difference.
So 7 billion people, mostly in urban areas, living a quiet vegan existence, eating rice and beans (mushrooms, herbs, and bean sprouts for flavoring) and carefully managing the sewage, perhaps storing the methane, which the poor fools living outside the tropics could use for heating in their well-insulated homes to stave off the coldest of nights, would not warm the −19 °C world by very much: a thousandth of the warmth one gets from all the gases in the greenhouse effect, or about 1/10th of the effect from the CO2 greenhouse gas.
Now that we have forbidden eating any animals, can we permit people to have pets, particularly carnivorous ones such as cats and dogs? Perhaps only goats or cows for milk and cheese. But this could be a slippery slope: you might also permit solar panels and windmills, which means one needs a transportation system, and then we’re back to people living in searing heat.
Obviously, the AGW True Believers have thrown in the intellectual towel when Trenberth says the null hypothesis of global warming should now be reversed, thereby placing the burden on humanity to prove that it is not influencing climate.
When one is incapable of proving their hypothesis and the heat is turned up on the lie… this becomes a perfect stance to take. Deflect the responsibility from yourself…
I have lost all respect for Trenberth… of course, when he was mixing tree ring proxies with actual temps, that killed it too.
That is where this debate has been headed for years. This is how a hoax dies.
“Dr. T, you had a good run, you were feted and honored, but the day of reckoning up the cost has come and gone. Like some book said, you and the other un-indicted co-conspirators have been weighed in the balances, and found wanting. At this point, you have two choices — accept it and move on, or bitch about it. I strongly advise the former, but so far all I see is the latter.” ~Willis Eschenbach
“So that is 7 billion joules per second.”
Oh darn, I mean 700 billion. So the vegan life isn’t going to help.
How do you establish a null hypothesis for the climate without quantitative knowledge of natural climate variability? In the first IPCC report there was a graph of estimated global temperatures for the last two centuries. It showed a medieval warm period and a little ice age. The hockey stick graph superseded it in a later report. Which of these two representations of global temperature is closer to the truth for the last two centuries? If the old graph is more correct, the earth is comfortably within that range. If it's the hockey stick graph, we are already beyond the range (at least for the last 800 years). If you can't establish what the range of just the average global temperature is for the current climate optimum (since the last ice age), how can you begin to establish what the null hypothesis is, particularly as it relates to attribution of natural vs. anthropogenic causes? And that's not even counting average precipitation or occurrence of extreme events, which are part of the climate but much more difficult to reconstruct.
This is the point that Ghil made in his paper I referenced in my article.
“It showed a medieval warm period and a little ice age. The hockey stick graph superseded it in a later report. Which of these two representations of global temperature is closer to the truth for the last two centuries?”
The latter, because the former was a schematic of central England temperature only (Lamb, 1965/1982), not global average temperature.
Climate null(?) hypothesis=> http://bit.ly/ocY95R
Do you not think increased CO2 concentrations will add something over and above this pattern? At all?
I’m probably not as sceptical about this as I should be – I work hard to avoid giving significance to ‘patterns’. I see people/us creating the most extraordinary beliefs out of natural/chaotic/coincidental patterns and the power contained in them is often enormous. So it is an area where I would normally express huge doubt. But obviously I’m commenting here…..
How long have you been working on this?
The graph is great BTW
As I might have said before, in IPCC-speak it would be: ‘Although we cannot give significance to such a short period of time (14 years), this lack of warming is exactly what we would expect if there were to be a 30-year hiatus.’
I look forward to next spring/early summer when the first two temperature records cross the 15 year mark of slight cooling. You should offer bets on predicting the month (like at the Blackboard) :)
My argument is that the global mean temperature pattern has not changed since its record began 160 years ago.
And there is early evidence for the continuation of this pattern, as shown in the following graph,
contradicting the IPCC’s Fourth Assessment Report claim of “accelerated warming”.
To show visually just how extraordinary your claim of “the global mean temperature pattern has not changed since its record begun 160 years ago,” let us examine your claim vs. the actual, based on BEST, and with a glimpse of the CO2 trend tossed in to satisfy Anteros’ question:
We see the last half century, with 95% lower and upper bounds and linear trend lines through a dual-pass prime 13/11 filter (which should minimize distortions), and for comparison both Mauna Loa and the last line, “plot/best-lower/from:1960/trend/detrend:0.9”, in dark blue, far below the real global temperature curve, maintaining the slope Girma claims is the actual temperature trend, “unchanged in 160 years”.
Patently absurd claim, Girma.
Wicked graph, Bart R.
Earlier today I did stick that style of trend on the bottom of mine.
BEST data is for land only; it is not global!
According to the data from NASA and the Hadley Centre, the global mean temperature pattern has not changed since the record began 160 years ago.
This single pattern has a long-term global warming rate of 0.06 deg C per decade and an oscillation due to ocean cycles (http://bit.ly/nfQr92) of 0.5 deg C every 30 years as shown in the following graph.
Before 2002: http://bit.ly/sxEJpK
There is also early evidence for the above pattern continuing, with the current slight global cooling, as shown in the following graph.
After 2002: http://bit.ly/szoJf8
It is a travesty that they have convinced our kids of man caused inundation: http://bit.ly/rzLXCe
It is a travesty that the educated class has not yet said the emperor has no clothes regarding Anthropogenic Global Warming
So.. this hypothesis you have that there’s a vastly divergent land-sea trend over the past half century, what exactly is your evidence?
The data you claim supports you is clearly coherent with BEST, not with your claims.
And if you’re hoping to impress any American with John Stossel’s vast wisdom and the integrity of 20/20’s record on interpreting science.. you really spend either far too much time watching television, or far too little.
Nothing in the graphs you’ve presented (some tens of thousands of times all over the internet), overcoming all refutation by packing up like a carpetbagger and changing your colors but not your shams, amounts to more than mere denial of patently obvious, observable trends by obfuscation and improper methodology.
Certainly nothing you have ever presented in the small fraction of your campaign to flood the blogosphere with your opinions that I’ve read has touched the validity of BEST, compares with it for technique, precision or expertise, or reflects the least dimension of actual statistical knowledge.
I don’t see why any of what you’ve written under the same cover (or unknown pseudonyms) ought to be considered any better than what I have seen of your ideas. Likewise, I don’t see why we might expect the seas to maintain one century-long trend (not actually revealed in the data) while the land begins a completely new and steeper one (actually shown in the data), or, if they do, how that needlessly complicated assumption, flying in the face of Occam’s Razor, is of any comfort to us.
A world where the land warms 2C/century while the seas warm only 0.2C/century is a pretty strange place, if the trend holds.
Is the following graph wrong?
How about this one:
There you go. Sea below Land, but still far more than the 160 year trend, as expected.
Nice chart. I think doing that running mean with a width of 12 months really helps in filtering out the seasonal noise.
Is the following graph wrong?
How about this one:
Can a graph be right or wrong, in and of itself?
Certainly, there are invalid uses to put graphs to.
Using HadCRUT when BEST is demonstrably superior isn’t per se ‘wrong’ so much as indicative of poor judgement, or possibly cherry picking for an agenda. This is especially so when one has so many datasets available and, instead of treating each one separately and distinctly, attempting to confirm one’s hypothesis on one of them at a time and commenting on differences among them, one stitches together exactly the pieces one can force into a persuasive but meaningless shape, in what can only be viewed as a spoof of graphical analysis.
So, is the first of these two graphs you offered ‘wrong’? Sir, it is a mockery, indicative of either outright duplicity or a pathological condition, if that counts as ‘wrong’. This has been pointed out to you many dozens of times, on so many valid bases of refutation that it is hard to believe you continue to hold fast to it without blushing.
Here, looking at BEST, we see your supposed natural pattern of variations degenerate and vanish.
The bicentennial trend lines clearly diverge from the past 30 or 50 or 100 years, and the most closely fitting explanation for this behavior is anthropogenic causes shifting the trends, leaving only a shadow of natural variability superimposed on the sharp centennial-scale rise, at about an order of magnitude smaller amplitude than the changes associated with GHGs and dampened by man-made aerosols.
The second graph of yours, as you well know, or ought to know, is so short that it is far too uncertain in meaning to tell us anything meaningful whatever.
As a methodological note, the trend +/- offset isn’t particularly useful unless you accompany it with a CI, as BEST did with its 95% CI upper and lower bound curves.
Indeed, R-value, or CI, or, well, any ordinary statistical figure associated with presentations of this nature whatsoever appear to be missing.
Do you need help with producing these numbers? If so, there must be elementary statistics websites handy where you can pick up these skills. One commends wikipedia for the novice.
There you go: http://www.woodfortrees.org/plot/best/mean:13/mean:11/from:1993/plot/best/from:1993/trend/plot/best-upper/from:1993/trend/plot/best-lower/from:1993/trend/plot/hadsst2gl/mean:13/mean:13/from:1993/plot/hadsst2gl/from:1993/trend
The closest thing to a graph that can convey meaningful information about the timespan you are attempting to capture: enough time to overcome signal-to-noise difficulties and represent global climate data, as opposed to merely weather-scale events averaged over the globe. It separately shows a (poorer quality) representation of the sea surface trend, in general agreement in sign, though of course slightly lower.
How much time do you spend at sea?
Unbelievable. I read Trenberth’s “paper” and I wanted to puke. What a pile of BS from first letter to last period. And this passes as science? How sad it has become. I, a non-scientist, could write pages showing the assumptions, lack of logic, and purely speculative arguments in his paper. Take this:
“The times when extremes break records are especially the cases when natural variability, such as El Niño, is working in the same direction as human-induced warming”
Breaking records? Does he not understand that our record system is puny? It’s barely 100 years old, and the older records are thin and sporadic. Take temperatures. When records first started to be taken, EVERY DAY was a record breaker! As time goes on and more records accumulate, the number of record-breaking temperature days drops in a decay curve. This is because the slots of what the temperature can be start to get filled in. For example: if the temperature on July 1 of any year can be no lower than 20 C and no higher than 40 C, with 0.1 C slots, how many years would it take to fill them all? That’s 200 slots to fill!! In only 100 years? Please…
I decided to test how long it would take to fill all the slots with a simple simulation. If you just use a random number generator, I found that it would take some 1000 years to fill all the slots. If you use a Gaussian curve to add probability for any given slot (the higher/lower range temps being the least likely), then it would take some 6000 years to fill all the slots!! Record-breaking days are an accounting issue, not an indicator of changing temps.
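That slot-filling experiment is easy to reproduce. Here is a minimal sketch (not the commenter's original code): 200 slots of 0.1 C between 20 C and 40 C, one draw per year; the Gaussian spread (sigma of 35 slots) and the number of averaged runs are my own illustrative choices.

```python
import math
import random

def years_to_fill(draw, n_slots=200, max_years=100_000):
    """Coupon-collector style count: years of annual draws until
    every temperature slot has been hit at least once."""
    filled = set()
    for year in range(1, max_years + 1):
        filled.add(draw())
        if len(filled) == n_slots:
            return year
    return max_years

def uniform_slot():
    # plain random number generator: every slot equally likely
    return random.randrange(200)

def gaussian_slot():
    # bell curve centred mid-range; extreme slots are rare,
    # so the tails take much longer to fill
    while True:
        s = math.floor(random.gauss(100, 35))
        if 0 <= s < 200:
            return s

random.seed(1)
uni = sum(years_to_fill(uniform_slot) for _ in range(20)) / 20
gau = sum(years_to_fill(gaussian_slot) for _ in range(20)) / 20
print(f"uniform: ~{uni:.0f} years, gaussian: ~{gau:.0f} years")
```

With uniform draws the classic coupon-collector estimate, n(ln n + 0.577), gives about 1170 years for n = 200, consistent with the "some 1000 years" figure; the Gaussian version stretches to several thousand years because the rarely hit tail slots dominate the waiting time.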
The FACT is, for Canada and a few other locations I have checked, the VAST majority of record summer high temps were set BEFORE the 1950s.
His Fig 1 is also grossly misleading because there is no time frame. One assumes that, if temperature is used, the “normal” was from 1945–1975 and the “abnormal” shifted curve is 1975 to 2000. Hence using that as the dataset for comparison misses a major property of the climate system: oscillations. What if the apex of the curve oscillates with a period of more than 30 years? All Trenberth is doing is showing some portion of that oscillation.
Of course the most ridiculous comment in his paper was: “Besides, there is no other viable explanation to the observed temperature changes.” Is he a god of some kind? Geez!! Classic god of the gaps: “Just because we don’t know a natural cause means it MUST be a human cause.” And this is science? Please…
Yes! No 100 year period of data can be used to predict or explain climate.
If we have to use this type of create-a-null-hypothesis-and-build-on-it approach, here is one way to proceed. Instead of applying the approach to the scientific arguments, apply the approach to the policy arguments. Concede the following null hypothesis:
H0: Human activities have an effect on the climate.
and, just to avoid an argument, concede this one too:
H1: Human activities have caused a warming effect.
But now the real science is to test these hypotheses:
H2: The warming effect that human activities have caused is bad for the health, happiness, and sustainability of mankind (as measured in some meaningful way).
H3: The negative effects of human activities on the environment can be reduced through changes in human behavior. (This hypothesis could be divided up into a bunch of sub-hypotheses wherein both the negative effect and the proposed behavioral change are specified.)
H4: The result of changing human behavior in the manner specified in H3 (and its sub-hypotheses) does not have the consequence of some other negative effect which exceeds the negative effect being reduced in the first place.
It is through this type of reasoning that the approach of posing a null hypothesis and then posing the question of what to do is much more meaningful.
I am so glad that someone brought this up. We can change the argument from a science discussion to a systems discussion.
And since we are now in the arena of systems, we need to bring in some other hypotheses that can have the same negative outcome as AGW. The one that immediately comes to mind is that of oil depletion. The same mitigation factors are in place for oil depletion as for global warming — that is a concerted effort to reduce our dependence on hydrocarbons and to plan for alternative energy sources. This turns into a systems approach whereby you have to consider all factors that can apply, otherwise the policy decision criteria are incomplete and thus flawed (this is actually Decision Science 101).
So we have two hypotheses we must consider, and for simplicity let’s set them both at the 5% level. That means there is only a 5% chance that the observed behavior occurred by chance. So if AGW is at the 5% level, and the claim that we have hit Peak Oil is at the 5% level, then, assuming independence of AGW and PO, the probability of both of these happening by chance is down to 0.05 × 0.05 = 0.25%.
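The combined figure is just the product of the two significance levels under the independence assumption. A minimal check (the variable names are mine, not the commenter's):

```python
p_agw_fluke = 0.05  # 5% chance the AGW signal arose by chance
p_po_fluke = 0.05   # 5% chance the Peak Oil signal arose by chance

# Independence assumed: the chance that BOTH signals are flukes
p_both_fluke = p_agw_fluke * p_po_fluke
print(f"{p_both_fluke:.2%}")  # 0.25%
```

If the two events are not independent, as the comment below goes on to consider, this simple multiplication no longer applies.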
This number is critical, because what we have now is a very small probability that neither of these potentially dire outcomes will occur. And since the mitigation strategy is the same for either outcome, AGW is reduce hydrocarbon usage and Peak Oil is reduce hydrocarbon usage, the policy decision becomes more and more certain. In other words, the Uncertainty Monster is being tamed, and the energy policy is really cast in stone for the world.
The other possibility is that the oil depletion and AGW outcomes are somehow dependent events, and therefore the probabilities are not independent and cannot simply be multiplied together. But you have to ask how this would work out. I believe that the dependence between the two makes the outcome even worse. Consider that we start to use lower grades of oil and other fossil fuels that require greater investment of energy to extract and process the fuel (for example, tar sands and oil shale). For the same consumption levels of equivalent energy, this leads to a magnified emission of CO2 due to the lower EROEI (energy return on energy invested). In other words, more CO2 is pumped out for the same barrels of oil produced, possibly at as high as a 1.5:1 ratio. Consider that we may use the equivalent of half a barrel of natural gas to process one barrel of oil from shale kerogen or to crack oil from tarry goo. We are thus at the same or lower significance level of 0.25%, because we may be accelerating AGW by this decision to use lower grades of fossil fuel to produce the equivalent amount of crude oil.
The final possibility and one that some skeptics seem to blindly accept is that both AGW and Peak Oil are totally contrived and fictitious outcomes. To them, none of these significance levels holds any meaning, and there is no use in applying either science or systems thinking.
While I welcome your support for the general ideas of treating these hypotheses at the system level, you failed to proceed with your argument to H4 above. That is, what are the other consequences, intended or not, of the proposed solution? Will it slow human progress? Will more people starve due to increased inefficiencies? Will the third world be left to wallow in its filth? It is a system and the system is complicated. Sometimes the most direct solution is no solution at all.
The realistic premise seems to be that people question the importance of considering alternative energy schemes. Cheap oil has been the world’s economic engine for about a century now, and no one knows what will happen if this changes. I don’t think the “no solution” solution applies, as the free marketplace has not even suggested a germ of an idea for an alternative energy source.
The final possibility and one that some skeptics seem to blindly accept is that both AGW and Peak Oil are totally contrived and fictitious outcomes.
Although arguing against AGW is a lost cause (according to KT anyway), arguing against Peak Oil might not be. W called oil an addiction. Let’s say it is, and let’s suppose no rehab facility comes along in time to cure our addiction. What happens then?
Our addiction gets the better of us. We start robbing convenience stores. Then banks. Then Fort Knox. Not for element 79, Au, gold, mind you, but element 6, C, carbon.
We kick our carbon addiction when we run out. When is that?
Well, 400 parts per million of CO2 in the atmosphere might not sound like much, but here’s a different statistic that sounds even less threatening, at first. How about 4 parts per million?
That’s how much carbon is in the atmosphere, compared to what’s still in the ground waiting to drive us to work, run our televisions and dish washers, and so on. For every million atoms of carbon in the ground there are only 4 atoms of carbon in the atmosphere today. Every one of those atoms still in the ground could potentially be sucked through some future carburetor to beat the light, or power station furnace to push electrons out to houses worldwide.
Some of it we can still easily get to today. As that part starts to run out we gradually increase our robbing of convenience stores, banks, Fort Knox, and so on up the carbon chain.
This isn’t going to happen all at once. We have centuries to perfect the requisite technology! The Second Amendment is on our side (figuratively speaking today, but future legislation may broaden it appropriately).
Peak oil is a joke today. Peak oil is when all the planet’s carbon is in the atmosphere. That’s centuries into the future.
By then atmospheric pressure will be 100 times what it is today, because the mass of Earth’s carbon, when converted to CO2, is 100 times that of today’s atmosphere, 98% of which today is oxygen and nitrogen. The amount of oxygen in this future atmosphere, in the form of CO2, will be 360 times what it is in today’s. There’s no problem supplying that since the abundance of Earth’s oxygen is a couple of orders of magnitude higher than carbon.
Surface temperature is governed primarily by the lapse rate of 10 °C/km, roughly the same as on Venus, with greenhouse warming merely ensuring that the lapse rate doesn’t decline (very little is needed). At that point the surface temperature of Earth will be well above the ignition temperature not only of paper but of all carbon-based compounds, turning them all into the gas CO2. This is patently not the sort of climate that is hospitable to life, which will have gradually migrated vertically, learning to live in the treetops so to speak, non-CO2 carbon’s last stand.
How high? Well, when most of the Earth’s carbon has been converted to CO2, life will only be possible 60 km above the surface, where the temperatures will be comparable to those of the 21st century.
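The 60 km figure follows from the stated lapse rate by straightforward arithmetic. A back-of-envelope sketch, where the hypothetical hot-Earth surface temperature (around 615 C, above the ignition point of carbon compounds) and the "comfortable" 15 C target are my own assumed inputs, not figures from the comment:

```python
lapse_rate = 10.0            # C per km, the rate quoted above
surface_temp_future = 615.0  # C, assumed hot-Earth surface temperature
comfortable_temp = 15.0      # C, a typical 21st-century temperature

# Altitude at which air cools back to comfortable levels,
# assuming the lapse rate holds all the way up
altitude_km = (surface_temp_future - comfortable_temp) / lapse_rate
print(altitude_km)  # 60.0
```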
Skiing will still be in fashion 3 km higher, on slopes constructed of the same material supporting the cities a bit lower down and well supplied with snow cannons. Your lift pass will include enough breathable oxygen for a day even on the black diamond slopes.
Science fiction, to be sure. But some science fiction has a tendency to turn into fact. Can this? Will this?
Except that there is still much uncertainty about how it works. CO2 is going up, but the most abundant hydrocarbons, methane and ethane, are going down. And what is water vapour doing?
Interesting dynamics. According to the book “Introduction to Organic Geochemistry”, in 1998 CO2 had a radiative forcing of 1.46 W/m^2 and methane 0.48 W/m^2, which indicates that methane was about 1/3 the strength of CO2 at one time.
This plateauing and now decline of methane is well known, and I wonder whether it has any effect on the recent moderation in temperature increases we may be seeing. We also have to remember that methane is a GHG with a much shorter residence time than CO2, so atmospheric methane concentrations can jump around over short time spans. In other words, stop pumping it into the atmosphere, and the excess will decompose fully within a few years (it is an energy source, after all).
Vaughan, my central point is: if you are correlating CO2 with population growth, then why are CH4 and C2H6 going down? And what is water vapour (the most abundant greenhouse gas) doing? Is water vapour going up or down?
Kermit, if you’re asking whether I have more insight into the cause of the decline than Aydin et al, the answer’s no. They attribute it to reduced emissions of methane and ethane from fossil fuel since the 1980s. While I have no easy way of confirming this, I have no reason to suspect that Nature is pulling the wool over my eyes and am inclined to take their explanation at face value.
That’s the empiricist in me talking. The theorist in me has asked politely to pass on the following unsubstantiated theorizing.
Given the primitive state of CO2 sequestration today, CO2 remains the only seriously uncontrolled emission at this point. All other fossil-fuel emissions represent unburned hydrocarbons, the technology for capturing which has improved by leaps and bounds in the past few decades. CO2 aside, engines and power stations are burning cleaner than ever, and it should therefore come as no surprise to see CO2 continuing to rise while all other fossil fuel emissions decline.
What’s your view?
I agree with this view. If someone wants to look into this more deeply, the number of conservation and efficiency techniques first instituted in the 1970s is incredible. Many do not remember this, but crude oil was often poured onto dirt parking lots just to keep the dust down, and then there is the amount of NG flaring that has been reduced. Natural gas was often found with oil and, being secondary to profit, was simply flared off. This still happens with offshore platforms, where they can’t avoid it due to the lack of an NG collection system.
Night-time satellite pictures reveal where most of the NG flaring occurs in the world.
Nigeria : http://i.treehugger.com/images/2007/10/24/gas%20flaring-jj-001.jpg
This is a composite of the entire world put together by the National Geophysical Data Center:
Notice there are no signs of flaring left in the USA, but renegade areas such as Nigeria are in a free-for-all, and inland Russia likely has few pipelines to carry the gas away. From what I read on the situation, there is very little accounting of how much is flared.
The savvy observer will point out that flaring burns away the methane in the natural gas, but in that case it creates CO2, which turns it into a lose-lose situation. Some of the methane will also escape without being combusted, and then there are all the seeps that are not flared, which will not show up.
Interesting topic. Another big unknown is whether the permafrosted peatlands will start warming up and decomposing.
Score one for the AGW outcome being worse. The point is that many people think the oil depletion scenario will have a harsher short-term impact. That’s why my premise was set up as an either-or conundrum, and which is the way we have to think about it to test for significance.
That is indeed possible if the extraction technology comes along. What percentage of the remaining hydrocarbons is in decaying organic matter versus living plants? If we have to start burning wood for energy, then that is the last straw.
Years ago there was some guy trying to sell a field, I think in Colorado, which is loaded with oil – if you’re willing to break rock for a few molecules.
Colorado is loaded with oil shale which is sedimentary rock mixed with kerogen, a solid version of a hydrocarbon. Get the nanobot technology going and you will have plenty of slave labor to transform this into a liquid fuel. Otherwise we can use natural gas to heat up the kerogen and crack it into liquid. Either way, we have to expend a lot of extra energy.
I just laugh at these fools. My wife worked in Africa for three years. To travel from the airport to the site, they often had to be in an armored column defended by British Marines. One day they could not raise a response from a nearby site owned by a European concern. They were all dead: shot to death and throats slit. This is all to keep oil in the tanks. In the three years she was there, around 125,000 people were killed by rebels.
Folks, it’s getting noticeably harder.
I remember reading the book of SF anthologies inspired by your colleague Paul Ehrlich in the late 1960s/early 1970s.
It was the same sort of silly stuff you are projecting today about skiing and such: fun scenarios, totally worthless for planning the future.
I fully agree, Hunter. I had fun writing it, and I don’t seriously expect anything remotely like that to actually happen, for two reasons.
1. Humans couldn’t possibly be that stupid.
2. Inertial confinement fusion should be operational before the CO2 has hit even 600 ppmv. I don’t believe that’s science fiction, unlike magnetic confinement which is impossibly unstable. Thereafter the only carbon on the planet at any risk is that inside humans as they continue to devise ever more ingenious ways of killing each other.
An optimist would note that we’ve gone two-thirds of a century since the last world war. A pessimist would point out how mentally unready such a long interval will have made us for the next one. In the two decades from the end of WWI to the start of WWII, technology improved our ability to firebomb cities to ashes immensely. We really aren’t prepared for a third world war if it’s going to be fought anywhere near as seriously as the previous two.
But this then raises the question, is it better to plan for a third one, or to plan on not having a third one?
Personally I prefer the latter, but opinions are divided.
To have been more rigorous on the maths in my mis-spent youth and to have been in one of your classes…….
I completely agree on option two irt WWIII.
The most dismal end of the world scenario that did not involve aliens or big rocks from the cosmos was the third book in the Gaea trilogy by John Varley, “Demon”.
The xenocide occurs, not in minutes or even a few days, but over years, as the diminishing powers that be unleash hell in ‘small’ doses for no actually known reason.
Of course there was a senile alien involved, but it was still pretty bad.
If your pals there in the Bay Area would hurry up with useful fusion, it would be greatly appreciated….
“The final possibility and one that some skeptics seem to blindly accept is that both AGW and Peak Oil are totally contrived and fictitious outcomes. To them, none of these significance levels holds any meaning, and there is no use in applying either science or systems thinking.”
Another possibility is that government doesn’t have the authority to solve such problems [if they existed] of too much CO2 or not enough oil.
True, totalitarians may not understand this concept, but totalitarians are a very tiny minority.
And the American Senate would probably never ratify a treaty regarding this issue; there is little reason a vote of 95 against to 0 should flip to 66 in favor.
But if the Senate were to magically change, the people could consider it wrong and could revoke such a decision, voting out such a Senate and altering any law or the Constitution if they wished. True, the existing Constitution does not grant such authority, but just to make such a thing crystal clear, an amendment could be crafted that specifically forbids such or similar overreach by the government at any time in the future.
By your extended response, you seem to agree that the likelihood of the combined (P1 U P2) outcome is strong, but you don’t agree with any mitigation strategy. This points to the real agenda behind the skeptical viewpoint, which is to avoid policy planning because the thought is that the free market will always somehow manage.
“By your extended response, you seem to agree that the likelihood of the combined (P1 U P2) outcome is strong, but you don’t agree with any mitigation strategy. ”
P1 is meaningless: I leave my door open, I heat the world. For it to have any meaning it would need to be quantified.
It’s only a slogan that works on people who have been frightened by scary movies. I prefer to be warm. If I preferred cold, I would live in Alaska.
Instead I live in a warm place, and I may move to a warmer place and be as happy or happier.
P2 I also disagree with: no creature has died from “global warming”. Creatures can die from hot weather, but far more die from cold weather; many birds flee cold weather.
I merely wanted to point out an obvious problem with this lefty religion, which can be stated more simply as: no one else wants to live in an authoritarian police state. This bone you chase, you won’t even like if you ever get it. Are you liking it so far? Or are you still waiting for the magic moment?
“This points to the real agenda behind the skeptical viewpoint, which is to avoid policy planning because the thought is that the free market will always somehow manage.”
Is there anything wrong with people being free?
Is the thought too distressing?
Or do you believe people are not smart enough or moral enough to decide what they actually need and want?
That experts are needed.
How much CO2 should people be allowed to emit?
What do the experts say?
So we have 2.4 watts per square meter and some fraction of that is CO2?
You don’t say. Let’s tax them a few trillion dollars per year.
And we need governmental “protection” from the rich, people like Warren Buffett, Bill Gates, Larry Page, and Sergey Brin?
Because their wealth is distressing. We want some of what they have.
And at the same time let dictators be free to decide whatever they dictate; no sense imposing on these guys, they mostly just murder, torture, and generally oppress other people, people we don’t know.
Instead let’s focus world attention on what is really important- CO2.
Yeah, my beach house could be ravaged by the foot-per-century sea rise, and it could be worse, maybe 2 feet or higher.
But we could use free markets. To use free markets, though, you need to understand what they are. Free markets generally don’t like investing billions of dollars and then having politicians decide that their investments are too successful. So free markets really, really like stable laws, laws they can trust and make plans around years into the future.
So if people understood that one simple thing, we could do just about anything.
Careful there, gBaikie. Mentioning those two means that you are veering dangerously close to describing a collective. Only a single person getting rich on his or her own has any meaning in a capitalist, free-market-driven society. Page and Brin collectivized their talents and scarily approached the socialist ideal. Having anyone depend on another person, or worse a group of people as in a cooperative, can only lead to disaster. Google has further led us down this slippery slope that was first started by that other collectivist garage startup organized by the pinko pair of Hewlett and Packard.
Sarcasm usually flies over the heads of the clueless, so what I am saying is that people will make collective and cooperative decisions based on solid information, independent of what you believe in.
the most likely scenario- and the one Web likes least- is that he was just wrong.
It is the taxpayer who funds Trenberth, Hansen and the Manniacs.
These climate scientists hold these people who fund their work with contempt, they hold the values of their Institutions in contempt and hold the ethos and ethics of Science in contempt.
They think they can get away with their massaging of data, cherry-picking, perversion of statistical methodology and corrupting peer/grant review.
Their time will come, they got “Bernie” Madoff for stealing $17 billion, which was peanuts to the champions of cAGW.
Climate is too imprecise a concept for us to construct any meaningful hypothesis about it. The idea of global climate change is particularly ridiculous, since climate is a local, not global, phenomenon.
True. Climate is presumed to be local weather averaged out and incorporating seasonal factors and natural variability. Global climate is even a more nebulous concept upon which to base hypotheses of any kind – null or otherwise.
I have fun with many GW faithful on a number of forums and comments in media posts when they claim such and such weather extreme is because of human activity. I ask: “By how much?” I ask them to tell me if the human component is 100%, to which they have to say no. So what percent is it then? 50%, 25%, less than 1%? What? It’s funny to watch the contortions of replies I get next. Definitely a sore spot to rub salt in. I highly recommend it.
This null hypothesis argument is all about this “by how much?” Trenberth is now claiming it’s 100% by default.
Sure it is. He is 100% confident that 100% of the climate change is because of our CO2, unless someone shows otherwise.
The null hypothesis simply frames the question, it isn’t a statement of accepted reality necessarily.
And I think Fred Moolten’s take on it was right – Trenberth was arguing the rhetorical case that the null hypothesis has been rejected across so many studies that we are very confident of those results.
It’s simple. Does the universe work on its own? Must one assume, from first principles, that all events in the universe have natural mechanisms as the cause of those events? Yes. Hence Trenberth is returning science to Medieval times; rather ironic.
Trenberth was arguing the rhetorical case that the null hypothesis has been rejected across so many studies that we are very confident of those results.
Who is the intended audience for this null hypothesis?
Those accepting AGW don’t need to view it as a “null hypothesis” because they see no need to test it at this point. For those not accepting it, it’s backwards for the reason I gave earlier: always formulate your null hypothesis so that it leads to a contradiction.
Karl Popper said that when he learned that Carnap treated the probability of a hypothesis as a mathematical probability, he “felt as a father must feel whose son has joined the Moonies.” I routinely do this kind of thing, but it is good to keep these chastening views in mind while doing contemporary statistics, or when thinking that unique events have “probabilities.”
If your probabilities are Bayesian rather than frequentist there is no problem with the idea of the probability of the hypothesis being true. As long as you don’t mix frameworks the mathematics is sound. Just because Popper said something, doesn’t mean he was right.
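The Bayesian point can be made concrete with a toy calculation (all numbers below are invented for illustration, not taken from any attribution study): in a Bayesian framework a hypothesis H carries a probability directly, and each piece of evidence updates it by Bayes’ rule.

```python
# Toy Bayesian update: a hypothesis H starts with an agnostic prior
# and is updated by evidence that is 4x more likely under H than not-H.
# All numbers are illustrative.

def posterior(prior_h, like_h, like_not_h):
    """Posterior probability of H after one piece of evidence."""
    evidence = like_h * prior_h + like_not_h * (1.0 - prior_h)
    return like_h * prior_h / evidence

p = 0.5                          # agnostic 50/50 prior on H
for _ in range(3):               # three independent pieces of evidence
    p = posterior(p, 0.8, 0.2)   # each 4x likelier under H than not-H
print(round(p, 3))
```

Three such pieces of evidence take a 50% prior to roughly 98.5%, which is the sense in which one can coherently speak of “the probability of the hypothesis” without any frequentist contradiction.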
Having read the “dueling papers” from Drs. Curry and Trenberth, I have the impression that each person more or less accomplished what she and he set out to achieve. Dr. Curry has emphasized the need for a precise definition of hypotheses relevant to the magnitude of anthropogenic and natural climate change, including an appropriate recognition of the uncertainties surrounding each.
In the case of Kevin Trenberth, I suspect that he has never cared much which null hypothesis is applied to climate change. Rather, the recommendation to reverse the null hypothesis is a rhetorical tool he has used to emphasize the weight of evidence he sees for substantial anthropogenic influence on global temperatures and its consequences, including a contributory role to extreme weather events. His paper was not intended as a statistical treatise, but as a framework onto which he has hung the types of evidence for specific consequences that he wishes readers to incorporate into their thinking.
In that sense, “dueling” is a false metaphor, but it will be interesting to know how both arguments resonate with the readership of WIREs Climate Change.
Fred, Trenberth’s paper is suffused with the doctrine that extreme weather events are being caused by global warming. Muller very convincingly debunks this idea in his video on climate change. Basically, because we have better measurement methods today we “see” more extreme events that were missed in earlier times. In the early 20th century hurricanes that originated and died in the Atlantic were not measured because there were no satellites, and ships didn’t go into this area because there were hurricanes there. Basically, there is measurement bias on extreme weather events. Characteristically, Trenberth totally ignores this idea. What does that tell you? It tells me that he is being dishonest.
It’s not just the records, which prior to the 1950s are sporadic at best, but the small time frame Trenberth is using. Only 30 years or so???? Like the planet didn’t exist prior. Dishonest is giving him credit. He is a disgrace to his profession, and to science. If this was any other discipline he would have been thrown out long ago.
David – Trenberth states that global warming contributes to extreme weather events, without claiming that particular events are caused by global warming. His conclusions are very widely held and are supported by data indicating changes in the ratio of extreme hot to extreme cold, as well as other data. I don’t think the principle is in much doubt, but the quantitation is difficult.
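The hot/cold record ratio mentioned here is easy to illustrate with a toy Gaussian calculation (the 0.5-sigma mean shift and the 2-sigma thresholds are my illustrative choices, not observational data): even a small shift in the mean changes the two tails disproportionately.

```python
# Sketch of why the extreme-hot vs extreme-cold ratio is a sensitive
# indicator: shift a Gaussian's mean slightly and compare the tails.
# Shift size and thresholds are illustrative, not fitted to any data.
import math

def tail_prob(z):
    """P(X > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

shift = 0.5                      # mean warming, in units of sigma
hot = tail_prob(2.0 - shift)     # chance of exceeding a fixed +2-sigma threshold
cold = tail_prob(2.0 + shift)    # chance of falling below a fixed -2-sigma threshold

print(round(hot / cold, 1))      # hot extremes outnumber cold ~tenfold
```

A half-sigma shift in the mean turns a symmetric 1:1 ratio of 2-sigma extremes into roughly 10:1, which is why the ratio moves long before the mean shift itself is obvious.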
I posted the Muller video down thread. The severe events stuff is just before the climategate part, about 25 minutes in. There are also some interesting observations about climate science and the tendency to hide data that doesn’t fit the theory. This is a video everyone should watch and shows why I really like Muller as a straight shooter.
Yah, yah, everything contributes to everything. The question is how much, not whether or not.
Now Fred, I must ask you to watch Muller’s full-length YouTube video, the one where he talked about Climategate. There are a couple of other longer ones too, where he looks at things like wildfires, tornados and a few others. Even the claim that they are increasing in frequency and/or intensity is just not supported by the data. You need to use the SAME measures over time; for example he uses strong tornados in the US. There is no trend. Same for hurricanes that hit the US: the rate is GOING DOWN. Wildfires in the US, the same thing. He is very blunt about the cherry picking. “In climate science no one ever shows the data that disagrees — they think the public is too stupid to understand.” I really like Muller and his blunt style.
Even so, Fred, your justification of Trenberth is like saying that vertebroplasty doesn’t really work, but it might contribute to people feeling better. Scepticism is the engine of progress in science, not respect for authority as embodied in the “refereed” literature.
You are suggesting that Trenberth is not seriously pushing his hypothesis but is using it as a dramatic prop?
Then you use the ‘dancing on heads of pins’ equivalent in claiming that AGW is ‘contributing’ to extreme weather, which is fairly amazing.
Do you recall the object lessons about painting one’s self into a corner?
Hunting for cover
Trenberth dives in the null hole.
“Whereas I argue for nullifying the climate null hypothesis as it relates to attribution, Allen argues for preserving the climate null hypothesis.”
Curry got that right !
“As stated in my paper, climate attribution hypotheses are particularly ill-suited for null hypothesis testing.”
Right again !
I see many calculations that use π (3.14159) as part of the calculation in climate science. Some places they use it many times.
That calculation is for a perfect circle. Space and science are never into perfection.
So, I am reconfiguring π to create vortex or spiral(expanding solar systems).
Again this deals in motion.
Hey Skeptics, sorry to say but this is your guy and it’s your call to make.
So, I am reconfiguring π to create vortex or spiral(expanding solar systems).
Indiana considered legislating π to be 3 in the late 19th century. A mathematician happened to drop by on unrelated business, and hearing of this was able to talk them out of it.
As it turned out he happened to belong to the Flat Earth Society. Many people nowadays consider the Earth to be spherical, which is to say, not flat. For them the ratio of the circumference of the 60th parallel to the distance between any two opposite points is precisely 3. (The 60th parallel very precisely separates the Yukon Territory, the Northwest Territories, and Nunavut from British Columbia, Alberta, Saskatchewan, and Manitoba.)
For that circle, and all others on Earth of the same radius (of which there are uncountably many), π is exactly 3.
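For anyone who wants to check the 60th-parallel arithmetic, here is a quick numerical sketch (a sphere is assumed, and the radius value is merely illustrative; the ratio is independent of it):

```python
# The parallel at latitude 60N has circumference 2*pi*R*cos(60) = pi*R.
# Two antipodal points on it are separated by a 60-degree arc over the
# pole, i.e. a surface distance of pi*R/3. Ratio: exactly 3.
import math

R = 6371.0                                        # mean Earth radius, km (sphere assumed)
lat = math.radians(60.0)

circumference = 2 * math.pi * R * math.cos(lat)   # pi * R
surface_diameter = R * 2 * math.radians(30.0)     # 60-degree great-circle arc = pi * R / 3

print(circumference / surface_diameter)           # 3.0, up to rounding
```

So "π = 3" holds for that family of circles only because the "diameter" is measured along the curved surface rather than through a straight chord.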
I have posted multiple requests to Joe to get the help he obviously needs, but he apparently found the line was filled with AGW true believers, 911 truthers, alien abductees, Bilderbergers, etc. and decided to wait for his appointment here.
He is most assuredly ‘not our guy’, unless some skeptic who visits here is a mental health professional and he is under that person’s care.
Let us at least agree to let the obvious ‘troubled souls’ of either side be recognized as such.
Could come in useful if a black hole pops up somewhere. Maybe he works at the LHC.
As a poker player, I see people imposing patterns on random events all the time. Gamblers are always in the midst of some streak or other, streaks which feel both self-sustaining and meaningful. If I’ve learned nothing else playing poker, it’s that the long term…that is the point where skill starts to overwhelm luck…is much, much longer than most people think.
Granted the analogy is facile, but it continues to mystify me how meaningful conclusions can be drawn about climate over a period of decades. I can’t understand how people can talk about 1998, say, as the warmest year on record, as if that really means something.
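The streak point is easy to demonstrate with a toy simulation (pure coin flips, nothing to do with any climate dataset): purely random win/loss sequences routinely produce long runs that feel meaningful to the person living through them.

```python
# Longest winning streak in a perfectly fair, independent 50/50 game.
# Illustrative only: the "hands" here are coin flips, not real data.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def longest_streak(n, p=0.5):
    """Longest run of consecutive wins in n independent hands."""
    best = run = 0
    for _ in range(n):
        run = run + 1 if random.random() < p else 0
        best = max(best, run)
    return best

# Over 1000 fair hands, streaks of roughly log2(1000) ~ 10 are typical.
print(longest_streak(1000))
```

A gambler inside that sequence experiences a ten-hand heater as skill or destiny; from outside it is the expected behavior of noise, which is the long-run point exactly.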
Climate is random and warming is on a winning streak? Well, that’s a novel notion.
Well, there’s still lots of betting on continued warming, but the wheel of fortune has turned and is now coming up always black, as in frostbite.
The clever punters have now moved to the extreme game, and are playing it for all it’s worth. But a colder globe has less energy in it, and overall, fewer extremes.
Oh these gamblin’ fools, don’t even know the rules of the game. Or the house odds.
What this has anything to do with physical processes, I have no idea. Climate science does not classify as a zero-sum game.
Climate science does not classify as a zero-sum game
In the long run yes. Those still denying global warming aren’t in it for the long run, for them it’s a zero-sum game today.
What was it that Keynes said about the long run? Something like “in the long run, we’re all in an ice age”?
Your attribution of motive to those with whom you disagree is annoyingly illogical, especially for someone who claims to be an expert in logic.
Consider that it is AGW true believers who write books about how to destroy the industrial and technological infrastructure which you had a significant part in creating and upon which literally billions depend.
It might even be logical to consider that skeptics care very deeply about the future and happen to believe that the AGW obsession and the policies AGW believers are demanding would harm the future if carried out.
Hunter, if it would save you any time I could write your stuff for you. I understand your arguments, and reckon I could duplicate them very faithfully. Nowhere in them would I attribute motives to skeptics.
As I get to see more of your work, I would not be surprised if you could write my arguments better than I, lol.
I appreciate your clarification regarding motive- in your later posts today it became clear to me that you had no intention of attributing motive, but thank you for taking the time to explicitly state it.
As this great disputation moves forward through time, my appreciation for your perspective and insight increases quite a bit, even though we disagree on this specific topic.
even though we disagree on this specific topic.
Yes, but the believers find just as much wrong with what I say as do the skeptics. I don’t believe in taking sides in this debate; I’d like to think that I could figure out what’s really going on by myself.
That said, I rarely if ever find myself in disagreement with competent physicists on climate matters. As does everyone else, I judge competence by whether they agree with me. ;)
Perhaps drug manufacturers should just say, this treatment is simply too important to delay and we have this consensus opinion that proof is important so we should require skeptics to prove the drug doesn’t work.
+1 internets to you today sir.
Labmunkey,…so you are the person who musta sold a boatload of Niacin to the VA. Sorry about your future sales. We stopped itchin’.
I read the Trenberth article and one thing that struck me was the attribution of extreme 2010 weather events to anthropogenic global warming. I really question this. Just look at Muller’s presentations on this. He says that hurricane activity and severity is not increasing. He points out that in the past we couldn’t detect all hurricanes and now we have satellites and aircraft. For example, Katrina would have been a cat 3 in previous history because that is what it was when it made landfall. But it was a cat 4 shortly before that. If you look at recent hurricane tracks, a lot of them originate and die in the Atlantic. Muller points out that in the early 20th century these hurricanes were undetected because there were no satellites and ships didn’t go into these areas because they were afraid of hurricanes.
This thing about extreme weather is just so much nonsense in my opinion. The main problem is that we now have better detection methods which tend to increase the number of extreme events and increase their measured severity.
David – We can probably agree that attribution of extreme events is difficult because (1) they are uncommon, almost by definition, and therefore good data tends to be sparse, and (2) they are rarely due to a single factor, but rather are associated with a confluence of conditions.
Nevertheless, there are data that support the notion that the frequency of extreme weather-related events is increasing. Among them are those compiled by Munich Re, the giant re-insurance company. They have compiled data on thousands of extreme events over the last several decades. One can compare those related to, for instance, extreme precipitation v. earthquakes, to conclude that, despite reporting biases, the number of the former are increasing disproportionately. See the website of Munich Re, Geo Risks Research, NatCatSERVICE
There are other sources that provide similar evidence.
You’re making the mistake of using insurance claims as a proxy for weather events. What that’s really telling you is that insurance claims are up, because the world is getting wealthier, and more assets are at risk.
No. Check out the website, the data, and the analysis.
Insurance claims are based on how much wealth is accumulated in the path of a catastrophic weather event.
Irene in 2011, a completely un-notable storm, caused a lot of damage because a lot of stuff was in its way.
Camille in 1969, an amazingly dangerous storm with >200 mph winds on shore, did relatively little damage because the Mississippi coast in 1969 was nowhere near as developed as it is now.
See my comment above.
To be clear, the data I am referring to are comparisons of the numbers of different types of events, not dependent on insurance claims or costs. Interpretation is not without demographic complications, but the evidence is nonetheless strongly suggestive.
Put more stuff in the way for weather to hit, as I pointed out in my examples, and the number of claims will go up.
I am involved in the insurance industry and follow this fairly closely.
If you would like a tough-minded analysis of how cat claims and extreme events have actually behaved over historically interesting lengths of time, visit Pielke Jr.’s blogsite for some good examples.
Thanks hunter, I am familiar with Pielke Jr.’s arguments, and I understand your point regarding claims, but I am talking about comparative analyses that attempt to account for the biases you refer to.
If you are involved in the insurance industry, you are surely aware of how seriously the industry is taking climate change. If not, check out Lloyd’s of London, Ernst & Young, etc. (as well as Munich Re).
Pat, I am also aware of how often the industry gets it wrong. And that the bias, as often as possible, will be to the side that leaves more cash in the reserves till.
What happened with the reinsurance business after 2004 comes to mind.
Pat, it looks to me like they only have data from 2004 to present. Am I missing something? Also it looks like I have to log in to access it?
You know Bishop Hill has a post about a paper that Munich Re funded which showed quite conclusively that claims were not going up! Muller is great on this.
In the US, where records are actually reliable, severe tornados, hurricanes are both decreasing over the last century. There is no recent 30 year uptrend either. Wildfires there is no trend. See my earlier response to Fred up thread. These assertions about severe events are generally very questionable.
David – For some reason, my reply — with links — seems to evaporate in cyberspace. But it only urged you to read the primary literature (including the Munich Re paper) before accepting Muller’s pronouncements, or what Bishop Hill said (or what Pielke selectively quotes). I know you hold the primary literature in low esteem, but actually reading the wildfire research, etc. might give you pause. Or might not.
OK, if you can try the links again, I’ll look. I’m skeptical on these things because there are so many other factors. In Colorado for example, you can look at old photos in the physics library at CU Boulder of the area in the 1860’s, for example, and you see that there are almost no trees near the flatirons. In the last 50 years, by controlling fires, tree coverage and combustible material concentration have increased dramatically. You know this is Trenberth’s stomping ground so I can’t believe he has not seen them, but maybe he’s too busy calling other scientists names and intimidating journal editors. In fact, I read somewhere that more of the US is forested now than in colonial times. The point is that wildfires are influenced by so many things, it strains my credulity that a global warming “signal” can be extracted.
Munich Re: A Trend Analysis of Normalized Insured Damage from Natural Disasters
You will see that difficulties abound, but interesting results are nevertheless forthcoming.
Wildfires: Western U.S. Forest Wildfire Activity
Pat, See my comments on the papers downthread
Whoa, is this completely backwards or what? In a court case you have two legal teams arguing the opposite sides of the case. If I was the client of one of the teams and they took the position that the null hypothesis was my side, I’d fire them instantly and get a new team.
The only way to win in court is to use the opposition’s position as the null hypothesis and establish its impossibility. Sure there’s the presumption of innocence for the accused, but what jury has ever taken that seriously? The client is better off if you start from the presumption of guilt and convince the jury that it’s ridiculous. Everything’s possible; showing that something is impossible is the only thing that has a prayer of getting the jury’s attention.
On the afternoon of Dec. 8 at the annual AGU meeting in SF, session GC43B. Global Environmental Change General Contributions II, starting at 1:40 pm, I’ll be examining the null hypothesis that humans are not the cause of the rise in temperature that happens to merely correlate by pure accident so perfectly with the rise in population and the CDIAC’s 250 years of fossil fuel consumption records.
Did you notice the italics? NOT (in case your terminal doesn’t have italics). That’s what we’re going to be looking at on Dec. 8, that humans did NOT cause global warming. Capiche? Anyone coming into my session with the assumption that they did will be kicked back out as unhelpful to what I want to argue there.
Proofs of a proposition that begin by assuming the proposition are circular proofs, and that’s what Kevin is proposing here. Although I was trained in physics I made my career in logic, and I can spot an illogical argument 300 cubits away.
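For what a conventional test of that “humans did NOT cause the warming” null might look like, here is a minimal sketch (my own construction on synthetic numbers, not Pratt’s AGU analysis; real climate series also need an autocorrelation correction that this toy omits, since trending series inflate naive significance):

```python
# Toy null-hypothesis test: correlate a temperature-like series with a
# cumulative-emissions-like series and ask how surprising the sample
# correlation r would be under H0: no relationship. Synthetic data only.
import math
import random

random.seed(0)
n = 30
emissions = [math.exp(0.03 * t) for t in range(n)]            # exponential growth
temp = [0.5 * e + random.gauss(0.0, 0.1) for e in emissions]  # forced signal + noise

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(emissions, temp)
t_stat = r * math.sqrt((n - 2) / (1 - r * r))  # t-statistic under H0: rho = 0
print(r > 0, abs(t_stat) > 2.05)               # 2.05 ~ two-sided 5% cutoff, 28 dof
```

The logical structure is the point: the test assumes no human influence and asks whether the data are improbable under that assumption, which is the opposite of assuming the proposition to be proved.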
Please post your paper/preprint when available.
Please address how long it will take to reliably detect anthropogenic warming and the probability of doing so, in light of the uncertainties in the satellite record; e.g., see Nigel Fox’s proposal to improve equipment and reduce that from 30 years to 10 years.
See NPL in space
Nigel Fox lecture
Accurate radiometry from space: an essential tool for climate studies Phil. Trans. R. Soc. A (2011) 369, 4028-4063 doi:10.1098/rsta.2011.0246.
Even with 30 years, how are we doing when I have seen graphs showing current temperatures running BELOW Hansen’s 1988 no-CO2-growth projection?
Similarly, why does the growth of CO2 in the atmosphere appear to be running well below “emissions”?
Interesting times for science vs alarmism.
Similarly, why does the growth of CO2 in the atmosphere appear to be running well below “emissions”?
Because the graph only includes human emissions from fossil fuel burning, neglecting those from land use changes (e.g. deforestation). If land use changes are factored in you get this. The blue line is output from a simple model which adds together emissions from fossil fuels and land use over the past 160 years – more details here.
Unless the land use changes are permanently away from vegetation, as in paving a large area, the net carbon emissions are zero since whatever gets removed will grow back and thus consume the excess CO2. This point is hard for many to understand: the carbon from deep within the ground in the form of fossil fuels does not have a recovery sink for it to cycle back to. Once in the atmosphere, the excess combusted CO2 is competing for sequestering spots against the natural CO2 for a long time.
When forests burn down that is the source and then the same area becomes a carbon sink when the trees and vegetation grow back. Except for that short regrowth interval, no net CO2 will get introduced from deforestation. Unless that deforestation is permanent and no vegetation is allowed to grow, this should not contribute to AGW. It would be permanent if the area was deforested and then paved for an airport, for example. But what proportion of the land’s surface area is that?
If it was deforested and then a banana plantation was put in place, it would be closer to net zero emissions.
The Land Use data I used comes from here.
That looks like a good accounting of net, i.e. net = source – sink
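The accounting being discussed can be sketched as a one-box mass balance (the ~45% airborne fraction and the round flux numbers are illustrative textbook values, not the model from the linked post):

```python
# One-box carbon accounting: atmospheric CO2 growth equals emissions
# (fossil + land use) minus the natural net sink, here expressed via an
# "airborne fraction" of roughly 0.45. All numbers are round estimates.

PPM_PER_GTC = 1 / 2.13           # ~2.13 GtC of airborne carbon per 1 ppm CO2

def co2_growth_ppm(fossil_gtc, landuse_gtc, airborne_fraction=0.45):
    """Annual atmospheric CO2 rise in ppm, given emissions in GtC/yr."""
    retained = airborne_fraction * (fossil_gtc + landuse_gtc)
    return retained * PPM_PER_GTC

# Circa-2010 round numbers: ~9 GtC/yr fossil, ~1 GtC/yr net land use.
print(round(co2_growth_ppm(9.0, 1.0), 1))   # ~2 ppm/yr
```

The same arithmetic answers the “growth running below emissions” question upthread: roughly half of what is emitted is taken up by oceans and land, so the atmospheric rise is expected to run well below total emissions.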
I hope you will post your paper when you can. It sounds very interesting, as well as something that should have been done many years ago.
Coming up soon. I hope to have it ready well before the AGU meeting.
“Sure there’s the presumption of innocence for the accused, but what jury has ever taken that seriously?”
No offense, but I see defendants who avoid conviction all the time based on the presumption of innocence. The reason so many prosecutors have 90%-plus conviction rates isn’t that they are such super lawyers, or because juries don’t follow the judge’s instructions. Frankly, those high conviction rates are primarily a product of prosecutors’ wish to get a better job in the future. They simply do not try (for the most part) cases where they are not firmly confident that the evidence is sufficient to meet their burden of proof. The rest are pled out, or dismissed.
In the civil context, the plaintiff has the burden of proof, and that burden is often the deciding factor in jury trials where there is conflicting evidence or no evidence at all. I know progressives have utter contempt for mere stupid voters, and therefore similarly for stupid juries, made up from the voter rolls, but they are dead wrong – in both instances. Yes, there are dumb jurors. But there are more than a few dumb scientists as well (and yes, dumb lawyers too).
You are on the right track.
Climate Scientists tell us that science is always skeptical.
Then they say they have no doubt. They are no longer skeptical.
A scientists who is no longer skeptical is no longer a scientist.
Anyone on any side of this who is no longer skeptical of their own opinion is no longer a scientist. A scientist will engage people who disagree and will both listen and talk. Correct theory can come from unexpected places or people. A scientist must always be skeptical and consider ideas other than their own.
After reading the three papers, I like Dr. Curry’s best.
But, does anyone besides me smell the “appeal to nature” as an implicit assumption in all three papers? Here is one way of putting this: Why exactly should I care about the effect size of the human contribution to climate or extreme weather probabilities? Suppose we have control over those two things (climate or extreme weather probabilities). Presumably, we would have some valuation (or disutility) for the overall levels of those things, independently of whether their current values are the result of natural variability or human agency. Those might include ways in which we value biodiversity and so forth. But why in particular should we care whether those current values are “unnatural” or not?
It seems to me that, in one way or another, all three of these papers implicitly assume that we should care about the relationship between the human and nonhuman components of climate (or extreme weather probabilities). In Trenberth, it is simply shifting the null as to whether humans are or are not affecting the climate; in Curry, it is what that effect size is.
Suppose the climate was gradually warming for entirely solar reasons, but we thought this had an adverse impact on human welfare broadly construed (say to include effects on biodiversity etc). Suppose we also knew that action X could reduce this adverse impact at a cost less than the adverse impacts. Would we want to do X? If you are swayed by the “appeal to nature” then you might say no. But an act-utilitarian might say yes, even one who values environmental goods.
Earth does not have a stable temperature. Temperature is always changing. When it is warm, it always gets cool. When it is cool, it always gets warm. There is powerful negative feedback to temperature or this would not always work this way. Earth has a stable temperature cycle: warm, cool, warm, cool, etc. When we are warm, sea ice melts and it snows like crazy and cools us. When we are cool, the water freezes and there is no source for moisture, and ice melts and retreats and we warm. We are now warm, the snows have started and we will now get cool.
it is extremely likely that the anthropogenic increase in greenhouse gases has caused some warming
Weasel words are not science. It is extremely likely that throwing a rock in the ocean will cause some rise in sea levels.
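Taking the rock quip literally with round illustrative numbers (a one-litre rock, an ocean surface of roughly 3.6 × 10^14 m²): the rise is real but immeasurably small, which is exactly the “some warming” complaint.

```python
# Back-of-envelope: sea level rise from throwing one rock in the ocean.
# Both numbers are round illustrative values.
rock_volume_m3 = 0.001        # a ~1-litre rock
ocean_area_m2 = 3.6e14        # approximate global ocean surface area

rise_m = rock_volume_m3 / ocean_area_m2
print(f"{rise_m:.1e} m")      # on the order of 10^-18 m
```

“Some rise” is thus a true statement that carries no information about magnitude, which is the point being made about weasel wording.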
“the importance of the anthropogenic influence on climate relative to the background natural climate variability (both forced and unforced)”
You really want to claim relative cluelessness about the importance?
The importance is not lost on people around the world, or most scientists, so why is it lost on you?
I am left relatively clueless as to the relationship between the paper you cite and the quote you appear to be deriding. Can you perhaps explain?
You only quoted part of a sentence.
Herman A (Alex) Pope | November 4, 2011 at 1:37 am | Reply
Correct theory can come from unexpected places or people. A scientist must always be skeptical and consider ideas other than their own.
Few if any scientific theories have stood the test of time. The more we discover, the more likely a theory is to be replaced by one that performs better.
The current climate theories have done a very poor job of explaining why temperatures have leveled off while CO2 levels have shot up. CO2 cannot play the “dominant” role in climate change, otherwise it would have been physically impossible for temperatures to level off while CO2 was increasing.
“CO2 cannot play the “dominant” role in climate change, otherwise it would have been physically impossible for temperatures to level off while CO2 was increasing”
This statement reflects less knowledge of climate science than what is known and experienced by a tree.
Again unclear – are you saying that we had more trees that ate the extra CO2 and therefore temperatures leveled off?
With what organs does a tree ‘know’ anything about climate science?
In my “Kim” voice:
I think that I shall never see
A thermometer as accurate as a tree
Now that is really funny:
Tying kim and Martha in, with perfect timing, and keeping the point alive- lolol
Martha, why do you want to bring Keith B. into this?
“The current climate theories have done a very poor job of explaining why temperatures have leveled off while CO2 levels have shot up. ”
Is this what you mean? See the graph on this link.
It’s pretty obvious there is a definite relationship between temperature and CO2 levels from the ice core records. Yes, a warming world can increase CO2 levels. And in turn CO2 levels have an effect on temperature. Incidentally at 390 ppmv, and quickly rising, CO2 levels are shooting off the scale of the graph.
‘Incidentally at 390 ppmv, and quickly rising, CO2 levels are shooting off the scale of the graph’
Wow. That’s really, really scary. But I don’t suppose that they could think of using a different scale with a wider y-axis? Or is that too complicated to contemplate?
But thanks for posting the chart. Which still shows that first the temperature goes up and then the CO2 concentration follows it – by about 500 years. When the earth cools, the CO2 concentration follows it down. And that effect is pretty consistent over nearly 500,000 years.
So whatever the graph illustrates, it isn’t that CO2 necessarily drives temperature. What do you think it tells us? And what do you think is making the temperature change?
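The lead/lag question itself is a well-posed calculation. Here is a sketch on synthetic data (the sine series and the built-in 5-step lag are arbitrary stand-ins, not ice-core data) of how a cross-correlation scan recovers which series leads:

```python
# If one series follows another with a delay, the cross-correlation
# peaks at that lag. Synthetic series with a deliberate 5-step lag.
import math

n, true_lag = 200, 5
temp = [math.sin(0.1 * t) for t in range(n)]
co2 = [temp[t - true_lag] if t >= true_lag else 0.0 for t in range(n)]

def xcorr(xs, ys, lag):
    """Correlation of ys against xs shifted forward by `lag` steps."""
    pairs = [(xs[t - lag], ys[t]) for t in range(lag, len(xs))]
    mx = sum(p[0] for p in pairs) / len(pairs)
    my = sum(p[1] for p in pairs) / len(pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

best = max(range(0, 20), key=lambda lag: xcorr(temp, co2, lag))
print(best)   # recovers the built-in lag of 5
```

On real proxy records the same idea is complicated by dating uncertainty and feedbacks running both ways, but the machinery for asking “which leads, and by how much?” is this simple at its core.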
The so-called “null hypothesis” is also used, by science rejectionists, to argue against emissions controls. The argument being that because CO2 levels are rising at some 2ppmv per annum, due to human emissions, the onus of ‘proof’ is on those who object to this increase to show it is harmful.
Am I missing something here? How is that a “null hypothesis”? Surely that would be the other way around. It should be down to those who claim CO2 levels of 400ppmv, and higher, are safe, to show they are safe. If they can’t, we should do whatever it takes to stop this happening.
No, not quite. Though this is not to say that I don’t agree that we need to reduce ALL pollutant levels- the distinction you missed was that you assumed reducing CO2 had no detrimental effects.
In this case, drastically reducing CO2 can have direct harm, especially on developing nations (it also redirects funds from far more pressing matters like mass child starvation).
So in fact, in this instance, those campaigning for drastic CO2 reductions need to show that doing so would offset more harm than dealing with the more immediate problems would. Which I’m sure you’ll realise is a chicken-and-egg scenario.
Best bet? Reduce pollution as quickly as we can while simultaneously dealing with our immediate problems.
Well if you look at the CO2 emissions per capita globally then you’ll see that the poorer countries have the lowest emissions. So it makes sense to reduce the emissions in the richer countries first. Not that this will necessarily make them poorer. The Swiss and French have the combination of low CO2 and high living standards.
Many of the richer countries are slow, to say the least, in putting a price on CO2 emissions. It's only just happening in Australia. Is it going to make Australia any the poorer? No, of course not.
It should have happened at least 5 years ago. So what has happened to the money that has supposedly been saved in the meantime? Oh yes, I forgot you’ve already given me the answer to that question. It’s all been spent on “more pressing matters”. Like preventing child starvation in Africa!
It’s good that you told me that. I’d got it all wrong previously.
“Is it going to make Australia any the poorer? No, of course not.”
Huh, we’ll have to wait and see on that.
” So what has happened to the money that has supposedly been saved in the meantime? Oh yes, I forgot you’ve already given me the answer to that question”
No need to get antsy. ‘We’ are spending billions every year on climate change research and policy. At the same time 5 MILLION children die every year of easily preventable factors.
Now, I know we're trying to mitigate climate change to protect the children of the future, but surely the children NOW have the same right?
I just don’t get people who can’t see that.
OK several million children die every year of easily preventable factors. Agreed.
That’s without any carbon taxes.
So why not argue to use the proceeds of the carbon taxes in the way you are arguing for?
Or isn’t that what you are arguing for? Do I sense a “red herring” here ?
In many cases, you’ll find that the only reason for the low per capita figure is that the few obscenely rich and powerful in these poor countries are responsible for the lion’s share of emissions, whilst the millions of obscenely poor people have virtually no hope of improving their lot, due to high fuel costs.
As for France and Switzerland, France relies heavily on nuclear power and Switzerland on hydro power.
What a strange world we live in – skeptics, who say a little warming is no biggie, possibly even advantageous, fight tooth and nail to show there is no additional or accelerated warming, while warmers, fearing the worst, would naturally welcome a little slowdown, fight even harder to prove the opposite and their fears correct. Strange indeed.
While the scientists simply ask for those making claims either way to back them up with statistically significant evidence. For instance, is the evidence that there has been a slowdown (rather than just a continuation of the trend temporarily masked by the noise) statistically significant? The answer to that question is "no", so the skeptics should not be claiming that there has been a slowdown as if it were an established fact; it isn't.
Nobody wants AGW to be a problem, but it doesn’t help anybody to ignore the science or the statistics.
Forgive my ignorance as I am not qualified to scientifically comment one way or the other, but how do you quantify the "temporary noise" as opposed to the "usual noise"? Sorry if it's a dumb question; if so I'll just go back to lurking.
Not a dumb question, but ambiguous writing on my part. The physical processes causing the noise haven't changed (at least I have no reason to assume they have), but the random nature of the noise means that sometimes its effects on the signal are greater than at other times. For instance, the El Nino/La Nina oscillation is a key component of the noise, so if you look at a period that starts with a strong El Nino and ends with a strong La Nina, this gives a cooling bias to the observed trend that can be larger than the warming expected from increasing CO2, and so temporarily masks the warming signal. The longer the timescale over which you compute the trend, the less likely this is to happen, as the noise averages out the more data you look at. This is why short-term trends are essentially meaningless: they tell you about the noise, not the signal.
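That averaging-out point is easy to see numerically. Below is a minimal sketch (not from the comment; the assumed 0.017 °C/yr trend and 0.1 °C noise level are illustrative numbers only):

```python
# Illustrative only: a made-up linear warming signal plus random noise,
# showing that a short-window trend estimate is dominated by the noise
# while a long-window estimate recovers the underlying slope.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2012)                  # 42 years of annual data
signal = 0.017 * (years - years[0])            # assumed trend: 0.017 C/yr
noise = rng.normal(0.0, 0.1, size=years.size)  # assumed noise sd: 0.1 C
temps = signal + noise

def fitted_trend(t, y):
    """Least-squares slope of y against t, in C per year."""
    return np.polyfit(t, y, 1)[0]

short_trend = fitted_trend(years[-8:], temps[-8:])  # last 8 years only
long_trend = fitted_trend(years, temps)             # full record

print(f"true: 0.017  8-yr fit: {short_trend:+.3f}  42-yr fit: {long_trend:+.3f}")
```

With other random seeds the 8-year estimate swings widely, sometimes even coming out negative, while the 42-year estimate stays close to the true 0.017.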
Just a few more dumb questions: is the noise random because it is the consequence of a physical process that is causing it, or is it random because you defined it that way? On what basis can it be said that ENSO is noise? On what basis can you say that any trend is constant (no discontinuities)? And, taking all that into account, on what basis can you say that the noise averages out the more data you have, leaving behind a pristine trend?
Not a dumb question, but a dumb answer. ENSO (and other oceanic/atmospheric oscillations) are not noise – they represent climate change. If you look at a period that starts with strong La Ninas and ends with strong El Ninos (for instance the late 20th century), that will be a period of warming (and vice versa).
By the way, ENSO3.4 is at -0.9 at the moment.
The 90s were far more El Nino active and the 00s more La Nina active. Yet the 00s were warmer than the 90s.
If you subtract ENSO from global temperature the warming remains, which strongly suggests at least that the warming is mostly caused by something else.
lolwot, at first sight that "ENSO correction" looks amazingly pseudo-scientific (aiming to confirm).
“In particular, the Thompson et al (2008) paper (discussed here), used a neat way to extract the ENSO signal from the SST data, by building a simple physical model for how the tropical Pacific anomalies affect the mean.”
A neat way? A simple physical model? Color me skeptical. It's in plain sight that ENSO was on the cool side in the ~1950s/60s/70s and on the warm side in the ~1980s/90s. One cannot smooth the peaks and claim "ENSO corrected"! If we have a prolonged period of negative ENSO (more La Ninas) and the globe cools in the next decades, are you gonna claim it's actually warming once you correct for ENSO?
To be fair the positive ENSO in the early 90s was somewhat countered by a big volcano.
What is noise? I guess a simple answer is that it’s everything else other than what we are trying to measure.
The temperature record would be a smooth line if we could remove all the randomness from the measurements. There may still be some waviness in the lines due to oscillations. The most obvious one is the 11–13 year solar cycle.
Is that noise? Well it probably is in the context of looking for long term temperature trends – so it’s fair enough to remove the effect of that from the results with suitable filtering.
Of course there may be longer term oscillations as well. Here the filtering becomes more problematic. If we err on the side of assuming every change is due to an oscillation, and apply severe filtering, we inevitably end up producing a flat line from any set of results. That may please the denialists, of course, but the idea is to do our best to learn what is happening, not just fool ourselves.
Ok. Can we just have once and for all, laid down in stone, exactly what we can all agree on as ‘statistical significance’. Because at the moment the definition keeps changing. First it was 10 years, then 15, today’s magic number is 17 ….if it doesn’t warm up for another couple of years then it’ll somehow become 20 or 25….
Write it out here please, so that if we bump into it in the future we will be able to recognise it unambiguously. In advance. Black and white. Signed. With your real name on it.
Because otherwise 'statistical significance' looks to be one of those great climatological fuzzy and soggy concept things like 'consistent with'. Which means exactly what you want it to mean at the time.
Statistical significance is only half the question. The level of statistical significance depends on alpha, which is the probability (in the sense of a long-run frequency) of rejecting the null hypothesis when it is true (in other words, finding a "statistically significant" trend when the real trend is actually flat).
The other half of the question is statistical power, governed by beta, which is the probability of not rejecting the null hypothesis when it is false (i.e. not detecting a statistically significant warming trend when it actually is warming). The more data we have, the lower beta will be, because the more evidence we have, the more likely we are to be able to show that the null hypothesis is false when it actually is false.
In common statistical practice, alpha is chosen first (often alpha = 0.05, giving a 95% confidence level), and then the sample size is set to ensure that the test has adequate statistical power (conventionally a power of 1 − beta = 0.8).
This is what I have been asking for on this thread, namely that those claiming that there has been a meaningful pause in warming follow standard statistical practice and consider the power of the test. Note that in this case an argument is being made for the null hypothesis, so it seems reasonable to set beta at 0.05 (a power of 0.95), so that the test is equally balanced between false-positive and false-negative errors and does not favour either hypothesis over the other.
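A power analysis of that kind can be sketched by Monte Carlo simulation; the slope and noise values below are illustrative assumptions, not anyone's actual analysis of the temperature record:

```python
# Monte-Carlo estimate of the power of an ordinary least-squares trend
# test: the fraction of simulated noisy series in which the fitted
# slope comes out statistically significant. All values illustrative.
import numpy as np
from scipy import stats

def trend_power(n_years, slope, noise_sd, alpha=0.05, n_sim=2000, seed=1):
    """Estimated probability of detecting `slope` in `n_years` of data."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    hits = 0
    for _ in range(n_sim):
        y = slope * t + rng.normal(0.0, noise_sd, n_years)
        if stats.linregress(t, y).pvalue < alpha:
            hits += 1
    return hits / n_sim

p_short = trend_power(10, 0.017, 0.1)  # a decade of data: low power
p_long = trend_power(30, 0.017, 0.1)   # three decades: high power
print(f"10-yr power: {p_short:.2f}  30-yr power: {p_long:.2f}")
```

Under these assumed numbers the short record fails to detect the trend most of the time (high beta), which is exactly why "no significant warming over the last decade" cannot be read as evidence of a pause without this kind of calculation.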
Hey Latimer Alder/Stirling English sockpuppet, you first.
Oh yeah, I forgot, I haven’t seen you do any worthy analysis. Never mind.
Forget about the stats. Just look at the graphs.
You mean this graph: http://WoodForTrees.org/plot/best/from:1970/mean:12/plot/best/from:2002/trend? (You can read the URL more easily if you put a carriage return before each "/plot".)
I don’t think Latimer would like that graph any more than statistics, not ONE. Little. Bit.
It plots BEST from 1970 with mean 12 (months, smoothing the seasonal cycles, which have nothing to do with long term climate change), and then plots the obligatory trend line for the period since 2002.
The result is a bit like holding a face recognition camera right up against Mona Lisa’s nose and having it recognize her face as Jimmy Durante’s. If you saw what the camera saw you’d be convinced it was right, but if you backed off four times as far away you’d say “Stupid camera, that’s obviously Mona Lisa’s nose.”
The point is not so much that the trendline is pointing down, but that it obviously should be pointing up. It can’t see the Wood For the Trees!
I’d been meaning a week ago to show the above plot (without the trend line) to Sebastian Thrun (the Santer contact I mentioned last week) to point out the apparent “defibrillating” effect of the massive Pinatubo cooling at 1992 (fibrillation for the two decades before, steady heartbeat for the two after, like the sound of a can right after being struck). I had my chance at yesterday’s faculty lunch, but before I could say a word, Sebastian said, “That’s wrong, it’s supposed to be flat at the top for the last decade.”
So I changed 1970 to 2000 and removed the seasonal smoothing and trend line, giving this plot, which he instantly recognized. He grabbed the laptop and spent the next few minutes playing around with the WoodForTrees settings.
BEST. Demo. Ever.
Well Vaughan, I hope he tried this one.
He might have thought to try it if he’d seen enough Girmagrams. Flying the Incredible Lying Machine solo requires several copilot hours looking over other people’s shoulders. ;)
The record indicates that the truth is quite the opposite of what you claim regarding AGW promoters.
The record shows warmers consistently overstating the risk, inflating the numbers, and manipulating the data to achieve the pre-desired results.
The media obsesses over catastrophic claims which consistently fail to hold up under review.
When the IPCC was confronted with the fact that they were wrong on the Himalayan glaciers, their own chairman dismissed those pointing out the IPCC error as practicing voodoo.
I would suggest that you study this much more thoroughly.
This isn't at all a straightforward question. Trenberth is quite clearly advocating this change to support his position, not because he thinks it's actually applicable. So while we can dismiss his suggestion as a case of BBB, we should be careful not to dismiss the WHOLE question.
I've always operated under this principle regarding the NULL:
The Null hypothesis is whatever is required to make the presented theory either impossible or highly unlikely.
The NULL is NOT the opposite of the theory. To take climate, the null is NOT that the warming is natural, but that humans are not (significantly) influencing it through CO2 emissions. There's a subtle, but important, distinction here.
The 'usual' NULL touted by skeptics is that man isn't the cause. This is wrong, as once CO2 is 'out of the way', you still have aerosols, land use, thermal release etc. etc. So the NULL cannot be that man is not causing global warming.
So, what is the NULL in this case? Hell if I know, it's going to require a lot more thought than I can currently allow (my son's waking up as we speak and trying to do a 'Houdini' out of his cot), but we SHOULD be able to identify it.
What one, or multiple, facets of the climate would we need to establish to eliminate, or greatly reduce, the likelihood of CO2 being the primary temperature driver? This could perhaps be the identification of cyclical patterns in the temperature records that explain recent warming, the GCR experiments, an unknown feedback (or the full understanding of clouds, removing their possibility as a positive feedback as described by the IPCC), or something else.
It can in fact be many things; the term NULL hypothesis can be misleading by suggesting only ONE possibility, which is incorrect.
So for my money, if we want to examine the NULL we should be trying to identify factors that make the primary theory impossible or unlikely. This of course requires a greater understanding of the climate, but the great side effect of REALLY looking into this would be the increased knowledge we'd get as a result. This may end up supporting the main hypothesis, but either way we'll be in a far better position than we are now: shooting in the dark.
The null is whatever you think it is worth trying to disprove. The extent to which it adds to the great sum of knowledge depends on what you choose and the outcome of your experiment.
Exactly, so if someone claims there has been a slowdown in the rate of warming over the last decade, their null hypothesis should be that the warming has continued at the same rate, and they should try to disprove that. For some reason they have not done so, and are instead arguing for the null hypothesis used in the test to see if there has been warming (which is why they need to perform the power analysis, which they haven't done either).
I always go back to definitions. Null Hypothesis:
“A type of hypothesis used in statistics that proposes that no statistical significance exists in a set of given observations. The null hypothesis attempts to show that no variation exists between variables, or that a single variable is no different than zero. It is presumed to be true until statistical evidence nullifies it for an alternative hypothesis”
So what are the two variables here? Events in the climate that are perceived to be changing is one, and CO2 emissions is the other. The Null Hypothesis, by definition, claims there IS NO CONNECTION between those two variables.
Hence Trenberth's attempt to assume as the default position that those two variables ARE connected flies in the face of all other science. The man is a charlatan.
Agree, but it’s not CO2 emissions – it’s the atmospheric CO2 concentration, that allegedly causes warming. Not that I am convinced that it does.
Except we already know it doesn't, from the geological record. Of course, one has to define what "warming" means. Increase in the global average temperature is totally misleading. First, let's assume that "warming" means retaining more heat now than in a previous time frame. How does one know that the current time frame of warming is abnormal? Maybe the planet is just returning to the normal state of being warmer? Second is the problem of defining what "warmer" means. The only way to do that is to look at what the actual temperatures are doing. I have done that for Canada's 100 temperature records at Environment Canada. Summer TMax has been DROPPING since the 1930's. There are FEWER heat wave days today than then. Winter temps have been getting less cold. Yes, I'm being specific here. Winter isn't warming, it's getting less cold. Why do I say that? For the same reason one can define whether a glass is half full or half empty. The point is, average temperature is indeed increasing, but only because the range of yearly temps is narrowing. So I'd like nothing more than for the AGW faithful to explain how our emissions of CO2 are making summers cooler.
Richard, suppose I claim I can taste a difference between Coke and Pepsi. The null is I can’t taste a difference, and according to your definition, the null is presumed to be true.
In a blinded taste test, using 10 cups of Pepsi and 10 cups of Coke, I fall short of correctly identifying the contents of enough cups to reach a 95% confidence level. So given your definition, I think you might say the null is not only presumed to be true, but in this case is true. And we know truth never changes.
But the following day, in a second blinded taste test, I correctly identify the contents of enough cups to satisfy the 95% confidence interval. So now the null is not true.
But wait, didn't we prove the null is true yesterday in the first test? Since we know truth never changes, should we ignore the results of today's taste test?
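The statistics of that 20-cup test can be checked with a one-sided binomial test, treating each cup as an independent 50/50 guess (a simplification, since the taster knows there are 10 of each); the two "correct" counts below are invented for illustration:

```python
# Hypothetical day-1 and day-2 scores for the 20-cup taste test,
# checked against the null of pure guessing (p = 0.5 per cup).
from scipy.stats import binomtest

n_cups = 20  # 10 Pepsi + 10 Coke

day1 = binomtest(13, n_cups, 0.5, alternative="greater")  # 13 correct
day2 = binomtest(17, n_cups, 0.5, alternative="greater")  # 17 correct

print(f"day 1 p-value: {day1.pvalue:.3f}")  # above 0.05: fail to reject
print(f"day 2 p-value: {day2.pvalue:.4f}")  # below 0.05: reject the null
```

Failing to reject on day 1 never established the null as true; it only meant that test lacked the evidence (or the power) to reject it.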
Sorry, I'm not following the logic that the truth never changes. Truth is what we measure. We can, and often have, mis-measured. That's what Pons & Fleischmann did with cold fusion: mis-measured. Happens all the time. Doesn't change the basic definition of what the null hypothesis is. Or are you claiming we should change the definition? Please propose one.
BTW, I can tell the difference; I dislike Pepsi. Have from the beginning. Which shows a flaw in your logic. You doing two tests is NOT a statistical test (a requirement of the NH). Now, take a thousand people doing the same test, and then we can talk about whether this premise of yours fits a null hypothesis test. I can tell you the result right now. There is no difference between the two, for the simple reason that taste is in the mind of the beholder.
Richard, if something we think is true changes to not being true, then it wasn’t true to begin with. Truth doesn’t change.
The Coke/Pepsi hypothesis was meant to show the definition you cited was misleading in presuming the null to be true. If I didn’t demonstrate Coke and Pepsi taste different to me in the first test, I haven’t rejected the null. However, that doesn’t mean the null is true. I did show I could taste the difference in the second test, thus rejecting the null, and showing it not to be true.
If you claim you can taste a difference in these two soft drinks and I claim you can’t, your null can be “you can’t” and my null can be “you can.” Which of these do we presume to be true?
I failed to complete a sentence in my previous post.
Incomplete: “However, that doesn’t mean the null is true.”
Complete: However, that doesn’t mean the null is true, so if rejecting the null doesn’t mean the null is true, why begin with the presumption the null is true?
Aw, heck !
My previous post, should read:
However, that doesn’t mean the null is true, so if failure to reject the null doesn’t mean the null is true, why begin with the presumption the null is true?
Doesn’t anyone other than me catch these errors?
I’m beginning to think the only person reading my stuff is me. If that’s the case, I’ll stop being critical of myself.
The null hypothesis isn't about what's true or false. It's a premise. The premise is that any two measured variables are not connected, no cause and effect, until such a connection is empirically determined; the "truth" comes AFTER. The null hypothesis should not be confused with a default position, which is where "truth" comes into play. The default position prior to science was that some god controlled everything. Science has since found that there are properties of the universe such that events can be shown to have purely natural mechanisms as their causes. The "truth" about the universe is that it works with no one turning the knobs, all on its own, purely through natural mechanisms. Now Trenberth is trying to turn science upside down by assuming that the default position is that "someone" is turning the knobs (god replaced by humans). It is clear that his default position conflicted with the null hypothesis, and so, instead of changing his default position, which would be heresy, he wants to change the null hypothesis. Thus he has turned climate science into a religion.
Richard, I'm afraid the null hypothesis is about what's true or false. Otherwise, there would be no point in hypothesis testing. If a statistical test rejects the null, the null is highly likely to be false (not true), provided the test is not flawed.
I think you’re both getting caught up in semantics. From my experience of actual practical science the null is often NEVER actually defined.
It is simply the alternative concept.
For example, I posit that a protein is only active if factor X is present. If I can show that factor Y confers activity on the protein, I've 'proven' the null without actually having to define it.
Or to put it the other way, the null is the default state reached when the theory presented is ‘disproved’.
Now granted, you can specifically define a null and in the case of climate this would be useful and possible- but don’t get caught too much on the assigning the falsification criteria. You need only show that something is NOT possible, NOT offer the alternative explanation.
Huh, 'actually' snuck in there a few more times than I'd realised... grammar me good.
This is wrong, as once CO2 is 'out of the way', you still have aerosols, land use, thermal release etc. etc.
When will CO2 be “out of the way?” As I pointed out here, CAGR for CO2 emissions from coal, oil, natural gas, flaring, and cement production averaged 3.08% for the period 2000-2010, peaking to 6% in 2003 over 2002 (though there was no hysteria that time) and again in 2010 over 2009 (much unwarranted hysteria about a single year, even by professionals but perhaps overblown by the media as usual who may have been selective about who they quoted!).
Name any aerosol, land use, etc. that has been climbing at anywhere near a sustained 3% p.a. rate. Aerosols, methane, etc. have been declining since the 1980s, as Kermit commented a few days ago.
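For reference, the CAGR arithmetic works like this; the start and end emission figures below are illustrative assumptions, chosen only so the result lands near the ~3.08% quoted above:

```python
# Compound annual growth rate: the constant yearly growth factor that
# takes `start` to `end` over `years` years. The input figures are
# hypothetical, not sourced data.
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

growth = cagr(6.77, 9.17, 10)  # e.g. 6.77 -> 9.17 GtC over 2000-2010
print(f"{growth * 100:.2f}% per annum")
```

The same function shows why a sustained 3% rate matters: it compounds to roughly a 35% increase per decade.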
Anthropogenic thermal release is at 15 terawatts, natural (geothermal) is at 30 terawatts, CO2 forcing is at 850 terawatts. (Divide these numbers by 510 to convert to W/m2, e.g. 850/510 = 1.67 W/m2.) Solar heating from absorbed sunlight is 121,000 terawatts, over 99% of which Earth radiates to space. It is the radiative forcing from CO2, H2O, O3, CH4, etc. that keeps us at a nominal 288 K, 33 °C above the 255 K it would otherwise be.
We are not being cooked by our thermal emissions, nor by nature’s geothermal emissions, it is CO2 (nature’s 280 ppmv plus our 110 ppmv) that is cooking us. Ironically the 110 ppmv part doing its share was burnt years ago and most of it has long been at 0 °C and below. This is serious cooking with a seriously cold heater, which is such weird physics that hardly anyone can believe it. Yet it’s happening!
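The divide-by-510 step in the parenthetical above is just Earth's surface area expressed in millions of km2 (about 5.1 × 10^14 m2), so a global total in terawatts converts to an average flux as follows:

```python
# Convert a globally-integrated power (terawatts) to an average flux
# (W per square metre) over Earth's whole surface.
EARTH_AREA_M2 = 5.1e14  # ~510 million km^2

def tw_to_wm2(terawatts):
    return terawatts * 1e12 / EARTH_AREA_M2

print(f"{tw_to_wm2(15):.4f}")   # anthropogenic thermal release
print(f"{tw_to_wm2(30):.4f}")   # geothermal
print(f"{tw_to_wm2(850):.2f}")  # CO2 forcing: ~1.67 W/m^2
```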
What one, or multiple, facets of the climate would we need to establish to eliminate, or greatly reduce, the likelihood of CO2 being the primary temperature driver? This could perhaps be the identification of cyclical patterns in the temperature records that explain recent warming, the GCR experiments, an unknown feedback (or the full understanding of clouds, removing their possibility as a positive feedback as described by the IPCC), or something else.
“Cyclical” maybe, or maybe more like the sound of pebbles rolling around in a drum (but with time slowed down a million-fold). As long as climatology can’t account numerically for the large natural fluctuations that obviously dominated climate change prior to 1950, we cannot say what proportion of the current warming is of human origin. Claiming that all the natural fluctuations threw in the towel over the last 40 years is just as bad as claiming that CO2-induced warming is not happening. Both are extreme positions, neither is scientific.
Vaughan – according to this, since ~2006, cows are farting more often!
In short, Trenberth only wants to play the game with his set of loaded dice to ensure he cannot lose. Well, that idea would certainly not float in Vegas, and it doesn't float anywhere else either.
You won’t get the pot just by declaring you have the winning hand. You must have the cards to prove it.
What are Trenberth’s dice and how are they loaded? Show your cards.
This all looks more like putting the ball in the other's court. I had other thoughts about a null hypothesis in science; for instance, if you want to understand the effect of "greenhouse gasses" in the atmosphere, the null hypothesis would be the effect of an atmosphere on a planet without greenhouse gasses.
1) Observed temperatures less than IPCC projections even for zero greenhouse gas emission
2) No evidence of the characteristic greenhouse warming finger print in the mid troposphere
3) Evidence of oscillation of global mean temperature due to ocean cycles
4) Evidence of correlation of global mean temperature with cosmic rays
5) No change in global mean temperature pattern (0.06 deg C per decade warming and oscillation of 0.5 deg C every 30 years) since record begun 160 years ago
With all these multiple lines of evidence, it is not if but when the funeral of AGW will be conducted.
Well, that proves it. Send this list to NSF and the Academy, and we can forget about AGW.
The reason Trenberth is arguing for a change in the null hypothesis is simple. No one has been able to demonstrate statistically that humans are changing the climate through CO2 emissions.
The leveling of temperatures, especially ocean temperatures as measured by Argo, has thrown the statistical connection between temperature and CO2 out the window.
Climate science is now trying to say they underestimated the effects of aerosols. In reality they overestimated the significance of CO2 because they didn't allow for natural variability. They assumed that climate was a static system: that without any change in the forcings, climate would not change.
This is a nonsense position, as the climate system is dynamic, it is always changing as it redistributes energy around the globe. Every school child learns this.
Put a beaker of water over a heat source, and the water inside the beaker will begin to circulate in a pattern that is ALWAYS changing, even when the system is in equilibrium. It is beyond the capability of science to predict how this circulation will change.
Even in something as simple as a beaker of water, the most powerful supercomputers on earth can only show how the circulation MIGHT change, not how it will change.
When you throw a fair die, how much computer power does it take to predict the next number that will be thrown? When computers are powerful enough to accurately predict the next number thrown on a pair of dice, they will be in the range of power required to predict climate change.
pokerguy wrote: “As a poker player, I see people imposing patterns on random events all the time. Gamblers are always in the midst of some streak or other, streaks which feel both self-sustaining and meaningful. If I’ve learned nothing else playing poker, it’s that the long term…that is the point where skill starts to overwhelm luck…is much, much longer than most people think.
Granted the analogy is facile, but it continues to mystify me how meaningful conclusions can be drawn about climate over a period of decades. I can't understand how people can talk about 1998, say, as the warmest year on record, as if that really means something.”
M. carey replies
“Climate is random and warming is on a winning streak? Well, that’s a novel notion.”
I admitted it was a poor analogy. I could try substituting natural variability for luck (noise), with skill perhaps correlating with some statistically significant climate signal, but I'd be in way over my head.
However, my main point, poorly expressed as it was, has to do with the human tendency to superimpose meaningful patterns onto short-term data. There was an excellent recent post on WUWT beautifully showing, by means of graphs, how the recent supposedly alarming uptick in temps begins to look more and more benign as one steps back further and further in time.
Pokerguy, the link you provided is an excellent post which has, so far, drawn very little comment from both sides of the AGW fence. Perhaps the reason for this is that both sides of the debate have been extrapolating from short term data. Tony Brown has highlighted the point that one should look at short term inclines AND declines in its proper historical context.
Pokerguy, step back far enough in time and the place of my birth was under a sea, way under.
No, I am not a fish (no gambling pun intended).
When you are “winning the argument” (i.e. the data out there are all supporting your premise) there is no need to try to redefine the “null hypothesis” in your favor.
The premise of alarming AGW caused primarily by human CO2 emissions, as espoused by Kevin Trenberth, is currently not in that enviable position. In fact, the “holes” are beginning to become increasingly apparent.
Medium-term warming forecasts made in 1988 have been shown to be exaggerated by more than 2:1, despite the fact that CO2 has increased at a slightly higher rate than predicted.
Longer-term temperature and CO2 observations since 1850 show that the rate of warming has been less than half that projected by the climate models, spawning postulations of “missing energy hidden in the pipeline” to rationalize the dilemma.
The upper ocean (where the missing energy was supposed to be lurking) has been losing heat since comprehensive ARGO measurements replaced the old inaccurate expendable XBT devices in 2003.
The atmosphere at both the surface and the troposphere has stopped warming for 11 years now (or has it been 14 years already?), despite the fact that CO2 levels have risen to record highs.
The tropospheric hot spot – the “fingerprint” of greenhouse warming – has proven to be as elusive as a desert mirage.
Extreme weather events have not increased over the recent years as predicted.
So now is the time to change the focus from these worrisome observations and argue for changing the “null hypothesis”.
Call me skeptical if you wish, but it looks like a ruse to me, folks.
Manacker says:”When you are “winning the argument” (i.e. the data out there are all supporting your premise) there is no need to try to redefine the “null hypothesis” in your favor.”
You have hit the nail right on the head.
Dr. Curry writes:
“I continue to point out the problems of posing the attribution hypotheses and/or conclusion in the form of H1. Understanding this point is an issue in basic logic, you don’t need to understand much about climate science to see the problems with formulating H1 in this way. Not to mention the difficulty of formulating a sensible null hypothesis that is not trivially true.”
Your position is unassailable and a breath of fresh air. The IPCC has been reckless in its use of statistical claims as the IAC Review makes clear. Climate science would be greatly improved if climate scientists could understand the basic logic that you set forth. You continue to make great contributions to climate science.
“The atmosphere at both the surface and the troposphere has stopped warming for 11 years now (or has it been 14 years already?), despite the fact that CO2 levels have risen to record highs.”
I notice when I’m filling the bathtub with hot water and I turn on the cold water, the temperature of the bath stops rising even though hot water is still running into the tub. I can’t figure out why this happens, but I suspect it has something to do with the thermometers in my house being poorly located.
Sorry to hear that you have a bathtub temperature problem.
What your bathtub problem has to do with the most recent lack of warming of our climate system despite ever-increasing CO2 levels is beyond me, but it was a neat side-shuffle.
I thought perhaps something simple you can do at home might enlighten you. The point is that opening the cold water tap moderated the warming effect of the water flowing from the hot tap, so the temperature of the hot-tap water no longer correlated with the temperature of the water accumulating in the tub.
Similarly, a La Nina cooling effect (the cold tap) can moderate, offset, or even more than offset the warming effect of an increase in atmospheric CO2 (the hot tap) on global (the tub) temperature.
On second thought, perhaps you shouldn’t try the bath tub experiment at home. I’m not sure you could do it without hurting yourself or flooding the bath room.
The main point is that an increase in CO2 concentration of more than 74 ppm has not changed the global warming rate at all.
Where is the evidence that an increase in CO2 concentration increases the global warming rate?
The evidence is from physics.
Girma, why would you expect every increase in CO2 to result in an increase in global temperature when you know about cooling influences like La Nina?
You only start to worry when the warming or cooling rate exceeds the previous maxima. There is no evidence for that as shown in the above link.
So at what point on the red line would you start worrying?
We are moving UNDER the zero emission yellow line:
Girma, you need to be clear about what you mean by "no evidence". Lolwot or M. Carey would be quite right to point to the well-known physics, meaning that increasing GHGs will have the effect of slowing the rate at which heat escapes. What you mean is whether that is detectable in global temperature measurements. I am sure you know and agree with this, but I do think you need to be careful to always carry these distinctions in your comments – IMHO.
Lubos Motl at The Reference Frame makes the following comment:
‘…This proposal by Trenberth only has two problems: the “man-made climate change” claim cannot become a new “null hypothesis” because it is not “null” and because it is not a “hypothesis”, either. It’s not null because for a claim to be “null”, it must say that various a priori unknown effects have to be zero. The AGW doctrine says that they’re not zero so it can’t be null…’
Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected
Global cooling of 0.1 deg C per decade!
Now they are telling us the “lag” is expected.
What a joke!
Girma, when I use the UAH satellite-based temperature anomalies, which skeptics believe are more reliable than HadCrut anomalies, your argument disappears.
Even better, start in 2000.
What is the trend value that YOUR link gives?
#Least squares trend line; slope = 0.0145243 per year
A warming of about 0.15 deg C per decade.
What is IPCC’s projection?
And the IPCC’s Fourth Assessment report told us we have “accelerated warming”
Please, please, please M. Carey!
What is the global warming rate that YOUR link gives?
Least squares trend line; slope = 0.00443133 per year
A warming trend of only 0.04 deg C per decade, less than IPCC’s prediction for zero emission of 0.1 deg C per decade.
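The slope-to-decade conversions being traded back and forth above can be reproduced in a few lines. This is a minimal sketch using synthetic anomalies (an assumed 0.0145 deg C/yr trend plus an annual cycle) as a stand-in for the actual HadCRUT or UAH series the commenters are pointing at:

```python
import numpy as np

# Synthetic monthly anomalies over one decade (illustrative, not real data):
# a 0.0145 deg C/yr trend plus a small annual cycle.
years = np.arange(2001, 2011, 1 / 12)   # 120 monthly samples
t = years - years[0]
anoms = 0.0145 * t + 0.05 * np.sin(2 * np.pi * t)

# Ordinary least-squares trend line, as in the quoted
# "#Least squares trend line; slope = ... per year" output.
slope, intercept = np.polyfit(years, anoms, 1)
print(f"slope = {slope:.7f} per year")
print(f"warming = {10 * slope:.2f} deg C per decade")
```

Multiplying a per-year slope by 10 is all that converts the quoted slopes into the per-decade figures argued over here; note that a slope of 0.0145243 per year rounds to 0.15, not 0.14, deg C per decade.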
Why do you want to callously increase the cost of living for billions of the world’s poor?
I want people to use less electricity and gasoline, so they can save money like I have by using less electricity and gasoline. Now I have more money to spend on other things, like beer. Don’t you want people to have more fun?
The way to “spend less money on electricity and gasoline” is sure as hell not to slap a direct or indirect carbon tax on motor fuels or fossil fuel generated electrical power.
Don’t be silly.
Max, a carbon tax is a terrific idea. It would encourage efficient use of limited fossil fuels, and tax could be used for …
A. Paying down the national debt.
B. Reducing the income tax.
If you are against paying down the national debt and reducing the income tax, I can see why you might not like the carbon tax. Also, you might not like it if you own a coal mine and want to deplete the coal ASAP.
M. Carey – then propose such a tax. Explain what you want to use it for (reduction of the debt, reduction of personal income taxes, etc.). Do not tell us it will save us from ourselves when in fact the ultimate goal is to simply increase our taxes to be spent on whatever whim the politicians find today. There is not enough money in the world to satisfy their whims, and this is just another back door attempt to get more money so they can spend it as they wish.
I think people call that “truth”, which is an anathema to politicians.
It’s been warming since 2007
Yeah, and it’s been cooling since 2010.
It’s also been cooling since 2001.
Peter Davies wrote: “Pokerguy, the link you provided is an excellent post which has, so far, drawn very little comment from both sides of the AGW fence. Perhaps the reason for this is that both sides of the debate have been extrapolating from short term data. Tony Brown has highlighted the point that one should look at short term inclines AND declines in its proper historical context.”
Thanks Peter. I agree. It’s a classic post and deserves to be widely read. Doesn’t it all come down to sample size? I’m still mystified how BEST can make definitive assertions about GLOBAL warming when he’s only talking about 30 percent of the earth’s surface. I get how polling data can use a tiny sample, but somehow this doesn’t seem like the same thing. The land and the ocean are two distinct entities (obviously)…
Congratulations on an excellent paper and successful promotion of the rational null hypothesis regarding AGW.
I am re-linking to information on the limnology paper I mentioned before that you said you would like to review at some point (knowing you are incredibly busy).
Here is the link:
The implications regarding carbon are significant.
This null hypothesis argument is a strawman that diverts us from doing research to determine whether our efforts to mitigate a rise in global temperature will benefit us more than they will cost. The big question is: if a rise in temperature is bad, how much is natural and how much is attributable to our burning of fossil fuels? So how much does our burning add to the observed rise in atmospheric CO2, and how much is natural? Over the last few months I have attempted to answer this last question by analyzing the Scripps data and the reported global emissions. I completed this effort and have placed the results on a blog: http://retiredresearcher.wordpress.com/. I would like your readers to "peer review" it, replicate or improve on it, and, if they wish, write it up for journal publication.
“So how much does our burning add to the observed rise in atmospheric CO2 and how much is natural? The last few months, I have attempted to answer this last question analyzing the Scripps data and the reported global emissions.”
So you are comparing C-12 and C-13.
“The ratio of 13C to 12C is slightly higher in plants employing C4 carbon fixation than in plants employing C3 carbon fixation. Because the different isotope ratios for the two kinds of plants propagate through the food chain, it is possible to determine if the principal diet of a human or other animal consists primarily of C3 plants or C4 plants by measuring the isotopic signature of their collagen and other tissues. Deliberate increase of the proportion of 13C in the diet is the concept of i-food, a proposed way to increase longevity.”
“C4 plants have a competitive advantage over plants possessing the more common C3 carbon fixation pathway under conditions of drought, high temperatures, and nitrogen or CO2 limitation. When grown in the same environment, at 30°C, C3 grasses lose approximately 833 molecules of water per CO2 molecule that is fixed, whereas C4 grasses lose only 277 water molecules per CO2 molecule fixed. This increased water use efficiency of C4 grasses means that soil moisture is conserved, allowing them to grow for longer in arid environments.
C4 carbon fixation has evolved on up to 40 independent occasions in different families of plants, making it a prime example of convergent evolution. C4 plants arose around 25 to 32 million years ago during the Oligocene (precisely when is difficult to determine) and did not become ecologically significant until around 6 to 7 million years ago, in the Miocene Period.”
Would it be better if there were a higher ratio of C-13?
I don’t know what you are trying to do but it doesn’t look like it follows any physical model. Are you simply fitting sinusoidal functions and using that as a heuristic?
The modeling elements that you need to include are the classic physics or engineering formulations:
1. Green’s function (physics) or impulse response function (engineering)
2. Forcing function or stimulus
3. Data source. Go to the CO2 Information Analysis Center at ORNL to get emissions data for the last 160 years
4. Assume some level for the historical CO2 baseline
5. Conversion factors between carbon and CO2
The only difficult part is estimating the impulse response function. For CO2 the response has very long tails because it is governed by diffusion.
The natural carbon cycle is net carbon neutral. Ignore all the isotope arguments in trying to figure out anthropogenic content of CO2. The adjustment time is important and not the residence time. By the time the CO2 goes through the carbon cycle it is thoroughly mixed, although not adjusted to being semi-permanently sequestered.
There is your peer review. My suggestion is to follow this recipe, which is first-order physics and includes good assumptions regarding the carbon cycle. You will likely not get a paper accepted if it relies on heuristics. This topic is way too fundamental to use arbitrary curve fitting.
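To make the recipe above concrete, here is a minimal sketch of it in Python. Every number in it (the emissions ramp, the kernel time constants and weights, the 280 ppm baseline) is an illustrative assumption, not a fit to the CDIAC/ORNL record:

```python
import numpy as np

years = np.arange(1850, 2011)      # ~160 years of annual steps
t = years - years[0]

# Forcing function: a hypothetical exponential emissions ramp (GtC/yr),
# a stand-in for the real ORNL emissions series; 1 ppm of atmospheric
# CO2 corresponds to roughly 2.13 GtC.
emissions_gtc = 0.1 * np.exp(0.03 * t)
forcing_ppm = emissions_gtc / 2.13

# Impulse response: a fat-tailed decay, here a sum of exponentials as a
# crude stand-in for the long diffusive adjustment tail discussed above.
tau = np.array([2.0, 20.0, 200.0])   # years, illustrative
w = np.array([0.2, 0.3, 0.5])
response = (w[:, None] * np.exp(-t[None, :] / tau[:, None])).sum(axis=0)

# Convolve forcing with the impulse response, then add an assumed
# 280 ppm pre-industrial baseline.
airborne = np.convolve(forcing_ppm, response)[: t.size]
co2_ppm = 280.0 + airborne

print(f"modeled CO2 in {years[-1]}: {co2_ppm[-1]:.0f} ppm")
```

The multi-exponential kernel mimics the distinction the comment draws between the short residence time and the long adjustment time; a real analysis would fit the kernel and drive it with the actual emissions data rather than these made-up values.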
Blah, blah, blah.
What have we learned from this? The concepts of null hypothesis and burden of proof can be abused as much as statistics. Duh.
‘Human emissions have not affected climate.’ is about as valid a null hypothesis as ‘The speed of light is zero.’
Dangerous? Well, somewhere around 97% of climate researchers think so. If 97/100 of oncologists said you had a dangerous tumor, would you seek treatment or not? The treatment is going to be expensive; does that mean you don’t have cancer?
Example dangerous scenario.
Hadley circulation has a large influence on where rain falls.
Anyone doubt that?
Hadley cells are growing as the world warms.
Anyone doubt this?
Agricultural yields are negatively impacted by drought.
There are large areas of agricultural production just poleward of arid regions with low agricultural production.
So, go all around the globe, northern and southern hemisphere, and erase 2-4 degrees latitude of agriculturally productive regions per degree C of warming, where they are poleward of any arid regions centered around 30 degrees north or south.
What do you think, is that a dangerous thing or not?
Chris G. Time goes to O.
“If 97/100 of oncologists said you had a dangerous tumor, would you seek treatment or not? The treatment is going to be expensive; does that mean you don’t have cancer?”
If someone said to me “97/100 of oncologists said you had a dangerous tumour” I would not seek treatment.
Kermit: A better example:
Let’s stay with IPCC-talk "very likely":
If your plane will "very likely" take you to the destination, do you have the confidence to board it? The plane will arrive with 95% certainty according to modelling. Will you get on?
……..ergo: Will you believe IPCC AGW and their 95%? Or say: Wait a minute?
It sounds like Virgin Galactic
Kermit, We all know that Mr. B. is only in it for the ‘green’…
95%…wait a minute.
Since the IPCC only offers 90% likelihood, not 95%, the difference is dramatically in favor of not actually doing as they suggest.
‘Blah, blah, blah’ is a pretty good summation of the AGW community’s reasoning and critical-thinking ability.
Thanks for that and the strawman oncologists.
I think you do not understand metaphor or what is meant by the term strawman.
Metaphor: Most people would rely on oncologists for the diagnosis of cancer and most people should rely on climate researchers for the diagnosis of climate change. (BTW, let me introduce metaphor’s friend, analogy.)
Strawman: A misrepresentation of someone’s argument, where that misrepresentation is easy to refute.
In this case, I have used a metaphor, not a strawman.
I get the distinct impression that Trenberth is reacting to a common abuse of the null hypothesis paradigm, saying that it is up to the climatologists to prove that human emissions have an effect on climate. That is just like saying, prove that changing the chemical and thermodynamic properties of the atmosphere has an effect. Umm, how could it not?
Just because the data you may choose to rely on allows you to make a prediction as to the likelihood of a safe arrival that is statistically significant with a confidence interval of 95%, that does not mean you have a 95% chance of arriving safely. You may have a zero percent chance of arriving at all if there is no fuel in the plane. The distribution of your data does not reality make. The biggest problem the AGW True Believers have isn’t that they know nothing about statistics, or even that they use corrupted data; the biggest problem is that they CHOOSE to use corrupted data.
You are clinging to a belief that you know more about climate science than the members of every major scientific body in the world. Let me say I am skeptical of your belief.
I see that no one has challenged any of the points leading to the conclusion that our current path is dangerous.
Chris, science is the belief in the ignorance of experts.
To correct your analogy: You don’t have a choice; you are going to get on one of two planes, one has a 95% chance of taking you someplace you want to go, and the other has a 5% chance of taking you someplace you want to go. Which plane do you get on?
I do not take the plane which arrives with 95% security, and I do not believe statements with 95% correctness as the truth – what about you?
The 5% plane is out of the question – I would go overland or by ship, which provides me the 100%…….
At some critical point of WWI, the French, in order to boost morale in the field, shot every 20th soldier of their own in line……would you enter this line and be confident? Yes or No?
Well, Joachim, by choosing to do nothing, you have chosen to get on the 5% plane. We only have one planet; so, we only get one shot at the course of action to ride into the future on. There is no such thing as 100% certainty in the choice we make.
Chris, for 2000 years we have known there are many ways to get to Rome….
If you tell me your way will lead with 95% security to Rome, and you gamble (with 95% security the number YY will come up in the roulette, because it hasn’t come up for 300 rounds), then not with me; science is only established facts (the way 100% to Rome), nothing less…….
If you believe the 95%-argument, you are the poor bugger, because
1. AGW proponents will, in 20 years’ time, when the CO2 fingerprint has expired, excuse themselves: "We only saw the fingerprint with 95%, and did our best…. sorry 1000 times that you believed it and paid for it….
but, but: we will not give you your money back….!"
2. Better shop around for the 100% way; it’s already there on the market. I gave the ISBN number various times on the blog….
Why is a raven like a writing desk?
How about this: every year you have a choice, and this choice costs nothing.
One choice is to have the world warm by .02 C or cool by .02 C.
What are you going to pick for the next 2 decades? The same every year?
So either .4 C of warming or .4 C of cooling?
And after a century that would be 2 C of warming or 2 C of cooling.
I think it’s of more value not to have 2 C of cooling.
I would pay more to avoid 2 C of cooling than to avoid 2 C of warming. That is, I think the global loss would be higher with a cooling of 2 C.
Now let’s bring this to the national level: how much does one trillion dollars buy? Not one trillion per year, but just 1 trillion dollars, whether spent in one year or 100 years.
And I suppose what we are buying is cooling [even though I would prefer warming and/or insurance against too much cooling].
So how much cooling can be bought with 1 trillion, and how much cooling needs to be bought [do we need to spend less or more than 1 trillion]?
The next question is how the money is spent. How much of it should go to science related to climate studies, and what types of science or developing technologies need more funding and longer funding?
I think you have shown the fundamental null hypothesis from which all other null hypotheses can be viewed as tailored to a specific detailed natural process.
Therefore, you showed that Trenberth violates the most fundamental of null hypotheses; it makes him look not just incidentally anti-science but indeed medieval.
Care to detail the premises of your fundamental null hypothesis? That is the fun part.
As I noted in a post above, one must be careful to separate the null hypothesis from a default position. I kinda mixed the two in the post you quoted. The null hypothesis is nothing more than that one must assume any two measurements of events do not have a mechanism between them. That is, event A is not the cause of event B. That is the null hypothesis until empirically (not via computer models) shown otherwise. Thus the null hypothesis is a premise.
That the universe works all on its own is a default position. Prior to science, the default position was that some higher being turned the knobs of the universe. Science has shown that is not the case. Events happen because of the properties of the universe.
Trenberth is clearly returning to the era of “someone” turning the knobs as the default position. We knew that because that is what the IPCC dictated. However, their default position violates the null hypothesis that no two events are related until empirically shown otherwise. So instead of changing their default position, a clear heresy, Trenberth is attempting to turn science upside down by trying to change the null hypothesis.
He’s gotta be getting a large amount of flak from other scientists in other disciplines over this. He’s gotta back down. That may very well signal the end of their default position, because Trenberth is admitting their default position violated the very basic premise of science, the null hypothesis.
Trenberth has opened a large can of worms with this. I don’t think he understands how bad this looks for climate science. He’s going to find out soon enough.
There is some confusion here about what constitutes the Null hypothesis. This has no presumption one way or the other in it (e.g. “causality” or “no causality” is equally acceptable – but just take care with the concept of causality in all this).
The Null hypothesis is just the hypothesis to be “NULLified” (its derivation).
Trenberth is perfectly at liberty to use an H0 that “AGHG cause more than 50% of the 20th century increase in global temperatures” (say). In which case he should be spending his time designing experiments to falsify (nullify) this proposition, i.e. doing what lots of good skeptics spend their time doing.
Just as a postscript, I did look at Trenberth’s published output for 2010, and as far as I could see there was no formal hypothesis testing in any of it.
Null for thee but not for me.
Climate Science knows.
The definition of the null hypothesis is clear. Measurements of event A and measurements of event B have no relationship unless empirically shown otherwise. Thus, in climate science, the heat wave in Paris (Trenberth is wrong about it covering all of Europe; during that very period Berlin’s temperatures were perfectly normal) is measurement B, whereas CO2 emissions are measurement A. Trenberth is claiming A caused B by default, but since that defies the premise of the null hypothesis, he says the NH must be changed to match the default position. This violates the very core of how science works, and he should know that.
Richard, perhaps contemplate measurements of event not A and measurements of event B to understand why what you are saying about what is appropriate in science is incorrect.
“Finally, to that we have to add the general failure of what few predictions have come from the teraflops of model churning in support of the AGW hypothesis. We haven’t seen any acceleration in sea level rise. We haven’t seen any climate refugees. The climate model Pinatubo prediction was way off the mark. The number and power of hurricanes hasn’t increased as predicted. And you remember the coral atolls and Bangladesh that you and the IPCC [and Dr. Trenberth] warned us about… the ones that were going to get washed away by the oncoming Thermageddon? Bangladesh and the atoll islands are both getting bigger, not smaller. We were promised a warming of two, maybe even three tenths of a degree per decade this century if we didn’t mend our evil carbon-loving ways, and so far we haven’t mended one thing, and we have seen … well … zero tenths of a degree for the first decade…” ~Willis Eschenbach
The temperature anomaly in 2000 was +0.33C and in 2010 +0.63C, a warming of 0.3C in the last 10 years.
Let us look at the data:
#Least squares trend line; slope = 0.0136622 per year
A global warming of about 0.14 deg C in the last decade!
Let us look at the data:
#Least squares trend line; slope = 0.00281115 per year
A global warming of 0.03 deg C in the last decade!
This is less than IPCC’s projection of 0.1 deg C per decade warming for zero greenhouse emission!
Latest anomaly +0.11C
I hope that Joshua is OK. I don’t want him to come back, just that he be OK.
Please advise. Did something happen to Joshua?
One of his posts was deleted in a previous thread. I didn’t see it, but it must have been blatantly offensive, as I and others, including Josh, have gotten away with some pretty bad stuff. He appears to have taken it badly enough to have abandoned his filibustering.
The first sentence in the Conclusion of Dr Curry’s paper is the main point: This discussion on the null hypothesis has highlighted the fuzziness surrounding the actual hypotheses related to dangerous climate change and their falsifiability.
Trenberth’s paper proposed changing the null hypothesis from what it sort of is (CO2 [or all of human effects] is negligible), to something sort of different (some aspect of human activity is having some effect.) But he does not propose an actual null hypothesis.
Consider testing the climate sensitivity (CS, the equilibrium change in mean surface temperature in response to a doubling of CO2). We might test the no-feedback hypothesis (H0: CS = 1.2 K) against a particular feedback hypothesis (H1: CS = 3.4 K). That would seem to be in the spirit of what Trenberth wrote, but he did not in fact write it. Or we might have H0: CS = 0 K (the Earth’s temperature is near the maximum permitted under a regulatory hypothesis of the sort that Willis Eschenbach has proposed), but Trenberth would want to turn that around to H0: CS = 1.2 K (vs H1: CS = 0 K). (In this formulation, it would be harder to reject 1.2 K than 3.4 K on short time series of evidence.) Trenberth did not say that either. His proposal remains completely vague.
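To see how the choice of null changes the arithmetic, here is a toy two-sided test with entirely made-up numbers: an assumed trend estimate of 0.10 K/decade with standard error 0.06, and assumed mappings of CS = 1.2 K and CS = 3.4 K to expected trends of 0.13 and 0.20 K/decade. None of these values come from either paper; they only illustrate the asymmetry of swapping which sensitivity sits in H0:

```python
import math

# Made-up numbers for illustration only: an observed trend estimate and
# its standard error, in K per decade.
est, se = 0.10, 0.06

def two_sided_p(h0_trend):
    """Two-sided normal p-value for H0: true trend equals h0_trend."""
    z = (est - h0_trend) / se
    return math.erfc(abs(z) / math.sqrt(2))

p_low = two_sided_p(0.13)    # null built from CS = 1.2 K (assumed mapping)
p_high = two_sided_p(0.20)   # null built from CS = 3.4 K (assumed mapping)

# With a short, noisy series neither null is rejected at the 5% level,
# which is why the choice of which hypothesis plays the null carries weight.
print(f"p(CS=1.2K) = {p_low:.2f}, p(CS=3.4K) = {p_high:.2f}")
```

Under these assumed numbers the null nearer the data (1.2 K) gets a much larger p-value than the 3.4 K null, yet neither is formally rejected, matching the point that short records struggle to discriminate between the candidate nulls.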
Lots of the CAGW hypothesis is vague: somewhere or other, some time or other, some way or other, CO2 will cause a disaster of some sort related to too much heat retention in the atmosphere. It’s no wonder Trenberth would want that to be the null hypothesis: there is no way to disconfirm it (except in the very long term) if the system has any natural variability independent of the factors that we can measure.
In reply to: David Young | November 4, 2011 at 7:23 pm (reply button not working?)
David — It is curious to me that you seem to be quite willing to take Muller’s word for so much, rather than (apparently) examining the data (or primary literature) for yourself.
For a pretty good discussion of the use of Munich Re data (and attempts to account for the obvious biases), see “A Trend Analysis of Normalized Insured Damage from Natural Disasters”. You will see that difficulties abound, but interesting results are nevertheless forthcoming. (Pielke Jr. quotes rather selectively from this paper; I don’t know what Bishop Hill said. I just read the paper for myself.)
With regard to wildfires, see Western U.S. Forest Wildfire Activity:
“Thus, although land-use history is an important factor for wildfire risks in specific forest types…the broad-scale increase in wildfire frequency across the western United States has been driven primarily by sensitivity of fire regimes to recent changes in climate over a relatively large area.”
Of course these studies are not definitive, but I believe that they, and others like them, present a far more accurate assessment than Muller’s facile pronouncements.
I read key portions of the Western US wildfire paper. I saw no statistics and no probabilities, just statements based on expert judgment. Maybe I missed it, but it looks to me like earlier snowmelt is associated with more wildfires, a reasonable connection. You know, this last year snowmelt never stopped in the Cascades because of very heavy winter snowfall. I suspect this year will prove to be an almost record low year for fires in the West, possibly excepting New Mexico and Arizona. But the Northern Rockies, where the article asserts the largest increase has occurred, has seen very few fires. These things do tend to go in cycles.
OK, so now for the unbelievable part, the connection to AGW. You know of course that this same climate pattern of much drier conditions has happened before and is usually cited as the cause of the decline of the Anasazi Indian civilization. Climate is always changing and these changes have consequences.
Methods: see the online supporting material.
These things tend to go in cycles:
Sure, but we are trying to detect trends, right? “Cycles” do not preclude trends.
Climate is always changing:
Yes, indeed. Seems to be doing so now.
Pat, You need to do better than this. I am looking at the Munich RE funded results.
“In this article, we have analyzed whether one can detect a trend in data on insured damage from natural disasters. Whilst we have not found any evidence that normalized insured damage has trended upward at the global level, for developed countries and independently of the type of disaster looked at, our detection of an upward trend in insured losses from non-geophysical disasters and certain specific disaster sub-types in the US, the biggest insurance market in the world, and in West Germany represents a finding to be taken seriously in the risk analysis undertaken by insurance and re-insurance companies.”
Translation: global warming (which I assume is geophysical) has no measurable impact, but other types of disasters (non-geophysical?) are causing more damage.
No, non-geophysical means weather-related. Geophysical is earthquakes.
Look David, none of these studies are definitive – detecting trends in data never gathered for these purposes is hard. But please do not dismiss the efforts and conclusions of those who are attempting to do this by scholarly study as glibly as Muller does. For instance, if he said there were no trends in wildfires in recent decades, he’s flat-out wrong – by all accounts. So go to the primary sources and decide for yourself.
Actually, Muller says the number of wildfires has been going down significantly, and the trend for the last 30 years is monotone down. Area burned in the US has trended up somewhat recently, but the data varies all over the place. That’s for the whole US. So Western wildfires have been increasing, but it’s a very noisy function. You know, I really doubt the last 30 years are unique in this regard. Muller actually looked at the numbers, and the trend of severe tornadoes is SIGNIFICANTLY down over the last century. Likewise for hurricanes that hit the US: the trend is down, even if not by much. This is true no matter what category you are looking at. So Muller has the raw data correct. You can always cherry-pick some part of the data, like Western wildfires, that shows a trend. This is exactly what Muller is complaining about. I guess the question is, if a cursory look at reliable data shows decreasing severe events, it kind of makes me not want to waste my time straining at gnats and swallowing camels. It’s not balanced and it’s not honest to do this.
So, if severe tornadoes and hurricanes have been decreasing in numbers, but there is an increase in adjusted insurance claims that would indicate to me that the issue must be Pielke’s point. We keep building more and more in hurricane prone areas. This is even truer for tornadoes.
That’s interesting. When I was a kid prairie fires were something really old people talked about. None of us ever saw one. Now prairie fires are something young people talk about. They complain they can’t use firecrackers.
So you choose to believe that Westerling et al. (and Barthel and Neumayer?) are cherry-picking, rather than analyzing specific data-sets possibly amenable to distinguishing causative factors? If so, we disagree. Perhaps “a cursory look” is insufficient to draw reliable conclusions. Thanks for reading the papers.
It just seems to me that there is a contradiction between the hurricane and tornado data and the insurance claims data, and that we are talking here about small effects compared to past events like the Dust Bowl. It’s fine to study these things so we can adapt to changes. What I don’t like is the same thing Muller doesn’t like, viz., people like Trenberth claiming it’s due to human-caused global warming and cherry-picking the data to make it seem worse than it really is. In reality things are not getting worse on average, and that’s pretty clear from the data.
I’m really tired of “tragedy TV” and the constant exaggeration and searching for the worst and finding the worst. Yellow journalism has come back in the last 20 years and I can’t stand to watch it. One might call Trenberth a corporal of the “tragedy climate science” corps that is constantly predicting worse and worse disasters and having to process the data more and more selectively to find them. You know, this search for disasters and the feeling that they are getting worse and worse is a mental disorder. Kind of like the feeling that pollution is getting worse and worse, or that our food supply is being contaminated by more and more “chemicals,” or the feeling that fluoridation is a plot to contaminate our precious bodily fluids. I’m being sarcastic, but there is something real here. There is a part of human nature that feels that mankind is contaminating the world and that if we just went back to “nature” things would be great. Bertrand Russell, in his analysis of Rousseau in his History of Western Philosophy, shows how a lot of modern Western thought is shot through with this irrational line of reasoning and how it leads to all kinds of real tragedy.
David, Now Trenberth says the missing heat is in a Russell’s teapot at the bottom of the ocean.
We have the null hypothesis in climate science: the repeating cycles of around 100,000 years in the grip of the Ice Age we’re in, with brief interglacial respites of c. 15,000 years in between.
“Given that global warming is “unequivocal”, to quote the 2007 IPCC report, the null hypothesis should now be reversed, thereby placing the burden of proof on showing that there is no human influence. ”
Global warming can mean the planet is warming. Global warming can also mean the planet is warming due to humans.
The only graph showing that the earth is warming due to humans is Mann’s hockey stick. And everyone should be aware that Mann’s hockey stick is “unequivocal” as far as being incorrect, or if you like, fraudulent.
In addition, the computer models built on the theory that increasing CO2 levels would cause increases in global temperature have over time been shown to be incorrect.
So the definition of global warming as something caused by humans is not “unequivocal” but rather the exact opposite: it has been proven to be wrong, with zero hope of somehow being correct.
Therefore, if what is meant by global warming is that the earth’s average temperature has been rising for more than a century, the burden of proof can’t be that there is no human influence. Rather, climate experts have proven themselves to be wrong and must somehow get out of their holes. And that is very doubtful considering the amount of denial they seem to be expressing.
Not all climate experts are in complete denial; some have stopped denying that natural variability has been a significant factor in global warming over the last century. Apparently they have given up on the idea that pre-1950 warming was caused by CO2, and instead claim that the warming caused by CO2 occurred sometime after that point.
And some climate scientists (like our host here) are questioning the validity of the IPCC premise that “most” of the warming since 1950 was “very likely” caused by human GHGs (primarily CO2), much to the dismay of the others.
What I can’t understand about the repeated assertions by people like Dr Trenberth, such as “the evidence for anthropogenic climate change is now so clear that the burden of proof should lie with research which seeks to disprove the human role” and “Humans are changing our climate. There is no doubt whatsoever,” is that when they’re asked to explain what the evidence is, they’re never able to do so.
It simply isn’t true that a correlation between rising CO2 levels and global average temperature proves the case. Axiom: correlation does not prove causation. All he or any other ‘climate scientist’ has to offer here is modelling results, which are just calculations and prove nothing. Furthermore, you can grind statistics until the cows come home without proving any physical effects.
At the same time, all of us with any understanding of physics know that there are very good scientific reasons for disbelieving any such causation. My Q&A fact sheet paper is available for anyone who would like to email me for a copy at firstname.lastname@example.org.
It’s not hard to understand when you accept that there is not a ray of sunlight separating Trenberth from a lifetime Lefty liberal government bureaucrat like Al Gore.
There’s a reason why statisticians use the null hypothesis that X=0 (whatever X or 0 may mean in context), and that is because the statistical tools don’t actually work backward. You can demonstrate beyond a reasonable doubt (say, at a significance level of α=0.05) that X≠0, assuming some reasonable estimate of random variation; but you cannot prove that X=0, since the true effect may be arbitrarily small. Why not produce OC curves (or power curves)? Why not use the dynamic tests developed by W. Shewhart at the old Bell Labs back in the 1930s?
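[The asymmetry this commenter describes can be made concrete with a power curve. Below is a minimal sketch for a two-sided one-sample z-test of H0: X=0; the sample size n=25 and the effect sizes are arbitrary illustrative choices, not anything from the comment. The point it demonstrates: at a true effect of zero the rejection probability is exactly α, and it climbs toward 1 only as the true effect grows, so “failing to reject” never establishes X=0.]

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def ztest_power(delta, n, sigma=1.0, alpha=0.05):
    """Probability that a two-sided z-test of H0: mean = 0 rejects,
    when the true mean is `delta`, with n observations of known
    standard deviation `sigma`."""
    z = N.inv_cdf(1 - alpha / 2)       # critical value of the test
    ncp = delta * n ** 0.5 / sigma     # shift of the test statistic under delta
    # Reject if the statistic falls beyond either critical value.
    return N.cdf(-z + ncp) + N.cdf(-z - ncp)

# A power curve: rejection probability as the true effect grows (n=25).
curve = {d: round(ztest_power(d, n=25), 3) for d in (0.0, 0.2, 0.5, 1.0)}
```

At delta=0 the “power” is just the false-alarm rate α; it rises monotonically with the true effect, which is exactly the information an OC curve (the complement, the probability of *not* rejecting) presents.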
Reblogged this on I Didn't Ask To Be a Blog and commented:
Trenberth and Curry weigh in.