Unknown and uncertain sea surface temperatures

by Tony Brown

Over the years I have become intrigued by the manner in which historic surface temperature records, which were never intended to be more than a broad reflection of the microclimate around them, have been used as if they were highly accurate scientific data and subsequently used to inform policy. I have written two articles about their historic accuracy, both of which can be accessed through this link.

The oceanic equivalent, sea surface temperatures (SSTs), has even more dubious origins than the land record, yet it too is perceived as a scientific record that accurately informs our global understanding of SSTs back to the middle decades of the 19th century. In some ways SSTs are even more of a key measure than land temperatures, as the oceans cover around 70% of the globe's surface.

In this article we examine the methods used to calculate SSTs, most famously by the Met Office Hadley Centre, which compiled the graph of SST anomalies from 1850 shown here. The methodology is described by the Met Office Hadley Centre as follows:

“The SST data are taken from the International Comprehensive Ocean-Atmosphere Data Set, ICOADS, from 1850 to 1997 and from the NCEP-GTS from 1998 to the present. HadSST2 is produced by taking in-situ measurements of SST from ships and buoys, rejecting measurements which fail quality checks, converting the measurements to anomalies by subtracting climatological values from the measurements, and calculating a robust average of the resulting anomalies on a 5° by 5° degree monthly grid. After gridding the anomalies, bias corrections are applied to remove spurious trends caused by changes in SST measuring practices before 1942. The uncertainties due to under-sampling have been calculated for the gridded monthly data as have the uncertainties on the bias corrections following the procedures described in the paper.”
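To give a flavour of what that description amounts to in practice, here is a minimal sketch of the anomaly-and-gridding step: a handful of invented ship reports are converted to anomalies against a stand-in climatology and averaged within 5° by 5° cells. It is only a schematic of the general approach described above, not the Met Office's actual code; the reports and the climatology function are made up for the illustration, and a plain mean is used where HadSST2 applies a robust average and the bias corrections mentioned in the quote.

```python
import numpy as np
from collections import defaultdict

# Invented ship reports for one month: (latitude, longitude, measured SST in °C).
# Illustrative values only, not ICOADS data.
reports = [
    (52.3, -33.1, 12.4),
    (51.8, -33.9, 12.9),
    (10.2, -25.4, 27.1),
    (11.7, -22.8, 27.8),
    (-40.5, 60.2, 9.6),
]

def climatology(lat, lon, month):
    """Stand-in for the climatological mean SST at this location and month."""
    return 28.0 - 0.35 * abs(lat)   # crude latitude-only placeholder

def grid_cell(lat, lon, size=5.0):
    """Index of the 5-degree by 5-degree cell containing this position."""
    return (int(np.floor(lat / size)), int(np.floor(lon / size)))

# Convert each report to an anomaly and collect the anomalies by grid cell.
month = 6
cells = defaultdict(list)
for lat, lon, sst in reports:
    anomaly = sst - climatology(lat, lon, month)
    cells[grid_cell(lat, lon)].append(anomaly)

# Average the anomalies in each occupied cell (a plain mean here,
# where HadSST2 uses a robust average and then applies bias corrections).
gridded = {cell: float(np.mean(vals)) for cell, vals in cells.items()}
for cell, value in sorted(gridded.items()):
    print(f"cell {cell}: mean anomaly {value:+.2f} °C from {len(cells[cell])} obs")
```

Note how most of the grid stays empty: these five reports occupy only four cells of a 5° by 5° global grid that has 2,592 cells in total, which is where the under-sampling uncertainties mentioned in the quote come in.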

Wikipedia defines sea surface temperature as follows:

“Sea surface temperature (SST) is the water temperature close to the ocean’s surface. The exact meaning of surface varies according to the measurement method used, but it is between 1 millimetre (0.04 in) and 20 metres (70 ft) below the sea surface.”

However, the complexities of defining sea surface temperature are elaborated upon here:

SST is a difficult parameter to define exactly because the upper ocean (~10 m) has a complex and variable vertical temperature structure that is related to ocean turbulence and the air-sea fluxes of heat, moisture and momentum. Definitions of SST provide a necessary theoretical framework that can be used to understand the information content and relationships between measurements of SST made by different satellite and in situ instruments. 

To ascertain how the basic SST data are physically collected, refined, and subsequently used, in conjunction with CRU data, as the definitive record of global land and sea temperatures relied upon by governments all over the world, it is worth starting our voyage of discovery by reading the Wikipedia article referenced above in full. It continues:

“There are a variety of techniques for measuring this parameter that can potentially yield different results because different things are actually being measured. Away from the immediate sea surface, general temperature measurements are accompanied by a reference to the specific depth of measurement. This is because of significant differences encountered between measurements made at different depths, especially during the daytime when low wind speed and high sunshine conditions may lead to the formation of a warm layer at the ocean’s surface and strong vertical temperature gradients (a diurnal thermocline). Sea surface temperature measurements are confined to the top portion of the ocean, known as the near-surface layer.”

Historically, the methods for collecting sea temperatures cover measurements from ships (buckets and engine intakes), buoys floating on the ocean surface, weather ships and, in recent years, an array of diving ‘Argo’ buoys. Satellite measurements, commencing around the 1970s, are outside the scope of this article. The various methods measure at different depths, from the top ‘skin’ of the ocean at around 0.01 mm through to hundreds of metres below the surface, and therefore one method cannot easily be compared with another, as temperature is likely to vary considerably with depth.

“The temperature of the world’s ocean is highly variable over the surface of the ocean, ranging from less than 0°C (32°F) near the poles to more than 29°C (84°F) in the tropics. It is heated from the surface downward by sunlight, but at depth most of the ocean is very cold. Seventy-five percent of the water in the ocean falls within the temperature range of −1 to +6°C (30 to 43°F) and the salinity range of 34 to 35. “ (reference)

The earliest measurements of the world’s oceans, rather than coastal waters, were taken from ships; an early example is Benjamin Franklin suspending a mercury thermometer from a vessel while travelling between the USA and Europe during his survey of the Gulf Stream in the late eighteenth century. Temperatures can, however, be relatively consistent through the varying levels where a current such as the Gulf Stream is operating, as the examples given below illustrate. These contain the first reliable reference the author is aware of to systematic and detailed ocean temperature sampling for scientific purposes. They come from around the 1820s, when expeditions were mounted by the Royal Society to investigate the ‘unprecedented’ melting of the Arctic.

“An observation which it is interesting to mention here, and which gives a proof of the very little difference between the temperature of the surface and that at some depth, is mentioned in the Voyage of Captain Graah, p. 21. He says, “The 5th of May, 1828, in lat. 57° 35′ N., and 36° 36′ W., Gr., the temperature of the surface was found 6°.3 (46°.2 Fahr.), and at a depth of 660 feet 5°.5 R. (44°.5 Fahr.).” This proves that there is no cold submarine current in the place alluded to, S.E. of Cape Farewell. A still more conclusive experiment is recorded by Sir Edward Parry in the account of his first voyage, June 13, 1819: in lat. 57° 51′ N., long. 41° 5′, with a very slight southerly current, the surface temperature was 40½° Fahr.; and at 235 fathoms 39°, a difference of only 1½°.” (reference)

Any measurement of the ocean’s surface, or of the deeper sub-surface, prior to the middle of the 19th century is generally considered of relatively limited scientific value, as there was little consistency in the data collection.

In 1853 Lieutenant M. F. Maury helped organise the Brussels International Maritime Conference, at which the participating countries agreed to adopt common methods for recording meteorological and marine information, of which SSTs were only a small part. His 1855 book ‘The Physical Geography of the Sea’ was considered required reading.

However, it was not until after World War II that the science of measuring SST gained momentum, as new and more reliable methods came into use, given a further boost by the International Geophysical Year in 1957/58. Even in more recent years the ocean has continued to yield surprises; for example, in the mid-1990s it was discovered that deep ocean currents were both much stronger and much more variable than previously realised. In this context the development of increasingly accurate SSTs (when gathered from scientific sources), albeit still spatially incomplete, can be said to have come about only over the last half century. Data collected before then has a large question mark over it, as we shall discover.

Chapter 5 of the book ‘Descriptive Physical Oceanography’ by M. P. M. Reddy describes some of the methodology used, but is perhaps even more interesting for the general history that precedes it.

Proper scientific expeditions, such as those mounted by the Royal Society in the 1820s, were probably able to retrieve broadly accurate (to within a few degrees) sea temperatures (subject to all the relevant caveats mentioned in the article about land temperatures) for a tiny stretch of ocean during a brief window of time before the ship moved on. The observer would then be sampling a piece of ocean that might display completely different temperature characteristics, for as any swimmer will testify, water does not always mix very well, and, as already observed, depth plays a key part in temperature.

However, the majority of earlier records were not taken under strict scientific conditions, but on a much more casual, ad hoc basis by members of the world’s navies, together with fishermen. The method of sampling was quite simple: a wooden or canvas bucket was attached to a length of rope marked off in fathoms, the ensemble thrown overboard, the bucket raised and a thermometer stuck in it to record the temperature of that small portion of the ocean (see figure). It was a method that, with small variations, persisted for 140 years.

Commenting on disparities in the data, this paper observes:

“…A new paper by David Thompson and other NOAA atmospheric scientists in Nature reports a different explanation (for accuracy). Most of the wartime measurements of sea temperatures factored into the global average came from US warships, which unlike the British navy tended to log engine room water intake thermometer readings as representing the temperature of the sea. The hardy jack tars who returned to meteorological duty as the war wound down instead relied as always on the time honoured method of throwing a bucket over the side, hauling it in, and putting it on deck for a thermometer wielding chief or officer to measure. The late Victorian change from oaken buckets to galvanized steel was compounded before World War II, when not just British, but Dutch and Japanese hydrographers were issued porous and hence cooling-prone canvas seawater scoops, a bad idea since the wind is generally brisk on a moving vessel.  Inevitably, the seawater sampled tended to cool – evidently measurably, in the time it took to present it on deck for measurement.”
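As a purely illustrative aside, the cooling described in that passage can be pictured with a simple Newton's-law-of-cooling estimate: a sample sitting in a porous bucket on a breezy deck relaxes towards the cooler surrounding air before anyone reads the thermometer. Every number below is an assumption invented for the sketch; real bucket corrections are derived far more carefully than this.

```python
import math

# Invented conditions for the illustration (not measured values).
WATER_TEMP_C = 20.0            # true sea surface temperature
DECK_AIR_TEMP_C = 16.0         # effective temperature the sample relaxes towards
COOLING_RATE_PER_MIN = 0.08    # assumed combined conductive/evaporative rate constant

def sample_temp_after(minutes: float) -> float:
    """Newton's-law-of-cooling estimate of the bucket sample temperature."""
    decay = math.exp(-COOLING_RATE_PER_MIN * minutes)
    return DECK_AIR_TEMP_C + (WATER_TEMP_C - DECK_AIR_TEMP_C) * decay

for wait_min in (0, 2, 5, 10):
    reading = sample_temp_after(wait_min)
    print(f"after {wait_min:2d} min on deck: reading {reading:5.2f} °C "
          f"(bias {reading - WATER_TEMP_C:+.2f} °C)")
```

Even with these made-up numbers the point is clear: a delay of a few minutes can produce a bias of several tenths of a degree or more, comparable to the signals the record is later asked to resolve.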

The difficulty of keeping such a fragile instrument as a thermometer in one piece, let alone calibrated, can be imagined. On serious scientific expeditions it might be kept locked in the captain’s cabin, but in many other circumstances it might remain on a hook outside in all sorts of weather.

This link shows a ship’s barometer and thermometer from around 1855. The thermometer used in the bucket would often have been a robust standalone version of the instrument on the left of the main picture.

To put the problems inherent in recording ‘bucket’ temperatures in this fashion into their proper context, I can do no better than recount the conversation I had some years ago with someone who had served in the British Navy in the 1940s and 50s, when bucket readings were still common (they finally finished in the early 1960s).

He pointed out matter-of-factly that the water was taken from all sorts of depths (greatly dependent on the strength and disposition of the person involved) and left in the container (not always an approved bucket) for indefinite periods of time, which included periods of hot sunshine and the cool of the night. Similarly, the quality of the thermometers was not always the highest, calibration was infrequent, and the thermometers were left at ambient temperature on deck before often cursory readings were taken of the water sample, thereby compounding the uncertainty.

His incredulous laughter as I recounted the great importance scientists attached to readings such as his is with me still. This is not to say, of course, that every SST was collected in this manner, but far too many were for the general record to be considered scientifically robust and meaningful.

This is from a 1947 paper, ‘A new bucket for measurement of sea surface temperatures’:

“It has been known for many years that the standard method of measuring sea surface temperatures by taking a sample with a canvas bucket is liable to serious errors.”

Precisely.

In this 1963 book H. F. Herdman commented that too often the sample was taken in a canvas bucket and the temperature read only after an appreciable time.

This internal Met Office memorandum from 1985, written by the highly respected D. E. Parker and entitled “A comparison of bucket and non-bucket measurements of sea surface temperatures”, concerns the differences in temperature between water collected in insulated and non-insulated buckets. The author seems fondly to believe that, despite all the evidence to the contrary, water taken from buckets can be parsed to tenths of a degree. It also usefully shows the grid system used to organise data over the 70% of the globe that is ocean, and illustrates the precision which Hadley believes it has achieved. It is essential reading for all serious students of SSTs, as it gives an indication of the early development of the SST database.

There was a variation on the bucket method in which the thermometer was placed in the bucket before it was lowered over the side, with, in later years, a rubber cushion for protection. The reversing thermometer, generally used at greater depths than the surface, was developed in 1874 and remained in use from around 1900 to 1970.

Generally, the SST data collected were of variable quality because of the methods and instruments used, to which can be added the lack of spatial coverage: measurements were gathered intermittently over a tiny proportion of the world’s oceans, as observed here:

‘Maps of mean temperature have also been made from ICOADS data. The data are poorly distributed in time and space except for some areas of the northern hemisphere. In addition, Reynolds and Smith (1994) found that ship temperature data had errors twice as large as temperature errors in data from buoys and AVHRR. Thus, space data processed by Reynolds are more accurate, and better distributed than ICOADS.’

A little more insight into the creation of sea surface temperature records was provided by Richard Verney (in the comments), writing here about the method that eventually superseded buckets, namely taking measurements from engine intakes:

“It seems to me that the Met Office and CRU do not understand how ship’s data is taken. Sea water temperature reported by ships is taken from the sea water drawn for cooling the engine. Where this is drawn from depends upon the design and configuration of the ship and whether the ship is proceeding in ballast or is laden. Ships try and avoid lengthy ballast voyages since these are not revenue earning legs. With a laden ship, the sea water drawn for cooling is drawn about 10m below the surface (could be anywhere between say 7m and 13m but about 10m is typical).

What does this mean? It means that ships are measuring sea temperature at a depth of about 10m whereas Buoys measure sea temperature at a depth of about 1m (to 3m). (Author’s note: buckets measure temperatures at various depths, according to the disposition of the thrower; satellites at 0.01 mm.) Generally, the greater the depth, the cooler the water.

It follows from this (i.e. the depth differential) that one would expect ships engine intake data to record a lower temperature (not a higher temperature) when compared to the similar measurement taken by Buoys (measuring nearer the surface). This means that in order to make a like for like comparison, one should either adjust the Buoy temperature downwards, or the ship’s temperature upwards. There is no case for adjusting the Buoy temperature upwards since this further exacerbates the difference between the depth at which the data is taken.

Further, a not insignificant number of ships may have a tendency to deliberately under record the sea temperature. Many ships carry liquid cargoes that need to be heated (various chemicals, palm oils, veg oils etc.). In simple terms, the ship owner gets paid for heating these cargoes.…The ship owner gets paid for heating when he heats. Of course with very warm tropical seas, cargoes cool slower and the natural prevailing sea water temperature may be sufficient to keep the cargo free flowing such that much heating may not be required. It is therefore in the ship owner’s interest to record sea water temperatures slightly lower than those truly prevailing so that he can claim and charge for heating when in fact no heat is being applied. I am not saying that the practice is uniform throughout the shipping industry but it certainly does occur. Thus a number of ships are recording/reporting a lower temperature than that actually experienced.

Accordingly, for these two reasons, there is reason to believe that temperature records provided by ships under assess/under record the sea surface temperature. That being the case, sea temperatures may have decreased even more than the ‘team’ (or those closely connected with them and/or supporters thereof) is prepared to accept.”
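To make the commenter's depth argument concrete, here is a minimal sketch of the kind of like-for-like adjustment he is describing: an engine-intake reading taken at roughly 10 m is shifted to an equivalent value at roughly 1 m before being compared with a buoy reading. The assumed gradient and both readings are invented for the illustration, and no particular data set necessarily adjusts in this way.

```python
# Illustrative like-for-like comparison of readings taken at different depths.
# All numbers are assumptions for the example, not measured values.

ASSUMED_GRADIENT_C_PER_M = 0.05   # assumed near-surface cooling with depth, °C per metre

def adjust_to_depth(temp_c: float, measured_depth_m: float, target_depth_m: float) -> float:
    """Shift a reading to an equivalent value at another depth, assuming a
    simple linear decrease of temperature with depth."""
    return temp_c - ASSUMED_GRADIENT_C_PER_M * (target_depth_m - measured_depth_m)

engine_intake_temp_c = 18.2   # taken at ~10 m on a laden ship
buoy_temp_c = 18.7            # taken at ~1 m

# Express the engine-intake reading as an estimate of the ~1 m temperature,
# i.e. adjust the deeper reading upwards rather than the buoy reading.
intake_at_1m = adjust_to_depth(engine_intake_temp_c, measured_depth_m=10.0, target_depth_m=1.0)
print(f"Engine intake adjusted to 1 m: {intake_at_1m:.2f} °C (buoy at 1 m: {buoy_temp_c:.2f} °C)")
```

With the assumed gradient the 10 m reading is adjusted upwards by 0.45 °C, which is the direction of correction the commenter argues for.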

The commenter’s interpretation is confirmed by the Seafriends site of Dr. J. Floor Anthoni, from which comes this excerpt concerning ocean temperatures:

“Ocean surface temperatures have been measured by ships for several centuries. First it was done by collecting surface water in a bucket while steaming on, but later the engine’s cooling water inlet was used. Unfortunately this made a difference, because the water inlet is at some depth under water. Today this may serve to advantage because satellite can measure only the top few centimeters of the sea because infrared radiation is rapidly absorbed by water. Because water continually evaporates from the sea, the surface film is somewhat colder than a few meters down.”

Whatever the method of collection, it must also be recognised that the few readings taken came from shipping lanes representing a tiny fraction of the oceans’ surface. Here we have another factor already touched on: as well as the quality of the information, there is an equal concern with its quantity, since relatively few readings were taken and the geographical coverage is much sparser than even the inadequate land temperature record.

This graphic (Reynolds 2000) shows the traffic of ships used for collecting SST data during the week of 1-8 January 2000, at a time of maximum economic activity (the width of the lanes is greatly exaggerated for pictorial purposes). It shows very poor coverage even then; a snapshot from 1850 would reveal orders of magnitude less activity. Observations and calculations of SST anomalies are shown in this WG2 analysis from the IPCC.

Today’s high-tech successor to these older methods of measurement is the Argo array of profiling floats:

“Argo is a global array of 3,000 free-drifting profiling floats that measures the temperature and salinity of the upper 2000 m of the ocean.  This allows, for the first time, continuous monitoring of the temperature, salinity, and velocity of the upper ocean, with all data being relayed and made publicly available within hours after collection.”

This still relatively short-lived experiment had a controversial beginning, as the buoys initially recorded an ocean heat content that was dropping (although not measured under strictly the same criteria as SSTs). This was an apparent anomaly, as ocean temperatures were expected to show a rise commensurate with computer models. The deviation was explained in this article, which also provides useful information on the percentage of sea level change attributed to each contributing factor, such as thermal expansion and glacier melt.

“In 2004, Willis published a time series of ocean heat content showing that the temperature of the upper layers of ocean increased between 1993-2003. In 2006, he co-piloted a follow-up study led by John Lyman at Pacific Marine Environmental Laboratory in Seattle that updated the time series for 2003-2005. Surprisingly, the ocean seemed to have cooled.

Not surprisingly, says Willis wryly, that paper got a lot of attention, not all of it the kind a scientist would appreciate. In speaking to reporters and the public, Willis described the results as a “speed bump” on the way to global warming, evidence that even as the climate warmed due to greenhouse gases, it would still have variation. The message didn’t get through to everyone, though. On blogs and radio talk shows, global warming deniers cited the results as proof that global warming wasn’t real and that climate scientists didn’t know what they were doing.”

However, the interpretation given above has itself been superseded by a paper from November 2010, ‘Recent energy balance of Earth’ by Knox and Douglass, who, after analysing the data from the Argo floats, find that over the most recent period, 2003-2008, ocean heat content was still falling, not rising.

Abstract. A recently published estimate of Earth’s global warming trend is 0.63 ± 0.28 W/m2, as calculated from ocean heat content anomaly data spanning 1993–2008. This value is not representative of the recent (2003–2008) warming/cooling rate because of a “flattening” that occurred around 2001–2002. Using only 2003–2008 data from Argo floats, we find by four different algorithms that the recent trend ranges from –0.010 to –0.160 W/m2 with a typical error bar of ±0.2 W/m2. These results fail to support the existence of a frequently-cited large positive computed radiative imbalance.
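For readers who want to see the arithmetic behind a figure quoted in W/m2, the sketch below fits an ordinary least-squares trend to an ocean heat content anomaly series and converts the slope into an implied energy flux by spreading the heat gained per second over the Earth's surface area. The heat content values are invented for the illustration (they are not the Argo data), and a simple least-squares fit stands in for whichever algorithms the paper actually uses.

```python
import numpy as np

# Hypothetical annual ocean heat content anomalies, in units of 10^22 J
# (illustrative values only, NOT the Argo-derived series analysed in the paper).
years = np.array([2003, 2004, 2005, 2006, 2007, 2008])
ohc_anomaly_1e22_J = np.array([8.1, 8.0, 8.2, 7.9, 8.0, 7.8])

# Ordinary least-squares slope: change in heat content per year.
slope_1e22_J_per_year = np.polyfit(years, ohc_anomaly_1e22_J, 1)[0]

# Convert the trend to an implied flux in W/m^2. Note that some studies
# normalise by the ocean's area rather than the whole Earth's surface,
# which changes the number by roughly a factor of 1.4.
EARTH_SURFACE_AREA_M2 = 5.1e14
SECONDS_PER_YEAR = 3.156e7
flux_W_per_m2 = (slope_1e22_J_per_year * 1e22
                 / SECONDS_PER_YEAR / EARTH_SURFACE_AREA_M2)

print(f"OHC trend: {slope_1e22_J_per_year:+.3f} x 10^22 J/yr")
print(f"Implied radiative imbalance: {flux_W_per_m2:+.3f} W/m^2")
```

With these invented numbers the result is a small negative flux, of the same order as the range quoted in the abstract; the point of the sketch is only the unit conversion, not the value.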

The re-affirmation that the oceans appear to be generally cooling sparked a lively debate on this blog post.

Given the adjustments cited in the paper, the shortness of the project to date, and the difficulty of obtaining ongoing data, this method of collecting sea temperatures is as yet unproven.

Conclusions on Sea Surface Temperatures

With land temperatures we observed that we were often comparing apples and oranges. Similarly, with SSTs we are mixing a great variety of incompatible methods of collection; the number of samplings of the ocean is minute in physical terms; and the methodology itself is potentially severely flawed. In addition, our already limited understanding of ocean temperatures falls away sharply the further back in time we go and the more remote the area, as the measuring points are so few. To compound the problems, where the data are sparse they are statistically infilled from areas where they may still be sparse, as sketched below.
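To illustrate what statistical infilling of sparse coverage can look like in practice, here is a minimal sketch of one common family of techniques: empirical orthogonal functions (EOFs) are computed from a better-sampled reference period and then fitted to the few available observations in a sparse month to estimate the empty grid cells. This is a generic illustration of the idea rather than the Hadley Centre's actual procedure; the reference field, the grid and the gaps are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "reference period" field: 120 months x 50 grid cells, standing in
# for a data-rich era such as 1960-1990.
n_months, n_cells = 120, 50
pattern = np.sin(np.linspace(0, np.pi, n_cells))            # one broad spatial mode
reference = (np.outer(rng.standard_normal(n_months), pattern)
             + 0.1 * rng.standard_normal((n_months, n_cells)))

# Leading EOFs (spatial patterns) estimated from the reference period.
ref_anom = reference - reference.mean(axis=0)
_, _, vt = np.linalg.svd(ref_anom, full_matrices=False)
n_eofs = 3
eofs = vt[:n_eofs]                                           # shape (n_eofs, n_cells)

# A sparsely observed month: only 15 of the 50 cells report a value.
truth = 0.8 * pattern + 0.1 * rng.standard_normal(n_cells)
observed_cells = rng.choice(n_cells, size=15, replace=False)

# Fit EOF amplitudes to the observed cells by least squares, then reconstruct
# (infill) the full field from those amplitudes.
design = eofs[:, observed_cells].T                           # (15, n_eofs)
amplitudes, *_ = np.linalg.lstsq(design, truth[observed_cells], rcond=None)
infilled = amplitudes @ eofs                                 # estimate for all 50 cells

rms_error = np.sqrt(np.mean((infilled - truth) ** 2))
print(f"RMS error of the infilled field: {rms_error:.3f}")
```

The catch is that the infilled values can only ever reflect the patterns present in the reference period: whatever variability that period happens to miss, the reconstruction misses too.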

None of this will stop Hadley (and others) parsing global SSTs to a fraction of a degree and elaborating on the robustness of the answer, which the IPCC and national governments will then take as proof positive to enact yet more measures to guard against warming.

The basic historic temperature data, land or sea, used in good faith by climate scientists, statisticians and analysts, do not appear to meet basic quality control measures and are not fit for purpose, namely that of consistently determining temperatures to tenths of a degree. Historic sea surface temperatures in particular are highly uncertain and should not be considered any sort of reliable measure.

References

220,000 log books of the Royal Navy from 1669 to 1976 are being studied for meteorological information that can give an insight into climate change.

A project to recover worldwide weather observations made by Royal Navy ships around the time of World War I.

The metrology of thermometers, the science of measurement as applied to them, is described here

Fascinating article about various types of historic measurements taken at sea

Bob Tisdale has an excellent web site dealing with all aspects of ocean temperatures

484 responses to “Unknown and uncertain sea surface temperatures”

  1. If some measure shows warming, we are all going to fry and there are predicted dire consequences of all kinds; if the measure shows cooling it’s either temporary, the measure isn’t reliable, or it’s just wrong because it doesn’t show warming.

    • That sums it up Jim.

      At some point politics exceeds the topic of discussion, it’s obvious to many but others hang on thinking there is “objectivity” left in the consensus view. Without a dissenting voice published by the IPCC the pattern will be shout down all dissent and claim the club has settled the science.

      AGW causal conclusions are a pseudo science, it doesn’t mean climate shouldn’t be studied and considered a science but only a quack would make a sweeping conclusion with the tools at hand. This is where the summary IPCC groups have taken us.

  2. Shanghied, we was.
    ==========

  3. Interesting article, with many problems pointed out. Beyond those examined is the infilling business. You go out, collect data, and then you analyze it. You don’t collect data, make up some more data, and then throw it all in the same pot and do your analysis. I know people do it, but I don’t know how they get away with it. If you don’t have enough data/coverage to do what you want to do, then don’t do it. It seems to be standard practice in climate science to start with some data, invent some more, and then make a mash-up and present it as a ‘study.’ If you’re using pseudo-data, then it’s a pseudo-study.

    So in this case, you start out with bad data, you infill it with pseudo-data based on your bad data, and then you stand by it? My head is spinning. I’d like to see this stuff reviewed by molecular biologists. Nothing would get published.

  4. Climate Science has taken on many of the aspects of Cargo Cult Science as described by Richard Feynman in 1974.

    http://www.lhup.edu/~DSIMANEK/cargocul.htm

    The scientific method requires that all data, even data that does not support your position be treated equally. The IPCC does not do this. It gives greater weight to data that is agreed to by consensus than it does to contradictory data.

    As such, if the majority believe in a wrong assumption, the evidence that this assumption is wrong will not come to light, because the contradictory evidence has less weight than the accepted belief.

    This argues strongly that the IPCC through the consensus process is having a corrupting effect on the application of the scientific method to Climate Science. In effect, Climate Science through the consensus method promoted by the IPCC has become a form of Cargo Cult Science.

    Climate science is using the trappings of the scientific method to locate data that supports AGW while ignoring, minimizing or otherwise rejecting data that argues against AGW because it does not match the consensus.

    This is the critical element. Contrary evidence is being rejected or otherwise minimized not because it is wrong, but because it does not match the consensus. This prevents Climate Science from separating fact from belief.

  5. “A man with one watch knows what time it is; a man with two watches is never quite sure.” Lee Segall

    In this case two thermometers

    • I have a broken watch that is absolutely correct 700+ times a year. While I have a working watch that is never quite correct.

      According to Climate Science, my broken watch has the correct time, because I can find many more examples where it is correct, than I can find examples where my working watch is correct. This is the basis of the IPCC promotion of AGW. Count the number of times something is right and call it scientific proof.

      • ferd berple

        Count the number of times something is right and call it scientific proof.

        Or better yet:

        Count the number of times something can’t be proven to be wrong and call it scientific proof that it is correct

        Max

  6. Excellent post.
    Thanks Tony.

  7. Tony Brown 6/27/11, SST

    Truly remarkable in climate studies is how well the state-of-the-art models for solar radiation (Wang, et al. (2005)) and Global Average Surface Temperature (HadCRUT3, also known as the Brohan record) fit together. This tends to validate both models, and tends to validate the construction of the SST record. It suggests that any SST biases have been well removed, and that the temperature models produce a useful variance reduction from combining the many thousands to millions of measurements over time and space.

    Earth’s IPCC-endorsed annual surface temperature over its full 140 year record is a simple (4 or 5 constants) lagged, linear response to the IPCC-endorsed Sun, one sigma = 0.13ºC or 0.11ºC. This result produces a variance reduction ratio of 79.0%, which is comparable to IPCC’s 11-year smoothed representation itself with its 89.3% VRR.

    IPCC and its climatologists cannot enjoy this result, however, because it quite invalidates AGW.

  8. So to summarise, the scientists using the data are in effect guessing, and it would appear not even educated guessing?

  9. K Scott Denison

    Great post, thank you.

    Why is it so hard for some to admit that the quality of the data available today simply isn’t high enough to detect trends, let alone to do projections?

  10. AGW is an attempt to murder science. Vast sums have been spent on its weapons. We’d better all hope they have only inflicted flesh wounds.

  11. Theo Goodwin

    This article is excellent on all counts. But it deserves special praise for exhibiting that most essential and rare quality in scientists, humility. Scientists should be accurately describing their data and data collection techniques, not hyping them. However, any Warmista who reads this article will offer copious explanations of how their hyper-advanced statistical techniques will permit them to find the signal in the noise. Hooey. By the way, similar and equally damning criticisms apply to land records. These measurements were taken by people who were untrained in siting, reading, and maintenance of their equipment. Many of them were farmers who were also school teachers. No doubt in June they would read the thermometer the first day of the month and not read it again until the weather changed, reporting the one reading for the entire month in some cases. Yet our climate scientists, so-called, claim that such temperature records are accurate to within one tenth of one degree Fahrenheit per decade. Preposterous. The great need in climate science at this time is universal agreement, including critics, on a global measurement regime for temperature, heat content, and various other measurable phenomena. After the new technology has been installed for fifty years, we might have some reasonable data about climate.

    • Theo, Thank you.

      If you go to the first paragraph of this article you will see a link to two papers that I wrote on the Land surface record, which is every bit as bad as you imply.

      tonyb

      • Theo Goodwin

        Tony, Thanks. I think it is very difficult for today’s climate scientists to imagine the conditions under which the land surface temperature readings were made fifty to one hundred years ago. Your articles are very helpful in this regard.

  12. This is another disappointing guest post. There’s just no argument here. A nonscientist attacks a particular data set as not meeting “scientific standards,” but he neither describes what standards he thinks were not met nor provides any evidence for his thesis. He asserts that the data is treated as “as a scientific record that accurately informs our global understanding of SST’s back to the middle decades of the 19th Century” but he never defines what he means by a “scientific record” and the passages he does cite make it clear that the scientists are well aware of the limitations of the data:

    “The SST data are taken from the International Comprehensive Ocean-Atmosphere Data Set, ICOADS, from 1850 to 1997 and from the NCEP-GTS from 1998 to the present. HadSST2 is produced by taking in-situ measurements of SST from ships and buoys, rejecting measurements which fail quality checks, converting the measurements to anomalies by subtracting climatological values from the measurements, and calculating a robust average of the resulting anomalies on a 5° by 5° degree monthly grid. After gridding the anomalies, bias corrections are applied to remove spurious trends caused by changes in SST measuring practices before 1942. The uncertainties due to under-sampling have been calculated for the gridded monthly data as have the uncertainties on the bias corrections following the procedures described in the paper.”

    How successful are these scientists in overcoming the limits of the data set to retrieve reliable information? Tony doesn’t say, and one suspects that is because he has none of the relevant expertise necessary to analyze the math and see how well the corrections work, or evaluate the tests and comparisons applied. He is left with no way to say anything about the data except to erect a straw man (scientists think the data is perfect) which his own quotes show to be fictional, and then attack the straw man (the data is not perfect! Shock!)

    Besides the rather obvious political bias, logical misfires like Tony’s essay are a consequence of people who are ignorant of science coming to believe that they know all about it and are better at it than the professionals. Even an interested amateur — anyone, really, who has read more than half-a-dozen scientific papers, should know that data sets rarely come to us in a neat package, free of sources of error or potential bias, with infinite detail covering exactly the points that are of interest to the scientist. So the name of the game is always identifying the limitations of data and getting the most good science out of it you can. It’s certainly not ignoring a gold mine like hundreds of thousands of temperature measurements from all over the globe because they are challenging to interpret. (Imagine the rage among deniers if such records seemed to cast doubt on the hockey stick, for example, and were ignored — “Scientists suppress a century of direct measurements by practical navy officers whose lives depended on accurate weather forecasts; How will climate alarmists survive this deception?”)

    How successful are the investigators in their efforts to help estimate historical SSTs using these data sets? Now that would be an interesting guest post. Especially if you got a scientist to write it.

    • Hi Robert
      I believe that Tony speaks with the great authority of thorough research and an excellent knowledge of the subject.
      May I enquire what your authority on the subject is?

      • He certainly pretends to speak with authority; you may be impressed by this pretense, but I am not. Color me skeptical of the author of this mess.

        Where is the evidence for his assertions? You don’t need credentials if you bring data.

      • Robert,

        NO ONE KNOWS!!!!!

        That is what Robert is telling us. NO ONE KNOWS whether that record is close, far, partially close, etc. There is simply not enough data ABOUT THE DATA to make any estimate of error, much less any conclusion as to whether the data is appropriate for what it is being used for.

        One of the most basic ideas in averaging is you must be averaging the same thing. How the heck can they get averages from such varying data. It isn’t even all of the surface temp much less with similar instruments or methods. You know, kinda like the surface record.

    • Theo Goodwin

      This article is about collecting data. It is not about using statistical techniques on collected data. Apparently, you deny that there is a difference. The article describes some data collection techniques and rightly explains that those techniques could not produce data that are reliable to within one degree per decade much less one tenth of one degree. No genuine statistical technique can find a signal in garbage. So, why are you trying to change the topic to statistics?

      • This article is about collecting data.

        Actually, it’s not. Unless you are going to go back in time and give instructions to 19th century sea captains, the data collection phase is over and has been for decades.

        This article is precisely about how scientists are using the data which has been collected.

        No genuine statistical technique can find a signal in garbage.

        And how would you know? Scientist? Statistician? Can you give a rigorous scientific definition of “garbage.”

        I am totally unimpressed by scientifically ignorant people attempting to speak with faux-authority on science. Defending it with more of the same gets you nowhere.

      • Theo Goodwin

        ‘And how would you know? Scientist? Statistician? Can you give a rigorous scientific definition of “garbage.”.

        OK, that just about says it all. You are going to defend the claim that statisticians can always find a signal, maybe the signal, in whatever garbage is given them. And your argument is your own authority? Why do you insist on committing some outrageous fallacy in everything you say?

      • Torture the numbers long enough and they will confess.

      • “And how would you know? Scientist? Statistician? Can you give a rigorous scientific definition of “garbage.”

        I am totally unimpressed by scientifically ignorant people attempting to speak with faux-authority on science. Defending it with more of the same gets you nowhere.”

        I have read a few (a dozen or so) papers about SST adjustments, and those papers were the most referenced and/or used by the IPCC. While I am not a climate scientist, I am an active scientist in another field and I believe that I am fully capable of understanding, e.g., the methodology used in various SST adjustments.

        What I do not understand is the way SST adjustments are made. OK – it is known that different measurement methods have different bias and that the change from one method to another have been gradual – but after that everything seems to be more or less pure guesswork (“if we assume that x% this type of buckets and 100-x% of the other type in year this-and-that and…”) Furthermore, it seems quite odd to apply the adjustments directly to global SST anomaly without any(?) efforts to find out methods to do the corrections in ship log level. I understand that there is not much accurate metadata to rely on but there are some sophisticated methods that could be applied – at least to find out if data is hopeless or not.

        The most amazing thing is that the most of the “standard” SST adjustments are verified and justified by arguments like “now SST record matches models” or even “now SST record seems to be correct” (not necessarily exactly those words but the message is just that).

        I think that Thompson et al. (2008) was quite a prime example of this approach – instead of checking US and UK SST records to see if the “1945 blip” was present or not (which would verify or disprove their hypothesis) they just made some Excel trials and – amazingly – found that the “blip” had disappeared. Why are papers with such non-existent content published – and in Nature?

        I have no idea if ICOADS database is a real mess and if reproducing global SST anomalies is “HARRY READ ME” level task but I think that there should be a thorough reanalysis of ICOADS and SST estimates – instead of cascading this-feels-like-ok global SST adjustments.

      • Nebuchadnezzar

        “Furthermore, it seems quite odd to apply the adjustments directly to global SST anomaly without any(?) efforts to find out methods to do the corrections in ship log level. ”

        You are correct that they’re not applied at a ship log level. They’re applied at a local level – grid-cell level which is approximately 500 km on a side – not a global level.

        “I have no idea if ICOADS database is a real mess ”

        So go and look at it.

      • And exactly where do they get information to inform them of the appropriateness of applying their adjustment at grid cell level?? Their ain’t none bud!!!

    • Theo Goodwin

      I see your political bias. I do not see Tony’s political bias. Could you please quote his relevant remarks and explain why you believe they show political bias?

      I hope you will agree that the fact that some statisticians have done wonderful work on some problematic data collections does not imply that all problematic data collections can be handled successfully. Do you agree? The question then is the condition of this data collection. Will you agree to discuss that question?

      • I see your political bias. I do not see Tony’s political bias.

        Which pretty much tells us what your bias is.

        I hope you will agree that the fact that some statisticians have done wonderful work on some problematic data collections does not imply that all problematic data collections can be handled successfully. Do you agree? The question then is the condition of this data collection. Will you agree to discuss that question?

        Which is exactly what I proposed:

        How successful are the investigators in their efforts to help estimate historical SSTs using these data sets? Now that would be an interesting guest post.

        To contribute to that discussion, Tony would need to analyze the specific measures taken to deal with the limits of the data and make arguments for how successful or unsuccessful they were. He would also need to examine realistically what the authors are claiming for this data, and what they are not, instead of pretending the data is being presented as perfect. Tony would need to look at whatever comparisons are available with other temperature reconstructions of the period and see if reconstructions from these data sets are reasonable in that context.

        None of that work has been done here. It’s possible it’s due to laziness, but I suspect he just doesn’t know how.

      • Theo Goodwin

        You continue to change the subject. This essay was not about statistical technique but about data collection. You refuse to discuss the essay and are fervently engaged in trying to hijack this forum.

      • Come off it. Pull the other one. You only have to read the first few science bashing comments to see what the effect of this essay has been. An article like this that lists as many problems as possible but provides none of the solutions has only one effect on laypeople – it smears the SST records.

        All those early commenters got that message anyway. So did Robert and so he’s spoken out about it.

        You claim you don’t see that message. You claim the article is about “data collection”. If that was the case, why are there so many commenters above drawing conclusions from this essay above and beyond what can be concluded if it was merely about data collection?

        More to the point why are you not accusing them of hijacking the forum?

        My opinion is your accusation that Robert is trying to hijack the forum is a transparent attempt to shut him up from exposing the true nature of this post.

    • I think SST data is very dubious before 1950. Apart from the quality and interpretation of individual measurements which Tony addresses, there is the very sparse sampling and dubious filling in of missing data using EOF patterns based upon the period 1960-1990. IMO Tony provides a useful historical perspective on how these data have been collected.

      Here is what the CRU “experts” have to say on uncertainty in the SST record; see Brohan et al., esp. Figure 12. Do you for one minute believe that the uncertainty in global average sea surface temperature in the 19th century is 0.3C? I sure as heck don’t, especially when the uncertainty estimate in the 19th century for land temps is more like 0.8C (a more realistic number but still possibly too low). A more recent essay on the challenges of determining global SST appears in this article

      Here is some text on SST that I wrote for the original version of the uncertainty monster paper:

      The AR4 (Figure 3.6) uses the HADCRUT3 dataset for surface temperature, whose error statistics are described in some detail by Brohan et al. (2006). Analyzed sources of error include estimates of measurement and sampling error, temperature bias effects, and the effect of limited observational coverage on large-scale averages. The errors in the land surface data are thoroughly characterized (Brohan et al. 2006; Fig. 12a). Because the oceans comprise approximately 70% of the Earth’s surface, the error bars in Figure 3.6 of the AR4 are dominated by errors on the ocean dataset that are likely to be too small, perhaps by a substantial amount. According to Rayner et al. (2009), the historical database of SST observations contains biases that are comparable in size to climate variations, arising from changes in instrumentation and data sources. Gridded analyses are produced by inferring missing data and smoothing available observations by making assumptions about the statistical properties of the observations using data from the relatively data rich period 1960-1990. This presumption that modes of variability over the entire period is encapsulated in this relatively short period of time begs the question of what is climate change, and may miss the longer internal multidecadal oscillations particularly in the data sparse Pacific Ocean.

      • Do you for one minute believe that the uncertainty in global average sea surface temperature in the 19th century is 0.3C?

        I don’t know. I’ll look at the sources you cite. It doesn’t immediately offend my common sense to say that uncertainties might be less over the sea than on the land, because of the greater thermal inertia of the seas. I would think variation in SST would be less than in comparable land records.

        Gridded analyses are produced by inferring missing data and smoothing available observations by making assumptions about the statistical properties of the observations using data from the relatively data rich period 1960-1990. This presumption that modes of variability over the entire period is encapsulated in this relatively short period of time begs the question of what is climate change, and may miss the longer internal multidecadal oscillations particularly in the data sparse Pacific Ocean.

        And there we see the beginnings of a really interesting post on the subject by somebody who knows a lot about it. The contrast is striking.

        These limitations seem like a good reason to try and reconstruct a longer and broader temperature record from things like ship’s measurements. More measurements ultimately means less assumptions, less extrapolation. Can the limits of the methods of collection be overcome to give useful information? Tony doesn’t give us any information to help draw a conclusion one way or the other.

      • Rob Starkey

        Robert

        I think you realize that the basic temp measurements have been confirmed to be unreliable and therefore only very limited conclusions can be accurately drawn from using these measures.

        If there is no opinion(s) claiming that the measures were consistently and accurately recorded/reported, I do not understand your position. Those familiar with the data state it is bad data, recorded inconsistently….is there any point in disagreeing???

      • Nebuchadnezzar

        Hi Rob,

        “If there is no opinion(s) claiming that the measures were consistently and accurately recorded/reported, I do not understand your position.”

        It’s worth looking at the log books of ships from the First World War that are being digitised at the moment:-
        http://www.oldweather.org/how_to_take_part
        http://blogs.zooniverse.org/oldweather/2011/03/better-than-the-defence/

        The consistency and accuracy of reporting is perhaps surprising.

      • Rob Starkey

        The 1st link was of no value, but the 2nd link describing the measurements taken by the HMS Defense was interesting. It appears that they tried to take and record measurements accurately although those looking at the data realize errors were made. The article cites:
        “But looking closely at the image, we can also see some errors, both ours and those of the mariners writing the logs in the first place. We can spot our own errors because each log page is transcribed by at least three people, and when those three people disagree, someone has made a mistake”

        A couple of points I’d ask. Do you believe that other ships were as diligent at taking readings and recording measurements as HMS Defense? I would guess not, and even the measurements of the Defense had errors. What was the relative accuracy of the measurement instruments used by the Defense? Are those instruments of sufficient accuracy to be used for the purpose(s) they are being put to today?

        Personally, since I have not read much on the topic, I defer to the person who has (Curry) who would have no motive to claim that the data was unreliable if she thought it was otherwise.

      • Nebuchadnezzar

        “Do you believe that other ships were as diligent at taking readings and recording measurements as HMS Defense?”

        Hi Rob, The first link was there because it tells you how to take part and that way you get to see a lot of log books. The hand writing varies in quality, but they were certainly diligent in taking and recording measurements. How a navy log book compares to a merchant marine is another question.

        The measurements from the Defense had errors, but they were specific kinds of errors that it was easy to detect. The difficult ones are questions like, how well calibrated was their thermometer? What was their bucket like? Were they even using a bucket? From looking at the observations from just the Defense it isn’t possible to say. Looking at a whole bunch of different ships might help or some other point of comparison.

      • Rob Starkey

        btw–I also initially accepted the positions of the scientists whose work I read back in the late 1990s on the subject of climate change. As I heard others disagree, I read independently on the topic and believe that my current position reflects my “informed position”.

      • Unfortunately the “informed position” very often appears to be only partially informed, in that it is based on the long list of potential problems, but not on knowledge of how well these problems have been taken into account by the scientists, and of how significant they remain when they are taken into account as well as they really are.

        There is no doubt that many scientific papers are deficient and in error, but again that alone cannot tell, how seriously the present main stream view is affected.

        The state of science is not perfect, but it’s much better than many “informed positions” tell.

      • Robert,
        You are losing.

      • Good God, 0.3 degrees accuracy, no way. I have been on dives where you swim through a thermocline or a current and there has been a 15 or more degree difference in temp. It is like turning off the hot or cold water when taking a shower. Anyone who believes the ocean is easy to get representative measurements from has never spent any time as a diver. It is a complex body of water and I don’t trust anyone who tells me they have it figured out with 0.3 degrees of accuracy.

      • “Do you for one minute believe that the uncertainty in global average sea surface temperature in the 19th century is 0.3C? I sure as heck don’t”

        Are you a quote quote expert on sea surface temperatures too? Do any quote quote experts support your opinion? Have they been published anywhere?

        If not why not? Is there a quote quote conspiracy afoot?

      • Read the links I provided. In particular, see Fig. 12 of the Brohan et al. paper; these were the numbers used in the AR4. Then read Rayner et al. (which includes as coauthors virtually all the major players in producing SST data sets), which describes all the shortcomings of the data. You have to do some homework here and actually read stuff and then analyze it. I’ve done my homework and Tony has done his.

      • “You have to do some homework here and actually read stuff and then analyze it.”

        I’d rather expect the person claiming the quote quote experts got it wrong to do this.

        I read the Rayner et al. article and sure, I see them discussing the problems with historical SSTs; what I don’t see them do is support a position like

        “Do you for one minute believe that the uncertainty in global average sea surface temperature in the 19th century is 0.3C?”

        So again I have to ask – if Brohan et al is wrong where is the published counterpoint? Pointing to another article where various problems are discussed doesn’t really cut it for someone like me whose primary experience of sea surface temperatures is dipping my toe to see if it’s ok to swim.

        You don’t expect me to simply accept your claim based on authority do you?

      • Sharper00,
        Do you realize that what you claim skeptics do, you are in fact doing?

      • FYI here are my papers on sea surface temperature:

        Webster, P. J., C. A. Clayson and J. A. Curry, 1996: Clouds, radiation and the diurnal cycle of sea surface temperature in the tropical western Pacific Ocean. J. Clim., 9, 1712-1730. pdf

        Clayson, C.A. and J.A. Curry, 1996: Determination of surface turbulent fluxes for TOGA COARE: Comparison of satellite retrievals and in situ measurements. J. Geophys. Res., 101, 28,503-28,513. (pdf)

        Curry, J. A., C. A. Clayson, W. B. Rossow, R. Reeder, Y.C. Zhang, P. J. Webster, G. Liu, and R. S. Sheu, 1999: High-resolution satellite-derived dataset of the surface fluxes of heat, freshwater and momentum for the TOGA COARE IOP. Bull. Amer. Meteorol. Soc., 80, 2059-2080. (pdf)

        J. A. Curry, et al., 2004: SEAFLUX. Bull. Amer. Met. Soc., 85 (3), 409–424. (pdf)

        Inoue, J. and J.A. Curry, 2004: Application of Aerosondes to high-resolution observations of sea surface temperature over Barrow Canyon. Geophys. Res. Lett., 31, Art. No. L14312. (pdf)

        My expertise is on the modern SST record and remote sensing of SST (both satellite and aircraft). Therefore I appreciate Tony’s perspective on the historical in situ SST measurements.

      • Rob Starkey

        LOL….just a tad bit of overkill here

      • Which ones address uncertainty in the historical sea surface temperature record?

      • Live by Appeal To Authority, Die by Appeal To Authority.

        Andrew

      • Rob Starkey

        Geez….try reading. Anything else you are too lazy to do that you want her to do for you?

      • So Rob if I pick through the papers and discover none of them relate to the topic would you consider that important?

        Better yet, what if I simply assert that’s the case and challenge you to read them all and show their relevance? I mean, you are being skeptical of Dr Curry’s claims, aren’t you? You’re not just going to accept her position based on authority, are you?

      • So let’s get this straight – you trust the word of an expert, just as long as their name isn’t Curry?

      • Rob Starkey

        sharperoo-

        Since one of the coauthors of the reports offered them to you to verify the point that she believes the SST data is unreliable, I would start by reading the papers. If you find information in those reports that shows she believes the SST data is consistently reliable, then you should point that out to her as an inconsistency in what she has written.

        If, on the other hand, you find that she has consistently pointed out that the data is unreliable, then it would seem reasonable to discount any conclusions based upon that data. Because I like data, yes, I will probably actually read all the reports.

      • @Peter

        “So let’s get this straight – you trust the word of an expert, just as long as their name isn’t Curry?”

        I would like to see the published analysis of an expert not their word. Dr Curry doesn’t like the 0.3 degree uncertainty and has suggested this is supportable from another paper without saying how. She hasn’t even provided an alternative number much less one with work to support it.

        @Rob

        “Since one of the coauthors of the reports offered them to you to verify the point the she believes that the SST data is unreliable, I would start by reading the papers.”

        I could have easily googled Dr Curry’s publishing record.

        I note you have avoided the question – if it should turn out that none of them concern the topic would you consider that significant? Are you prepared to make an argument predicated on the direct relevance of those papers or not?

      • Rob Starkey

        Sharperoo

        I have only read through half of one of the papers so far. It is completely relevant….a bit dry….but absolutely on topic.

      • @Rob

        The first paper uses models to analyse the influences on Diurnal temperature cycles. Where, exactly, in it are you seeing anything relevant to the reconstruction of historical SST data?

      • Sharperoo,

        Are you now suggesting that Dr Curry is being less than honest with you?
        What would be her motive? To get one over on an anonymous non-entity like you? I don’t think so

      • @Peter

        “Are you now suggesting that Dr Curry is being less than honest with you?”

        I believe Dr Curry’s point was “I have published papers on sea surface temperatures, therefore I am an expert therefore shut up”.

        “To get one over on an anonymous non-entity like you? I don’t think so”

        I do enjoy reading about how important humility is on this blog.

      • Rob Starkey

        Sharperoo

        I cannot copy and paste portions of the PDF to show you, but there are comments throughout the first paper that indicate that exactly how you take measurements will have a significant impact on the resulting measurements. Do you not read this???

      • Rub his nose in it, why don’cha :-)

      • maksimovich

        The uncertainty due to sampling errors in the VOS (voluntary observing ships) data is well described in the literature; that the uncertainty is so great is a significant problem, e.g. Gulev 2007.

        http://journals.ametsoc.org/doi/abs/10.1175/JCLI4010.1

      • curryja says: “I think SST data is very dubious before 1950.”

        I’ll add that the SST data is questionable before the satellite era, which starts in November 1981. And as you mentioned, the source data is too sparse in the Southern Hemisphere, there are too many corrections, and there is too much infilling in the datasets (HADISST and ERSST.v3b) that attempt to fill in the blanks.

        HADSST3 and HADISST2 are due out this year. They are supposed to have better uncertainty estimates.

      • Bob Tisdale

        Spot on. We seem to have many people here who believe all data from official sources is automatically good. Unfortunately that is demonstrably untrue with regard to much of the historic data, such as land records and SST’s. I rarely comment on information gathered during the satellite era as personally I think that’s much too short a period to draw any meaningful conclusions from.

        There is of course another group that automatically thinks all official data is bad, but that’s another discussion.
        tonyb

      • Nebuchadnezzar

        “Do you for one minute believe that the uncertainty in global average sea surface temperature in the 19th century is 0.3C? I sure as heck don’t, especially when the uncertainty estimate in the 19th century for land temps is more like 0.8C (a more realistic number but still possibly too low). ”

        Uhhuh. So, how reliable are they?

      • stay tuned, working on a new post, will be up later today

      • Judith

        You didn’t work in advertising in a previous life, did you? You’ve trailed this new post three times now and we’re all impatient to see it :)

        tonyb

      • yes, everybody, keep holding your breath :)

      • Paul Dunmore

        Judith, thanks for the reference to the Brohan et al (2006) paper. It may have been superseded by the new papers in the “Critique of the HADSST3” thread, but a comment on the Brohan paper may still be of interest. When you referred us to it, you did not warn us of its errors, so I assume that you were not aware of them. As I read the paper, I repeatedly had the sudden feeling “that can’t be right!” and with a little thought it became clear that it was not right. The comments below all describe issues that are immediately evident on a careful first reading, and should have been evident to the referees. I think that none of them would have been made if the team of authors had included a statistician (the kind of real statistician that one finds in statistics departments rather than climate science units). This may not be a complete list, because eventually I gave up working through the paper. (Yes, I know, but I have a day job and it is not in climate science.)

        [19] There will be a difference between the true mean monthly temperature (i.e., from 1 min averages) and the average calculated by each station from measurements made less often; but this difference will also be present in the station normal and will cancel in the anomaly. So this does not contribute to the measurement error.

        This claim is false, as is easily shown by numerical counterexamples. To make the illustration simple, consider a planet with a 10-minute day, where during the “normal” period the temperature is zero (on some scale) except at minutes 3 and 7, when it is 1. Suppose further that after the normal period the temperature is 0.2 at all times. If temperatures are measured every minute, the average temperature is 0.2 in both periods, so the anomaly in the later period is 0; but if temperatures are measured only at minutes 3 and 7 then the average normal temperature is 1 and the later temperature is 0.2, so the anomaly is -0.8. So measuring at only certain times of day can cause errors which do not cancel out in the anomalies. I think this will generally be a problem if the variation of temperature throughout the day changes over the years. This may occur at random anyway, but it is also specifically predicted by climate models that more warming occurs at night as the planet warms. I do not know how to estimate this source of error, and it may be quite small; but it is clear from the quote that Brohan thinks the error is necessarily zero, and this is certainly wrong.
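
        For concreteness, here is the same counterexample as a short R sketch (using exactly the hypothetical numbers above):

        normal_full <- c(0, 0, 1, 0, 0, 0, 1, 0, 0, 0)  # "normal" period, minutes 1-10
        later_full  <- rep(0.2, 10)                      # later period: 0.2 at every minute
        mean(later_full) - mean(normal_full)             # anomaly with every-minute sampling: 0
        obs <- c(3, 7)                                   # sample only minutes 3 and 7
        mean(later_full[obs]) - mean(normal_full[obs])   # anomaly with sparse sampling: -0.8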

        [23] Hypothesizing that the distribution of adjustments required is Gaussian, with a standard deviation of 0.75C gives the dashed line in Figure 4 which matches the number of adjustments made where the adjustments are large, but suggests a large number of missing small adjustments. The homogenization uncertainty is then given by this missing component (dotted line in Figure 4), which has a standard deviation of 0.4C. This uncertainty applies to both adjusted and unadjusted data, the former have an uncertainty on the adjustments made, the latter may require undetected adjustments.

        [26] So the homogenization adjustment uncertainty for any station is a random value taken from a normal distribution with a standard deviation of 0.4C….

        Since the errors in the adjusted and unadjusted data are caused by quite different processes, it is not necessary that their uncertainties should be equal. Specifically, if required adjustments follow a Gaussian distribution but larger required adjustments are preferentially detected and made, then the undetected adjustments do not have a Gaussian (normal) distribution. Suppose that the odds of making a required adjustment are proportional to the size of the adjustment needed. Then it is easy to show that the distribution of the detected adjustments is proportional to exp(-(z/0.75)^2/2)*abs(z)/(k+abs(z)) for some constant k, which is a bi-modal distribution like that shown in Figure 4, and the distribution of the undetected adjustments is proportional to exp(-(z/0.75)^2/2)/(k+abs(z)), which is not Gaussian, though it may be close if k is large enough. If the estimation errors made by the scientists doing the adjustments have a Gaussian distribution, then the Gaussian distribution of the adjustments must be different from the non-Gaussian distribution of the unadjusted data.
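
        To make those shapes concrete, here is a rough R sketch of the three distributions just described (the value of k is my own arbitrary choice, purely for illustration, and the densities are left unnormalised):

        k <- 0.2                                       # assumed detection constant (illustrative only)
        z <- seq(-3, 3, by = 0.01)                     # size of required adjustment (C)
        required   <- exp(-(z/0.75)^2/2)               # hypothesised Gaussian of required adjustments
        detected   <- required * abs(z)/(k + abs(z))   # preferentially detected: bi-modal, like Figure 4
        undetected <- required / (k + abs(z))          # what remains: peaked at zero, not Gaussian
        matplot(z, cbind(required, detected, undetected), type = "l", lty = 1,
                xlab = "adjustment (C)", ylab = "relative density")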

        [29] … If data for a station are available for N of the 30 possible months during the period from which the normals are taken, …

        There are 360 months during the normal period 1961-1990. This obvious typo is only of interest because it suggests that none of the five authors, the editor, or the referee(s) read the paper with enough care to spot it.

        [29] The station temperature in each month during the normal period can be considered as the sum of two components: a constant station normal value (C) and a random weather value (w, with standard deviation sigma). If data for a station are available for N of the 30 possible months during the period from which the normals are taken, and the w’s are uncorrelated; then for stations where C is estimated as the mean of the available monthly data, the uncertainty on C is sigma/sqrt(N). Testing this model by selecting stations where complete data are available for the climatology period and looking at the effect on the normals of using only a subset of the data confirmed that the autocorrelation is small and the model is appropriate.

        [30] The station normals used fall into three groups [Jones and Moberg, 2003]. The first group are those where data are available for all months in 1961–1990; these normals are given an uncertainty of sigma/sqrt(30). The second group are those where data are available for at least 15 years in 1961–1990 (enough data to estimate a normal); these normals are given an uncertainty of sigma/sqrt(N) where N is the number of years for which there is data. The third group are those where too few data are available in 1961–1990 to estimate a normal…..

        I am very tentative about this comment, because “I don’t understand” may be my fault, not the authors’. But if anyone understands what this extract means, I would really appreciate a translation.
        The authors jump back and forth between “months” and “years”. If they really mean “months” then “30” should be “360” and “N” should be the number of months for which there is data. But of course month-to-month departures from C (the “weather” values w) are highly correlated, because it is warmer in all of the consecutive summer months; if they indeed confirmed that the autocorrelation of monthly temperatures is small then they did it wrong. Perhaps they meant that they found that the estimated uncertainty followed the expected formula if months were dropped out at random, which might be true. But if some stations had complete data for the summer months but some missing data in the winters, then averaging the available months gives a biased estimate of C and hence of the reported anomalies; the bias may be much greater than the estimated uncertainty. Perhaps there are no stations like that, but the authors do not say so, or indeed address the point at all. If some stations are biased upward and others downwards because of different patterns of missing months, the overall bias might roughly cancel out but the uncertainty would be increased above their estimate.
        If they mean “years”, then during the normal period the temperatures were systematically rising and the w values are also autocorrelated; the bias problem would arise if more months were missing in the early years than in the later years. I even considered whether they really mean “days”, with about 30 days in a month – this would justify identifying w as a weather component, and the autocorrelations might be weaker, but I could not make sense of their discussion at all on this interpretation.
        The confusing discussion makes it hard to know what they did, but I cannot see that their estimate of uncertainty can be justified by their apparent evidence. But perhaps it is just me, and there is a valid interpretation which is obvious to someone who works in the field.

      • Nebuchadnezzar

        “There are 360 months during the normal period 1961-1990. ”

        I think there’s one normal value for each of the 12 calendar months. The January normal is the average of the 30 Januarys between 1961 and 1990.

        Do you have a feel for how important these effects are likely to be?

      • Paul Dunmore

        Thanks, Nebuchadnezzar, that makes sense of their whole discussion – an object lesson that “I don’t understand” is not the same as “it is wrong”. The only point that remains of my concern about paragraphs [29] and [30] is that temperatures were rising from January to January over the period, so there would still be a bias if missing January records were not spread evenly across the 30 years. If the missing years were different for different stations this would cause the overall uncertainty to be greater than their estimate. To work out how much, I think you would have to know how many observations were missing, how they were distributed over the years, and how the temperatures changed – one can estimate the latter from the final published temperature dataset, but I don’t see how to get the others without the raw data.
        But again, thanks for your hint.

      • Nebuchadnezzar

        Thanks Paul,
        Nebuchadnezzar

      • Paul Dunmore

        Nebuchadnezzar, a real statistician could develop a formula; the best I can offer for the size of the effects is a cheap and dirty simulation. The CRUTEM3 global land temperature record gives the anomaly for each January in the normal period. We can use Brohan [18] for the monthly uncertainty of one station, about 0.03C. I assumed that no data was missing, that 1% of all monthly observations were missing, …, that 10% of all observations were missing. I left the missing observations completely random because I have no idea whether or how they tend to cluster.
        The following table (R code later) shows expected means, standard deviations, and standard deviations scaled by what they should be with no missing observations, first assuming that the real temperature is the same every year (as Brohan assumes), then assuming that the temperature each January follows the actual CRUTEM3 pattern.

                          Temp same every year        Temp increases 1961-90
        Miss%       Mean    Stdev   Scaled       Mean    Stdev   Scaled
          0      -0.0458   0.0047   0.9985    -0.0458   0.0047   0.9985
          1      -0.0458   0.0047   1.0041    -0.0458   0.0055   1.1672
          2      -0.0458   0.0048   1.0105    -0.0459   0.0062   1.3204
          3      -0.0458   0.0048   1.0157    -0.0459   0.0068   1.4531
          4      -0.0458   0.0048   1.0219    -0.0459   0.0074   1.5653
          5      -0.0458   0.0048   1.0258    -0.0459   0.0080   1.6939
          6      -0.0458   0.0049   1.0317    -0.0459   0.0085   1.8093
          7      -0.0458   0.0049   1.0376    -0.0459   0.0090   1.9148
          8      -0.0458   0.0049   1.0449    -0.0459   0.0095   2.0177
          9      -0.0458   0.0050   1.0510    -0.0459   0.0100   2.1167
         10      -0.0458   0.0050   1.0559    -0.0459   0.0104   2.2072

        In both cases, the estimated mean is not affected by missing observations but the uncertainty (standard deviation) gradually increases as more observations are missed. If the temperature does not change over the years, 10% of missing observations causes about a 5% increase in uncertainty – that is Brohan’s formula. But if we allow the temperatures to rise over the 30 years, the uncertainty more than doubles by the time 10% of the observations are missing. I did not go to 50% of missing observations (which is when Brohan stops using his formula), but clearly by that time the uncertainty will be many times larger than he claims.
        This is very crude, but may give a sense of how much uncertainty is added to the estimation of the normal temperature (and hence all of the anomalies) of a station with missing observations. Again, the expected anomalies are not affected at all, only the uncertainty.

        R code and data for the above:

        set.seed(12345)
        sd.month <- 0.2/sqrt(60) # See Brohan [18].
        x <- rnorm(n=300000, sd=sd.month) # Same starting set for different amounts of missing data.
        ina <- order(runif(300000)) # Set of entries to be deleted.
        Ct <- c(-0.059, -0.087, -0.116, -0.121, -0.269, -0.112, -0.195, -0.259, 0.102, 0.063, -0.128, -0.141, 0.156, -0.241, -0.19, -0.369, 0.03, 0.01, 0.016, 0.149, -0.033, 0.031, 0.214, 0.023, -0.057, -0.075, -0.006, 0.24, -0.046, 0.095)
        # This is the mean temperature for each January in normal period, CRUTEM3 global averages.
        Ct.mean <- mean(Ct)
        sd0 <- sd.month/sqrt(30) # Base standard deviation.
        cat(Ct.mean, sd0, "\n")
        results <- NULL
        for (ip in 0:10) {
        t <- x; if (ip > 0) is.na(t) <- ina[1:(3000*ip)] # Delete ip% of values at random.
        t <- matrix(t, nrow=30, ncol=10000) # 10000 samples of 30 years.
        Cest0 <- colMeans(t, na.rm=TRUE) + Ct.mean # Omit missing values in computing estimates.
        Cest <- colMeans(t + Ct, na.rm=TRUE) # With variations in base temperatures. Cest0 has same base temperature every month.
        results <- rbind(results, c(ip, mean(Cest0), sd(Cest0), sd(Cest0)/sd0, mean(Cest), sd(Cest), sd(Cest)/sd0))
        }
        print(round(results,4))

    • so it’s the “Pshaw! nonscientists!” argument.

      Robert,

      There is much information in Tony’s post that is very interesting. I know you are used to being told what to think by your Greenpeace approved so-called “scientists” but sometimes the facts get presented and you have to use your own brain.

    • Robert

      This is another disappointing guest post. There’s just no argument here.

      Huh?

      Read it again, Robert.

      Max

  13. Robert

    Have you actually read the links? They include material from Hadley, original books from the 1850’s, internal Met Office memoranda and numerous statements from people at the sharp end of collecting data.

    Most aspects of this record are flawed, from the methodology through to the actual number of records. The standards that were not met are those such as the ISO 9000 quality standards: that the data should be fit for their intended purpose, in this case determining temperatures to fractions of a degree.

    If you think the random throwing of buckets and the equally random sampling of the temperature is a scientific process which can then be ‘corrected’ I am afraid we will have to disagree.

    Why don’t YOU demonstrate why this poor data is actually good instead of throwing ad homs around? And what on earth do you believe to be my political bias?

    tonyb

    • “Most aspects of this record are flawed, from the methodology through to the actual number of records.”

      Most real data sets are flawed. That’s why so much of science is about identifying and quantifying things like error, bias, and significance. You don’t know very much about that, clearly, and that’s fine. I’m sure you’re very well-informed about your own profession.

      “If you think the random throwing of buckets and the equally random sampling of the temperature is a scientific process which can then be ‘corrected’ I am afraid we will have to disagree.”

      If you think you know what a scientific process is, we will have to agree to disagree. You seem to believe that science is some sort of religion, or ritual, or that all data in science must conform to the standards of a good single-variable laboratory experiment. That’s a common mistake among laypeople, but it has nothing to do with science.

      Your case reminds me of a quote on the wall of my alma mater: “The greatest obstacle to discovery is the illusion of knowledge.”

      • Robert

        There’s ‘flawed’ and there is ‘irretrievably flawed’. Throwing a bucket over the side of a ship and collecting temperatures in the way they often were is no part of a scientific process that can be used to inform important matters of policy. Even climate science data has to conform to some notion of accuracy. Doesn’t it?

        tonyb

      • randomengineer

        You seem to believe that science is some sort of religion, or ritual, or that all data in science must conform to the standards of a good single-variable laboratory experiment.

        It would be one thing to be Mozart or fictional tv character Greg House and be demonstrably brilliant and a complete ass, but — and maybe it’s just me here — I’m just not feeling the “demonstrably brilliant” bit here.

    • tony, have you actually spent any time with ICOADS?

      The simple fact is we have the data we have from a historical perspective. And yes, those records do not meet standards developed years later. How could they?

      Nobody who works with this data believes that it is accurate to 1/10 of a degree. For example, when I say that the average temperature for a grid cell is 30.3 C, it does not mean I believe I know this to 1/10 of a degree. It simply means this: GIVEN all the data we have, the estimate that minimizes the error is 30.3.

      So, we have an estimate for 1850 SST. It’s the best estimate we have, based on the best flawed data that we have. It suggests that the SST was cooler in 1850 than it is today. If you have a better way of estimating the temps for SST in 1850, have at it. I suspect you don’t want to argue that it was warmer then than now. If your argument is that the error bars on the estimate need to be larger, well that’s another matter. For that, you have to do some work to show it.
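
      To illustrate what “the estimate that minimizes the error” means (with made-up readings, not actual ICOADS values): the grid-cell figure is simply the number closest to the available observations in the least-squares sense, which is their mean.

      obs <- c(30.9, 29.4, 30.7, 30.1, 30.4)        # hypothetical readings in one grid cell (C)
      sse <- function(m) sum((obs - m)^2)           # total squared error of a candidate value m
      optimise(sse, interval = c(25, 35))$minimum   # ~30.3
      mean(obs)                                     # the same value, 30.3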

      • steven mosher

        I think you miss the point TonyB has made.

        It is NOT whether or not it was warmer or cooler than now, it is simply that the temperature record for 70% of the globally and annually averaged land and sea surface temperature anomaly has been essentially worthless for most of the time period reported.

        IOW the GAAALASSTA should be taken with a large grain of salt, even if the land portion were not fraught with its own set of problems (which it is, of course).

        Since this record is all we have, however, we have to live with it. But we should acknowledge the great uncertainty.

        Is this uncertainty greater than the reported rate of change itself? Who knows?

        All we can say is that there is sort of an independent check since satellite measurements started in 1979, which confirm a global warming trend since then in the troposphere (albeit at a slower rate than the surface record).

        Max

      • K Scott Denison

        steven mosher

        This is by far the most absurd argument I’ve read from you. My analogy is this: there has been a gas leak in your home. To determine if the air is safe, we open different windows and doors one at a time, insert an open mason jar (while holding our breath) into the window/door, screw on a cap and withdraw it. We then use several chromatographs, for which we know nothing of their calibration histories, and read out the % oxygen, methane, etc. We decide the average concentration of oxygen in the house is 5.3%.

        My question to you: Are you going in the house?

      • Mosh

        Several people have already commented on your post. I appreciate that you take a very sophisticated view of certain data and how it can be made useful. I don’t have a better way of estimating temperatures in 1850, but it would be more honest for scientists to admit that SST’s this far back cannot be viewed as any sort of scientific measure, for a host of reasons already discussed.

        The problem is that data such as this is being viewed as a definitive record. I have myself heard representatives from the Met Office and Govt talk about how sea temperatures have been steadily rising, with no caveats at all as to the merits of the raw data.

        No, I don’t want to argue that it was warmer then than now; equally, I am uncomfortable with the notion that current warming started around 1880 or so and has shown a sharp uptick. Historical evidence seems to indicate that the warming trend has been progressing since a low point in the early 17th Century, albeit with numerous reverses and advances.
        tonyb

    • “If you think the random throwing of buckets and the equally random sampling of the temperature is a scientific process”

      You know … that’s a thought. What estimate of error do we get if we treat the existing data set as a series of random samples, across space and time, of the temperature [singular] of the earth? I.e., imagine that the data set was the result of some aliens dropping thermometers at random across the surface of the seas, repeating daily [or hourly], to create a time series.
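
      As a back-of-the-envelope sketch of that thought experiment (every number below is an assumption, not an ICOADS statistic): if the readings really were independent random samples of a single quantity, the standard error of a monthly mean would just be the combined spread divided by the square root of the sample size.

      sd_field <- 8      # assumed spread of true SST across the sampled ocean (C)
      sd_obs   <- 1      # assumed error of a single bucket reading (C)
      n        <- 2000   # assumed number of readings in the month
      sqrt(sd_field^2 + sd_obs^2) / sqrt(n)   # ~0.18 C under these (strong) independence assumptions

      The catch, of course, is that the real samples are anything but random in space and time, which is why such a simple formula understates the true uncertainty.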

  14. The graphs of ocean temperature anomalies that I have seen show that there was a rapid increase around 2003/2004, just about when the old temperature record based on older collection methods ended and the new ARGO-based records began. See, for example, http://wattsupwiththat.com/2011/06/19/tisdale-on-2011-ocean-heat-content-and-the-giss-miss/ This seems suspicious, but I haven’t read any detailed analysis of the efforts to splice the records. Is this abrupt increase an artifact of data collection methods? Is it an actual increase in ocean heat content? Can anyone point me to an analysis?

    • Nebuchadnezzar

      Pauld, there’s a difference between sea-surface temperature measurements and ocean temperature measurements. The ARGO array measures temperatures down to around 2000m, which is pushing any definition of sea-surface temperature way past its breaking point.

      There are records of sea surface temperatures from a variety of sources for the past 30 years. Ships, buoys and satellites of various kinds.

  15. Theo Goodwin

    What I take from this thread is that Robert is here to prevent discussion of the topic introduced by Tony in his guest post. It is a simple and plain case of bullying. Too bad. I would have enjoyed discussing Tony’s work.

    • Another transparent attempt to get rid of the pesky Robert

      • lolwot –
        The pesky Robert has played the troll, disrupting the conversation, on many threads. It does sometimes get entertaining – and very often hilarious. But it also interrupts some of the more interesting threads, so attempts to get rid of him are likely to be transparent. I’m not sure he’s contributed anything of value to any thread.

        But your agita wrt the subject is noted –

  16. Nebuchadnezzar

    Tony,

    Another reference to add to your list:
    http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html

  17. Good post TonyB. Just for fun you should think of doing a paleo style reconstruction of the instrumental era based on satellite era data.

  18. The inaccuracy of individual measurements is not a significant problem as long as there are no systematic changes over years that cannot be corrected by well justified adjustments, because we are interested in the averages over many independent measurements. The average can well be accurate to 0.1 C, while the individual measurements have errors of more than 1 C.
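
    A minimal numerical illustration of that point, assuming the individual errors really are independent and unbiased (the crucial caveat):

    set.seed(1)
    readings <- 15 + rnorm(200, mean = 0, sd = 1.2)  # 200 readings, each with ~1.2 C random error
    mean(readings) - 15                              # error of the average: typically of order 0.1 C or less
    1.2 / sqrt(200)                                  # theoretical standard error of the mean, ~0.085 C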

    The difference between cooling-water intake and bucket measurements is a clear example of an error that can easily be corrected, because there is extensive overlap of the two methods, but gradual changes in the way the bucket measurements have been made are a more serious problem.

    One thing that keeps on disturbing me is the continual referring to the Knox and Douglass range “four different algorithms that the recent trend ranges from –0.010 to –0.160 W/m2”, in spite of the fact that only one of these four algorithms makes any sense, giving the result –0.0405 ± 0.1488 W/m2. In particular, the value –0.160 W/m2 is based on an unbelievably stupid error of statistical analysis for a published scientific paper (the extra negative trend comes from regular seasonal variations within each year).
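
    As a toy demonstration of how that kind of artefact arises (my own construction, not a reconstruction of the Knox and Douglass calculation): fit a straight line to a pure annual cycle spanning a non-integer number of years and a spurious trend appears from the cycle alone.

    t_yr  <- seq(0, 5.25, by = 1/12)    # hypothetical 5.25-year monthly record
    cycle <- sin(2 * pi * t_yr)         # regular seasonal cycle, no underlying trend
    coef(lm(cycle ~ t_yr))["t_yr"]      # nonzero slope (about -0.02 per year here)
    coef(lm(cycle ~ t_yr + sin(2*pi*t_yr) + cos(2*pi*t_yr)))["t_yr"]   # ~0 once the cycle is modelled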

    • There is an old sailors saying –

      Below 40 degrees south there is no law.
      Below 50 degrees there is no god.

      We can modify it to the task of historical ocean heat content.

      Below 40 degrees south the data is sparse.
      Below 50 degrees south the data doesn’t exist.

      We have the same problem with the land surface temperature records for Africa and much of South America.

      • If the L&C paper was wrong because it excluded the extra-tropics, then the historical sea temp data is wrong for the same reason.

    • K Scott Denison

      So let’s cut to the chase then… How much of your annual salary will you wager that using the same methods, instruments, sampling rates, and numbers of observers from the 19th century would yield a mean global SST accurate to within 0.1 C versus the satellite record today?

      • I don’t know how serious the systematic errors are or to what extent they can be eliminated. My purpose was only to point out that some of the arguments presented are not at all as significant as most people writing comments to this thread seem to think.

        I don’t know the answers, because I’m not a specialist on these issues, but most of those who have strong views are likely to know even less. That doesn’t stop them from drawing their strong conclusions, but it makes me disregard their views totally.

        The original post concludes: “.. the methodology itself is potentially severely flawed”. It may potentially be, but is it really? The post doesn’t provide the information to answer that question. Neither does it tell whether such an answer is available from existing science. The problems are known to every scientist who has studied the historical SST data. Listing the well-known issues doesn’t tell us how well they have been answered. Here is the difference between guest postings to a blog and science.

      • Pekka

        It is up to scientists to recognise that the data they use in their climate studies, obtained from third parties, may potentially have serious problems. It is up to the scientists who produce this data to address its core weaknesses and strengths. In this case, SST’s before a certain date (which could be the mid 1950’s, or could be the 1980’s as Bob Tisdale believes) are such a mixed bag, and so sparse, that using the material as a reliable means to inform policy is highly problematic.

        Whilst scientists may of course be aware of the provenance of the material they use, this uncertainty can become diluted as it goes up the chain.

        I have been in govt seminars where the SST record has been cited as incontrovertible factual material to the audience of policy makers and other Govt officials who will implement policy.

        Judith tries to explore these uncertainties, which is what makes this an entertaining and informative blog. There are few aspects of the climate narrative that are as uncertain as historic SST’s.
        tonyb

      • Tony,

        My point is that you get nowhere by listing issues that are well known and that have been taken into account by scientists. How well they have succeeded is not always as clear, but the only way of getting useful information is by looking at the present state of the science, not at the problems that existed before the science had done what it has now done.

        Your discussion of historical CO2 data fits the same model of behavior. You note that the value of those old measurements has been criticized, but you continue to present the data as a reason to doubt mainstream results. This is dishonest, because the reasons for dismissing the value of those early measurements are strong. It has been demonstrated beyond reasonable doubt that those local measurements are not indicative enough of the average CO2 concentration of the atmosphere and that they cannot be used in estimating changes in concentration.

        When mainstream science is criticized, the work that has been done by the scientists must be taken into account and the weaknesses shown in the analyses presently accepted by the mainstream. Demonstrating weaknesses somewhere else is just straw-man argumentation.

      • I disagree, Pekka, for I have reasonable doubt about the AGW-accepted myth of unchanging natural CO2 concentrations. In fact, as I see it, AGW is founded on a self-serving interpretation of poor data. That the data is poor is mentioned in passing, if at all, and then the AGW interpretation becomes the foundation for the climate research. The deep uncertainties fall by the wayside.

        For example, look at the central role the 100 year mean temp curve from HadCru plays in climate science today. That curve, which has no statistical validity, is taken to be an established fact, the fact to be explained (with 90% certainty no less). This is an absurd interpretation, yet here we are.

        Taking poor data and turning it into policy prescriptions has always been the method of environmentalism. This is precisely what we see here.

      • Pekka

        I think you misunderstand the purpose of the CO2 article, which was to examine whether such measurements were embedded in society or were merely carried out by a few unremarkable scientists for their own esoteric purposes.

        I was startled to find such a wealth of background material, that the CO2 issues were apparently very well understood, and that measurements were thought to be around the same as today’s. In that respect the settled science of the time has been overturned by modern thinking, which is surely the value of constantly questioning the status quo?

        I can only be guided by the considerable weight of historic evidence of the apparent levels of CO2 concentration at the time. So what I find curious is that highly qualified scientists, using increasingly sophisticated methods, managed to get it so wrong for 130 years until Charles Keeling came along and, without any experience, almost immediately got it right.

        Now of course their methods of sampling are very different and it may well be that if an ‘old style’ measurement could be converted to a ‘new style’ measurement (with all that we know these days) that they might still be hopelessly adrift.

        I for one would be perfectly satisfied by an audit of the most reliable of the old measurements in the same way that Dr Muller is in effect auditing land temperatures.

        tonyb

      • Pekka,

        CO2 hockey stick is just as bad as that other hockey stick, IMO. What is your opinion of the “temperature” hockey stick(s)?

      • To whom are they not that important?

  19. There is some fascinating history in this post on SST measurements, and more than enough evidence to cast doubt on the reliability of those measurements prior to the middle of the twentieth century. Since that time, and even more, since the advent of satellite measurements 30 years ago, accuracy has improved. The satellite data are still subject to calibration uncertainties, but add an important component to measurements taken at the surface.

    Despite residual uncertainties, the conclusion that SST has been rising over the past 50-60 years is well supported by the consonance of multiple data sources. That trend has been punctuated by periodic bumps, dips, and flat intervals, reinforcing the notion that attempting to ascertain trends from data spanning a decade or less is likely to be unrewarding.

    It’s important, I believe, to distinguish the accuracy of absolute temperature measurements from the accuracy of temperature anomaly measurements. Different technologies, sampling different water depths, may yield different absolute measurements, but the reliability of anomaly measurements is reinforced when the trends produced by each run parallel. An example is the good concordance of Night-time Marine Air Surface temperature data and data derived from water sampling. Over the multiple decades, trends of OHC and sea level rise determined by satellite altimetry have also tended to parallel SST measurements, lending more credence to the latter. The error bars are still large, but it would be incorrect to conclude that the data are too uninformative to be useful as a reasonably good guide to the pace of global warming over the past 60 years.

    Tony Brown’s historical analysis is more useful, in my view, than his evaluation of current and recent SST data. The latter suffers from the omission of some of the concordances mentioned above, but also from serious flaws of omission that appear to bias the analysis in the direction of finding no trend rather than a rising one. Two examples may help to make the point. He states,

    “In 2004, Willis published a time series of ocean heat content showing that the temperature of the upper layers of ocean increased between 1993-2003. In 2006, he co-piloted a follow-up study led by John Lyman at Pacific Marine Environmental Laboratory in Seattle that updated the time series for 2003-2005. Surprisingly, the ocean seemed to have cooled. Not surprisingly, says Willis wryly, that paper got a lot of attention, not all of it the kind a scientist would appreciate. In speaking to reporters and the public, Willis described the results as a “speed bump” on the way to global warming”

    This citation appears to document an ocean cooling episode. It is therefore unfortunate that Tony failed to mention that in 2007, Willis et al published a correction indicating that the apparent cooling reflected instrumental errors. The link is at Correction To “Recent Cooling of the Upper Ocean”.

    The paper stated,
    “Most of the rapid decrease in globally integrated upper (0–750 m) ocean heat content anomalies (OHCA) between 2003 and 2005 reported by Lyman et al. [2006] appears to be an artifact resulting from the combination of two different instrument biases recently discovered in the in situ profile data. Although Lyman et al. [2006] carefully estimated sampling errors, they did not investigate potential biases among different instrument types. One such bias has been identified in a subset of Argo float profiles. This error will ultimately be corrected. However, until corrections have been made these data can be easily excluded from OHCA estimates (see http://www.argo.ucsd.edu/ for more details). Another bias was caused by eXpendable BathyThermograph (XBT) data that are systematically warm compared to other instruments [Gouretski and Koltermann, 2007]. Both biases appear to have contributed equally to the spurious cooling.”

    Tony’s post then continues to imply that the alleged cooling has continued to the present –

    ” However, the interpretation given above has itself been superseded by this paper from November 2010 ‘Recent energy balance of earth’ by Knox and Douglass, who after researching the data from the Argo floats, show that for the most recent period 2003-8 ocean heat content was indeed shown to be still cooling-not warming.”

    However, that is not what the “most recent data” actually show. Rather, the NOAA Global Ocean Heat Content Data Through March 2011 show the recent trend to be flat, not declining, and not dissimilar to past bumps and dips in an OHC record that shows an overall rising trend over multiple decades.
    This is an important topic, and it would be particularly worthwhile if Dr. Curry could persuade someone currently active in research in this area to lend his or her expertise to a future post on SST trends and their measurement.

    • Hi Fred

      If you notice I conclude my article with the comment;

      “To date, what with the adjustments cited in the paper, the shortness of the project, and the difficulty in obtaining on going data, this method of collecting sea temperatures is as yet unproven.”

      I think that’s pretty even-handed: a few years of data collected in radically new ways can’t yet be judged, let alone against older methods which are measuring something completely different. It’s the old apples and oranges problem, and there’s an awful lot of mixed fruit in climate science.

      tonyb

    • I’m curious to hear Tony Brown’s comment on a point I made above. He cited a 2006 paper alleging ocean cooling but failed to cite the 2007 correction showing that the 2006 conclusions had resulted from instrumental artifacts. My question is why. I think Tony is biased against accepting mainstream conclusions about global warming, but I don’t think he’s dishonest. My surmise is that perhaps he failed to cite the correction because his main sources of climate information don’t come from the scientific literature but from secondary sources that are biased and never mentioned the correction. I hope he’ll comment on that, including some indication of where he gets his information.

      • Come on Fred, you can do better than that. I cited an article within my paper that led directly here;

        http://earthobservatory.nasa.gov/Features/OceanCooling/

        This tells the story of the problems encountered from the viewpoint of Willis and NASA. I cite articles in the hope that people will read them, and anyone looking at this document will see a detailed examination of the problem in sequential order, and within it is the correction article you cite.

        I also make reference to the WUWT article in which the same problem is examined from a variety of angles. As I said in the comment immediately above yours, I also pointed out that the Argo buoys are too new and too different to be judged yet.

        I wouldn’t call any of that biased. I don’t think I’ve ever seen the likes of Real Climate or Tamino citing the opposite point of view.

        tonyb

      • Mirror, on the wall
        Of Fine Fred’s water closet.
        Confirming bias.
        ==========

      • I think that’s an incomplete answer, Tony, but I accept it. I guess I’m wondering why and how “ocean cooling” came to your attention, while “it was a mistake” did not.

        As to your statement that “I wouldn’t call that biased”, you should consider how onlookers would view making a claim that later turned out to be false and failing to note the correction. I preferred to think that you simply were unaware of the correction. Was I wrong?

      • Fred

        I don’t know how else to answer you other than by pointing out once again that I referenced an entire SITE that went into the background of Willis’s paper, from his and NASA’s perspective, and it included the very paper that you seem a little fixated on.
        tonyb

      • You could just answer the question directly. Did you know about the correction?

      • Joshua

        I select my links carefully and read them through thoroughly. The very reason I selected the NASA web site, out of many hundreds available, was that the entire set of articles on it goes chapter and verse into how the ‘cooling’ mistake came about and how it was corrected. Anyone reading the link provided would know in great detail all about the problem and its resolution.

        Argo is a completely new concept. Whether 10 years of cooling or 10 years of warming would show anything relevant is a matter of debate. It’s a very short timescale on which to pass any judgements. I guess as a minimum you’d need thirty years to show a trend.
        tonyb

      • Tony – I have to notice that you still haven’t given a direct answer even though you have been asked the direct question four times now. That, in itself, is more than a little curious.

        But to continue – you say that you select and read your links thoroughly. That would imply that you did, in fact, know about the correction.

        You didn’t just cite the paper – you quoted Willis’s comments with reference to his documentation of a cooling trend. If I had just read your post, and Fred hadn’t posted, I wouldn’t have known that the authors of that study themselves later said that the cooling trend you referred to was in error.

        It seems to me that for you to know about the correction – even as you discussed the data that were later corrected – and not mention the fact that they were corrected, was a highly, highly questionable thing to do.

        You didn’t just rely on the reader to follow your link to read about a documented “cooling trend,” you wrote about that “cooling trend” specifically, so you shouldn’t just rely on your reader to follow your link to read that the authors themselves said that the “cooling trend” documented was due to measurement errors.

        This all seems to rather obviously be a problem. I think you should address that problem directly rather than pretend that it isn’t a problem.

      • Joshua, the “correction” has been “corrected” confirming that the ocean has, indeed, cooled slightly since ARGO measurements were installed.

        Max

      • manacker –

        The original report noted a cooling trend. The same authors later corrected that report, saying that the trend they identified was due to measurement error.

        If a later study then also documents a cooling trend, it does not “correct” the “correction.” The original report was still in error in its analysis of the data examined.

        This reminds me of when “skeptics/deniers” write that recent cooling “cancels out” extensive periods of warming in the past.

      • Joshua,

        if the cooling this year had not cancelled out the warming of the El Nino last year the modellers would be delirious and you would almost have me convinced that there was something to AGW. Sadly for you, while the corrupted temperature record does appear to show a minimal warming easily explained by the poor observations, it simply doesn’t approach what the modellers were projecting.

        The fact that there have been several rounds of adjustments should not be lost on any of us. As Tony states, the Argo system simply has not been in use long enough to give us a strong indication of how accurate it is. Since there is nothing else to calibrate it against over most of its range, conclusions should not be reached at this point.

        He had to rewrite his paper before they would publish it. THEN he found the problems with a small group of the floats which reduced the cooling yet again. Now there is yet another adjustment which added a slight cooling. Nope, Tony is exactly right, there simply isn’t enough quality data to draw conclusions. The fact that Climate Scientists keep whipping these dead horses simply shows how desperate they are to try and prove their fantasy.

      • kuhnkat.

        Cooling from one period does not “cancel out” heating from an earlier period.

      • So that is how global warming works Joshua?? No amount of cooling in between periods of warming can reduce the temperature of the earth??

        Try boiling water by heating it up by 10c and cooling it by 10c repetitively. I don’t think you will ever get to boiling, but, I have never actually tried it.

        Apparently you are not talking about the same basic concepts I am. Please explain what you mean as I am too ignorant to guess.

      • Fred

        Further to my reply at 6.38pm. You have actually hit on an interesting problem caused when trying to condense potentially long URL’s into an efficient quick reference. My original link read as follows:
        http://earthobservatory.nasa.gov/Features/OceanCooling/

        I suspect if you had seen the NASA reference you would immediately have read it and seen that the information you wanted included was there, as was a great deal more. However, it became translated into ‘article’ and I suspect you didn’t read it as it may have looked less interesting.

        tonyb

      • Tony – forgive me for being persistent, but you said you didn’t “know how else to answer [Fred]” and I’m making a suggestion. I think your answer to the direct question is pretty important for gaining a purchase on the objectivity of your post.

        Did you know about the correction?

      • BlueIce2HotSea

        Hi tonyb.

        I enjoyed your paper. No offense intended; however, I agree with Fred that it would be improved by an overt mention of the Willis correction – unsettled, evolving state of the art and all that.

        BlueIce

      • Yes I agree. I had assumed that people would read the article I cited which was wholly and specifically about how the apparent problem was resolved and chosen for that very reason. However some people have obviously only looked at the headline and not the meat, so it would have been useful to have made a more specific reference.
        tonyb

      • Judith –

        I was wondering if you’d comment on how tony has handled this question about his failure to note that a “cooling trend” he identified in his post was, in fact, identified later by the authors as resulting from instrumental errors.

        He appears to have known about the correction, yet he spoke about that cooling trend nonetheless in his post (without mentioning the correction).

        Would you be inclined to overlook such an approach if it came from someone in the “climate establishment?” Methinks not.

        Here is the material originally sent to me by Tony, for which I did some editing and formatting:

        This currently short-lived experiment had a controversial beginning, as the buoys initially recorded ocean heat content that was dropping (although not strictly under the same criteria as SST’s). This was an apparent anomaly, as ocean temperatures were expected to show a rise, commensurate with computer models. This deviation was explained in this article, which also provides useful information on what % of the whole (such as thermal expansion and glacier melt) is attributed to each aspect of sea level change.

        “In 2004, Willis published a time series of ocean heat content showing that the temperature of the upper layers of ocean increased between 1993-2003. In 2006, he co-piloted a follow-up study led by John Lyman at Pacific Marine Environmental Laboratory in Seattle that updated the time series for 2003-2005. Surprisingly, the ocean seemed to have cooled.
        Not surprisingly, says Willis wryly, that paper got a lot of attention, not all of it the kind a scientist would appreciate. In speaking to reporters and the public, Willis described the results as a “speed bump” on the way to global warming, evidence that even as the climate warmed due to greenhouse gases, it would still have variation. The message didn’t get through to everyone, though. On blogs and radio talk shows, global warming deniers cited the results as proof that global warming wasn’t real and that climate scientists didn’t know what they were doing.”
        http://earthobservatory.nasa.gov/Features/OceanCooling/

        However, the interpretation given above has itself been superseded by this paper from November 2010 ‘Recent energy balance of earth’ by Knox and Douglass, who after researching the data from the Argo floats, show that for the most recent period 2003-8 ocean heat content was indeed shown to be still cooling-not warming.

        I think that Tony’s concluding statement re the Argo buoys is appropriate:

        To date, what with the adjustments cited in the paper, the shortness of the project, and the difficulty in obtaining on going data, this method of collecting sea temperatures is as yet unproven.

      • Except, 2003 to 2008 is not the most recent period, and the 0-to-700 meter ocean is, well, a whole bunch deeper.

      • The next time someone from the “climate establishment” fails to note that a study they referenced was identified, by the authors themselves no less, as being flawed, I’ll remind you of this post, Judith.

      • JCH,

        since 2008 is not the most recent data, would you please provide us a link to it?? Or is the issue that there is not a more recent paper on the subject??

        I could send you off to the European Sat sea level and the SST’s, but, I don’t think you would like them.

      • Except that the cooling observed by Knox and Douglass is almost totally due to an explicit error in statistical analysis. When the error is corrected, what is left is a small fraction of the most simple-minded standard deviation and thus totally insignificant.

      • Fred Moolten

        You need to get a bit more up-to-date.

        You forgot to cite the 2009 paper by Craig Loehle:
        http://multi-science.metapress.com/content/cw17434027026726/

        Ocean heat content data from 2003 to 2008 (4.5 years) were evaluated for trend. A trend plus periodic (annual cycle) model fit with R2 = 0.85. The linear component of the model showed a trend of −0.35 (±0.2) × 1022 Joules per year. The result is consistent with other data showing a lack of warming over the past few years.

        An even more recent paper by Douglass + Knox confirms this:

        We determine Earth’s radiation imbalance by analyzing three recent independent observational ocean heat content determinations for the period 1950 to 2008 and compare the results with direct measurements by satellites. A large annual term is found in both the implied radiation imbalance and the direct measurements. Its magnitude and phase confirm earlier observations that delivery of the energy to the ocean is rapid, thus eliminating the possibility of long time constants associated with the bulk of the heat transferred.

        Longer-term averages of the observed imbalance are not only many-fold smaller than theoretically derived values, but also oscillate in sign. These facts are not found among the theoretical predictions.

        Three distinct time intervals of alternating positive and negative imbalance are found: 1960 to the mid 1970s, the mid 1970s to 2000 and 2001 to present. The respective mean values of radiation imbalance in W/m2 are −0.15, +0.15, and −0.2 to −0.3. These observations are consistent with the occurrence of climate shifts at 1960, the mid-1970s, and early 2001 identified by Swanson and Tsonis.

        So the latest period (which was measured by the new, comprehensive ARGO system) shows slight cooling.

        Will this cooling become a longer-term trend or will it simply be a “speed bump”, as Josh Willis has called it?

        I don’t know the answer to that question. You don’t know. Neither does Josh Willis.

        Roger Pielke has stated that the current cooling period is much too short to constitute a falsification of the “hidden in the pipeline” postulation of Hansen et al. or of the AGW hypothesis itself, so we will just have to wait and see.

        Patience is a virtue, Fred, as is acknowledging that there are things one does not know.

        Max

      • There has been no cooling, Max, as of March 2011.

      • Fred,

        there doesn’t need to be cooling to prove the models are poor to useless, but, as Tony pointed out, as good as the Argo system is it will NOT settle the issue. It doesn’t go deep enough and without hard data to back up the lack of flux thru the 0-700m depth you Alarmists will never admit you are wrong.

      • Max,
        The Knox and Douglass paper has a few more months of Argo data than Loehle, and those months were warm enough to cancel the cooling trend calculated by Loehle. Due to the error in analysis, Knox and Douglass also present a value showing cooling, but as I said, that’s due to erroneous statistical analysis.

        The method used by Loehle is also suspect, as he introduces smoothing and fitting to a sinusoidal oscillation. Neither of these steps is really acceptable, but I don’t think the error is large in his case, while the smoothing introduces a large error when done as Knox and Douglass do it in their first and obviously preferred estimate (it’s the only one taken to Figure 1 of their paper).

      • Fred Moolten,

        “…you should consider how onlookers would view making a claim that later turned out to be false and failing to note the correction….”

        On the Threatened Island Nations thread you posted a comment in which you said:

        “Ocean acidification due to rising atmospheric CO2 concentrations is a major cause of coral reef damage, and adds to the damage from other forces, human and ‘natural’. I did include one acidification reference above, at 6/23, 10:59 AM….”

        When I asked you for any examples where the damage caused by CO2 was measured, while controlling for other causes of damage, you wrote:

        “The atoll damage may be due mainly to the other factors mentioned, but on a global scale involving corals in general and their role in the marine ecosystem, the damage is mainly from CO2-mediated ocean acidification, and to a lesser extent, warming. The best suggestion I can offer you for the data is to visit the references I linked to above in the 10:59 AM comment, plus additional references and evidence in the Barrier Island thread.” (A curious construction: “the atoll damage may be mainly due to the other factors mentioned, but on a global scale…the damage is mainly from CO2….” Cognitive dissonance in all its glory.)

        You then cautioned another commenter that “…it’s a good idea for readers to visit the references on ocean acidification to judge for themselves.”

        I took your advice, read the articles you suggested (not just the extracts), and pointed out to you that they directly contradicted the point you had twice made.
        http://judithcurry.com/2011/06/23/threatened-island-nations/#comment-79179

        I have noticed on later posts that you have stopped claiming that there is current major coral damage due to acidification from increased atmospheric CO2:

“As I read the evidence, the harm to marine life we have inflicted from warming and acidification is still modest, and probably less than the effects of overfishing, polluting chemicals and discarded plastic.”

        From “major damage” and ” damage mainly from CO2″ to “still modest” and “less than the effects of over fishing, pollution….” I was just wondering if you intended to “note the correction” as you sagely advise above?

        Mr. Pot, may I introduce Mr. Kettle?

I believe I’ve been consistent, Gary. Outside of areas where corals interact closely with humans, the damage comes mostly from acidification and warming, according to the references I’ve cited, with recent excerpts quoted. At inhabited atolls, most of the damage is almost certainly from other human activities such as overfishing, pollution, and coral mining. On a global scale, the damage from acidification and warming is probably still modest, although I suspect that, because the vast majority of corals are remote from humans, the damage there in aggregate probably outweighs the locally caused damage on atolls. I can’t say we know that for certain, because a relatively small number of remote sites have been studied.

        Rising CO2 and consequent acidification appears to have been the cause of massive coral devastation in the past, so the phenomenon of major damage from rising CO2 is documented in the paleoclimatologic record. Currently, pH is still high enough that the recent damage is probably only sporadic.

        You are welcome to believe that my later statements “corrected” my earlier ones, although I certainly disagree. In that case, however, the “correction” you claim is needed was cited, rather than hidden so that no reader would know it had been made. You’re barking up the wrong tree, Gary.

Since writing the above, I have reviewed the excerpts you quoted to see whether perhaps I had changed my conclusions as you implied. I noted that I had referred to a “major cause” of reef damage, where “major” signified one of the most important causes. It was not a statement about the extent of the damage itself, although I suspect the damage has been significant.

      • How can “It is happening on a global scale,” “major damage,” “mostly from,” and “mainly from” not be “statement[s] about the extent of the damage itself?”

      • “Outside of areas where corals interact closely with humans, the damage comes mostly from acidification and warming according to the references I’ve cited, with recent excerpts quoted.” The problem is that those articles you cited say the opposite.

        On the other thread you similarly claimed that “the damage is mainly from CO2-mediated ocean acidification.”

That is the issue I asked you about. You referred me to the articles you had cited in that same thread. I reviewed them and they contradict what you claimed. The article you claimed said ocean acidification was the major current threat to the viability of coral reefs in fact stated:

        “If conditions were stabilized at the present [CO2]atm of 380 ppm, that is, Coral Reef Scenario CRS-A (Figs. 1B and 5A), coral reefs will continue to change but will remain coral dominated and carbonate accreting in most areas of their current distribution. ”

I don’t expect you to make any correction, or admit any error (the blog’s ceiling might cave in). This was just my way of pointing out the irony of your advice above.

You pepper your comments with “I suspect” and “I can’t say we know that for certain” to sound reasonable, but then make categorical statements like “the damage is mainly from CO2-mediated ocean acidification.”

        As others have noted, you are unfailingly polite. But you are as dogmatic and incapable of self correction as anyone in the debate.

Your last paragraph makes no sense at all to me. You had no intention of correcting yourself, and did not do so just because you contradicted yourself. But if you did need to correct yourself, you have somehow already done so openly?

        Huh?

      • I think you’re trying too hard, Gary. The statements I’ve made are corroborated by the papers I’ve cited with no exceptions that I’m aware of. My only thought is that you haven’t read them carefully, because I’ve now quoted extensive excerpts to document the relevant points. You also failed to note my choice of the word “threat” to refer to future harm rather than what has already happened. Ocean acidification from rising CO2 is certainly a major threat to coral viability, and the articles I’ve quoted made that clear – in fact, that was one of the most important points of some of the papers. Again, though, others can read those papers to judge for themselves.

        At this point, there’s enough data cited so that others can draw their own conclusions. If you want to disbelieve the evidence, that is your prerogative, but then there may be no point in a continued discussion, which probably won’t interest too many other readers.

      • “My only thought is that you haven’t read them carefully, because I’ve now quoted extensive excerpts to document the relevant points.”

        I haven’t read them carefully? I am beginning to wonder if you have read them at all. I not only read them, I quoted from them extensively, on the Threatened Island Nation thread and here.

        And you’ve “quoted extensive excerpts?” Where? You not only haven’t quoted those articles on either thread in this discussion, you haven’t even responded to the quotes from them that I have posted.

        By all means, please quote an article that supports your repeated claims that the main cause, and a major cause, of current damage to coral reefs is increased acidification resulting from increased atmospheric CO2. (The comments I questioned, and that you have repeated here, were in the present tense by the way, which was the only reason I asked you in the first place.)

        This is getting to be like the Japanese earthquake thread: broad assertions; citations to articles that undermine your point; and no retreat. I would have let it pass if not for your comment criticizing another commenter for not admitting a correction.

        I agree though that this issue is not likely interesting to anyone else. So absent a citation to an article that actually supports your earlier claims, I won’t have any more to say on the matter. You may have the final word if you so choose.

      • My final thought is to let others judge, Gary. To depersonalize it, when you become too eager to find fault with anyone, no matter whom, you’re in danger of misjudgments – misinterpretation, misrepresentation, and misstatements all of which characterize your comments above.

      • I have to say – this is classic blog argument.

        As some kind of a bizarre defense against Fred’s pointing out that tony failed to note that a study he referenced was erroneous, Gary accuses Fred of, horror of all horrors, correcting his posts to more accurately reflect the status of the science.

        Leaving aside the veracity (or lack thereof) of Gary’s assertions (and the somewhat comical turn that Fred denies having done something so horrible as to correct his assertions), what does it say that Gary thinks that correcting one’s assertions is something one shouldn’t do, and something that is equal to failing to note that studies you reference were identified by the authors as being flawed?

        Ya’ just gotta love blog arguments.

      • Joshua,

If you feel the need to comment on something I write, please try to keep yourself from blatantly misrepresenting what I wrote, difficult though you may find that to do. Nowhere, anywhere in these comments, did I “accuse[] Fred of, horror of all horrors, correcting his posts to more accurately reflect the status of the science.” I wrote exactly the opposite.

        Fred and I both agreed that he did not “correct” himself. That was the entire point of the discussion. I maintained that when he changed his comments on ocean acidification without admitting his earlier comments were incorrect, he did the same thing he criticized tonyb for allegedly doing. Fred maintained that his previous comments that acidification was “the main cause” and a “major cause” of reef deterioration on a “global” scale, were not incorrect, that they were consistent with his later comments that acidification’s effect on reefs was “still modest” and “less than the effects of over fishing, pollution,” and that the articles he cited demonstrate that his earlier comments were accurate, and did not need to be corrected.

In CAGW land, “major” and “global” = “modest,” and “the main cause” = “less than other causes.” Fred denies the need to correct because he claims there was no change in his comments. You try to argue that he admits an error and corrects himself, and that I criticized him for doing so. You misrepresented what both of us wrote.

Now I could give you the benefit of the doubt and believe that this latest instance of you misrepresenting what I wrote so completely was the product of abysmally poor reading comprehension skills on your part. But given your track record, and the fact that the meaning of what I wrote was obvious even to Fred (though he disagreed), I will go with knowing that you just need to lie about what people write because you don’t have the ability to argue with what they have actually written.

        I would prefer you go back to just not responding to my comments. But if you must, try to be honest about it. OK?

It’s also important to distinguish between coral reefs specifically and “marine life” in general. As you quoted from my comment, our damage to marine life from acidification and warming is much less than from pollution, from overfishing (which has almost extinguished some fish species), and from discarded plastic that ends up in the bellies of fish and can eventually kill them. The main threat from ocean acidification probably resides in the tiny organisms at the bottom of the food chain, whose demise can eventually lead to reduced growth and survival further up, but has not yet done so to a point where we can clearly see the difference.

I don’t want to belabor the point, but the fact that ocean acidification has now been dragged into multiple threads reinforces my conviction that it deserves a thread of its own. At this point, multiple comments, six references I cited (with relevant passages quoted from three of them), and several papers cited by others are already scattered between the Threatened Islands thread, the Barrier Islands thread, this thread to a small extent, and a few places elsewhere, and it would be worthwhile collating the material in one place for further commentary. Ideally, experts who work in this area might be invited – e.g., to write a guest post.

My own view has been summarized to some extent by Gary above, but inaccurately, I believe. It is that there is evidence that corals remote from human habitation are suffering damage; that the damage is probably significant in many areas but still limited in distribution; that a major cause can be inferred from the referenced evidence to be ocean acidification, probably combined with heat stress in some areas; that damage in the vicinity of inhabited atolls is mainly due to other human excesses such as pollution and coral mining; and that ocean acidification is a serious threat to marine life – primarily calcifying organisms, including corals and many other species – if atmospheric CO2, which is now at 390 ppm, continues to rise to substantially higher levels.

        Finally, although strong criticism by anyone of the content of anyone else’s comments is acceptable, accusing someone of lying oversteps the boundaries. I think Gary should acknowledge that he should not have leveled that accusation at Joshua, even if he and Joshua disagree on the substance of an issue.

      • Fred, I agree that ocean acidification is worthy of its own thread, but I don’t have any expertise on this nor the time to investigate. We can have a discussion thread, or someone can volunteer to do a post?

Fred – whether or not the forum would be better served if people didn’t accuse one another of lying is an open question.

        But I can assure you that I couldn’t care in the least whether Gary accuses me of lying. He made similar accusations when I pointed out to him that attacking Muller’s integrity is…..well…..attacking Muller’s integrity.

      • Pooh, Dixie

        May I take it that “ocean acidification” as used here is synonymous with “decline in alkalinity”?

      • Pooh, Dixie – Despite some concern expressed in the blogosphere that “ocean acidification” was a term chosen to scare people, it is the now well-established term used by all individuals working in this area of science, as well as those knowledgeable about the area, chosen simply because it is the increase in hydrogen ion concentration rather than the pH value that poses a threat to marine calcifying organisms. If you want to call it something else, you can, but it would simply make some people wonder whether you were referring to a different phenomenon.

        Whatever term you choose, it shouldn’t be “decline in alkalinity”, because rising CO2, while increasing ocean hydrogen ion concentration and reducing pH, does not change ocean alkalinity. The latter term is defined as the total ability of ion species to buffer added acid. For the ocean, major species are bicarbonate and carbonate ions. Alkalinity is a function of bicarbonate concentration, but of twice carbonate concentration, because carbonate is doubly ionized (it can absorb two hydrogen ions). Dissolved CO2, hydrated to carbonic acid and then dissociated, has the potential to add both bicarbonate and carbonate, thereby actually increasing ocean alkalinity. However, because of the hydrogen ion increase (i.e., the pH reduction), conversion of carbonate to bicarbonate reduces alkalinity to about the same extent as carbonic acid dissociation adds to it. The net result is that alkalinity remains largely unchanged.

        A different term, “basicity”, is actually correlated with pH, and you could legitimately refer to the pH reduction as a “decline in basicity”.
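For readers following the terminology in this exchange, a compact and deliberately simplified statement of the relations being argued about is given below. It omits minor alkalinity contributors such as borate, which is why the alkalinity line carries an approximation sign rather than an equality.

\[
\mathrm{CO_2(aq)} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{H^+} + \mathrm{HCO_3^-},
\qquad
\mathrm{HCO_3^-} \;\rightleftharpoons\; \mathrm{H^+} + \mathrm{CO_3^{2-}},
\]
\[
\mathrm{pH} = -\log_{10}[\mathrm{H^+}],
\qquad
\mathrm{TA} \;\approx\; [\mathrm{HCO_3^-}] + 2\,[\mathrm{CO_3^{2-}}] + [\mathrm{OH^-}] - [\mathrm{H^+}].
\]

The factor of 2 on carbonate is the “twice carbonate concentration” point above, and the last expression is the carbonate alkalinity whose behaviour under added CO2 is debated in the comments that follow.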

      • I am reading a book (“We Are What We Speak”), that speaks quite a bit about the relationship between language and tribalism.

        It is stunning how many people supposedly focused on finding answers, instead waste a lot of energy on reinforcing tribal boundaries through meaningless arguments about language.

      • Joshua
I mean this politely, but judging by the number of times you introduce the subject into your posts, you do seem to be a little obsessed by the idea of tribalism and its application to the climate wars.

      • No offense taken, Rob. That is my main point of interest – no doubt.

        Perhaps my interest is “obsessive,” but I think that tribalism is highly relevant to how the science is being interpreted.

        Judith agrees, in fact, and that is what first attracted me to her blog. However, she disagrees in that she sees some “vast asymmetry” in the impact, if not the prevalence of the tribalism.

        I think that many of the comments here undermine her perspective on that “vast asymmetry.”

      • Joshua

I agree that some of the positions adopted by people are tribal in nature, but not exclusively so. There are also plenty of well-reasoned skeptics who argue knowledgeably from their own point of view without consideration of the skeptical tribe.

With regards to tribal blogs, I am sure that you could find the asymmetry Judith describes if you go to the many sites habitually occupied by the warmists. Many of those sites ban skeptics or delete their posts. In my view, this is extreme tribal behaviour and indicates a lack of confidence in the warmist position with regards to the science and the politics. Contrast that with the relatively light moderation here. Finally, in my view the level of debate here is mostly superior to that on other skeptical sites like WUWT. Just my thoughts, regards, Rob

      • Whatever term you choose, it shouldn’t be “decline in alkalinity”, because rising CO2, while increasing ocean hydrogen ion concentration and reducing pH, does not change ocean alkalinity. The latter term is defined as the total ability of ion species to buffer added acid. For the ocean, major species are bicarbonate and carbonate ions. Alkalinity is a function of bicarbonate concentration, but of twice carbonate concentration, because carbonate is doubly ionized (it can absorb two hydrogen ions). Dissolved CO2, hydrated to carbonic acid and then dissociated, has the potential to add both bicarbonate and carbonate, thereby actually increasing ocean alkalinity.

        Perhaps I should not continue on these details which are not the subject matter of this thread, but a small correction anyway.

Most of the above is correct, but the last few words are not. Adding CO2 to the ocean does not increase alkalinity, as the increase in HCO3- and CO3– is compensated by opposite changes of equal size in other components of alkalinity.

This sentence doesn’t make sense either: “it is the increase in hydrogen ion concentration rather than the pH value that poses a threat to marine calcifying organisms”, because the pH value is defined as the negative logarithm of the H+ concentration. Thus it’s not even conceptually possible to change the H+ concentration without changing pH.

Similar problems may be present in the other details of the chemistry described, when such details are discussed. Ocean chemistry is pretty complex, and making brief statements about it often leads to errors.

      • Rob – I would never claim an exclusivity to the tribalism – far from it; it is my perspective that tribalism is inherent (it’s human nature) on both sides of the debate. My argument is that claims of “vast asymmetry”: (1) run counter to what we know about human nature and, (2) are perhaps, in fact, a viewpoint rooted in tribalism.

        I visit blogs of all sorts. From what I’ve seen, complaints about moderation philosophy run the gamut from justified to childish whining about appropriate treatment

That said, I appreciate the relatively light moderation here. Judith has the patience of Job.

I disagree slightly with Pekka’s comments below, although we have the same general perceptions. The use of the term “ocean acidification” is designed to highlight the role of increasing H+ concentration even though pH values are on the basic rather than acidic side of neutrality. The value of pH itself isn’t as informative a metric in my opinion, partly because it is above 7, but more importantly because it signifies a state, whereas “acidification” signifies a process, and it is the latter that creates the threat to marine calcifiers. This is mainly because mean pH values will pose a severe threat of widespread damage only if they decline below the current mean of 8.1, and are less harmful at current levels. “Ocean acidification” is thus used to emphasize the future dangers of a current process. It’s a minor semantic distinction and probably not very important to elaborate on. Reference to pH is reasonable as long as the importance of changing pH is noted.

Regarding his other point about compensatory changes that leave alkalinity unchanged, I think he was saying the same thing I did, except that I referred to increases in bicarbonate and carbonate (not CO2) as the ions that add to alkalinity, and the reduction in carbonate from added H+ as the compensatory mechanism. Added H+ also reduces the concentration of other anions, and he’s right in implying that it isn’t carbonate alone that’s involved. Carbonate, however, appears to be the most important species involved in the alkalinity compensation.

      • In thinking about it, maybe Pekka and I don’t agree about the alkalinity effects, but I’m not sure. Adding CO2 reduces rather than increases ocean carbonate concentrations, by converting some carbonate to bicarbonate. Perhaps I misunderstood him, but the important point is that carbonate reduction is central to the maintenance of ocean alkalinity in the face of increasing CO2.

      • Fred,
It doesn’t make sense to specify which of the three values (H+ concentration, OH- concentration or pH) is the one that has influence in the oceans, because these three numbers are always in a fixed relationship with each other. None of them can be preferred on the basis of the chemical reactions either, because all possible reactions occur simultaneously and the same outcome can be expressed in several ways using various combinations of the possible reactions. Your statement that one of the values, the H+ concentration, is the important one and the others less important is false. There is finally only one balance, and any one of these values gives the same information about the balance. The influence of acidity on the carbonate chemistry can equally well be formulated in terms of any one of these values.

Alkalinity is also a somewhat arbitrarily defined concept, because there is no unique natural definition of buffering capacity. The capacity is defined by specifying a clearly acidic pH level (like 4.5) as the point where all the buffering capacity is used, but choosing such a value is somewhat arbitrary, as I already said. Fortunately the value of alkalinity is not particularly sensitive to the selection of the limiting pH value.

Adding CO2 does not affect alkalinity, because transforming CO2 to HCO3- happens either by removing one OH- from the water or by splitting one H2O molecule and releasing H+. Both of these reactions add one unit of alkalinity as HCO3- and remove one unit, either by removing OH- or by adding H+. Thus alkalinity doesn’t change. Similarly, going from CO2 to CO3– releases two H+ ions, making the net effect zero again. Therefore adding CO2 to the water does not change the alkalinity in either direction.

      • Pekka – I have to disagree partially on both counts, but maybe it’s just a matter of emphasis. If you review my comment, the point was not that hydrogen ion concentration and pH are unlinked – of course they are linked – but rather that it is the change in H+ and pH signified by the term “acidification” that best describes the threat to marine calcifying organisms. Citing a pH of 8.2 doesn’t describe that process as well as pointing out the increase in H+. It is the latter that combines with carbonate to convert it to bicarbonate. This is about descriptive terminology, not acid/base chemistry.

        Regarding the second item, I’ve pointed out that adding CO2 to seawater reduces carbonate concentration rather than increasing it. It is this reduction that most compensates for the increase in bicarbonate and thereby maintains total buffering capacity at about the same level. We agree that total alkalinity doesn’t change, because CO2 adds both H+ and anions in equivalent amounts. If you also agree that carbonate reduction from the added H+ is the principal phenomenon accounting for the constant alkalinity, then we agree on everything.
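A minimal numerical sketch of the point the two commenters agree on, assuming a carbonate-only alkalinity and rough, illustrative equilibrium constants for warm surface seawater. The constants, the alkalinity of 2300 µmol/kg and the DIC values below are my assumptions for illustration, not numbers from any cited paper. Holding total alkalinity fixed while dissolved inorganic carbon rises, the solved pH falls, carbonate falls, and bicarbonate rises.

# Illustrative carbonate-system solver; constants are rough assumed values only.
import numpy as np

K1, K2, Kw = 10**-5.86, 10**-8.92, 10**-13.2   # assumed seawater constants (~25 C, S=35)

def speciation(dic, h):
    """Bicarbonate and carbonate (mol/kg) for a given DIC and [H+]."""
    denom = h * h + K1 * h + K1 * K2
    hco3 = dic * K1 * h / denom
    co3 = dic * K1 * K2 / denom
    return hco3, co3

def solve_h(dic, ta, lo=1e-10, hi=1e-6):
    """Bisect (in log space) for the [H+] at which carbonate alkalinity equals ta."""
    def excess_alk(h):
        hco3, co3 = speciation(dic, h)
        return hco3 + 2.0 * co3 + Kw / h - h - ta
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if excess_alk(mid) > 0:   # too alkaline: [H+] is too low, raise the lower bound
            lo = mid
        else:
            hi = mid
    return mid

TA = 2300e-6                       # mol/kg, held fixed
for dic in (2000e-6, 2150e-6):     # rising DIC as CO2 invades the surface ocean
    h = solve_h(dic, TA)
    hco3, co3 = speciation(dic, h)
    print(f"DIC={dic*1e6:.0f} umol/kg  pH={-np.log10(h):.2f}  "
          f"HCO3={hco3*1e6:.0f} umol/kg  CO3={co3*1e6:.0f} umol/kg")
# Alkalinity is constant by construction; pH and carbonate fall as DIC rises.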

      • Fred,
The issues are turning to semantics again. That appears to be unavoidable as long as two people try to agree on what is important in a complex phenomenon, or on what is the best way to make it simpler.

I’m sure we could agree on the issues if they were represented in sufficient detail using reaction formulas and equilibrium constants. I think we could also understand why we appear to have the present disagreement if we started from the detailed description and each of us described the steps he makes in reaching his presentation. Right now I cannot understand your point of view, and you evidently cannot understand mine.

One reason I wrote my first message was to tell you explicitly that even people who are likely to agree on the actual full phenomena don’t necessarily understand your way of simplifying them and of saying what is most important in them.

        If there remains something to discuss further, my “random topics” or email messages may be a better channel.

  20. Robert’s a mess. Not as knowledgeable as Fred, not as nice as Fred, but just as subject to confirmation bias. Bring the A Team.
    ====================

This is slightly off topic, but I could use some help with references (journal articles and good blog posts) on the urban heat island effect that are critical of what is done by CRU, GISS and NOAA. Thx

    • Try to distinguish true UHI from the local heat island (LHI?) effects Watts and company are investigating. Lots of confusion there.

      • Peter Wilson

        David
        I am puzzled as to what distinction you draw between UHI and “local heat island” effects – in what way can a heat island effect be anything other than local? In what way are they irrelevant to the station siting issues being investigated by Watts etc.

        No one ever said UHI was anything other than local. The issue is the number of thermometers sited within “local” heat islands, giving the appearance of widespread rather than local heating. This must be the confusion you refer to.

      • Pooh, Dixie

        It would be widespread if “homogenization” or other adjustments smear the heat observations of the city into nearby rural areas.

      • Peter, it is the difference between the warming caused by the city of Atlanta, Georgia, for example (UHI), and the warming caused by a nearby air conditioner or paved parking lot (LHI). LHI can occur in remote rural locations. Many of the supposed statistical investigations into UHI have assumed it correlated with urban development. Towns over 50,000 pop for example. These are irrelevant to LHI, which is what Watts’s squad has been investigating.

    • Judith Curry 6/27/11, 5:27 pm; SST

      If you want to include the historical perspective of UHI, you might want to include the analysis by G. S. Callendar (1938).

    • (sorry if I am even more off topic) but is there any atmospheric relation of urban heat island and smudge pots in orchards? I’ve read that smudge pots do not work by heating the air. They work by producing a heavy smoke layer over the orchard which reduces the heat loss by radiation to the night sky. However, urban heat islands are not usually described like that but more by the surface changes. Just curious if anyone knows.

    • Big news on the UHI front at climateaudit.org
      ==================

    • UHI is a misnomer. It’s not only urban areas, but also rural that impact the thermometers (any land use will affect deltaT). It should be called anthropogenic local warming (ALW).

    • http://climateaudit.org/category/surface-record/uhi/
      http://climateaudit.org/2010/12/15/new-light-on-uhi/#more-12617
      http://www.nasa.gov/pdf/505254main_zhang.pdf

      “The compact city of Providence, R.I., for example, has surface temperatures that are about 12.2 °C (21.9 °F) warmer than the surrounding countryside, while similarly-sized but spread-out Buffalo, N.Y., produces a heat island of only about 7.2 °C (12.9 °F), according to satellite data.”

      “Forest, around 10C, grass 5-6C, desert weak and sometime heat sink.”

  22. Oh, hi Fred, what a relief. Flat OHC is more likely due to albedo than CO2, eh?
    ==========

  23. http://climateaudit.org/2009/01/20/realclimate-and-disinformation-on-uhi/

    steven mosher says: January 13, 2011 at 12:06 pm
    …The following paragraph of AR4 [by Jones and Trenberth] should be changed from:

    McKitrick and Michaels (2004) and De Laat and Maurellis (2006) attempted to demonstrate that geographical patterns of warming trends over land are strongly correlated with geographical patterns of industrial and socioeconomic development, implying that urbanisation and related land surface changes have caused much of the observed warming. However, the locations of greatest socioeconomic development are also those that have been most warmed by atmospheric circulation changes (Sections 3.2.2.7 and 3.6.4), which exhibit large-scale coherence. Hence, the correlation of warming with industrial and socioeconomic development ceases to be statistically significant.

    to:

    McKitrick and Michaels (2004) and De Laat and Maurellis (2006) demonstrated that geographical patterns of warming trends over land are strongly correlated with geographical patterns of industrial and socioeconomic development, implying that urbanisation and related land surface changes have caused up to 50% of the observed warming over land since 1979. However, the locations of greatest socioeconomic development are also those that have been most warmed by atmospheric circulation changes (Sections 3.2.2.7 and 3.6.4), which exhibit large-scale coherence. Hence, the correlation of warming with industrial and socioeconomic development may not have the level of statistical certainty those papers established.”

    As the paragraph stands Trenberth and Jones simply made stuff up…

    http://www.uoguelph.ca/~rmckitri/research/jgr07/jgr07.html

  24. As a former historian I enjoy Tony’s historical accounts of methodology and data collection, and join with him in concluding that we simply do not know the extent to which the air and the sea have warmed over the 20th century. We certainly can’t talk about GTAs that are expressed to three decimal places — and they represent a combination, an amalgam, of two quite different kinds of measurements, each with abundant error (of different kinds) and all needing smoothing, extrapolation to deal with missing information, and ‘adjustment’.

    You just shake your head. How could any competent scientist rely on this sort of base to propose a solid mathematical expression of warming over time?

    I repeat what I posted earlier. I know it’s great fun to argue at each other about what the trends (really) show about whether or not we have entered a cooling phase. If I were doing that myself I would only use the satellite data, and even they come with problems. But as someone schooled in ensuring that the data are as accurate as they can be, I defy anyone to go on a public platform and show how these absolutely basic temperature data can be relied upon for formulating social and economic policy.

    • “As a former historian I enjoy Tony’s historical accounts of methodology and data collection, and join with him in concluding that we simply do not know the extent to which the air and the sea have warmed over the 20th century.”

      Ask Theo Goodwin to inform you that the post was about data collection not statistical methods and therefore your conclusion is not warranted.

      • lolwot:

I don’t usually respond to snark, but on this occasion I write only to say that your comment is simply baffling. The phrase ‘data collection’ is mine, and in fact you quoted it. Tonyb’s essential point is the same as mine: that the data are so awful that very little of meaning can be derived from them. They are so bad that when they are combined with land data, which are no better (though for different methodological reasons), the outcome is virtually without meaning.

        What are you trying to get at?

Read up a bit to see the discussion between Robert and Theo Goodwin, where Robert is basically saying that you guys are just jumping from problems in data collection to conclusions. The step in between being neglected is statistical analysis, which is part of the solution to the problems listed in the article.

Theo Goodwin responds by telling Robert the article is about “data collection”, not statistical analysis, and that therefore Robert is committing a strawman and trying to change the subject.

Interesting, because everyone else seems to be fine with interpreting the article as implicitly saying that the SST records are wrong and that no amount of statistical analysis can change that.

Whether meaningful information can be extracted from the data cannot be judged from what Tonyb tells in his posting. That can be decided only through careful analysis of the data and of all the possibilities for extracting real information out of it. The problems listed by Tonyb certainly make that more difficult, but claiming that the task is impossible is a totally different matter.

Judith gave a reference to scientific articles by Rayner et al and Brohan et al, which discuss the matter and give some estimates of the accuracy of the global average SST time series. These articles and other similar scientific articles can be used as a starting point for getting at the actual value of the SST data. If weaknesses are found in the data analysis and its error estimates, they can be pointed out, but it’s of no real value to list the well-known issues as Tonyb did, while at the same time giving a badly incomplete view of the actual state of the science.

      • Pekka,

        OK, I shouldn’t have used the word ‘impossible’ though Tonyb does. Let me say ‘monumentally difficult’, and anyone who says they have done it or could do it needs to set out explicitly what they did/would do.

        Again and again we run into the opaque wall of ‘adjustments’ and ‘corrections’, where people say they have done something to improve the data. But it’s usually impossible to see what they did, as others have said.

        I cannot imagine how anything could be done to these data that would extract from them a meaningful account of how ocean temperatures have moved over the last 150 years. There are great parts of the oceans where few ships travel, for example. How do we ‘adjust’ for them?

        I could go on. Judy says she will do a new post soon, so let’s wait for that.

      • Don,

        .. and anyone who says they have done it or could do it needs to set out explicitly what they did/would do.

Have you checked whether they have done it?

        I cannot imagine how anything could be done to these data that would extract from them a meaningful account of how ocean temperatures have moved over the last 150 years.

        Is that a limitation of science or your imagination?

What I have tried to say is that the way to find out what the state of the science is, is to look at the scientific publications, not to list issues and rely on one’s own imagination.

If Tony or you want to say that the estimates presented by the scientists are not valid, you must show that they have not presented the required justification, and you must do it by checking everything these scientists have published on the matter. It’s not enough to notice that one of the papers doesn’t contain the information.

I haven’t done such checking, and I don’t claim that everything is fine. I say only that to judge science, you must know the particular science you are presenting judgments on. I think this should be obvious to you, taking into account your extensive academic career, but I fail to see that you have followed this approach.

      • Oh dear, Pekka, you make it hard! I don’t know that anyone else has been instructed to go and read every scientific paper relevant to a broad area before he/she makes a comment. But, of course, you are sort of right. Remember, however, that a lot of the massaging of the data is done in government or semi-government establishments, and is not easily available to the lay public. Some is behind paywalls, and I have an old prejudice against paywalls erected to prevent the citizens who have already paid for the research seeing what has happened as a result of their taxes. What am I to do about these problems?

        Let me put my position this way. I am a data-monger. I like data, and respect each datum as the kernel of observation, and the basis for argument and testing hypotheses. And I don’t have a lot of time for theory that doesn’t take an early opportunity of testing itself against data. Now Tonyb has set out a series of problems that infect all the SST data. Like others who have commented, I am a swimmer who sometimes experiences sharp differences in temperature in the sea in which I surf. So I wonder about the validity and reliability of any SST measurement wherever and whenever it is made. SST is not engine-intake temperature, because of the several-metre difference in depth, and given the innate variability of sea-water temperature, how do we estimate error here?

        Look at the tracks of shipping: there are tens of thousands of square kilometres of ocean over which few vessels ever pass. Now I know that we could extrapolate over time from the measurements taken by the vessels that do pass by, and at a constant time from the nearest measurement point elsewhere. But what sort of error are we talking about? I haven’t found any decent estimates so far.

        You could help me, Pekka, and help us all, if you actually know of a set of scientific papers that do the work, and explain just how they did it and why their adjustments are valid. If you don’t, then I accept your point, and will bow out of this dispute awaiting the work of others.

You may complain about the availability of information on the details of scientific work, but you have a very weak position if you just assume that the scientists are stupid or dishonest and have not done their work properly. The original posting of TonyB is along these lines, as it appears to assume that everything is impossible whenever he doesn’t know how an issue has been resolved, and some of your comments give the same impression.

In my view the most central property of science is that it’s a cumulative process, where new understanding is continuously built on earlier understanding. That often means correcting errors made in the earlier work, but it is almost never right to dismiss earlier work without specific reasons. Thinking that all the scientists who have studied some specific problem have failed to reach results useful enough to form the starting point for further work is against the idea of scientific work.

When some problem appears difficult based on one’s own reasoning, the conclusion should not be that other scientists have failed to solve it; rather, one should check how they have actually done. Both Tony and you appear to have chosen the first and wrong alternative – as have so many skeptics of climate science in general. The mainstream climate scientists make errors, but they are not totally incompetent, stupid or corrupted.

        One of the most central points that helps in the analysis is that many of the uncertainties of the old measurements are uncorrelated. Such errors make any single value unreliable, but have little influence on the average or its uncertainty. Even the poor coverage of wide areas of oceans is not so damaging, if we have strong reasons to believe that the areas behave uniformly. More recent measurements may provide such evidence. These are just two examples of facts that make the analysis more practical than it may first appear. Important systematic sources of error remain, and I’m not making any claims on those, except that scientists have been studying them and have certainly gained some understanding. What has been achieved can be judged only through understanding of this field of research. Guesswork is not a substitute for proper assessment.
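Pekka’s point about uncorrelated errors is easy to demonstrate with made-up numbers. The sketch below is illustrative Python with a hypothetical error size and observation count, not real SST data: random errors of a full degree per observation leave the average of ten thousand observations uncertain by only about a hundredth of a degree, whereas a shared bias passes straight through to the average.

# Illustrative only: hypothetical numbers, not real SST observations.
import numpy as np

rng = np.random.default_rng(1)
true_value = 15.0        # hypothetical "true" mean temperature, deg C
random_sigma = 1.0       # hypothetical random error per observation, deg C
n_obs = 10_000           # hypothetical number of independent observations

obs = true_value + rng.normal(0.0, random_sigma, n_obs)
print(f"typical single-observation error ~ {random_sigma:.2f} C")
print(f"error of the mean of {n_obs} obs = {obs.mean() - true_value:+.3f} C "
      f"(theory: about {random_sigma / np.sqrt(n_obs):.3f} C)")

# A shared bias (say, an uncorrected change in measurement method) does not
# average away, which is why bias corrections, not sample size, are the hard
# part of constructing a long SST record.
bias = 0.3
print(f"with a {bias} C shared bias, error of the mean = {(obs + bias).mean() - true_value:+.3f} C")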

  25. None of the available historical surface temperature records were designed for or are fit for measuring global mean temperature. At least there is now a proper grid of SST measuring buoys. When will we get that on land??? When will there be a proper debate on the validity of the concept of “global mean temperature anomaly”?

  26. tempterrain

    TonyB,

    So you have a problem with the accuracy of some SST going back over 70 years or more?

But, on the other hand, you don’t seem to apply the same standards of scientific scrutiny when it comes to analysing the data of Ernst Beck, whom you seem to think is “essentially correct” in saying the ice core record of CO2 is all wrong. He claimed atmospheric CO2 levels in the nineteenth century were higher than they are currently.

    http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/

    Judith Curry,

    Where do you get these people from? Are there no climate denialist cranks you might consider to be unsuitable and unqualified to post on your blog?

tt, see TonyB’s comment #215 on the thread you link. There is a link to Beck his own self.
      ========

      • tempterrain

        Kim,
        Thanks.
We’d all like the accuracy of historical data to be better than it is. However, short of going back in time to retake the measurements, there’s not a lot we can do about it. It’s clearly wrong, however, to suggest, as Don Aitkin does, that because “We certainly can’t talk about GTAs that are expressed to three decimal places”, therefore “we simply do not know the extent to which the air and the sea have warmed over the 20th century.”

It’s the old denialist argument that because we don’t know everything we therefore must know nothing. It’s used by denialists everywhere, on every topic. It’s like saying that because we don’t know exactly how many people died in WW2 concentration camps, we therefore can’t be sure the Holocaust existed. And no, I’m not saying that climate denialists are the same as Holocaust deniers, but I am saying that the tactics and arguments are essentially the same.

      • Tempterrain

        You have speculated

It’s the old denialist argument that because we don’t know everything we therefore must know nothing

        Don’t know much about “denialists” (sounds like a religious term, like “heretic”).

        But rational skeptics (to which I count myself) would argue:

        We do not yet know enough to postulate that AGW is a real problem, so let’s work on clearing up the large scientific uncertainties in order to see if this really is the case

        Can you live with that?

        Max

      • Max, You – a rational skeptic? I don’t think so. I don’t think this sounds very rational, do you?

“How foolish can we be? To seriously believe all the hype that man is causing a climate disaster that will destroy the planet is not only basically stupid, it is extremely arrogant. We insignificant humans do not have the power to destroy this planet. Never did. We also do not have the ability to change the current climate trends, or even to accurately forecast what is going to happen over the next 10 let alone 100 years. Let’s hope things will get warmer, rather than colder. We don’t need another ice age. Forget all the junk science by so-called experts that are all in on the multi-billion dollar climate research scam!”

“Forget all the disaster reports being sold by environmental activists via the sensationalist media. Forget all the self-righteous calls for action by power-hungry politicians. Use your common sense. It’s all a hoax.”

      • tt:

        The two quotations you took from me actually follow sensibly, one from the other. Then you go into a grotesque analogy with the Holocaust that doesn’t follow from anything I have written.

        Go back to your first sentences.

        (1) ‘We’d all like the accuracy of historical data to be better than it is.’ Agreed.

        (2) ‘However, short of going back in time to retake the measurements there’s not a lot we can do about it.’ Agreed.

(3) ‘It’s clearly wrong, however, to suggest, as Don Aitkin does, that because “We certainly can’t talk about GTAs that are expressed to three decimal places”, therefore “we simply do not know the extent to which the air and the sea have warmed over the 20th century.”’ Why is it clearly wrong? You’ve just agreed that we like good data, and that we can’t obtain new 20th century data that is accurate. What do you think is the extent to which the air and the sea have warmed?

        You might argue that despite the errors the existing data tell us something, but then I would want you to do the work properly, and give us at least a few examples, not just assert it. You would need to show that even if we went to no decimal places there is still a ‘signature’ of a warming trend there, and that it is stronger than the error.

        If you can do that I will sit up and take notice.

      • Don, You say “we can’t obtain new 20th century data that is accurate. ”

A quick science lesson: accuracy has to be quantified. There is no such thing as simply “accurate”. All scientific constants carry some inaccuracy, and that would include Planck’s constant, big G (the gravitational constant), the electron mass, etc.

Yes, the SSTs are also inaccurate. But are they so inaccurate as to be worthless? You and TonyB would like to pretend they are. However, you also like to pretend you know what you are talking about!

      • tempterrain,
        Odd analogy, and quite irrelevant.
The near invocation of Godwin was interesting, but plain strange.
>sigh< and you true believers will never be able to just let skeptics be skeptics. You seem to have a deep-seated need to dehumanize those who disagree and to lie about what we say and do.
All signs of a rather pitiful faith on your part, frankly.
And do please recall that you were unable to let 'denier' go when skeptics pointed out that a certain political party in Germany was once all about the wonders of nature and pure environments.
You believers have no real legs to stand on, since you are so busy cutting them out from under yourselves.

      • Hunter,

        I’ve noticed that you are fond of using the term “true believer” in a somewhat pejorative sense.

        Can I just ask you, as a learned scientist, about your opinion on say the big bang theory? Do you favour that explanation or do you have another one?

And whether it’s the BB theory or something else, would that make you a “true believer” in that too? Or maybe I’m making the mistake of crediting you with too much scientific curiosity in asking the question. Whatever the true answer to the origin of the universe is, it won’t affect the price of gasoline, so maybe you just don’t care?

      • See Eric Hoffer, ‘Thoughts on the Nature of Mass Movements’. See also ‘Useful Idiots’.
        ===============

      • tt-
        I’ve noticed that you are fond of using the term “true believer” in a somewhat pejorative sense.

        Please be advised that the proper (formal) expression is “True Believer in the Church of CAGW”. And it’s entirely pejorative, just more accurate and not nearly as slimy as “denier”. :-)

        As for the BB – I suppose you’d put non-believers in the same class as Young Earth Creationists (about which you still apparently know nothing) . But I have a question for you since the subject is of interest to you – What came before the Big Bang?

        I know some of those who spend a great deal of time investigating the first 0.001 second AFTER the BB – but I find before the event to be interesting as well.

      • tempterrain

        Jim Owen,

        Ok Maybe you’d like to enlighten me on the subject of “Young Earth Creationism”? God made the Earth a few thousand years ago. Right?

In return, I can comment that the Big Bang is considered to be a singularity in space and time, and therefore there is no ‘before’. It doesn’t exist, at least according to most adherents of the theory. It’s normal for people to claim that this theory doesn’t make sense, or at least that it doesn’t to us mere humans. However, if there is one lesson that physicists learned in the 20th century, it is that scientific theories don’t have to. They just have to explain observable events mathematically.

        An alternative theory is the universe is infinite, again in both space and time, but the problem with that idea is that the night sky would look quite different to what it does. There would be stars everywhere and at every distance so we wouldn’t actually see individual ones.

      • tt –
        Maybe you’d like to enlighten me on the subject of “Young Earth Creationism”? God made the Earth a few thousand years ago. Right?

~6000 years ago – as calculated by Bishop Ussher, who later recanted that number/belief. But it lives on – like Dracula – or CAGW. Yes, if you changed a very few words in Young Earth Creationism, you’d find that CAGW would be its clone. That’s why I find it ironic but not surprising that CAGW believers would attack YEC. YEC and CAGW are mirror images of each other.

Now – how many Young Earth Creationists are there? Keep in mind that it’s the smallest sect of a very, very small belief set.

        And then, how many other variations of Creationism are there?

        I can comment that the Big Bang is considered to be a singularity in space and time and therefore there is no ‘before’.

That, my friend, is not science in any sense of the word. If real, the BB spewed all the energy and matter that comprises our universe. That energy/matter did NOT come from “nowhere”. Unless you believe in magic – or Creationism.

The words singularity in space and time are meaningless. They are true, but explain nothing, and are nothing but handwaving, ignore-the-man-behind-the-curtain flimflammery.

        No – I don’t have the answer – nor does anyone else – yet. But I know the question. Just as I know at least some of the questions but don’t have the answers to climate change – nor does anyone else – in spite of all the pretensions and protestations.

        But then, there are others here who apparently DO believe in magic

      • Now – how many Young Earth Creationists are there?

        In the most recent poll I’ve seen, some 40% of Americans think that “god created humans in their present form less than 10,000 years ago.” If to be a YEC you have to believe that it was 6,000 years ago (as compared to 10,000 years), I’d say that the difference between YECs and some 40% of Americans (at least on the issue of creationism) is one of those “distinctions without a difference” type of thing.

        A difference of 4,000 years as compared to when humans actually evolved is relatively insignificant.

        Oh, just ’cause I figured you’d be interested – a majority of Republicans (52%) believe that god created humans in their present form less than 10,000 years ago (34% of Demz).

        http://www.gallup.com/poll/145286/Four-Americans-Believe-Strict-Creationism.aspx

      • And while I’m at it – I’d venture a guess that of that 52% of Repubz, a higher % are “skeptical doubters/deniers” than among the remaining 48% of Repubz.

I’d also venture that of the 34% of Demz who think god created humans in their present form less than 10,000 years ago, a higher % are “skeptical unconvinced/deniers” than among the remaining 66% of Demz.

        Of course, just because the obvious seems to always have to be spelled out around here – I’m not suggesting that all “skeptical unconvinced/deniers” believe that god created humans in their present form less than 10,000 years ago – only that the prevalence of people who believe such is higher than among “skeptical convinced/believers.”

        You are fully entitled to draw your own conclusions.

      • Joshua, That’s a vaguely interesting and mildly entertaining take on things, but what on earth does it have to do with sea temps?

      • Nothing, Peter.

        It does relate to the larger discussion re: “tribalism” in the climate debate.

Jim asked the question, and I answered it. I’m sorry if it bothers you that I contributed to the off-topic tangent, but it seems that you did so as well, with an off-topic comment on my off-topic comment.

        I never quite get why people who are concerned about off-topic comments make off topic comments voicing their objections.

      • Just asking, that’s all

      • Peter – show some accountability.

        You asked your question knowing full-well the obvious answer.

        Your point was obviously that my comment about YECs was off-topic; as was your post in which you asked me the question.

A popular refuge of true believers in AGW is to pretend that skeptics are YECs or other types of unpopular (but safe to criticize) religious believers.
All of which is predictable entertainment from people who squeal like little piggies if any link to politics, historical or present, is made about them.

      • Joshua –
        If to be a YEC you have to believe that it was 6,000 years ago (as compared to 10,000 years),

Over the years, I’ve been a Baptist, Buddhist, Methodist, Presbyterian, and Agnostic – among others. I was raised with Catholics of at least 3 different varieties and attended their churches. And over the years I’ve acquired a fairly wide religious education wrt Islam, Shinto, Brahmanism, Zoroastrianism and Hinduism. I also know adherents of YEC, neocreationism, both strict and progressive creationism, general creationism and Gap creationism.

        Of all of those, only the creationists – and only a small part of them – believe in the 6,000 year number. Of the churches I’ve attended/belonged to, none of them accept that doctrine.

        So… tell me – where did Gallup find all these creationists?

        I wouldn’t call the Gallup poll a lie, but I would certainly note that the questions are nonsense, badly phrased and ignorant of the wide variety of beliefs under that “Christian” umbrella. So the statistics you’re analyzing are Wirthless. :-)

      • Polls are funny things. That Gallup poll also indicates that 49% of Americans with post graduate education believe the man evolved with God’s guidance, while 22% of them believe that God created man as he is 6,000 years ago. Oh, and 40% of Democrats believe man evolved with God’s guidance, while 34% believe God created man as is.

71% of those with postgraduate education and 74% of Democrats believe that God created man, either in one fell swoop or through guided evolution. Looks like a consensus to me.

        Which of course tells us absolutely nothing whatsoever about whether it is true or not. It’s the left that does science through polling.

      • tempterrain,
        Is the BB a litmus test?
        First, please remind me when I ever represented myself as a learned scientist.
I believe the preponderance of evidence supports an ancient BB in the formation of the universe. There do not seem to be many holes in the BB theory, except for what caused the BB.
Is there a purpose to this that is relevant to AGW?
        As to true believer, a true believer is someone who rejects new information that runs counter to their belief and accepts any rationalization that allows that belief to continue.
        As to the price of gasoline, please expand on that.
Sort of like what you do regarding the annoying bleat about grandchildren.

      • Hunter,

actually the BB has a number of issues. The Schwarzschild solution appears to have been twisted:
        http://www.physicsforums.com/archive/index.php/t-123802.html
Basically there is no support in real science even for neutron stars. It was always questionable whether the gravitational force could overcome the nuclear repulsion forces to the point where they could cause this compaction, even enough to fire off fusion!!

        Our cutting science, like throughout history, is based on observations and ideas that are not proveable or disproveable. A good instance is the neutrino theory and the sun. Without the underground experiment, that is borderline for showing a valid result, there is no support for the sun actually being a nuclear furnace of the size necessary to generate its output. In fact, the observations of sunspots would tend in the opposite direction. A cooler central area with the only fusion out in the corona is what is actually OBSERVED as opposed to modelled or theorized.

The big bang itself has always had the issue of how the energy and particles leaving the singularity could have obtained the imbalances to start the clumping which would end in structures. The light from the farthest objects we see is showing us the youngest structures. Strangely, they are showing us structures so huge that no one has the vaguest idea how they could have formed so soon after the BB!!

The Microwave Background Radiation was predicted to be of higher energy and frequency than what was eventually found. There have been several stories confusing the issue, but, no, it wasn’t predicted very well. What is worse is that current observations by the best satellite instruments are showing anomalies where there should be none. The theory requires that we see the same energy and frequency in every direction, and we don’t.

        Nebulas are supposed to be the breeding ground of stars. Here again the science is poor at best. The conventional explanations of pressure waves and nuclear explosions are simply inadequate to explain the actual observations. As mentioned above, what actually ignites a star in free space?

        Then we get to the planets. The idea that they congealed from a disc is simply not tenable. It is another idea that has not stood up to actual conditions that prevent this type of formation. About the only way that seems to be feasible in conventional science is star near misses or actual collisions. As that doesn’t happen very often, we shouldn’t be finding the planets around other stars that we do find. Then there is the relatively new realization of there being LOTS of binary stars out there and occasional trinaries!

        More fun is the angular momentum of the galaxies. Gravity simply can’t hold the galaxies together with the speeds of the bodies and distances involved. Consensus science had a REAL problem there. Solution? Make up sh!%. Consensus science now BELIEVES in something they can’t touch or measure or see. Not only mass but energy!!!!! We are told it is there because they couldn’t think of any other explanation!!! (sound familiar??) Seriously, sometimes that type of speculation pays off. The problem is we don’t know, yet the theory has been sold as fact for how long now??

        Nope, the BB is being supported by professors ensuring their buddies and students dutifully figure out some way to fit every new observation into the BB structure whether it is reasonable or not. It is a house of cards, and every year there is more plausible and well-done research, and there are more ideas, challenging that mouldering hulk!!!

        Another excellent example is Phlogiston. Reading the history of the theory we find that it was a valid working theory that was useful up to the day it was tossed out for a better theory. It was never proven wrong so much as inadequate. Fortunately not too many people were wedded to it or we might still believe in Phlogiston!!

        Sorry for the rant, but, occasionally I just get fed up with all the CONSENSUS Science. We have seen throughout history that Consensus is what exists for most of the time until someone comes along and crashes the party so totally that the rickety framework can’t survive. Many people think that only happened in the past. Nope, it is the more usual status of science unfortunately. We are in one of those consensus periods with huge amounts of rickety frameworks that may start falling apart in a couple of generations. Sadly I probably won’t live long enough to see the jump in our understanding that happens when all that “there just wasn’t a better explanation for at the time” is tossed out and allows more real advancement or, at least, new wild goose chases!!

        If you have any extra time there are plenty of knowledgeable scientists and engineers writing about these types of subjects and making it available on the net.

      • Kuhnkat –
        Sorry for the rant, but, occasionally I just get fed up with all the CONSENSUS Science. We have seen throughout history that Consensus is what exists for most of the time until someone comes along and crashes the party so totally that the rickety framework can’t survive. Many people think that only happened in the past. Nope, it is the more usual status of science unfortunately. We are in one of those consensus periods with huge amounts of rickety frameworks that may start falling apart in a couple of generations.

        Thank you. 18 months ago I got into a furball with some believers who didn’t want to believe that “consensus” was not the gold standard of science. They lost some skin on that one.

        In point of fact, the “consensus” is ALWAYS wrong – because they NEVER know everything. The next generation will always prove them wrong in at least minor ways – and most often in major ways. So every time I hear someone say “consensus science” or any derivative thereof, I know that their knowledge of science is faulty. This, BTW, was taught to me by a scientist. And has been reinforced by several other scientists. And is being reinforced yet again by the course I’m presently involved with.

        Personal opinion is that the big push to nail down the present “consensus science”, to prevent alternate theories, to push through things like Kyoto, Cap & Trade, carbon taxes, and institute massive economic/political changes is simply because the alarmists know – or at least, are afraid – that the structure will fall – and that they have a limited time to achieve their goals. Granted – most alarmists (including many, if not most, scientists) don’t have a clue – but the scientists should know if they have any knowledge or sense of the history of their own profession. This, BTW, is a gift to Joshua – and I invite him to falsify it – if he can.

      • Semantics again.

        What people mean when they discuss “scientific consensus” in positive terms is not the same consensus that you are condemning. The “positive consensus” is not fixed or irrefutable; it’s nothing more than a word that refers to those things that a great majority of scientists agree upon at a particular moment. Often such a consensus remains unchanged for a long time, like the belief in the validity of Newtonian mechanics over a wide range of conditions, but it may also change, as happened to Newtonian mechanics for cases where the results of relativity or quantum mechanics don’t agree with it.

        There is no question that we have seen cases where new ideas have been opposed for too long. The consensus had become too rigid, and the possibility that it had to be modified was considered too unlikely. That doesn’t, however, change the fact that at any particular moment the “scientific consensus” forms a large part of the best available knowledge.

        We must only agree on the semantics.

      • The “consensus” routinely and rapidly changes from a source of solutions into a problem in its own right. To paraphrase a famous adage, however, we can no longer afford to wait for it to change “one casket at a time”.

        Especially with AGW. It needs swift and utter disassembly, disproof, and dissection. That means you.

      • Pekka,

        “That doesn’t, however, change the fact that at any particular moment the “scientific consensus” forms a large part of best available knowledge.”

        The Scientific Consensus simply forms what is ACCEPTED by the most influential and/or largest number. It can have nothing at all to do with the quality of the Science, as the Climate Community has proven to history yet again. Reading about the discussions and arguments over relativity, quantum mechanics, radiation… shows that science is often decided by the strongest personality as opposed to the best science. At the cutting edge it is just as much intuition that decides a direction as it is solid observation and mathematics. Strong personalities and funded biases will always have too much influence until we are no longer HUMAN!!!

      • Pekka –
        I don’t disagree with you. BUT – there are those for whom the “consensus” is, as you say, rigid, immutable and not to be questioned. For these people the science is settled and will never change.

        Apparently, at least you and I know better, even if we don’t always agree on the present state of the science. But – semantics – the “consensus” I speak of here is that rigid, immutable, monolithic and immortal body of knowledge at the center of the alarmist dogma. It has become a political weapon that resembles an assault weapon from the alarmist POV, but that skeptics see as being made of foam rubber.

        If the word were used in proper context, it would be benign but, like “ocean acidification”, “climate change” and others, it has become politicized.

        We must only agree on the semantics.

        Unfortunately, that’s not possible as long as the alarmists insist on changing the words and definitions arbitrarily. As I said earlier, it greatly resembles a political version of Whack-a-Mole.

    • Thank you for the link to Tony’s CO2 post, I hadn’t read this previously. A very interesting analysis. The words “essentially correct” in the context of Beck’s analysis were nowhere used in Tony’s post. Can you come up with some more substantive critique of Tony’s SST post?

      • Judith

        It’s a fascinating subject which I am interested in from a historical and social perspective. There is no doubt that the equipment and people were quite capable of taking accurate measurements, and laws were enacted accordingly. Parliament debated the CO2 levels that should be set in factories for some 20 years before the Factories Act was passed, and this is recorded in Hansard.

        Whether the measurements were accurate can only be determined by an audit. The article and the numerous comments look at the matter from both sides and in consequence I think it is probably the single most comprehensive source of information available on the subject.

        tonyb

      • I agree Tony. Beck’s findings are fascinating, plus the problems with ice core estimates are significant. (“Crank” is AGW code for serious opponent.) It would be odd indeed if of all the natural climate related variables, CO2 is the only constant. There is no reason for it so I do not believe it.

      • I think it would be more odd if those past measurements were right and CO2 before 1960 jumped around all over the place as in Beck’s graph but then suddenly come 1960 it stopped behaving like that and just rose up in a smooth curve. Almost as if CO2 molecules knew they were being watched….

      • I agree, but it would be odder still if it held steady for millennia until it suddenly took off, which is what AGW claims. My guess is that it oscillates naturally, like everything else in nature, but we don’t have the proper data to know the details. Like Tony, I am fascinated by what we don’t know.

      • You are not understanding the context of the measurements. With temperature measurements we are placing the observation stations next to human modifications. Very few of the thermometers or thermal sensors are away from human disturbed environments.

        For CO2 measurements we have Mauna Loa as the baseline, which used to be quite far from human disturbance. Of course it had issues with volcanic CO2, so they had to develop procedures where they simply tossed out measurements where the CO2 levels were outside what they expected. I have never seen the papers where they set out the standards that are used to determine how much the CO2 can vary from expected levels before they toss the readings. Over a year ago they had one month where they threw out about 20 days of observations due to this. They posted a large drop in CO2 for that month. When they realized what happened they instituted the infilling for missing data used in the temp records. The other OFFICIAL CO2 measuring stations are similar. They are set in areas where they are guaranteed few sources to cause any major variations in their measured levels. What I read is that they were trying to get the BACKGROUND CO2 level, whatever that means. In practice they want areas that are stable and use standards to exclude measurements that do not meet their standards.

        The measurements collected by Beck were NOT filtered to the same level as used by our modern researchers. They also were taken in areas that would have normal vegetation changes and possibly some human effects. If you took your measurements in any modern city you would regularly see the levels exceeding 450ppm and even over 500ppm. The measurements in Antarctica are about as representative of the earth’s surface as what we see from the OFFICIAL network.

        So what am I saying? Callendar set up an observation location and procedure where he would HAVE to adjust or toss measurements, which then gave an unnaturally steady CO2 level compared to most of the earth. Just take a look at the satellite pictures. We DON’T see huge clouds of high-level CO2 from the major industrial cities. We DO see it from areas of deep currents upwelling from the oceans. We also see lower levels of CO2 over rain forests and ocean areas particularly active with various growths. We can also get pretty heavy CO2 release from a few types of soil according to current research.

        Our climate scientists have created a very weird picture of the earth by measuring only background levels of CO2 and Human affected temperature levels.

        Oh yeah, Mauna Loa is the only station that isn’t near sea level and is downwind from a major growth area in the Pacific that reduces CO2 levels, yet has higher than the global average readings, WUWT???

      • tempterrain

        Judith,

        The quote “essentially correct” comes from point #6 in:
        “Having read and researched the material and corresponded with Ernst Beck, I think his conclusions are essentially correct.”
        http://climatereason.com/Links/

        As I’ve just said to Max in another post we aren’t, or at least I think we aren’t, discussing SST’s in isolation but in the wider context of GHG emissions and how they may affect future climate.

        But what’s the point of any discussion with those who don’t think that atmospheric CO2 concentrations are at all influenced by those emissions?

    • Tempterrain

      What in the hell has Beck got to do with the SST record?

      Where do you get these people from? Are there no climate denialist cranks you might consider to be unsuitable and unqualified to post on your blog?

      Changing topics in order to attempt to discredit the author of a post is a logical fallacy, known as “poisoning the well”: It falls under the general logical fallacy known as “ad hominem attack”:
      http://changingminds.org/disciplines/argument/fallacies/poisoning_well.htm

      Description
      Discredit the other person before they speak. Or discredit the topic or argument that they may support.
      There are many ways of discrediting the person. Call them names. Talk about their lies. Show them to be unworthy. Tell how they are unintelligent, crazy or otherwise undesirable, inferior and not worth listening to, let alone believing.
      To discredit the topic or argument, indicate how it is patently absurd, proven to be false or that only fools would support it.

      Looks like you have fallen into this trap, Tempterrain.

      Max

      • tempterrain

        Max,

        You ask about Beck and the SST record. Correct me if I’m wrong but I was under the impression that we aren’t discussing the SST record in isolation. I’m also under the impression, again I may be wrong, that the underlying purpose of Climate etc is to bring together those of differing opinions and try to come to some sort of consensus about what needs to be done about GHG emissions.

        I, personally, would think that either Judith Curry is being disingenuous in saying that or else she’s being wildly optimistic.

        It may be possible with some of the less extreme skeptics/deniers, but how on Earth is it possible with someone who is of the opinion, as was Beck, that 20th and 21st century science has it all wrong with respect to the ice core record of CO2? How can we have a meaningful discussion about the finer points of climate science with someone who thinks CO2 levels were higher in the 19th century and therefore human CO2 emissions cannot be responsible for current warming. Not that there is any, of course! :-)

      • Temp, you have a deep misunderstanding of what is going on here. We are trying to get clear about the scientific issues, as a minimum. Beck’s work raises serious questions. Calling people cranks is not helpful. Take it elsewhere as we have work to do.

      • tempterrain

        I think I might be in good company when using words like “crank”!
        http://judithcurry.com/2011/01/31/slaying-a-greenhouse-dragon/#comment-36411

      • Consensus is not so scientific. That sounds more like the NSF and the IPCC and other groupthink political organizations IMO.

      • The CO2 data that Beck uses is what it is; how to interpret it is the issue. The same with SST data: we have some measurements for which there are some serious questions about quality, not to mention the issue of representativeness. The purpose of Climate Etc is not to try to come to some sort of consensus about what needs to be done about GHG emissions. We need to have an honest and far ranging discussion about a topic like the measurement of sea surface temperatures and the associated uncertainties without deciding first that we shouldn’t have the discussion because of its possible implications for GHG emissions policy.

      • tempterrain

        Judith,
        In “Falling down from the ivory tower” you wrote that scientists should:
        “engage the skeptics on our own terms (conferences, blogosphere)
        • make data/methods available/transparent
        • clarify the uncertainties
        • openly declare our values ”

        http://www.pacinst.org/topics/integrity_of_science/AGU_IntegrityofScience_Curry.pdf

        You didn’t shy away from the policy implications saying:
        “Effective framing of the science requires understanding the social
        and ethical implications, policy options and the policy process”

        The ultimate policy option concerns the level of CO2 and other GH gas emissions. Surely you can’t set up something like “Climate etc” and now claim otherwise?

      • tt –
        How can we have a meaningful discussion about the finer points of climate science with someone who thinks CO2 levels were higher in the 19th century and therefore human CO2 emissions cannot be responsible for current warming.

        What “finer points”? The basic outline is still blurry and unproven – including the CO2/temp relationship. As for human CO2 emissions being responsible for current warming – I hear a lot of words about how it’s been “proved” – but no “proof” other than models – and some correlation that no longer correlates.

      • tempterrain

        I’m not sure you do hear about “proof”. Science doesn’t do “proof”. It works on the most likely, or best fit, explanation. In the case of global temperature rise the IPCC say it’s ‘most likely’ or a 90% probability that CO2 build up in the atmosphere is responsible for it. In other words, they are saying there is a one in ten chance they are wrong.

      • tt –
        It’s been your side of the dance floor that’s used the words “it’s been proved”. I’m just quoting the dogma.

        In other words, they are saying there is a one in ten chance they are wrong.

        Really?

        You might want to check the meaning on that one, babe.

      • tempterrain

        Can you show me any instance of scientists using the word ‘proved’ in this sense?

      • tt –
        Possibly. But I’m not gonna try. Some time ago Robert asked for proof of the “unprecedented” usage and claimed it didn’t happen. So I did a search of “ONLY” the scientific papers I have on my computer (not those on the external hard drive) and found 18 instances.

        If you want instances of “proved” watch the blogs. Even this one.

        Note – at no point did I say it was said by scientists. But they have certainly said that the “science is settled”. Although not much recently – we had a little set-to about that some time ago on Climate, Etc. :-)

      • simon abingdon

        tempterrain, ‘proved’ in this sense = ‘the science is settled’.

        I think you’re well aware there’s plenty of complicit connivance in the spread of that misconception by “climate” scientists.

      • simon abingdon

        Of course I meant climate “scientists”.

      • tempterrain

        So who has said that the “science is settled” then? It obviously isn’t in the sense that people like Judith Curry haven’t run out of things to do and are all now looking for new jobs.

        Scientific knowledge, at any one time, represents the best that can be made of all the available evidence. It is different now to what it was 20 years ago and in the future it will be different again. So nothing is ever totally settled.

        It is quite possible that CO2 and other GH gases may in the future be considered a lesser threat than they look to be to us at the moment. On the other hand , they could be considered a worse threat. There is no way of knowing for sure.

      • Obviously this is a situation where facts matter not a whit, but what he said was that “the science is solid.” The science he was referring to as being “solid” was not all science everywhere, but rather a small sliver of science.

    • The lack of good-quality historical data has pushed some climate scientists over the edge of statistical integrity. Sometimes, if you don’t have the data, you just don’t have it, won’t have it, and can’t get it. Period. No amount of hocus-pocus will bring it from the dusty pages of history.

    • temp,

      Ernst Beck is using data collected by SCIENTISTS, including a couple of Nobel winners, under defined conditions using calibrated equipment with a 3% accuracy for most of the readings. (What is 3% of 450 ppm, temp?) They logged locations and conditions and, as already mentioned, used calibrated equipment. The majority of the SST data isn’t even in the same class as this, as the people taking the observations were not highly trained, were generally not as motivated to get exact readings, and did NOT have instruments and methods that could provide observations with a reasonable error and bias for what the data is NOW being used for. The fact that some gentlemen with an agenda didn’t like the Beck-collated measurements does not mean they are not accurate enough for the purpose for which he has presented them.

      I guess you want to throw out everything Arrhenius and the rest of those old guys did huh??

      HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    • Pooh, Dixie

      Once again, a “Reputational Cascade”. We would be a lot better off if we got down to the science of finding out what is going on.

      Timur Kuran and Cass Sunstein. 2007. Availability Cascades and Risk Regulation. Research. Social Science Research Network. October 7. http://www.law.uchicago.edu/files/files/364.pdf

  27. TonyB

    An excellent post. Looks like you have “done your homework”.

    I was always skeptical of the relevance of the sea surface temperature record (canvas buckets and all that), but after having read your post I am even more skeptical of anything prior to the installation of the ARGO devices. And since these apparently show the ocean is cooling, rather than warming, I am really confused.

    Josh Willis called this recent cooling a “speed bump”.

    No wonder he was disturbed, as he was a co-author of the Hansen et al. “hidden in the pipeline” paper, where the “pipeline” was postulated to be the gradually warming upper ocean.

    But let’s see if Willis is right or not about the “speed bump” (if it lasts for more than a decade it will be more than just a “bump” and will constitute a direct falsification of the “hidden in the pipeline” hypothesis).

    Max

  28. Tony Brown: Thanks for the reference and the kind words in your post.

    • Bob Tisdale

      I greatly admire your work and hope people will find their way to your articles.
      tonyb

      • tempterrain

        Judith,

        The last couple of postings make Climate etc look like a climate cranks’ mutual admiration society.

        You’ve chosen the very questionable course of recently inviting people like Tony Brown and Don Aitkin to enlighten us with their current musings on the state of climate science and humanity in general.

        How about balancing that up with inviting some of your colleagues from the climate science community, and I’m sure you must know plenty who do actually know what they are talking about, to give us their thoughts on the topic?

      • I think inviting some ‘climate scientists’ to post is a great idea. Do you know of any with a PhD in ‘climate science?’

      • You’re asking me? No – not personally.

      • tempterrain,
        Your transparent attempt to define away what you do not like is not working very well.

      • temp:

        “The last couple of postings make Climate etc look like a climate cranks mutal admiration society.”

        Did you intend to include Bob Tisdale in that category or was it a self-characterization?

      • I wonder why it is that all the consensus types keep trying to tell Dr. Curry how to run her blog? If only there were “consensus” blogs that could give vent to the IPCC/consensus view, and filter out comments from all the skeptical, anti-science types. Oh wait, there’s a bunch of them.

        Why is it that those who claim to be so certain are the very people who so fear dissent?

    • Mr. Tony Brown, thank you for the very informative post. It saddens me to see the precision we have gotten for a trillion dollars & twenty years of boasting. The bravado of the AGW scientists in defending their ‘science’ is quite unbelievable, considering their finished product. The rulers were able to read microns for the unwashed, it seems… they were close.

  29. …which is why we need more proxy records.

  30. Interesting article. Not sure where to post this, but this thread might be OK for it. Whales, plankton migrate across the Northwest Passage.
    http://www.google.com/hostednews/ap/article/ALeqM5iqTYm_vYaki39R4dEkBao68LlSTQ?docId=bd0645f05e494649bf7e56ae512e0b82

    • Jim D, thanks for the link. That article also notes that
      plankton is now being found in the North Atlantic where it hasn’t lived in at least 800,000 years.

      If you have ever watched Sponge Bob, you may know plankton are evil little creatures. I wouldn’t put it past ’em to be involved in the global warming conspiracy.

      • M. carey, Tiny Tim led the way for the AGW scientists. He knew how to scare small children about the melting polar ice caps long before the professionals got involved. It was almost half a century ago that this video was released for public consumption. We seem to have lost his ‘field notes’, however. Perhaps ‘Sponge Bob’ wouldn’t mind writing a peer-reviewed paper on this work.

        http://www.youtube.com/watch?v=8DEoOdcYKbc

        What a tearjerker.

      • Thanks, Tom, I hadn’t seen that one. The kids seem amused by Tiny Tim, not scared. I think the “wash away our sins” in the lyrics is a reference to the parallel between flooding and baptism.

        1 Peter 3:20-22
        “God’s patience waited in the days of Noah, during the building of the ark, in which a few, that is, eight persons, were saved through water. Baptism, which corresponds to this, now saves you, not as a removal of dirt from the body but as an appeal to God for a clear conscience, through the resurrection of Jesus Christ, who has gone into heaven and is at the right hand of God, with angels, authorities, and powers subject to him.” (1 Peter 3:20-22)

  31. The author implies historical instrumental sea temperature records aren’t reliable indicators of long-term trend because scientists cannot take into account non-climatic influences on temperature readings and differences in the way temperature was recorded, but he offers no evidence that scientists fail to take these factors into account, nor does he offer evidence that temperature anomalies based on the records are inaccurate. An overview of how scientists develop temperature anomalies can be found at …

    http://web.archive.org/web/20070704012242/tamino.wordpress.com/2007/05/11/best-estimates/

  32. TonyB –
    I’ll add my thanks for this article.

    The responses were entertaining, especially those that show the fragility of the faith of some of the believers. You hit a little too close to home for comfort for some of them.

    • Jim

      Thanks for this. There are several subjects which those who believe in CAGW find it uncomfortable to deal with. The basic accuracy of data such as SST’s and surface temperatures provides two examples.
      tonyb

      • Nebuchadnezzar

        The basic accuracy of data such as SST’s and surface temperatures provides two examples.

        Hi tonyb, I saw this (in which Kevin Trenberth enumerates the manifold problems with SST measurements and analyses)
        http://www.cgd.ucar.edu/cas/papers/bams99/
        and thought of you!

      • Nebuchadnezzar

        Good find. I think Trenberth was a fine scientist back then. I do sometimes wonder if scientific objectivity and insight diminish in direct proportion to fame and prestige. James Hansen was also good in his day but has succumbed to his celebrity.
        tonyb

  33. Good post Mr. Brown.

  34. If I placed thermometers on the sunny and shady sides of my house, on the roof, and in the garden fish pond, and measured the temperatures at each location for a long period, I would expect different temperatures but the same temperature trends at these locations. Mr. Brown and some of the posters here might disagree.

    Mr. Brown explains how sea surface temperatures in the same proximity can measure differently depending on such factors as the kind of ship, the kind of thermometer bucket, and how the bucket is used, and implies the temperature trends also will be different.

    • Yes, if nothing changes.

      In reality, over ~100 years, your thermometers will be in a changed background ecology (e.g., replacing forest with other vegetated surface or soil surface with impervious surface such as paving material and building material) and your fish pond might completely change.

      • Yes, and how much is this bias ? We can try to quantify : http://img708.imageshack.us/img708/1363/anomthn.png

      • Edim, unless the “background” ecology changes affect each of my thermometer locations equally, which seems unlikely, I would notice a divergence in the trends.

      • Over ~100 years they will VERY likely change divergently. Have you ever measured outside temperatures?

        Anyway, in the case of instrumental records, all stations have more or less warming bias (ALW). Some more (more urban), some less (more rural). This can be tested. Re-evaluate the stations if necessary, and give them grades (e.g., 1 to 10). 10 would be the absolutely best stations, 1 the worst.

        I bet there would be strong correlation between station quality and the temperature trend. The worst stations would show the most warming.
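
        A minimal sketch (in Python) of the test proposed above, using entirely made-up quality grades and warming trends; real numbers would have to come from an actual station-quality survey, and scipy is assumed to be available:

        from scipy.stats import pearsonr

        quality = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]             # hypothetical station grades (10 = best)
        trend_c_per_decade = [0.35, 0.30, 0.28, 0.25, 0.22,   # hypothetical warming trends, degC per decade
                              0.20, 0.18, 0.15, 0.14, 0.12]

        r, p = pearsonr(quality, trend_c_per_decade)
        print(f"correlation between station quality and trend: r = {r:.2f} (p = {p:.3f})")

        A strongly negative r would support the bet above; a correlation near zero would count against it.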

      • Why would the temperatures around my house show different trends over 100 years?

        The notion temperature on sunny side of the house would warm at a faster rate over 100 years than the temperature on the shady side is bizarre.

      • You have a lot of faith in your ideas. I’ll give you that.

    • You’re also forgetting that both the ships and the ocean currents are moving.

      • The fact ships move shouldn’t be a problem if the locations of temperature readings are identified. If the pattern of ocean currents has changed over a long period in ways that bias long-term temperature trends, wouldn’t warming biases from some locations be offset by cooling biases from other locations?

      • You’ve never swum in the sea, then; otherwise you will have noticed how markedly the water temperature can change in the space of a few yards.
        Forgive me for being a bit sceptical at the implied contention that ships have followed exactly the same routes and measured temperatures at exactly the same times of the day of water drawn from exactly the same depths.

      • Peter317, why would you expect thermometer readings of sea surfaces a few yards apart or at slightly varying depths to have different long-term temperature trends?

        Locations and dates of temperature readings at sea were recorded in ships’ logs (see linked example for the Albion in ADM 101/1/19). If there were random variations in the time of day temperature was read, long-term temperature trends should not be affected.

        http://www.nationalarchives.gov.uk/catalogue/displaycataloguedetails.asp?CATLN=7&CATID=-5742394

      • M.carey –
        why would you expect thermometer readings of sea surfaces a few yards apart or at slightly varying depths to have different long-term temperature trends

        What gives you the idea that “a few yards apart” applies here?

        “Slightly varying depths’? What’s the temp difference in a 10 meter column of water? Or 3 meters, for that matter?

        Related comments here –
        http://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/#comment-80866

      • Peter317’s post gave me the idea a few yards apart applies here.

        I doubt buckets were submerged 10 meters. Perhaps you would do it that way, but I wouldn’t. That would have made pulling them up too difficult for a lazy person like me. Besides, there would be no reason to submerge a bucket 10 meters if you wanted water from the surface.

      • M Carey

        I mentioned in the article that the depth depended on the disposition of the thrower of the bucket. Strong keen types might throw it deeper than weak lazy ones. Both sorts might leave the bucket full of water on a hot or cold deck for anything from minutes to hours.
        tonyb

      • But wouldn’t those introduce random error rather than systematic error?

      • M.carey –
        Peter317′s post gave me the idea a few yards apart applies here.

        You have no clue how big the ocean is, do you.

        I doubt buckets were submerged 10 meters.

        I doubt that too – in general. But you ignored the 3 meters. And the reason for the measurement in the first place. Which, on steamships, was to provide inlet temps for the engines. And the inlets would be located well below the Plimsoll line. On a big ship that could well be 10 meters. On most, it would be less.

        Surface temp wasn’t necessarily of great interest except to people like Darwin.

        BTW – “lazy” wasn’t a good way to survive long voyages. Even into the 40’s.

      • tonyb – you’re putting an awful lot of weight on the anecdotal tale of one limey.

      • Jim Owen, if you have evidence depths of 3 meters and 10 meters show different long-term temperature trends, present your evidence.

        If you have evidence reconstructions of SST trends don’t take into account systematic errors that could result from changes in the water depth of temperature samples, present your evidence.

      • M.carey –
        Never done any diving, have you. Try it, you might like it.

  35. The HadSST3 website went live sometime yesterday
    http://www.metoffice.gov.uk/hadobs/hadsst3/

    There’s a general summary of uncertainties in SST analyses on this page.
    http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html

    • This is very informative. I wonder whether it wouldn’t be useful to attach these links as an addendum to Tony Brown’s post rather than simply leaving them here at the end.

    • Out of curiosity.

      Have there been attempts to more accurately measure the inaccuracy of historical methods of measurement?

      By that I mean, now that more reliable methodologies are available, has anyone collected data by using the older techniques and compared the results to data collected by more reliable means? Given the intensity of finger-pointing and name-calling, it seems that someone on one of these sides would very triumphantly want to publicize the results of such an analysis.

      • I suppose I should rephrase the question to whether or not anyone has attempted to quantify the inaccuracy of historical means of measurement.

        Is there some reason why such an effort wouldn’t be relatively straight-forward?

      • Joshua –
        Is there some reason why such an effort wouldn’t be relatively straight-forward?

        Yes. The data was collected by thousands of ships, for example, over tens, if not hundreds of thousands of voyages over several hundred years. Each of those ships used different thermometers – literally. Once the importance of SST was realized, each ship carried at least two (or more) thermometers, each of which carried its own calibration errors. Then there are the “buckets” – different types – composition, weights, length of rope attached, etc. All of which affect the measurement.

        And then there’s the measurement – what time of day, who throws the bucket today (or tomorrow), how deep does the bucket go, how long after being returned to surface is the measurement taken.

        For the engine intake method there’s the question of individual ship thermometer type and calibration. And of the ship configuration – location of the intake, ship loading (how deep the intake was), time of day, accuracy of reading (human error) – and ship speed at the time of reading.
        And then there are the inaccuracies in ship position – sometimes minor, sometimes major.

        Now – determine all those factors for each temp reading for each ship for each day of each voyage over several hundred years.

        And that just gets you your data base – with major “holes/unknowns/uncertainties” – but no analysis.

        How many lifetimes do you have to put into the project?

      • How many lifetimes do you have to put into the project?

        I’d be willing to donate at least a couple.

        OK – a perfect calibration is obviously not possible – but it does seem that efforts could be made that might return instructive results with broad-scale data on the magnitude and/or direction of discrepancies.

        Does a reasonable sample of the thermometers used survive to this day?

        It seems that errors in the methodologies used to determine ship location could reasonably be reproduced, and sampling could be controlled for location, time of day, etc.

        Obviously, it would be an involved task – but is it really unreasonable to think that a realistic effort might return valuable insight into potential large-scale discrepancies between historic and current methods of data collection?

      • Joshua –
        Does a reasonable sample of the thermometers used survive to this day?

        Depends on what you call reasonable. Some of the really old thermometers would be in museums – if you could determine which museums – and pry them away from the curators. A few of them might even still be in use – although “where” would be an interesting hunt. The engine inlet thermometers – again, some might still be out there, but most ships from the ’40’s would have been broken up for scrap. Some of the 60’s ships are still around – during the first Gulf War my son was on one that was supposed to have been “retired” ten years before that. But even for the ships that are still extant, that instrumentation would likely have been replaced – at least once, possibly more. Salt water is a rough environment for instrumentation. Bottom line is that – yes, you “could” find some of the source instrumentation. How much is another story.

        It seems that errors in the methodologies used to determine ship location could reasonably be reproduced, and sampling could be controlled for location, time of day, etc.

        Ship location is one of the problems – not easy to explain here but basically, the earlier the date the less likely the ship location is to be accurate. Until LORAN, navigation was done using hand-held instruments and accuracy depended on the ability of the individual navigator. And even LORAN had its own uncertainties.

        Obviously, it would be an involved task – but is it really unreasonable to think that a realistic effort might return valuable insight into potential large-scale discrepancies between historic and current methods of data collection?

        Yup – it would be involved – and expensive. Would the ROI be worth it? Depends on who pays for it and how badly they want it. The project “could” be done – but is it worth it? Would it really tell us anything? I don’t know that. But I’ve wondered about it for the last 12 years, at least. Yes, I’m VERY familiar with the subject – my father was a Ship’s Master (navigator), I was a qualified blue water navigator before I graduated from High School and my career involved a giga-gaggle of instrumentation. And I’m a curious animal.

      • Joshua

        Yes, as far as land temperatures go. Phil Jones and his Italian colleague Dario Camuffo were given substantial EU funds to follow up on eight historic temperature data sets several years ago. It was their second study of such records.

        I wrote about it in the very first article referenced within this sea temperatures paper.

        http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%e2%80%93-history-and-reliability-2/

        The trouble is that it is very difficult to compare like for like for a whole host of reasons. For example, in historic times a thermometer may have been placed in a north facing room, then that thermometer may have been changed before being moved to a balcony, then one from another manufacturer placed in a field without any adequate screening, and eventually ended up at an airport.

        Trying to unpick all that is problematic and I’m not honestly sure it can be done. A moving target such as the sea is far more difficult to backtrack on.

        I carry many of the historic temperature records on my site here.
        http://climatereason.com/LittleIceAgeThermometers/

        tonyb

      • Thanks, tony.

        I don’t know much about the nuts and bolts of statistics, but I am impressed with the power of statistical analysis. Do you really think that it wouldn’t be possible to work out the logistics in studying this question to the point that analysis would bring results that add clarity?

      • Joshua,

        I don’t think it would be impossible to replicate practices that resulted in pervasive systematic observational errors, which are the kinds of errors that are the most difficult to deal with when doing the sea surface temperature reconstructions. Many of the comments in this thread continue to dwell on benign random errors as if they were serious.

        The following link provided by Curryja is to an article that discusses the kinds of errors and differences in their importance:

        http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html
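
        A minimal sketch (mine, not from the linked page) of the distinction being drawn here: averaging many readings beats down random scatter, but a pervasive systematic bias survives the averaging untouched. The 0.3 degC cool bias and 0.5 degC scatter are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(0)
        true_sst = 15.0              # "true" water temperature, degC
        n_obs = 1000                 # number of independent bucket readings

        random_only = true_sst + rng.normal(0.0, 0.5, n_obs)        # random scatter only
        with_bias = true_sst - 0.3 + rng.normal(0.0, 0.5, n_obs)    # same scatter plus a fixed cool bias

        print(f"mean with random errors only  : {random_only.mean():.3f} degC")   # close to 15.0
        print(f"mean with systematic cool bias: {with_bias.mean():.3f} degC")     # close to 14.7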

      • I don’t think it would be impossible to replicate practices that resulted in pervasive systematic observational errors, which are the kinds of errors that are the most difficult to deal with when doing the sea surface temperature reconstructions.

        That would be my point – stated more succinctly.

  36. Joshua

    I have paraphrased part of my article referenced above;

    “Until very recently, following the widespread advent of Automatic weather stations (AWS) in the 1980′s and digital recording some years after, obtaining an accurate manual reading from an instrument was highly problematic, being reliant on numerous variable factors, any of which may cause concern over the reliability of the end result.

    The skill and diligence of the observer were of course paramount, as was the quality of the instrumentation and that a consistent methodology was employed, but this did not prevent the numerous variables conspiring to make the end result-an accurate daily temperature reading-very difficult to obtain.

    Many of these basic concerns can be seen in this contemporary description from a 1903 book which relates how temperature recordings of the time were handled. The “Handbook of Climatology” by Dr Julius von Hann (b. 23 March 1839 d. 1 October 1921) is the sometimes acerbic observations of this Austrian, considered the ‘Father of Meteorology.’
    http://www.archive.org/details/pt1hanhdbookofcli00hannuoft

    This material is taken from Chapter 6 which describes how mean daily temperatures are taken;

    “If the mean is derived from frequent observations made during the daytime only, as is still often the case, the resulting mean is too high…a station whose mean is obtained in this way seems much warmer with reference to other stations than it really is and erroneous conclusions are therefore drawn on its climate, thus (for example) the mean annual temperature of Rome was given as 16.4c by a seemingly trustworthy Italian authority, while it is really 15.5c.”

    There are numerous veiled criticisms in this vein;

    “…the means derived from the daily extremes (max and min readings) also give values which are somewhat too high, the difference being about 0.4c in the majority of climates throughout the year.”

    Other complaints made by Doctor von Hann include this comment, concerning the manner in which temperatures are observed;

    “…the combination of (readings at) 8am, 2pm, and 8pm, which has unfortunately become quite generally adopted, is not satisfactory because the mean of 8+2+ 8 divided by 3 is much too high in summer.”

    And; “…observation hours which do not vary are always much to be preferred.”

    That the British- and presumably those countries influenced by them- had habits of which he did not approve, demonstrate the inconsistency of methodology between countries, cultures and amateurs/professionals.”

    —— ——— —–
    There are numerous other variables that I enumerate within the article. So we have a reading that in itself is likely to have been taken in a completely different manner to its neighbour, run by a different observer, and the reading in question may be taken in different ways on different days, with the night-time temperature possibly being invented or missed out until the max/min thermometer became available.

    A statistical analysis is perfectly useful for sorting out the average of a series of, say, 10 readings taken at the same time at any one station, but don’t forget these are ‘one-off’ readings, so it is impossible to average them and discard the outliers.

    The SST’s are far worse as they are a moving target and the accuracy of the reading often in question for reasons already stated.

    So under all these circumstances I think it impossible to retrieve MEANINGFUL records that are accurate and consistent and fit for purpose.

    I have suggested to Judith that an article on analysis would be interesting, but it doesn’t get away from the fact that if you start with bad and confused data it is very difficult to save it. The Phil Jones study I referenced earlier cost some £5 million, I understand, so investigating these historic records in sufficient depth is highly expensive. The study was supposedly available on a CD, but I found it impossible to buy as it became unavailable. I offered to go to the Met Office or CRU to go through the material or see their library copy, but they refused. Read into that what you will.

    Perhaps I’ll ask again, but neither organisation is very forthcoming with information.

    tonyb
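
    As a rough illustration (in Python) of the von Hann point quoted above, here is a minimal sketch using an idealized sinusoidal diurnal cycle; the symmetric shape is an assumption of convenience, and it is the asymmetry of real days that also gives the max/min mean the small warm bias he mentions.

    import numpy as np

    hours = np.arange(24)
    # idealized summer day: mean 20 degC, 8 degC half-range, maximum near 15:00 (assumed shape)
    temps = 20.0 + 8.0 * np.cos(2 * np.pi * (hours - 15) / 24)

    true_mean = temps.mean()                       # full 24-hour mean
    fixed_hour_mean = temps[[8, 14, 20]].mean()    # von Hann's (8am + 2pm + 8pm) / 3
    minmax_mean = (temps.max() + temps.min()) / 2  # max/min thermometer mean

    print(f"24-hour mean : {true_mean:.2f} degC")
    print(f"(8+2+8)/3    : {fixed_hour_mean:.2f} degC")   # roughly 2.6 degC too high for this cycle
    print(f"(max+min)/2  : {minmax_mean:.2f} degC")       # unbiased here only because the cycle is symmetric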

    • Thanks again, Tony. As it happened, I already followed your link (I’m at the shore, it’s raining).

  37. Tony and Judith: Interesting, but unsatisfying due to a lack of quantitative information. How big are the correction factors being used by CRU to adjust for changes in bucket type and other measurement technology? How much does water temperature vary with depth? (Wooden buckets float; other buckets may not. Engine intakes are deeper.) Qualitative description of issues is a poor substitute for quantitative information.

    The marine measurements mentioned by Judy in Brohan 2006 are based on Rayner 2006, which is mostly derived from Folland and Parker 1995, which is behind a paywall. The abstract to FP95 says that the correction (mostly for changing from wooden to canvas buckets) amounts to +0.11 degC in 1856 to +0.42 degC in 1940. IF I understand correctly (a big if), measured SSTs therefore show negligible long-term trend during this period and all of the net rise is due to this correction! Rayner 2005 refined this correction using night marine air temperature to estimate the percentage of wooden and canvas buckets – despite the fact that climate scientists rely on SSTs because they are supposed to be more accurate than air temperature.

    Here is the abstract for FP95: “We describe a physically based empirical technique for correcting historical sea surface temperature measurements for time-varying biases. The corrections are based on models of heat and moisture transfers from uninsulated (canvas) and partially insulated (wooden) sea temperature buckets exposed on deck. One of the canvas bucket models is tested using measurements on board ship and published wind-tunnel measurements. The method gives geographically and seasonally varying bias corrections through the period 1856 to 1941. The corrections are fairly insensitive to uncertainties such as the size of the bucket or the details of its exposure on deck. A discussion of the history of sea surface temperature observations provides a background to the procedure.

    The resulting globally and seasonally averaged sea surface temperature corrections increase from 0.11 degC in 1856 to 0.42 degC by 1940. The corrections are compatible with recent measurements made at sea of the errors of canvas buckets. Global and hemispheric time series of corrected sea surface temperature and night marine air temperature data show good agreement: more detailed verifications of the corrections will be reported elsewhere.”

    I like the part where they say that: “One of the canvas bucket MODELS is tested using measurements on board ship and published wind-tunnel measurements.” Do climate scientists ever do real experiments with proper controls? Why didn’t they compare all of the sampling methods side-by-side on board ship? (All types of buckets, intake at various depths, buoy, and satellite.) Measurements in a wind tunnel are problematic because the rate of evaporative cooling depends on relative humidity, which is likely to be higher over the ocean than in a wind tunnel.
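
    To make the size of that adjustment concrete, here is a minimal sketch (my own simplification, not the FP95 method, whose corrections vary geographically and seasonally) applying a correction that ramps linearly from +0.11 degC in 1856 to +0.42 degC by 1940 to a hypothetical raw series that is flat over the period:

    import numpy as np

    years = np.arange(1856, 1941)
    raw_anomaly = np.zeros(years.shape)                         # hypothetical flat raw record
    correction = np.interp(years, [1856, 1940], [0.11, 0.42])   # linear ramp between the quoted endpoints
    corrected = raw_anomaly + correction

    print(f"warming implied by the correction alone: "
          f"{corrected[-1] - corrected[0]:+.2f} degC over 1856-1940")   # +0.31 degC

    Under those assumptions, essentially all of the apparent rise over 1856-1940 would come from the correction, which is the reading of the abstract suggested above.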

  38. I’m just going to throw this one out there as I don’t think anyone has mentioned it – and it’s a bit off topic – but think of the way that the majority of paleoceanographers and paleoclimatologists reconstruct past ‘sea surface temperature’ – they rely on planktonic foraminifera that actually live ~30 m below the surface (in the case of species such as G. ruber)…

  39. for Pekka Pirilä | June 29, 2011 at 4:42 am |

    Sorry, not allowed to reply after your post. I hope you see it here.

    Look, I think you’re rubbing it in, as we say in Oz. I don’t think scientists are incompetent or stupid. I’ve spent my working life with scientists of all persuasions, and have two brothers, each of whom follows his own scientific trade.

    I do think that there has been a good deal of adjustment without explanation, which is then accepted by the next scientist, who builds in the new data, makes some adjustments, and so it goes. That has happened elsewhere, and I would be quite surprised if it hasn’t happened here.

    But I’ll do something of what you have asked, and go and read. You didn’t offer me any suggestions about good examples of what you have in mind (ie the intelligent competent scientists who check what has been done and make their methodology and data available to all). So I’ll be more than a little peed off if I find such paragons lacking. And I’ll be back to tell you, either way.

    I’ll start by asking Tonyb (are you listening?) for some good examples. He may know of some.

    • Don

      I think your comment has become disconnected from the others so are you asking me for reading concerning the temperature record or is it a more general question?

      tonyb

      • Thanks, Tony. You have read a great deal in this area. Can you point me towards papers where those interested in SST have looked at the errors involved, suggested necessary adjustments, shown how they do it, and argued for the result?

      • The new thread on the uncertainties in HadSST3 tells well what I had in mind. The analysis presented by the Hadley/Met Office authors in their papers is the type of work that must be acknowledged when the problems are discussed, and the critique of those papers by Judith is criticism of the actual scientific knowledge.

        In combination with the new thread the posting of Tony has a very useful role in describing the difficulties facing the scientists, but without this new thread the overall picture would have been really one-sided.

      • Pekka

        My post was primarily intended to highlight the uncertainties and record the history of the means of gathering the data. It is important that a light continues to be shone on this very important subject as we move towards a scientific analysis of the material.

        But at the base of it all we mustn’t lose sight of the extremely random manner in which much of the early material was gathered, and, no matter the degree of scientific analysis, we must recognise that much of it was collected in a manner that renders the margins of error too great for it to be as accurate and as meaningful as is claimed.

        Material gathered in the last half century is another subject.
        tonyb

      • There are two related, but separate issues.

        1) All specific problems with existing historical data.

        2) The influence these problems have on the present knowledge about the phenomenon of interest. (In this case the development of the average SST.)

        Your posting was good for the first point, but most, if not all, readers interpreted it to present strong claims on the second point. I still believe that it did present strong claims on the second point. This is where it was not at all balanced, as it failed to recognize well enough that the problems had been taken seriously by the scientists, and that the scientists had done so much work on these issues that no valid judgment on the state of understanding could be presented without consideration of the outcome of this work.

        With the previous paragraph I don’t make any claims on the accuracy of the results other than that leaving the existing state of science out of consideration, or giving it very little weight, makes the posting worthless for the second point.

        I reacted to this so strongly because this way of misrepresenting the state of scientific knowledge goes on all the time, and often also on this site. (What kind of misrepresentations are done by some people of the other extreme is another problem, to be criticized based on its own faults. That’s being done quite extensively also on this site.)

        The fact that a scientifically literate non-specialist doesn’t know the solution to a problem doesn’t mean that no solution exists. It doesn’t mean either that scientists have not been open enough in the publications. To answer, what is the state of science one must bite the bullet and make the effort of looking at the publications and other representations of the science.

        When scientists argue that decisions must be based on their results, they must take the burden of presenting the most essential knowledge clearly enough for non-specialists to understand what’s known and what’s not. This requirement cannot be extended to every scientific result and detail of analysis, because that would be a total waste of effort and even impossible in many cases. In the case of SST we happen to have brand new (too new for you to take into account in the posting) summarizing representations from Hadley, but similar information was available also earlier.

        It’s right to criticize science, but that should be done taking into account all present knowledge, not only a selected fraction of it.

      • Pekka

        I was writing a short article, not a book. In it I wanted to show the uncertainties in the basic information the further back in time you go, and I referenced several studies in which the people formulating the information expressed those uncertainties.

        These included a highly detailed letter from Phil Jones to Nature, one from Ashford in 1947 when this was a really hot subject, one from Parker in 1985 and several more general ones.

        I knew of Judith’s interest and quite rightly she is now highlighting other areas that deserve to be explored.

        Fortunately there are people from both sides of the debate on this blog which invariably means that whatever the subject of the article under discussion it will be examined from all sides and key points will be raised.

        Incidentally, I very much value your informed comments on this blog on a variety of subjects; it is important to have a spread of opinion.
        tonyb


    • Nebuchadnezzar

      Hi Don. There’s a recent review on systematic errors in SST measurements that might be a useful place to start. I think it’s behind a paywall now, though; it was formerly free.

      http://wires.wiley.com/WileyCDA/WiresArticle/wisId-WCC55.html

      There are a whole bunch of studies referenced in the paper looking at systematic errors in the data.

      • Bias adjustments must be re-evaluated when new or digitized data are added to the record. Despite this, gridded SST datasets that include bias adjustments and estimates of bias uncertainty are available starting from 1850, and uncertainties are small enough to allow determination of climate change.

        Interesting, that.

        What say you, tony?

      • Joshua

        Yes, interesting. The theory is fine, but the implementation relied on humans carrying it out, often in difficult conditions. The spatial coverage is terrible and the number of observations minuscule, as you can see from the graphs.

        Don’t forget it is the historic record that I think is pretty useless. The stuff since 1980 from satellites is much better (although not on a like-for-like basis with what went before), and the record is arguably broadly acceptable for several decades before that in well-travelled areas, mostly Western Europe and the US.

        So for over a century we have a record that is severely flawed yet we believe we can parse this material to tenths of a degree.
        tonyb

      • All the statistical parsing is an attempt to evade the “Least Significant Digit” rule: no result can be more accurate or certain than its least accurate or certain component. (By “component” I mean, of course, a factor relied on in the construction of an uncertainty estimate.)
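
        As a minimal numeric illustration of the rule as stated (a sketch with made-up numbers, not any actual SST component), combining a precise component with an imprecise one leaves the result dominated by the imprecise one, and the result should not be quoted to more places than that component supports:

            import math

            # Two hypothetical components: one precise, one crude (illustrative values only)
            a, sigma_a = 15.372, 0.005   # e.g. a well-calibrated reading
            b, sigma_b = 14.9, 0.5       # e.g. a rough bucket-style reading

            total = a + b
            sigma_total = math.sqrt(sigma_a**2 + sigma_b**2)   # independent errors add in quadrature

            print(total, sigma_total)                          # 30.272..., ~0.500 -> set by sigma_b
            print(round(total, 1), "+/-", round(sigma_total, 1))   # quote only to the tenths place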

      • Brian H, 6/30/11, 3:51 am, SST

        How ’bout them GCMs (Global Climate/Circulation/Catastrophe Models), eh? They can’t predict the weather (LSBs), and they can’t account for climate on paleo scales (MSBs), but in between they are so good that IPCC can justify spending the standard of living of the Western World for a hobgoblin.

        GCMs only get the middle significant figures right.

      • Nebuchadnezzar

        Thanks JCH, I thought I was never going to see it again. Damned paywalls. Kudos to WIRES for giving it away free when they did though.

  40. We can waffle around about “canvas buckets = wooden buckets + 0.11C adjustment”, etc., but the saline point in Tony’s article is this:

    Prior to satellite altimetry we had a lousy system for measuring upper ocean water or sea surface temperatures (whether from canvas buckets, wooden buckets, ship intake, etc.). Expendable buoys were too spotty to tell us much. Satellite altimetry had the skin temperature problem, so did not really tell us that much, either.

    Since the SST record comprises around 70% of the global record, this represents a large source of uncertainty concerning past observed warming.

    Since 1979 we do have a satellite record of tropospheric temperature trends over land and sea.

    We now have a very expensive and comprehensive ARGO system, but it has only been in operation since 2003, and it has shown net COOLING of the upper ocean rather than the warming that was expected.

    So let’s just say that there is a lot of UNCERTAINTY regarding the ocean and sea surface temperature and the records to date should be taken with a large grain of (sea) salt.

    Max

    • Agreed. But “a … grain of … salt” doesn’t mean what you think it does. It’s an actual quantity, not a particle.

      “10. gr the smallest unit of weight in the avoirdupois, Troy, and apothecaries’ systems, based on the average weight of a grain of wheat: in the avoirdupois system it equals 1/7000 of a pound, and in the Troy and apothecaries’ systems it equals 1/5760 of a pound. 1 grain is equal to 0.0648 gram
      11. Also called: metric grain a metric unit of weight used for pearls or diamonds, equal to 50 milligrams or one quarter of a carat”

      So about 1/200 gm. Or 1/1275 tsp. Approximately.
      ;)

      • BrianH

        Perhaps we need to be more precise: how about a ‘Dead Sea’ of salt?
        Care to work that one out? :)

        tonyb

      • Near as I can figger, about 10 cubic miles. Which is lotsa tsp., and 100 times as many grains.
        :(
        ;)

    • tempterrain

      Would an engineer opt for a higher or lower level of caution given an increased level of uncertainty?

      I’d say higher – almost certainly. It wasn’t Titanic’s engineer who ordered full steam ahead when the iceberg situation was quite uncertain. But you seem to like to argue the case for uncertainty. Is this because you feel we should be more cautious than most in our assessment of the likely harm which may be caused by increased atmospheric GH gas concentrations?

      • Snuck in the weasel-work “likely” there, dincha? About as likely as an unprecedented worsening of living conditions when it gets warmer, instead of the much improved ones experienced both historically and prehistorically.

        Which is approximately bupkis.

      • typo: weasel-word. Though “work” sorta works, too.

    • Max,
      I spent some effort trying to find where the observation is made that ARGO shows cooling, but without success, excluding early papers that have been made obsolete by corrections to the data or by extending the period beyond the shortest one of all, that used by Loehle.

    • Your opinion isn’t evidence.

    • Nebuchadnezzar

      Prior to satellite altimetry we had a lousy system for measuring upper ocean water or sea surface temperatures (whether from canvas buckets, wooden buckets, ship intake, etc.). Expendable buoys were too spotty to tell us much. Satellite altimetry had the skin temperature problem, so did not really tell us that much, either.

      Satellite altimeters measure sea level height. Satellite radiometers are the ones that are sensitive to sea-surface temperature. Drifting buoys were deployed starting in the late 1970s, but numbers only really took off in the 1990s.

      The skin/sub-skin difference is an important effect to understand (I’m not sure exactly which one you are referring to – the cold skin effect, or the diurnal warming effect), but one can minimise the problems due to solar heating by using night-time temperature measurements only, or by only using measurements when the wind speed is sufficiently high to mix the near-surface layer. The latest data sets use physical models of the near-surface structure to convert skin temperatures to bulk temperatures and vice versa. These are problems that have been thought about by the scientists involved and are clearly not insurmountable, given that many of them have been surmounted.
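
      As a rough sketch of that kind of screening (the field names local_hour and wind_speed_ms and the thresholds are placeholders, not the actual processing used in any dataset):

          import pandas as pd

          def screen_for_diurnal_warming(obs: pd.DataFrame, night=(22, 6), min_wind_ms=6.0):
              """Keep night-time observations, or daytime ones only when the wind is
              strong enough to mix the near-surface layer (illustrative rule only)."""
              hour = obs["local_hour"]
              at_night = (hour >= night[0]) | (hour <= night[1])
              windy = obs["wind_speed_ms"] >= min_wind_ms
              return obs[at_night | windy]

          # usage: screened = screen_for_diurnal_warming(ship_and_buoy_obs)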

      There are other problems with satellites besides the skin effects – aerosols and dust in the atmosphere, drifts in the orbit leading to a changing time of measurement, imperfectly calibrated sensors and so on. The key thing is that these are all known problems, and some satellites have been designed specifically to overcome them. The ATSR instruments (e.g. http://envisat.esa.int/instruments/aatsr/) have been flying since 1991 and were designed to measure SST trends to much better than 0.1K/decade accuracy. They are placed in very tightly controlled orbits that exhibit very small drifts, so that the satellite passes over the equator within a few minutes of 10:30 throughout its lifetime. There are on-board black bodies so that the sensors can be regularly calibrated. The satellites take two views through the atmosphere to efficiently detect and minimise the effects of dust.

      In comparisons with ARGO data the AATSR instrument showed single-measurement uncertainties of 0.15K; see:
      https://www.ghrsst.org/files/download.php?m=documents&f=GHRSSTXI_STVAL_Report.ppt

      That’s for a single measurement! The systematic biases relative to drifting buoy measurements are, according to this document:

      http://xweb.geos.ed.ac.uk/~chris/Publications/ARC-Initial-Validation.pdf

      less than 0.1K. Satellites really ought not to be written off completely.

      Other satellites are less accurate than the ATSRs, but can and have been calibrated against drifting buoy measurements.
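
      For what it’s worth, bias and scatter figures like those quoted above are essentially summary statistics over collocated satellite/buoy pairs; a sketch of that calculation (column names are placeholders, not the actual ARC or GHRSST processing):

          import numpy as np

          def matchup_stats(sat_sst_k, buoy_sst_k):
              """Mean bias and standard deviation of satellite-minus-buoy differences (kelvin)."""
              diff = np.asarray(sat_sst_k, float) - np.asarray(buoy_sst_k, float)
              return diff.mean(), diff.std(ddof=1)

          # usage with collocated values:
          # bias, scatter = matchup_stats(matchups["aatsr_sst"], matchups["buoy_sst"])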

      Since the SST record comprises around 70% of the global record, this represents a large source of uncertainty concerning past observed warming.

      Actually, the fact that we have SST measurements massively reduces the uncertainty in global average temperature. It would be much larger if we didn’t have *any* SST measurements at all. The question is how accurate those measurements are. Even before the magic date of 1950 (or 1980, depending) the SST measurements are good enough to resolve El Nino and La Nina events, even going back to the 1880s. That the events happened can be seen in records of station pressures and through the impacts that the events have.

  41. Max

    The trouble is of course that they are seen as highly accurate measurements that are one of the unshakeable cornerstones of climate science. Those whom we would like to take them with a large pinch of sea salt seem to have mislaid their salt cellar.
    tonyb

  42. Sorry. Messed up my arithmetic, above. It’s about 1/16 gm., or 1/100 tsp.

  43. “The obvious gulf between the ideal and the reality leads naturally to questions about the reliability of the surface temperature record. Often this question is couched as a yes/no dichotomy: are surface temperature records reliable? But a more scientific question is “How reliable are surface temperature records?”. Because historical measurements were not made for climate research does not mean that it is impossible to derive a record that is useful for climate research from those observations … ”
    ______

    From the comments in this thread, I get the impression critics would rather couch the question of SST record reliability in a “yes/no dichotomy,” and let it go at that, rather than deal with the more scientific question of “how reliable.”

    http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html

  44. I was very excited about seeing the results of Argo but still don’t seem to have seen any results. It has been 7-8 years now, so I would expect many papers by now. Also, how much does it actually cost to launch and maintain 3000+ automated measuring buoys? It seems very expensive compared to the number of relevant papers I’ve seen.

    • Like the BEST data, it may be that they are documenting, according to the consensus, inconvenient untruths that must be adjusted to death or hidden away.

  45. “… if there is extra heat in the ocean from any source, the cloud cover increases. The thunderstorms increase. This is visible in the course of each day. As the sea heats, clouds increase. A change in albedo of 2% is equivalent to two doublings of CO2 (from the current 380 ppmv to 1,520 ppmv!)… so a tiny, undetectable change in cloud cover is more than enough to offset any conceivable variation in CO2…

    “Since we only use 70% of the sun’s energy, it is clear that the sun puts out more than enough energy to totally roast the earth. It is kept from doing so by the clouds reflecting about 20% of the sun’s energy back to space, and the surface reflecting back another 10%. As near as we can tell, this system of cloud formation to limit incoming solar energy has never failed.” ~Willis Eschenbach
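
    For what it’s worth, the forcing arithmetic in the quote roughly checks out under textbook values (solar constant about 1361 W/m², so about 340 W/m² averaged over the sphere, and about 3.7 W/m² per CO2 doubling); whether cloud cover actually behaves as a thermostat is, of course, the disputed part:

        S = 1361.0            # solar constant, W/m^2 (standard value)
        per_doubling = 3.7    # canonical radiative forcing per CO2 doubling, W/m^2

        delta_albedo = 0.02   # two percentage points, as in the quote
        delta_forcing = delta_albedo * S / 4.0              # change in globally averaged absorbed solar
        print(delta_forcing, delta_forcing / per_doubling)  # ~6.8 W/m^2, i.e. roughly two doublings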

  46. maksimovich

    Judith says

    Fred, I agree that ocean acidification is worthy of its own thread, but I don’t have any expertise on this nor the time to investigate. We can have a discussion thread, or someone can volunteer to do a post?

    As a complex issue, framing is indeed difficult. An interesting approach would be to focus on the overextension (extrapolation) of possibilities beyond the level of skill, in press coverage or indeed in the papers themselves. In a nutshell, science by press release: crisis sells. E.g. A. J. Andersson and F. T. Mackenzie 2011:

    There is no doubt that this major change in seawater carbon chemistry is of great concern in terms of how it will affect individual marine organisms and ecosystems. It is critical that scientists remain objective about this issue and provide objective statements and advice regarding the potential consequences of ocean acidification. The potential threat to individual organisms and organism communities and to ecosystem services and resources that will arise from OA are very serious, but it is important to not exaggerate and provide sensational statements that are misleading and not substantiated by the currently available scientific data. In the long term, this may damage the credibility of scientists and their ability to influence public and political stakeholders to make educated science-based decisions regarding CO2 emissions, climate and the issue of OA. Unfortunately, the tendency of the popular media to focus on sensational news has become increasingly apparent in science as well. In the scientific literature, and especially in some of the journals with the highest impact factor, we feel that an increasing number of articles with sensational statements and/or far-fetched extrapolations, but with little or weak information about the details of the actual research and science, have appeared with increasing frequency. Interestingly, it is these statements or extrapolations that the popular media commonly pick-up on and reveal to the general public.

    http://www.biogeosciences-discuss.net/8/6161/2011/bgd-8-6161-2011.pdf

    • Complex, yes, but in every field there are trenchant observations and fun facts. Here’s an unattributed one (don’t have the (recent) article ref. to hand):

      Increased bicarbs in the seawater do, in fact, dissolve the shells of dead organisms faster. But they enable much easier and quicker building of those shells in the first place. IOW, it accelerates the cycling of calcium compounds in and out of living shells, dead shells, and seawater, at the expense of depositing limestone and chalk.

      The net result would then be that increased atmospheric and dissolved CO2 would boost the mass of living shellfish, corals, and diatoms, etc.
      Oops!

  47. Consensus seekers, SST

    Consensus has a legitimate, though informal, role in science. It applies to honors given in recognition of scientific achievement, as when the community broadly attaches a scientist’s name to his model or discovery, or when it recognizes that a model has matured along the track from conjecture to hypothesis, theory, and finally to law.

    Consensus has a role in religion and politics, where individuals seek to have their views reinforced but lack objective support for their positions. It is a psychological need of profound power. It leads to movements, to missionaries and revolutionaries, sometimes of the worst kind, built in part on a need for power and on deep-seated senses of personal inadequacy – lack of comfort in one’s own skin.

    Consensus has no role whatsoever in the scientific method, but that does not prevent those on the fringe from converting some model or other into a belief system, from quashing opposition, and from becoming missionaries for their adopted cause.

    The case in point is AGW. Some of its missionaries post here, and some even sport titles. The difference between scientist and missionary is easily discerned. A scientist will defend his position openly, completely, objectively, and patiently, including assumptions and limitations, and facts and models. When asked, he will supply available sources, and he will supply as much rational argument as his audience requires. Either that or he will be silent.

    Missionaries, on the other hand, rely on conjectures; on proclamations, e.g., equilibrium, catastrophe; on fallacies, especially ad hominem and appeal to authority; on fear and ignorance; and, of course, the ultimate appeal to authority, the power of the alleged herd – the Consensus.

  48. Consensus may not have any rightful role in the scientific method, i.e. in the way scientific research is done in each specific case.

    Consensus interpreted in the way I did in an earlier comment in this thread is, however, an essential part of the scientific process, as the best scientific knowledge on any issue at any moment cannot really be described as anything other than the consensus view. The consensus is the only way Newton’s or Einstein’s work has been accepted. Somebody could say that they have been accepted because experiments have verified them, but who has decided whether the experimental evidence is sufficient or not? The only valid answer is: consensus.

    That consensus is not immutable; it is changing all the time, but at any moment there is a consensus. This consensus is not only about best estimates or most likely theories, but also about their uncertainties and about the likelihoods of the most likely and other possible theories.

    It’s not possible to define what makes that consensus (how many must agree, and how opposing views are weighted), but if it looks like a duck and walks like a duck …

    Jim Owen accepted this general view, but emphasized the cases where the consensus is distorted by strong individuals and outside pressures. This happens, but I don’t believe that it is even nearly as prominent as many skeptics believe. People go into science because they think that they can invent something that others have not found out or accepted. Having differing views is a great and even dominating value for every good scientist. Not every person classified as a scientist is a good scientist, and there are opposing pressures, but the fact that good scientists always search for the places where their views differ from the consensus of the moment makes the problem much smaller than it would otherwise be.

    • Pekka,

      You have too little scientific (sceptical) attitude. You think some kind of consensus is necessary for science. For bureaucracy, yes, but for the progress of science no consensus is necessary.

      • There will always be consensuses, but the goal of science should be to attack them and try to falsify them.

      • Paul Dunmore

        The goal of science is to try to work out how the world actually works. If we get it right, it is likely that everyone knowledgeable will eventually agree; but it is also possible that everyone knowledgeable actually agrees when they are mistaken. Consensus cannot settle how the world actually works, but it is a possible indicator of success.

      • Paul Dunmore, 7/2/11, 3:41 am, SST

        Science is a branch of knowledge whose content lies in objective models of the real world, whether natural or manmade. Those models predict events in the real world. If a model is contradicted by facts in its domain, it is falsified. If it is incomplete, it is a conjecture. If it is complete and makes novel predictions it is a hypothesis. If such a prediction is validated by fresh facts, the hypothesis becomes a theory. If every possible prediction of the model has been validated, the theory becomes a law. The consensus of the scientific community comes into play in deciding where a model falls in this schema, if it is elegant enough to be new, and in what name to give to it.

        Science is not about “balance” or “trust”. The test is simple and unique. It is whether the model has predictive power. Only a little thought is necessary to convince oneself that predictive power trumps all other standards. Consider any model. Suppose that this model meets all criteria set for scientific models by anyone except for one criterion: it has no predictive power. That model is scientifically useless except as a hypothesis, or a bad example and a path not to pursue.

        On the other hand, suppose your thought model fails every criterion set by anyone for scientific models, e.g., Popper’s criteria, balance, trust, published, peer-reviewed, conventional, except for one: it makes better-than-chance predictions demonstrated by statistically significant fresh measurements. Those measurements reflect not so much “how the world actually works” but how it projects on man’s puny senses and instruments. That model with predictive power is successful, and worthy of the name theory.

      • Science is not merely a collection of well defined models that can be judged by comparing their predictions with uniquely interpretable experiments. Science works at the edge of knowledge and judging its success is in most cases much more complicated and subjective. The combination of subjective judgments of scientists knowledgeable on each field forms the consensus view as I and many others understand and use the concept.

        As I wrote before, only this consensus view can tell, whether a set of experimental results confirms the hypothesis or gives evidence for the correctness of a theory to the satisfaction of the relevant science community, and this community is the one that acts as the judge based on its own informal rules.

      • Pekka Pirilä, 7/2/11, 11:03 am, SST

        I agree with you that subjective judgments … form[] the consensus. That is a sufficient reason to reject consensus as a part of science.

        The politics of science aside, its execution tolerates no subjectivity.

        Subjectivity may survive in academic science, because what matters there is getting published. After all, one of the Departments of Physical Sciences in the University of California system had a school of the paranormal. When academic subjectivity happens, it is a temporary anomaly, certain to be proven useless. When an iota of subjectivity sneaks into industrial science, it is weeded out the instant that it might be meaningful.

        Of course, there’s always the possibility of a lucky guess. Unfortunately for AGW, the toast fell jelly-side-down at every opportunity.

      • But there is no science completely free from subjectivity, and new scientific results are practically always very much affected by subjectivity.

        I’m certainly not advocating relativism, only accepting realities.

        Of course scientists must do their best to obtain and use objective evidence, but they cannot do more. Well-established scientific knowledge is not so much influenced by subjectivity, because it is the consensus view of a wide enough scientific community, where the independent subjective judgments combine into a less subjective whole.

        I repeat the request: If you disagree, describe a workable alternative.

      • You are totally wrong. Without proper trust in earlier science, progress almost stops, as everybody will be using all their time verifying earlier claims. At the same time, the earlier science must not be trusted blindly. Finding the right balance is one of the requirements for a good scientist.

        How do you think that you can judge whether I have too much or too little trust in earlier science and the momentary consensus view?

      • Pekka Pirilä, 7/2/11, 11:44 am, SST

        There is no science completely free from subjectivity? Where is the subjectivity in Newtonian Mechanics, in Thermodynamics, in General Relativity, in Probability Theory, in Communication Theory? These all have mathematical forms, and I find no subjectivity in mathematics.

        I’ll give you that AGW is loaded with subjectivity. Not accidentally, it leaves out the most significant climate processes, (especially cloud cover, solar variability, and natural CO2 flux, including solubility and ocean currents) and creates situational physics, including systems science, by redefining terms, (especially equilibrium and feedback), to create impossible states (especially an unregulated climate, and a stagnant, bottlenecking, acidifying surface layer) using trickery (correlation replacing cause and effect, hockey stick constructions, and chartjunk), fear (catastrophe, extreme events), secrecy and suppression (CRU email scandal).

        That is the essence of where we differ. You consider AGW science. I do not. True, once it was, as a conjecture in 1936, but no longer.

        The workable alternative to subjectivity you requested is objectivity. The alternative to models that sound or look reasonable, that explain or describe (subjective concepts), is to honor those with predictive power. The alternative to consensus formation is to let the proof be in the pudding, dish by dish.

        We credit Newton for saying I have stood on the shoulders of giants. He honored not the collective, the consensus du jour, but those giants who provided the shoulders. Reportedly (Wikipedia) he named names: Descartes and Hooke.

        I have often thought that IQs are admittances, not impedances. They add like resistors in parallel. The IQ of the consensus, if it were to exist, is less than the least IQ among its members.

        Then I have no problem with the notion that the consensus is a macroparameter, like global average surface temperature (including the topic here, SST), or global average albedo, concepts that are estimable but not measurable. The domain of thermodynamics comprises macroparameters. Climate is a thermodynamic problem.

      • First of all, I have written many comments in this thread about science in general, not specifically about climate science, although the comments certainly also apply to climate science. Understanding of the Earth system including climate is definitely not complete. Rather, it is acknowledged by everybody that significant gaps in knowledge persist. (In my terminology, it is also the consensus view that significant gaps persist.)

        I consider climate science science. AGW is not by itself science but just a phenomenon that can be studied by climate science. Some parts of AGW are qualitatively such obvious consequences of the knowledge that climate science and supporting basic sciences like physics have produced that there is no doubt in my mind about their reality. The increase in CO2 concentration is largely accounted for by anthropogenic influence, the greenhouse effect is an important factor in determining the surface temperature, CO2 has a significant share in that, and more CO2 is going to cause warming. These are all reliable results of the understanding that is the consensus state of climate science. Going further to details like the value of climate sensitivity and the feedbacks that influence it, I am certainly not as confident, but neither does IPCC AR4 claim that they are known with good accuracy, as the uncertainties are listed as rather large.

        Climate is more than a thermodynamic problem. Thermodynamics and radiation physics are important background sciences, but studying the Earth system including climate belongs to the Earth sciences (or geosciences).

        I dislike statements that AGW is or is not science when they are made as evidence for or against the truthfulness of the phenomenon of AGW or of the basic understanding of the processes involved. As I wrote above, I would, however, not formulate my view as saying that AGW is science.

      • Pekka Pirilä, 7/2/11, 1:34 pm, SST

        Our little dialog has converged on what seems to be the core question: the existence of AGW. Let me address your points categorically.

        (1) Background: AGW is supposed to be an objective model to account for climate, making it a part of science. As set forth first by Callendar in 1936, as what became known as the Callendar effect, it was a conjecture. It could not account for the past climate record. AGW has always lacked an objective criterion by which to define its domain, and without that it remains open-ended back to the creation of Earth. It cannot be declared to begin at the beginning of the industrial era. As a minimum, that would at least beg the question of its existence.

        (2) Background: Climatology draws from several other sciences, including thermodynamics. That overarching field is not restricted to the state of matter, and so encompasses climate. What is especially germane about thermodynamics is that it deals with macroparameters, parameters which are not directly observable. Because climatology defines climate in terms of macroparameters, it puts climate in the field of thermodynamics. Still, I have no problem with the overlapping of the many fields of climate and Earth science.

        (3) The observed increase in atmospheric CO2 is about 6% anthropogenic. The fingerprints alleged by IPCC are examples of the worst kind of science, worthy of being called frauds. Contrary to IPCC’s claims and demonstrations, the isotopic lightening of CO2 does not track the anthropogenic CO2 emissions, nor does the depletion of atmospheric O2 track the rise in CO2. These claims needed support with mass balance analyses, which were never done, and instead were shown by chartjunk: adjusting graph ordinates to make parallel what was not parallel, and to rely on visual correlation instead of math. IPCC managed to raise far more suspicions than solutions.

        Regardless, what needs to be done is to measure whether the rise in CO2 leads the effect it is supposed to have caused, as it must. This requires analysis with the correlation function. IPCC provides no evidence that that has ever been done. It can’t be done based on the published Keeling Curve. IPCC has over masticated the MLO data until it is neither recognizable nor useful. The famous Curve is a reconstruction, only slightly less suspicious than Mann’s tree ring reconstructions. It lacks the fine structure necessary for the correlation function.
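
        The lead/lag test being called for can at least be sketched in outline: correlate two monthly anomaly series at a range of lags and see which sign of lag gives the peak. This is only an illustration of the method, with hypothetical inputs; it says nothing about what the MLO data would actually show:

            import numpy as np

            def lag_correlation(x, y, max_lag=24):
                """Correlation of x against y shifted by lag months (series of equal length);
                a positive lag means x leads y."""
                x, y = np.asarray(x, float), np.asarray(y, float)
                out = {}
                for lag in range(-max_lag, max_lag + 1):
                    if lag >= 0:
                        a, b = x[: len(x) - lag], y[lag:]
                    else:
                        a, b = x[-lag:], y[: len(y) + lag]
                    out[lag] = np.corrcoef(a, b)[0, 1]
                return out

            # usage: corr = lag_correlation(co2_anomaly, temp_anomaly); best_lag = max(corr, key=corr.get)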

        IPCC has failed to account for the natural flow of CO2 through the ocean. CO2 saturated sea water at about 4ºC is drawn to the surface by the Ekman Pump where it outgasses into a plume of hot moist CO2-rich air. That air rises, divides, enters the Hadley Cells where it feeds the Westerlies, and descends on Hawaii. That places MLO in the plume of the oceanic outgassing that is about an order of magnitude greater than the ACO2 emissions.

        2. IPCC admits CO2 is not sufficient to cause a significant rise in surface temperature. Consequently, it applied two remedies to this collapse of its conjecture. It created a bottleneck in the surface layer of the ocean to the uptake of CO2 by invoking not just carbonate chemistry, but equilibrium carbonate chemistry! The notion that the surface layer is in equilibrium is ludicrous. That same bottleneck had the advantage, though, of causing ACO2 to acidify the ocean, a bonus scare feature. IPCC ignored the physics of dissolution, i.e., Henry’s Law, and turned Henry’s Coefficients into a way to sense the state of the surface layer. Now IPCC had a cause for the atmosphere to be a buffer to hold excess ACO2 waiting for the deep ocean to sequester CO2 with the two biological pumps.

        By the way, when IPCC tried to measure the Revelle Factor, the bottleneck measure, it rediscovered Henry’s Law. When questioned about this problem in review of the AR4 draft, it discarded the results so as not to create confusion among the readers.

        Two other problems awaited this fantastical model. The build up in CO2 was still not enough to create the necessary warming: big enough to be a catastrophe for humanity, yet small enough that it could not be tested in our lifetimes. So IPCC invented an amplifier. Warming initiated by CO2 would release water vapor, and that water vapor was just the right amount. This is a most complex control system model.

        Still remaining was the problem that the ocean has no way to discriminate between anthropogenic and natural CO2. It has no way to bottleneck one and not the other. At best the two species of CO2 are merely different mixes of the same three isotopes, and they create a different mixture in the air, never to be separated. Henry’s Coefficients are known to be dependent on temperature and partial pressure first, and secondarily on salinity. That it might be dependent first on the pH of the water, as IPCC implies, and secondarily on isotopic weight, puts these discoveries in real Nobel Prize territory.

        The conclusions are that CO2 does not accumulate in the atmosphere, that it is not well-mixed, that it is not long-lived, and that the various global stations should not be calibrated into agreement. I conclude that Henry’s Law proceeds apace, i.e., instantaneous on climate scales, with the turbulent mixed layer acting as the buffer for any excess CO2. That surface layer seems to be the surface component of the thermohaline circulation, absorbing CO2 as it cools in its year long cycle of return to the poles. I predict that less heavily filtered CO2 at MLO will be seen to lag surface temperature.

        I agree that the greenhouse effect is real and accounts for the surface temperature being as temperate as it is in Earth’s warm state. It does not control or regulate the surface temperature, however. Cloud cover does that as a negative feedback to warming and a positive feedback to solar radiation. Callendar, by the way, predicted the negative feedback part of cloud cover in 1936, and affirmed that the climate is not in equilibrium. CO2 would increase the greenhouse effect, but it would be lost in the noise of estimating temperature, and furthermore mitigated by cloud cover. Callendar recognized that the ocean was the great reservoir for atmospheric CO2, but erred to suggest that the flux of CO2 could be in equilibrium.

      • Jeff,

        (1) Background: AGW is supposed to be an objective model to account for climate, ..

        You change my words. What I say is that well-known physics and reliable information about the state of the atmosphere allow certain conclusions about AGW. Those are essentially the ones that are discussed on this site in threads about no-feedback climate sensitivity. All that follows from very reliably known theories and factual knowledge on the present atmosphere. Because the influence of anthropogenic carbon emissions is being considered, the timing is determined by the growth in these emissions. All this is true, but of course this does not tell much about the strength of the effect. An order of magnitude is, however, given by the no-feedback climate sensitivity.

        To be completely clear: The no-feedback climate sensitivity is a theoretical concept calculable from well-known input. It’s not a property of the real atmosphere, but it’s to be expected that the real atmosphere has a climate sensitivity, which is of the same order of magnitude.
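
        For orientation only, the order of magnitude referred to can be sketched from nothing more than the Stefan-Boltzmann relation at the effective emission temperature (T_e of about 255 K) and the canonical 3.7 W/m² per CO2 doubling; the commonly quoted no-feedback figure of roughly 1.1-1.2 K comes from doing the same calculation with full radiative transfer:

            \lambda_0 \approx \frac{1}{4\sigma T_e^{3}} \approx \frac{1}{4\,(5.67\times10^{-8})\,(255)^{3}} \approx 0.27\ \mathrm{K/(W\,m^{-2})},
            \qquad
            \Delta T_{2\times} \approx \lambda_0\,\Delta F_{2\times} \approx 0.27 \times 3.7 \approx 1\ \mathrm{K}.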

        Because climatology defines climate in terms of macroparameters, it puts climate in the field of thermodynamics.

        You seem to claim that everything defined in terms of macroparameters is thermodynamics. I disagree.

        (3) The observed increase in atmospheric CO2 is about 6% anthropogenic. The fingerprints alleged by IPCC are examples of the worst kind of science, worthy of being called frauds.

        Nonsense. The fingerprints are just some additional studies that confirm our general understanding of the carbon cycle. The increase is most certainly dominated by anthropogenic influence. There are no serious proposals for alternatives. We know the amount of emissions; we know the rate of increase in the atmosphere; we have less accurate, but still sufficient information about the natural carbon cycle, to conclude that the CO2 concentration wouldn’t have risen much if at all without the anthropogenic contribution. There are uncertainties, but they are a small fraction of the total increase.

        Regardless, what needs to be done is to measure whether the rise in CO2 leads the effect it is supposed to have caused, as it must.

        It would be very nice if climate change were so easy to study, but unfortunately it is not, as you know perfectly well. Therefore more complex scientific research is needed.

        2. IPCC admits CO2 is not sufficient to cause a significant rise in surface temperature. Consequently, it applied two remedies to this collapse of its conjecture. It created a bottleneck in the surface layer of the ocean to the uptake of CO2 by invoking not just carbonate chemistry, but equilibrium carbonate chemistry! The notion that the surface layer is in equilibrium is ludicrous. That same bottleneck had the advantage, though, of causing ACO2 to acidify the ocean, a bonus scare feature. IPCC ignored the physics of dissolution, i.e., Henry’s Law, and turned Henry’s Coefficients into a way to sense the state of the surface layer. Now IPCC had a cause for the atmosphere to be a buffer to hold excess ACO2 waiting for the deep ocean to sequester CO2 with the two biological pumps.

        A series of patently untrue claims and accusations. Again many of the issues are within the realm of well understood basic sciences, chemistry in much of the above. Again it’s true that applying the basic science to complex systems has its problems, and leads to inaccuracies, but the overall processes are clear and various details are known with variable accuracies dependent on the actual case being studied.

        Two other problems awaited this fantastical model. The buildup in CO2 was still not enough to create the necessary warming: big enough to be a catastrophe for humanity, yet small enough that it could not be tested in our lifetimes. So IPCC invented an amplifier. Warming initiated by CO2 would release water vapor, and that water vapor was just the right amount. This is a most complex control system model.

        Do you claim that warming does not increase evaporation and absolute humidity in the atmosphere, or that these effects would be much smaller than estimated by climate scientists? Seriously?

        Still remaining was the problem that the ocean has no way to discriminate between anthropogenic and natural CO2.

        What problem? Who cares where the molecules come from? Only their total amount matters.

        The conclusions are that CO2 does not accumulate in the atmosphere, that it is not well-mixed, that it is not long-lived, and that the various global stations should not be calibrated into agreement.

        The persistence of modest increases of CO2 in the atmosphere and connected subsystems is reasonably well known for short and intermediate periods up to 100 or 200 years. The uncertainties grow when longer periods are considered and when the concentration deviates from the preindustrial value much more than it does now.

        I agree that the greenhouse effect is real and accounts for the surface temperature being as temperate as it is in Earth’s warm state. It does not control or regulate the surface temperature, however. Cloud cover does that as a negative feedback to warming and a positive feedback to solar radiation.

        This hypothesis of negative cloud feedback is not totally crazy, but claiming that it is known to be true is an extreme case of non-skepticism and of willingness to believe what you wish to be true.

        There is more evidence to support positive overall feedback than a negative one, but I go so far in my skepticism that I accept that the answer on this point is not known with high certainty.

      • Pekka Pirilä, 7/3/11, 6:04 am, SST

        1. You said,

        AGW is not by itself science but just a phenomenon that can be studied by climate science.

        My background response didn’t change your words, and even quoted precisely, I don’t agree with them. My opinion is that AGW does not exist in any observable sense. Without independently establishing its existence, climate science might as well study the phlogiston.

        2. You say,

        certain conclusions about AGW … are essentially the ones that are discussed on this site in threads about no-feedback climate sensitivity. All that follows from very reliably known theories and factual knowledge on the present atmosphere.

        The discussion about no-feedback climate sensitivity found in the Lindzen and Choi Part II (L&C) thread was inane. What L&C said in their paper shows that what follows from theories and factual knowledge, far from being reliably known, is in dispute, based on the systems-science theory introduced by L&C.

        As I pointed out to Christopher Game (6/20/11, 11:43 am), feedback is not well defined among the relevant sources. L&C compared their results to those of Trenberth et al. (2010) and Dessler (2010). These writers provided three different definitions of feedback, all three different from the three used by IPCC, providing a total of six different definitions. Regardless, no one among the 489 posters to that thread managed to discuss no-feedback climate sensitivity within the context of some particular definition.

        Lindzen and Choi adopted a superior definition by drawing from elementary systems science. They used a simple linear model with a feed-forward transfer function of G_0 and a constant feedback transfer function of F. It is a heat flow model. This is not IPCC’s model, which is a radiation forcing model. L&C’s feedback does not agree with any of IPCC’s definitions, two operational and one explicit. These twin disagreements, modeling paradigm and feedback definition, were sufficient cause for the conforming journals to reject their paper. Regardless, in the L&C analysis, G_0, the zero-feedback response function, was the no-feedback climate response, and it has no equivalent in the other definitions. The feedback function, F, includes changes in cloud cover and humidity.
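
        For readers unfamiliar with the notation, the standard closed-loop form that such a linear model takes (a textbook sketch of the convention, not a transcription of L&C’s equations) is

            \Delta T \;=\; \frac{G_0}{1 - F\,G_0}\,\Delta Q,

        where \Delta Q is the perturbing heat flux, G_0 the zero-feedback response, and F the feedback transfer function; F G_0 < 0 damps the response (negative feedback) and 0 < F G_0 < 1 amplifies it (positive feedback).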

        L&C say,

        In most climate models, the feedbacks from these clouds are simulated to be negative in longwave and strongly positive in shortwave, and dominate the entire model cloud feedback (Webb et al., 2006). P. 17, line 409ff.

        and

        Clouds also change so that their visible reflectivity decreases, causing increased solar absorption and warming of the earth.
        Cloud feedbacks are still considered to be highly uncertain (IPCC, 2007), but the fact that these feedbacks are strongly positive in most models is considered to be an indication that the result is basically correct. Methodologically, this is unsatisfactory. Bold added, L&C, p. 3, line 67 ff.

        but

        The negative feedback from observations is from both longwave and shortwave radiation, while the positive feedback from models is usually but not always from longwave feedback. Bold added, Lindzen & Choi (2011), line 397, p. 17.

        3. You say,

        The no-feedback climate sensitivity is a theoretical concept calculable from well-known input. It’s not a property of the real atmosphere, but it’s to be expected that the real atmosphere has a climate sensitivity, which is of the same order of magnitude.

        IPCC says the climate sensitivity is about 3K. L&C put it at 0.7K. It could be as low as 0.3K and still be of the same order of magnitude. With an error by a factor of 4, IPCC’s model is severely challenged, if not invalidated.

        4. You say,

        You seem to claim that everything defined in terms of macroparameters is thermodynamics. I disagree.

        Here’s a definition on which I rely:

        In thermodynamics, however, the attention is directed to the interior of a system. A macroscopic point of view is adopted, but only those macroscopic quantities are considered which have a bearing on the internal state of a system. It is the function of experiment to determine the quantities that are necessary and sufficient for such a purpose. Macroscopic quantities having a bearing on the internal state of a system are called thermodynamic coordinates. Such coordinates serve to determine the internal energy of a system. It is the purpose of thermodynamics to find general relations among the thermodynamic coordinates that are consistent with the fundamental laws of thermodynamics.

        A system that may be described in terms of thermodynamic coordinates is called a thermodynamic system. Zemansky, Heat and Thermodynamics, 4th Ed., 1957, p. 4.

        IPCC defines the climate system thus:

        The climate system is a complex, interactive system consisting of the atmosphere, land surface, snow and ice, oceans and other bodies of water, and living things. AR4, FAQ 1.1, p. 14.

        And

        Solar radiation powers the climate system.

        So in that definition, the Sun is not part of the climate system. On the other hand, IPCC provides a diagram for the climate system where one of its components is changes in Solar Inputs.

        AGW explicitly involves relationships between global macroparameters that define the state of Earth: its temperature and its reflectivity. There are other macroparameters that relate to the internal state of [the] system. The gas composition of the atmosphere is an example. By definition, the AGW model is a thermodynamic problem governed by the fundamental laws of thermodynamics. However, neither IPCC’s Third nor Fourth Assessment Report makes a single reference to any law of thermodynamics.

        Whether the climate system is or was ever in equilibrium is determined by the Second Law of Thermodynamics. IPCC ignored the law, and then asserted that Earth was in equilibrium in 1750, and implies throughout its Reports that Earth has a tendency to return to that state, to equilibrate, but for manmade CO2. IPCC defines its radiative forcing paradigm as changes to the initial budget published by Kiehl and Trenberth (1997) (K&T). They say,

        We consider the long-term equilibrium state of the climate system, which requires the net inflow of energy at both the top of the atmosphere and at the surface to be zero. Kiehl, J. T. and K. E. Trenberth, Earth’s Annual Global Mean Energy Budget, Bull. Am. Met. Soc. V. 78, #2, 197-208, February, 1997, p. 198.

        You, Pekka, have continued this fiction, saying,

        The other possibility is to study what happens to the energy balance of the Earth system as a consequence of the new specific forcing. 6/15/11, 3:17 am, Lindzen and Choi Part II.

        and

        The energy balance of the thin skin layer is dominated by … . Bold added, 4/5/11, 2:55 am, Week in Review: April 2, 2011

        Fred Moolten, who regularly supports you in this fiction, wrote,

        Forcings identify the flux adjustment needed to restore balance, … . Bold added, 12/23/10, 6:02 pm, Radiative transfer discussion thread.

        and

        You misinterpreted the diagram, Jeff. It shows that only about 15-30% of IR emitted from the surface escapes to space uninterrupted, but of course, 100 % must escape eventually for energy balance. Bold added, 12/9/19, 1:38 pm, Confidence in radiative transfer models.

        George Simpson, meteorologist and director, Met Office, 1920-1938, critiquing Callendar (1938) said,

        The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. Bold added, Callendar (1938), p. 237.

        Callendar had not asserted that Earth was in thermal equilibrium. Instead, he was leading the art of systems science, observing

        Thus a change of water vapour, sky radiation and temperature is corrected by a change of cloudiness and atmospheric circulation, the former increasing the reflection loss and thus reducing the effective sun heat. Id., p. 230,

        Instead, Callendar had argued that the ocean was in equilibrium, and examined equilibration over periods of two and five millennia. (He seems to have been unaware of the thermohaline circulation discovered by Sandström in 1908.) He also conjectured that the flux of CO2 between the air and the ocean reached equilibrium to a depth of 200 m. What Callendar precociously got right climatology has forgotten, and what he got wrong it has elevated to a movement.

        By the way, you ask,

        Do you claim that warming does not increase evaporation and absolute humidity in the atmosphere, or that these effects would be much smaller than estimated by climate scientists? Seriously?

        You misunderstand. Of course, warming increases evaporation and absolute humidity in the atmosphere, all in accord with Clausius-Clapeyron thermodynamics. IPCC apparently models it that way, but only to increase the desired dual positive feedbacks of the greenhouse effect of water vapor and of the decrease in specific cloud reflectivity. Those are quite negligible, however, compared to the effect on cloudiness (cloud cover) of increased absolute humidity which IPCC neglected to implement in its GCMs.

        This cannot be regarded as a surprise: that the sensitivity of the Earth’s climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. Satellite measurements have indeed provided meaningful estimates of Earth’s radiation budget since the early 1970s. Clouds, which cover about 60% of the Earth’s surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth’s albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. Bold added, citation deleted, 4AR, ¶1.5.2, p. 114.

        It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parametrization for another, thereby approximately replicating the overall intermodel range of sensitivities. 4AR, ¶1.5.2, p. 114.

        Cloud cover parameterization must be replaced with dynamic cloud feedback.
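
        For what it’s worth, the 1% albedo figure quoted above can be checked directly from the black-body equilibrium relation, using standard values (S of about 1361 W/m², albedo of about 0.30, T_e of about 255 K):

            T_e = \left(\frac{S(1-\alpha)}{4\sigma}\right)^{1/4}
            \quad\Rightarrow\quad
            |\Delta T_e| \approx \frac{T_e}{4}\,\frac{\Delta\alpha}{1-\alpha}
            \approx \frac{255}{4}\times\frac{0.01}{0.70} \approx 0.9\ \mathrm{K},

        consistent with the roughly 1°C stated in AR4 for a one-percentage-point decrease in albedo.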

        Back to equilibrium, K&T were in error. They could have created an uncountably infinite number of energy balance budgets, one for every surface temperature from 0K to over 290K. None is a preferred state, and IPCC has no basis for relying on equilibrium in climate.

        A claim of equilibrium requires some law or principle of physics. Friction will cause a dynamic system to reach a state of least kinetic energy. Mechanical systems will rotate to balance in a preferred state of least torque, with the center of gravity directly below the center of rotation. A climate system defined without the variable heat source is not a thermodynamically closed system. As a result, the Second Law of Thermodynamics, which might provide IPCC its equilibration criterion, is not available to establish equilibrium. And this is only the tip of the equilibrium iceberg that drives the AGW model into physical impossibilities. The surface layer is not in equilibrium to cause ACO2 to accumulate in the atmosphere, or to cause CO2 to acidify the ocean. Equilibrium climate sensitivity refers to a fictitious state.

        Earth’s climate is not regulated by equilibrium. It is regulated by the strong negative feedback of cloud albedo. That should be called the Callendar Feedback.

        3. IPCC falsified the fingerprint justifications for the MLO bulge in CO2 being anthropogenic. The hockey stick effect of the rise in temperature being coincidental with the start of the industrial era is clumsy evidence of coincidence, proof by correlation, and does not establish cause and effect. No evidence remains for the MLO bulge being ACO2. IPCC’s argument to the contrary leads to the absurd result that ACO2 input at 6 GtC/yr accumulates in the atmosphere while natural CO2 at 100 to 200 GtC/yr is in “balance”, i.e., does not accumulate. Natural CO2 lagged temperature throughout the half million year paleo record, and it does so today.

        The argument that the levels are different, so ACO2 is the cause, is based on several errors. The first is that the ice core record is taken inside the polar CO2 sinks, where it is measured with a time constant ranging from several decades to one or two millennia. The ice core records should not fit smoothly into the MLO record as portrayed by IPCC using chartjunk. Since the CO2 level measured over a minute or so cannot be reconstructed from the ice core data, IPCC should have compared the Vostok record with the MLO record suitably low-pass filtered over 30 to 1000 years to match. IPCC’s notion that the MLO record is unprecedented over 400,000 years or so is based on comparing a record of less than 60 years with samples taken every 1,300 years. It is another example of the fallacy of cause from correlation. The confidence in IPCC’s unprecedented claim is not the usual 80 to 95%, but instead about 3%.

        The next error is that Vostok sits inside a CO2 sink, while MLO sits in the plume of the massive oceanic outgassing of CO2. Keeling himself warned not to rely on data from either kind of site.

        4. That’s quite enough for now. This is supposed to be dialog, not dueling essays. But please consider being specific about your naked accusation that I committed “a series of patently untrue claims and accusations.” Your charge has no basis.

      • The argumentation is growing beyond reason.

        I have tried to extract and formulate basic components that are reliably known and not subject to real uncertainties, which are still plentiful and on whose details legitimate disagreements persist.

        You do not want to proceed on this line, and there is no basis for continuing this line.

        We have seen innumerable times that the approach that I tried is not possible for the whole audience of this site. Judith has started many threads in the same spirit, on one issue at a time, and we have always seen that it is not acceptable to all. At the same time I believe that a large number of participants were satisfied by the approach and learned from it.

        At this moment it doesn’t work in a discussion between you and me.

      • Pekka Pirilä, 7/4/11, 2:04 am, SST

        Most recently, you abandoned a dialog on the merits back when you responded

        A series of patently untrue claims and accusations.

        I answered your claim in amply referenced detail about things being “reliably known [from] theories and factual knowledge, as discussed on this site in threads about no-feedback climate sensitivity.” Once again, you quit, claiming

        I have tried to extract and formulate basic components that are reliably known and not subject to real uncertainties, … .

        The extent of your extraction and formulation appears to be limited to a vague allusion to prior postings. Making bald claims is refusing to engage in a scientific discussion.

      • Pekka,

        you have finally shown that you worship at the altar of AGW.

        Where is the predictive ability in your theory?? We have many predictions, but which ones have proven true?? When an orbit is computed and the object follows the trajectory, we have proof. When the interactions of one particle with another are computed and the chemistry experiments or accelerator runs are done and the result is as predicted, we have proof. Where are the predictions of AGW that are not continuously adjusted, or the observations adjusted to provide confirmation?? You have none.

        The most prominent and useful predictions were the slowing of the thermohaline circulation (it sped up), the tropical hot spot and stratospheric cooling (no observations of either in the last 15 years, when we have had the best observations), and the claims that the models actually have some connection to the earth’s climate. The models have no connection to the earth’s climate outside of the global temperature parameter, which is rising faster than the actual temperature. Your theory must be recognized as falsified and a new one (or ones) advanced in its place. Not recognizing this is worse than just admitting that you operate on consensus; it is admitting that the consensus has NO relation to the scientific process and cares not a whit about scientific progress or even how history will view your work.

      • You have evidently not understood anything of what I write judging from the fact that your comments concern something totally different.

      • Ahh dear Pekka, feeling hounded?

        You have ignored my question. Where are the predictions, that any good theory makes, that have been confirmed in your Climate Science World?

    • Pekka Pirilä, 7/2/11, 3:00 am, SST

      The consensus is the only way Newton’s or Einstein’s work has been accepted? What stands out in their contributions is that they broke with the consensus. Newton became the new consensus for Einstein to break, and now the Einstein consensus is showing its limitations as the standard cosmologists have had to invent unobservables, dark matter and dark energy, to balance their equations.

      Newtonian Mechanics and General Relativity became accepted because, and only because, they worked, application by application. No one ever concerned himself with the extent to which a consensus existed before applying those models. No one was ever polled, no survey was circulated, no vote was ever taken. The Newtons and the Einsteins never claimed to be part of any consensus.

      The consensus is what inevitably gets proved inadequate. It is a phantom shelter for the inadequate and the insecure. Pay no attention to that man behind the curtain.

      The attempt to raise the consensus-named Callendar Effect from a conjecture to a theory has instead falsified it. Callendar’s best ideas have been ignored, while his worst have been elevated to the AGW dogma, wrapped in a self-proclaimed consensus.

      • It’s the consensus that judged that the theories worked. In science we have no other judge than the science community, and there are no fixed rules on how the science community makes its judgments.

        What I mean by consensus is the view of this science community.

        If you disagree, I ask you to tell me what the alternative is. The alternative must be operational, not a principle without any operating agency to apply it.

      • Pekka,

        paraphrasing Einstein, 100 people in agreement does not prove a model or theory. One FACT in disagreement will disprove it.

        Consensus is a political convenience and is only good for show. It is pointless and continuously gets in the way of the advancement of science. Having a consensus agreeing on an incorrect theory may get large amounts of money thrown at the research, but, it will probably still take longer to advance as they continue, as with the BB, forcing observation after observation into the false framework and suppressing conflicting ideas to retain their position and currency.

      • The consensus is what inevitably gets proved inadequate.

        Interesting statement – and I would suppose there’s a lot of truth in it.

        But allow me to ask you to speculate: to the extent that scientific theories are proven inadequate, how would you evaluate the prevalence and/or speed of such proofs occurring if you compare consensus viewpoints to non-consensus viewpoints? To simply state that the consensus is inevitably what gets proven inadequate, without considering the full context, seems to me to be misleading.

        Personally, I like the expression “It is the exception that proves the rule.” I dare say that notable examples of the consensus being wrong are the strongest evidence that there is something positive to be gleaned from evaluating which scientific theories lie within the range of predominance and which are outliers. Such a determination doesn’t mean that we should blindly follow consensus viewpoints – but if you’re betting on issues with large-scale ramifications – I’d say that following viewpoints simply because they are non-consensus (and all consensus inevitably gets proved inadequate) is a poor strategic decision.

      • Joshua, 7/2/11, 11:32 am, SST

        Speculating, as you asked, on the speed of, let me call it, the discovery of inadequacies, I considered that with respect to Newton and Einstein. Newton lasted for centuries, and remains extremely useful and valid in the Earthly domain. General Relativity is not going to last nearly as long. The rate, to first order, seems to be proportional to the rate of accumulation of facts, something determined by the rapid growth of technology.

        But a second problem is recognition of any consensus at all. As Pekka observed, the consensus is subjective. It has no measure, no accuracy. We can’t assess the subjective objectively.

        Isn’t the notion that the exception proves the rule just a logical fallacy? Perhaps more applicable is: “you can fool all of the people some of the time, and some of the people all of the time, but not all the people all the time.” This old saw is ripe for your question because time is open-ended.

        The consensus is politics. It can easily defeat science in the short term. Science has tough standards, not the consensus. The playing field is tilted to favor charlatans. Eventually science will out. On the other hand, economics trumps even politics, and that rate is measurable. The consensus is sandwiched in the middle to be squeezed out.

        Bet on AGW to beat out the science, and economics to beat out AGW. How long this takes to evolve is quantized by the American election cycle. Could be just 18 months off.

      • Jeff,
        I repeat: What is the alternative?

        The science has been very successful. Mistakes have been plentiful, but most of them have been corrected promptly. There are very few cases where the stubbornness of the science community has lasted long against better theories of significance that have already had real supporting evidence.

        The informal process of science with all its subjectivities is behind this success.

        What is the alternative, and why would it be better?

      • Successful compared to what? I think it can be MUCH more successful if it concentrates on falsification and disproving rather than confirmation. Confirmation is DOGMA.

        The alternative is scientific method and complete transparency.

        There is also an illusion of success (epicycles). I can’t prove it, but when all the bureaucracy, suppression and dogma are uncovered, the public will see how bad it is.

      • Jeff – let me try to make my point with fewer words.

        Do you think in the history of science there have been anywhere near the same number of “consensus theories” that have proven wrong when compared to the number of “non-consensus theories”?

        The question of how to determine what is “consensus” is obviously fraught with subjectivity – but let’s unpack that just a little bit. The vast majority of scientists with a large degree of scientific expertise in the field known as climate science are in line with the argument that global warming is happening due to anthropogenic causes with a 90% probability.

        Arguments about whether expertise in “climate science” necessarily qualifies someone as more “expert” than someone who specializes in a related field notwithstanding – I think that we could objectively conclude what the “consensus” viewpoint is in this case (at least among climate scientists).

        I wish that those on the skeptical side – the “skeptical un-convinced” of the “skeptical un-convinced/denier” side of the debate – would simply acknowledge that and move on to more productive discussions. When they seem to align themselves with people who have proven to promote specious arguments about the nature of the “consensus” (i.e., trumpeting polls where the determination of “scientist” or a description of scientists’ areas of expertise are deliberately vague and unspecific), they do themselves a disservice, IMO.

      • Heh – fewer words.

        Too funny.

      • Joshua, 7/2/11, 11:26 pm, SST

        AGW seems to have no parallel in the history of science. Even if AGW were, as Pekka Pirilä and Fred Moolten claim, a valid scientific model, it has a form unparalleled in the history of science. It turned political to become a movement and a tenet of the current brand of socialism. The consensus now encompasses the near majority of left wingers in the Western World. I can think of no topic with genuine underpinnings in science that ever evolved into a political movement.

        The self-appointed leaders of the consensus are the source for the claim of a vast consensus. IPCC reports

        Over 3,500 experts coming from more than 130 countries contributed to the Fourth Assessment Report in 2007 (+450 Lead Authors, +800 Contributing Authors, and +2,500 expert reviewers provided over 90,000 review comments). IPCC Activities. http://www.ipcc.ch/activities/activities.shtml

        Conversely, the Global Warming Petition Project reports it has the signatures of over 31,000 American scientists specifically in opposition to Kyoto and generally in opposition to AGW by CO2 emissions. http://www.oism.org/pproject/ This must qualify as trumpeting in your challenge to qualifications. Regardless, the numbers don’t begin to support the consensus claim.

        The claim of a consensus points to the absence of a sound scientific basis, and responding with a survey in opposition lends credence to the faux reliance on consensus in the first instance.

        You write about the vast majority of scientists with a large degree of scientific expertise in the field supporting AGW. This reminds me of the infamous Oreskes study. She and the movement thought she had measured a unanimous consensus among climate scientists in support of AGW, when she had actually measured that peer-reviewed, climate science journals publish no papers critical of AGW. See my post of 6/9/11, 4:23 pm, “Towards sane policies on natural disasters” thread.

        As to scientific expertise, the pro-consensus data are fully as flaky as the con. The claims of consensus rely on those same advocacy journals, where other scientists are most unlikely for professional and practical reasons even to attempt to publish. Nevertheless, climatology draws from epistemology, science, systems science, physics, chemistry, mathematics, and statistics. Specialists in those fields are not considered peers in climate journals, or in polite dialog among (or email between) climatologists. Only climatologists are approved. Yet specialists in those fields are precisely positioned to spot errors. This is not peer review, but superior review repressed. Quoting from my post of 6/8/11, 6:14 pm, Polemics, Politics, and Problematizations thread.

        This points out the futility of trying to be objective about the subjective. I endorse your wish that those on the skeptical side … would move on to more productive discussions, but I wish the same for the AGW proponents. The latter at most pick among the skeptical arguments to answer only the weakest and often ignorant arguments. John Cook cooks up a Gish Gallop list of more than 160 such arguments over at SkepticalScience.com.

        The contest has no moderator. The AGW movement will fail. It will be costly, and in the end decided by economics. Science will suffer until in the last analysis it has AGW as a trophy, the prototype case study for scientific error.

      • Ok Jeff – I think I get where you’re coming from.

        Although I agree that there is abundant tribalism on both sides of the debate, I can’t agree with your opinions on the overriding political underpinnings for the “consensus” viewpoint, and I think that you seriously underestimate the counter-balancing political influences on the other side of the debate.

        Please note that I never argued that there is a “unanimous consensus” among climate scientists.

        I will fully agree that combatants on both sides of the debate should move on to more productive discussions – and I would put your (in my view one-sided) analysis of the political dimensions of the debate into the less productive category.

        Finally, I’ll ask this question again.

        Do you think in the history of science there have been anywhere near the same number of “consensus theories” that have proven wrong when compared to the number of “non-consensus theories”?

        Now by “consensus” there – I don’t mean “unanimous consensus,” and I’m referring, essentially, to a high prevalence of those with the most relevant technical scientific expertise. I suppose that if you disagree that: (1) climate scientists tend to have the highest level of relevant technical scientific expertise or, even further, (2) there isn’t a prevalence of not only climate scientists, but also scientists with technical backgrounds in related fields, who are in line with the “consensus” viewpoint, then it would be difficult for you to answer my question. From where I sit – although the degree of “consensus” has been overstated at times by those on the “pro-consensus” side, I also feel that the degree of actual consensus has been largely falsely represented by those seeking to politicize the debate from the other side. As I see the evidence, there is a prevalence not only of climate scientists who support the “pro-consensus” view, but also of scientists with technical training in other related fields.

      • Joshua, 7/2/11, 4:08 pm, SST

        My apologies for being non-responsive or obtuse to your request for an opinion on the quantity of consensus theories. I couldn’t begin to count them just for the number of alternative AGW theories that have rattled around post-AR4. I have concentrated on the IPCC claimed consensus for its version of AGW, as published. As to fields other than climate, I’m hard pressed to come up with any consensus theory.

        The number of “consensus theories” has to be about the same as the number of religions. That is not a coincidence. It’s proportional to the number of manageable networks that can be created in a fixed population. Science is much tougher than that.

        Climate scientists fall far short of having the requisite relevant technical scientific expertise, even though they have far more background in the geological sciences than I could ever possibly acquire. What is missing are key elements in the principles of science, thermodynamics, and especially, large scale modeling. AGW as presented by IPCC has butchered established and relevant physics, including systems science.

        Where you talk about scientists with technical scientific expertise in related fields, who are in line with the “consensus”, you make a good point. I am thinking in particular about the large number of scientific societies and academies that have endorsed AGW. They don’t seem to have a good grip even on what science is. Were their endorsements the result of small committees of scientists-turned-bureaucrats who have lost touch? Could they have possibly studied the matter to have rendered a professional opinion or one representative of their membership? I have to be comfortable with the idea that predictive power, not consensus, is determinative.

      • Nuclear winter was similar. It was man made, highly theoretical, doomsday, relied on science, and used for politics.

      • I can think of no topic with genuine underpinnings in science that ever evolved into a political movement.

        Think harder.

        The theory of evolution became part of a political movement.
        The safety and efficacy of vaccines became part of a political movement.
        The carcinogenic effects of cigarette smoke became part of a political movement.
        The science of HIV and AIDS became a huge political battle in South Africa, where the denialists’ lies led to hundreds of thousands of deaths.

        Science tells us about the world we live in. As such, as long as some people reject science in the name of ideology, science is inevitably going to be dragged in to the political debates of the day.

      • Kermit, 7/2/11, 5:55 pm; Robert, 6:17 pm; SST

        Good points. Thanks. You put me to the test to define genuine underpinnings in science and political movement.

        We could expand Nuclear Winter into the whole Ban-the-Bomb movement. The only prominent figure who seemed convinced by the Nuclear Winter model was Carl Sagan. He used it to predict a catastrophe from the oil fires in Kuwait and the First Gulf War.

        The Theory of Evolution is the bit of science that flushed fundamentalists out of the woodwork. Cooler heads, like the Catholic Church, managed to accommodate it. Science has no conflict with religion, just with Scriptures. Still, I don’t see evolution as having any political legs. Scopes was the only victim, except, like all the topics, for school systems. He was fined $100 in 1925 dollars, and that was overturned on a technicality.

        The vaccine issue has the usual component of ignorance, but a more fundamental one of public health, or more generally insurance, and the right of government to shift risk from one group to another. Sensibly it is a political matter at its core.

        The carcinogenic effects of cigarette smoke is a good example of bad science, as it drifted into second hand smoke, and as politicians and physicians alike proved themselves unable to quantify and cope with risk. Smoking and drugs produce about the same number of deaths per year, as I recall around a half million for each, except that smoking takes 5 years off one’s life expectancy while drugs take off 45 years. It only became a serious problem when the government took over the financing of senior citizen health care.

        The HIV and AIDS issue seems to have overtaken DDT and deaths from malaria (about 2 million vs. 1 million per year worldwide). The latter is borne of environmentalism, the conversion of the science of ecology into an irrational movement. It takes a big government to make really big mistakes. The former is fraught with a struggling science and more political correctness than politics.

        Underlying these topics is the law of unintended consequences. AGW seems different, intended consequence first, distorting science to drag it into the fray.

      • No, it’s an archaism. “Proves” in that sentence is in the olde sense of “tests”, “checks”, “demands validation”.

      • Sorry, meant to include the quote:
        “Isn’t the notion that the exception proves the rule just a logical fallacy? ”

        Somewhat more modern phrasing might be something like, “The exception puts the rule to the proof.”

      • Brian H 7/5/11, 1:12 am, SST

        Proof by exception? When Revelle & Suess (1957) rationalized their buffer conjecture, they introduced their first equation by saying,

        Considering the combined marine and atmospheric carbon reservoir as a closed system in equilibrium, the following relation holds by definition: …

        This is a departure from the “rule”, actually the knowledge, that this climate subsystem is neither closed nor in equilibrium. (R&S never returned to this essential assumption.) Nicholson in her Dictionary of American English Usage, based on Fowler, uses this example:

        Special leave is given for men to be out of the barracks tonight till 11 p.m.

        In this way, exceptions demonstrate the existence of an unspecified rule. Not being defined, though, the implied rule is not subject to proof.

        Proof is part of mathematics and logic, not science. However, the converse holds. The exception doesn’t, as you suggest, put the rule to the proof. Instead, the effect is strong and immediate. In any of the three fields, the exception disproves the proposition or model.

      • You didn’t read, or misunderstood, my statement. Here’s the For Dummies version:
        Proof means “test” in olden English, in the definitive “make or break” sense.
        So, a (genuine) exception breaks/smashes the rule, eliminating it.

  49. Joshua

    The politics took over from the science about five years ago. I wrote about it in this article-coincidentally I met one of the major players in this little story just yesterday, now holding a position of great authority after being placed there by Tony Blair to promote his AGW agenda as part of his political ambitions.

    His counterpart is a senior member of WWF who is running another important UK govt agency closely related to this major player. The description of the article comes from my website:

    “Article: Politics of climate change. Author: Tony Brown
    Climate change has become highly politicised and the British Govt – long time leaders in funding research into the subject – were very heavily implicated in making it a political issue in order to promote their own agenda. An unusual subject for me, but very well referenced with numerous links and quotes from such bodies as the Environmental Audit Committee of the House of Commons.”
    http://noconsensus.wordpress.com/2009/10/19/crossing-the-rubicon-an-advert-to-change-hearts-and-minds/#comments

    I do not believe there is a hoax or a scam as some do, just that the AGW scenario has been manipulated for political purposes in recent years and the science is almost irrelevant now.

    Tonyb

    • That’s quite a lot of work, tony – and quite interesting results.

      I looked through some of it – and will go back later and look through some more.

      • Joshua

        Thanks. You have to remember that the US were always semi-detached from the climate debate-for political reasons-and the British-with their very long history of weather with the Met Office etc-took over leadership. This moved to a European perspective because of Britain’s position within the EU. Because of the Climate Change Act, Britain has by law to reduce its carbon emissions and therefore has also taken it upon itself to try to get other countries to do likewise. I referred to the emissary to Australia in a previous article.

        Tonyb

      • Yes – our comments speaks to one of the things that struck me about what I’ve read of your report so far; that the situation is not really analogous to the United States.

        The other question that immediately comes to mind – as someone who is inclined to not want to accept your thesis (in other words, I’m identifying my biased starting orientation) – is that you have collected data only from one side, and indeed from the extreme end of the spectrum on that side, and present it as if it is fully representative of the entire picture. What simultaneous statements were being made by politicians or their affiliates with some power but who held opposing beliefs? What simultaneous influences were exerted by opposing powerful corporate interests?

        The problem when I read such one-sided accounts – as someone who is naturally “skeptical” with an in-bred aversion to “authoritative” accounts – is that I am predisposed to think that there’s always another side to a story. In order to trust your thesis more, I would need to see you deal with obvious counter-arguments. I have presented some very brief ones above, without giving the issue much thought and without having much knowledge about the politics of Britain. I would imagine that more intelligent and knowledgeable people would be able to present much more sophisticated “naysayer” arguments to your thesis.

        Finally, I have to say that reading your material at the Air Vent immediately raises my skeptical hackles. Jeff Id brings an overtly politicized orientation to his views on the climate debate, and from what I’ve seen, completely fails to even attempt to take into account how his political orientation might color his analysis. When I first read his site, I saw a post that he wrote where he claimed that a comment he made on a Krugman blog was deleted because the blog’s monitors recognized his name and were fearful that a “subtle” point he was making would cause great damage to Krugman’s attempts to conspire in the liberal conspiracy to perpetrate a hoax on the public to achieve nefarious, socialist ends.

        Now obviously, you aren’t accountable for Jeff Id’s credibility – but I hope that you understand that it creates an obstacle for me in accepting the work of the “skeptical un-convinced” when the lines are blurred between the “skeptical un-convinced” and the “deniers” (people who will not accept the work of some scientists because of their own a priori biases. I should note, before my comments are misconstrued, that Jeff says that he accepts AGW, but differs with the cabal of climate scientists colluding to perpetrate a CAGW hoax). It is fallacious to assign guilt by association – but I must admit that my bar has been raised due to the seeming lack of interest of people who seem reasonably skeptical in disassociating themselves from extremism (and that applies on both sides of the climate debate).

      • Sorry – “…Your comments speak to…..”

      • Layman Lurker

        When I first read his site, I saw a post that he wrote where he claimed that a comment he made on a Krugman blog was deleted because the blog’s monitors recognized his name and were fearful that a “subtle” point he was making would cause great damage to Krugman’s attempts to conspire in the liberal conspiracy to perpetrate a hoax on the public to achieve nefarious, socialist ends.

        Here is the tAV post Joshua speaks of. I’m having trouble finding the bit where Jeff “claimed” or even implied that his comment was clipped to protect the conspiracy to perpetrate an AGW hoax.

      • He’s reading from a list.
        ============

      • Really? There are more:

        Well Joshua, as you may be aware I’ve been rather critical of the ‘hide the decline’ since well before it was released in the climategate emails. They were of course first noticed right here on this very blog and I received a number of requests for interview by NYT people at the time.

        Any time I leave a comment WRT climategate, hide the decline or whatever, it is a unique statement based on literally thousands of hours of study. Not many non-climatologists I know have put that kind of time in and even less understand the statistics used in the field. Of course if you put some crazy crap which makes no sense but happens to be critical as in your example #40, it can do little harm to the intended false message – so on it goes.

        My guess is that the moderators full well recognized who was commenting and didn’t want the informed view expressed, but again you are right, it is just a guess. Interesting though that it sat in moderation all day long, and wasn’t deleted until the second closing of comments.

      • Like, for example:

        Is it possible for a person to gain such incredible name recognition that this might be the case? Hell yes. If the guys commenting on climategate don’t recognize my email and name, they probably can’t spell climategate. When the British secret service calls your home number in the US, you tell me just how you will answer.

        The comment was considered carefully. When someone makes a subtle point beyond their control, they snip it. Real Climate does the same. Free speech right up until it hurts the goal.

        Shall I go on? What “goal” was Jeff speaking about?

        You fellas are hilarious.

      • For example: Here’s another (apparently the spam-catcher doesn’t want another of Jeff’s comments to get through – so I can’t cut and paste in the actual comment). IT’S A CONSPIRACY!!!11!!!!11

        http://noconsensus.wordpress.com/2011/03/28/climate-blogger-censored-by-new-york-times/#comment-48982

        What “goal” do you think Jeff is talking about?

      • The comment was considered carefully. When someone makes a subtle point beyond their control, they snip it. Real Climate does the same. Free speech right up until it hurts the goal.

        Layman – “they” delete any comment beyond their control (his grammatical structure implies an across-the-board uniformity) – and only allow free speech if it furthers “their” goal?

        You’re going to parse that to find a difference of significance between his meaning and my characterization of his comments?

        And please, do, read other comments he made in that thread and on other threads about “them” and their “goal.”

        Ok – looks like we’ve reached one of those agree to disagree impasses (don’t you just hate that expression?)

      • Layman Lurker

        OK one last stab at this. Jeff thought his comment was censored. If this was true then could not the “goal” of the censorship be to not allow a comment which pointed out that Krugman got his facts wrong? There is nothing in Jeff’s post or his comments that would allow you to state that Jeff made “claims” about a goal to protect nefarious socialist conspiracies.

      • Layman – OK – now I get your point, and if it stands up that would be a distinction of significance.

        My immediate reaction is that when Jeff links Krugman’s blog to Real Climate (by association – through the use of “they”), there is clearly an implication of a nefarious conspiracy to stifle “free speech” among widely disparate entities by virtue of a political overlap.

        As to attributing that to socialist ideology (or other specific reflections of the great AGW cabal) – I’d need to go through his other comments on the thread in more detail to see if there is an overt political linkage. Perhaps you’ve already done so and ruled that out – so let’s assume that you have.

        I have read other posts of his which are very much overt in their political characterization of the climate debate writ large. Assuming I supplied a “missing link” (in honor of the discussion on the other thread) there connecting the underlying political motivations to the conspiracy Jeff identified on the thread in question – I think that regardless, my larger assertion stands as to Jeff’s feelings about a politically motivated conspiracy.

      • Joshua,

        YOU have a problem. You are posting incidents where Jeff claims that his comments have been censored, that is, not released from the moderation queue and eventually deleted. Your problem is proving that his comments were NOT deleted. In other words, you appear to be calling Jeff a liar and need to back this up.

        If you are accepting that his comments WERE DELETED, thank you for again showing that those sites will not accept intelligent dialogue from sceptics that they cannot effectively counter.

        If you want to try and claim that Jeff’s comments were inappropriate, again, you have the problem of proof. Vague allegations and arm waving only tell us whether you use deodorant.

      • Joshua,

        “Layman – “they” delete any comment beyond their control (his grammatical structure implies an across-the-board uniformity) – and only allow free speech if it furthers “their” goal?”

        OK, let’s try this from a different angle. You try and find the strongest comment on the threads against AGW and compare it to the ideas and information that you find on Jeff and other Sceptic sites. You won’t need us to see whether Jeff’s comment is valid or not. Of course, you may need help evaluating the strength of the comments. You can post them here and Fred, Pekka, Tony and the rest of us can help you with that. Personally you are blowing smoke up your own dress. They DO exactly what Jeff implies and you are wasting all of our time with this whining.

      • Khunkat –

        A couple of points.

        First, I never called, or implied, or even came close to implying, or even thought of coming close to implying, that Jeff is a “liar.” I was merely laughing at the logic of his belief that his comment was deleted because some blog monitors on an economics blog recognized his name and because “they” were so afraid that his “subtle” point would be so destructive to either the AGW cabal or Krugman’s post, when there were other very critical posts that were passed through the filtering process.

        Second, I have no wish to waste your time. Please, any time that you feel that you don’t want to read one of my posts, feel free to not read it. I give you permission. Really. I know that leftists are autocratic statists, but I really do give you permission. You won’t even hurt my feelings. All I ask is that in the future, if you do choose to read one of my posts, you take personal responsibility for your own actions, and don’t blame me for the decisions you choose to make.

        K?

      • Joshua,
        Excellent, I apologize for misunderstanding your comments. You believe that Jeff’s comments were deleted.

        Well, we are then faced with the issue of why his comments are being deleted at these blogs, while so many others, with weak arguments or in agreement with the bias of the blog, are not.

        Care to speculate?? He has and I tend to agree with him.

        Now, I would suggest that you ask Jeff himself what he meant by those comments rather than trying to smear him with implications that he is a conspiracy theorist or whatever you are actually implying.

      • Layman Lurker

        Joshua, no one is disputing that Jeff thinks his comment was censored. It’s the stuff you tacked on suggesting Jeff attributed it to a conspiracy cover-up that has no basis other than your imagination.

      • Layman – did you read the comment that I posted and the one that I linked?

      • There is no need for a conspiracy. True believers automatically breathe together, and there are plenty of them among the ignorati.
        =============

      • Layman Lurker

        Yes I did. C’mon Joshua. You got carried away with hyperbole. You are suggesting that Jeff’s use of the word “goal” can only mean “conspiracy to perpetrate a hoax on the public to achieve nefarious, socialist ends.” If you think this is watertight logic then I am not interested in further comment.

      • The sad thing is that almost every skeptic is motivated primarily by curiosity about just how the climate works. Almost no true believers are.

        As a result, we have these bitter cynics, these fundamentally ignorant true believers, and these marvelous sophists basically just beating their tribal drums, and imagining that the skeptical noises coming from the surrounding jungle are similarly all politically lathered up.

        Fail. Now go examine some science, son. Report back when your curiosity is aroused.
        ============

      • That’s beautiful, kim.

        I am very explicit about differentiating among different types of drivers and intentions among people who fall along the “skeptical un-convinced/denier” spectrum.

        And you, on the other hand, categorically identify “almost” all “true believers.”

        And you go on to point the finger regarding tribalism.

        Beautiful.

      • Curiouser and Curiouser.
        ============

      • Beautiful – and true.

      • Stout Cortez! I’ve had about an earful of his balboaing. Show me the vase.
        =========

  50. Sigmund Fraud, loves Joshua.

  51. Joshua

    No, it’s not a one-sided article at all. The AGW consensus is a political fact-the Climate Change Act passed through Parliament by a vote of something like 650 to 5. Our present Prime Minister, David Cameron, met up with Al Gore just prior to the vote, and the first thing he did when becoming leader of the Tories-now in Govt-was to go to the Arctic and be photographed on a husky-drawn sledge in order to establish his green credentials.
    http://www.guardian.co.uk/politics/2006/apr/16/uk.conservatives
    (note the Reference to WWF again-see my earlier comment)

    The climate change bill was a direct successor of a bill drafted by Greenpeace
    http://en.wikipedia.org/wiki/Climate_Change_Act_2008

    The Secretary of State for DECC at the time-the Department of Energy and Climate Change-is now the leader of the opposition.
    This is the full purpose of the bill:

    http://www.decc.gov.uk/en/content/cms/legislation/cc_act_08/cc_act_08.aspx

    There is just not the political debate that I can observe in the US-it is considered a fait accompli both here and in Europe.
    The quotes I made in the article came directly from the Environmental Audit Committee of the House of Commons and there is very much more of it.
    You have already seen the references to Greenpeace and the WWF popping up elsewhere.

    There is certainly none of the Big Oil or Big Coal that is talked about in the US.

    We are some years further down the road than the US (where I assume you come from). Our gas is $10.50 a gallon. Our energy prices have risen 20% this year and have doubled over the last five years. They are planned to double again in 3 years or so-primarily because of the ‘level playing field’ the Govt is creating, which is intended to make fossil fuel so expensive that the ludicrously expensive wind power (the only game in town) can compete.

    We have had an air passenger duty tax slapped on us as a green measure, and all in all tens of millions of Britons are now in fuel poverty whilst our industry can’t compete. I wrote about it in the article carried at Climate Etc entitled ‘The futility of carbon reduction’.

    So the political piece I referenced you earlier can probably be seen as the first stage of the process-it was written a couple of years ago.

    I suspect that I would be far to the left of Jeff Id-I imagine I would be a Democrat if I were American-the extreme right wing you have over there worries me as much as the extreme left wing over here. My involvement with Jeff came about as he was very interested in an article I wrote about the melting Arctic (in 1816), and he enjoyed the history, and it was useful to both of us for him to publish some of my work.

    I admire Jeff for his straight talking and would love to have a beer with him, but we are certainly not political bed fellows. In all honesty there are very few places that AGW articles can be published without immediately alienating one section of the debate or the other. :)

    Tonyb

    • In all honesty there are very few places that AGW articles can be published without immediately alienating one section of the debate or the other.

      Understood – and acknowledged.

      In order to respond further, I’d have to follow your links and go into depth – I’m not going to do that now but may have time later.

      I’d like to take your word that your analysis isn’t one-sided – but I’m too “skeptical” to do that. If you overtly deconstructed credible naysayers who have opposing viewpoints (assuming that there are some), it would be more convincing for me. If you say that there are no credible naysayers – then it wouldn’t jibe with my life experiences. As it stands now, in addition to following your links, I’d have to also search for credible naysayers (my assumption is that there must be some out there).

      • And just to add a minor point, tony.

        This gets back to our original exchange about the necessity (in my opinion) for you to speak about that scientific correction re: that report on trends in sea surface temps.

        I’m not saying that I’m directly carrying over your failure to do so to your work on documenting the political influences on climate change policy in the EU – but that in both situations, IMO – if you don’t fully explore all opposing arguments (to a reasonable extent and to the best of your abilities), your own analyses lose credibility. This is the argument I’ve been having with Judith since I first arrived at this website.

      • A couple more things.

        It isn’t someone’s political ideology that makes or breaks their credibility, IMO – but their openness to acknowledging and trying to control for how their ideology affects their analysis.

        Given how corporate subsidies affect gas prices in the U.S., how very expensive wars affect oil availability, and how many negative externalities are derived from gas-powered transportation – I’m not in agreement that the high cost of petrol in England is (necessarily) direct evidence of a nefarious political influence. What do you suppose the cost of gasoline would be if it were priced independently of any political influences?

      • Joshua,

        When you talk about subsidies affecting the cost of oil and energy in the US, I will remind you that oil companies get tax breaks like all other companies. Green companies like solar and wind get those same tax breaks PLUS SUBSIDIES and guaranteed loans and prices for their product, often whether it is delivered or not and regardless of whether it is a problem for the grid or not. Your statement is quite misleading. Please try to improve.

      • KuhnKat 7/4/11, 5:05 pm, SST

        Three points: (1) Joshua asked,

        What do you suppose the cost of gasoline would be if it were priced independently of any political influences?

        Answer: pretty close to the price at the pump in Saudi Arabia: 16¢/liter, about $0.605/gallon.

        All sales to OPEC should be embargoed to be revenue neutral, recouping excess crude costs to the US above that which would yield 60¢/gallon, pre-tax, domestically, for gasoline. Now that would be fair trade in dealing with monopolistic practices.

        (2) Corporate subsidies are incentives for them to do what your government thinks it wants, neglecting unintended consequences.

        (3) The cost of oil, and hence energy, in the US is driven by what I recall was Bill Clinton’s Executive Order that to meet air quality standards, we would henceforth import only light sweet crude, the most expensive species of crude. This, not subsidies, was a payoff to the oil companies. Under the Order, the oil companies would have to upgrade their refineries to meet the new air standards.

        Reversing this Order should be a priority for the new President the first week after the 2013 inauguration. Instead of additional oil company incentives, we should relax the air quality standards as an interim measure.

      • Errata:

        … would NOT have to upgrade their refineries … .

      • Jeff,

        We can nitpick whether tax breaks are subsidies all day. The real issue is that Oil Companies do NOT get special treatment not available to other businesses.

        Unfortunately I think you didn’t remember the details of the low-sulfur issues correctly:

        http://www.econbrowser.com/archives/2005/08/sweet_and_sour.html
        http://lobby.la.psu.edu/024_Low-sulfur_Gasoline/frameset_sulfur.html
        http://yosemite.epa.gov/opa/admpress.nsf/b1ab9f485b098972852562e7004dc686/405d2f1b56c262e9852569bc00558db3

        There was no limitation on the purchase of lower grades. EPA set low sulfur standards for automobiles and light trucks, and the Clinton Executive Order reduced the sulfur allowed in commercial vehicles. This either required the refinery to utilize low sulfur (SWEET) crude, or upgrade the refineries to take out more sulfur. It was an economic decision for the refinery. An old one looking at needed maintenance and other pollution upgrades might have made the decision to upgrade. A refinery that had years to go before needing major work would simply change to SWEET crude. As is usual, when environmental requirements are forced down people’s throats the gubmint buys off the protests of the businesses involved by taking on some or all of the implementation costs!!! This ends up coming out of our pockets through both higher taxes and higher product prices. In this case, yes, it could be seen as a handout to the oil companies as the needed upgrades to the refineries would replace normal maintenance costs for a period of time. As the oil companies do not OWN very much of the oil themselves, this would have been a payoff to the Saudis and a couple of other countries that DO have large amounts of SWEET crude, unlike Canada, Mexico, and Venezuela, who make up a large portion of our imports.

        I would add, you provide an incomplete picture of the costs and pressure on the energy industry. Here in the US there have been no new refineries built since, I believe, the 70’s. There have been a number of attempts to build new refineries which have all been sued out of existence by envirowhackjobs keeping the amount of refined fuel and other products limited and more expensive. This was partially offset by the gubmint supported upgrades to refineries, but, the old plants will still need to be replaced eventually. Then there is the issue of rising requirements of including ethanol in the mix. More costs to the refiners and delivery system.

        We have to realize that the dollar is quickly becoming worth less than the cost of printing. With this in mind we might consider, instead of hassling with foreign suppliers, the DRILL HERE DRILL NOW campaign. We have Alaska, west and east coast shallow and deepwater areas to drill and numerous opportunities to drill on the continent being blocked for various environmental reasons. Despite the tired “we CAN’T be self-sufficient” line, if ALL the areas where we could drill are considered, and the addition of Nuclear and natural gas included while restoring Coal to its baseline contribution, we CAN be self-sufficient. We can forget about dealing with nut jobs of both religious and political persuasions outside of our country!!! Oh yes, Canada is a good friend and has large supplies to sell us to make up any shortfalls, but, until we can shut down the envirowhackjob Legal avenues we will never have reasonable energy prices or security.

        While the Clinton EPA and Executive orders may be too much for a large part of the country, they probably aren’t too far off the mark for the large Metropolitan areas. Living in SoCal makes me think we NEED the higher standards to keep our air cleaner!! In the early 70’s the summers were miserable with smog burning the eyes and respiratory system. Now it isn’t too bad. Only occasional days when it approaches the intensity of the bad old days!! Regulations need to be adjusted for specific areas. That would just raise the price for us special people!! 8>)

        Finally, no one has any idea of what fuel should cost. There is so much interference with the free market that it is simply not possible to calculate what a reasonable market based price would be. My GUESS is that it would be less than HALF of what we are currently paying. Remember, not all of that regulation is totally braindead and we would retain some of the standards hopefully. Remember also that state and federal taxes and other charges probably are adding about a dollar a gallon at this point!!

      • KuhnKat 7/4/11, 8:36 pm, SST

        Thanks for the data. As I said, I was just giving you my recall. Still, I didn’t think my memory was all that bad, so I did some checking. I found this:

        In the end, President Clinton announced the final standard, a 90 percent reduction in sulfur content, a move that stands as one of his bolder environmental accomplishments. (This regulation dealt only with gasoline. Several participants in this debate, in interviews in 1999, indicated that the oil industry would fight any similar move for diesel fuel, which they argued would be much more difficult and expensive to attain. However, in one of many last-minute executive orders enhancing his environmental legacy, President Clinton adopted stringent sulfur-content regulations on diesel fuel, just before leaving office in January 2001. This was done without significant public debate.)

        It is estimated that the regulations will cost the oil industry $4 billion resulting in a rise in gasoline prices from 5 to 8 cents per gallon.

        http://lobby.la.psu.edu/024_Low-sulfur_Gasoline/frameset_sulfur.html

        Is that accurate? It suggests that Clinton executed a two-pronged restriction on sulfur content destined to have a snowball effect on imports, refinery scheduling, and ultimately gasoline prices.

        In the spirit of completing the economics of the petroleum industry, I cite their accounting standards. My impression from reading business news is that oil profits are usually assessed as a percentage of sales. Having created an inelastic market, they can pass on import costs to their captive customers with impunity, marking up their profits accordingly at no additional cost whatsoever. In short, they have an incentive to maximize crude costs. And OPEC sees nothing wrong with that.

        In my view, a free market exists if and only if an auction exists among either suppliers or end products. As I see it, the petroleum business is not working into a free market. And the environmentalists see nothing wrong with that.

        Of course, having your competition, e.g., nukes, coal, low grade crude, dirty products, killed off is a huge advantage oil companies have over other consumer product producers. The result is an extreme monopolistic enterprise, and the unintended (I hope) consequence of environmentalism. Why waste money on new refineries?

        The petroleum industry has strong parallels to the illicit drug trade.

        P.S. Enviro whackjobs is redund.

      • Jeff,

        you were wrong in two specifics. You stated the low sulfur move was done by executive order when only the diesel was done by executive order. You were wrong when you implied that the Oil Companies were making the big bucks off of it. The oil producing countries are the only ones making a windfall off of higher per bbl prices (outside of speculators probably including Soros and Hedge fund cronies!!)

        Here is a Wikipedia article on fuel taxes: http://en.wikipedia.org/wiki/Fuel_taxes_in_the_United_States. The Fed gets 18.4 cents per gallon of gasoline and 24.4 cents per gallon of diesel. (Please note the Fed gets other taxes and fees in the pipeline.) Then the states take their cut. But how much does the oil company make??

        Depending on who you believe, oil companies NET between 2 and 10 cents per gallon.
        http://jb-williams.com/4-25-06.htm
        https://formation.wallst.com/articles/2008/what_real_profit_margin_gallon_gas
        http://poorworkingstiff.wordpress.com/2008/06/11/oil-company-profits/
        http://mjperry.blogspot.com/2011/04/gasoline-taxes-per-gallon-vs-exxon.html
        When you sell billions of gallons per year you can make “windfall profits” off of one of the slimmest margins in the world!! Now, with spot oil prices running from 40-120/bbl who is making the obscene money?? How about Hugo Chavez, the House of Saud, Canada Oil Sands owners and developers, Mexico, 3rd world African Countries, Brazil, Iraq, Iran…
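
        As a rough sanity check on that arithmetic, here is a minimal sketch. The 2 to 10 cents per gallon net comes from the figures quoted above; the annual sales volume is a purely hypothetical illustration, not a sourced number.

          # Back-of-the-envelope: slim per-gallon margins times large volumes.
          # The cents-per-gallon range is from the comment above; the volume is
          # a hypothetical figure used only to show the scale of the multiplication.
          ILLUSTRATIVE_GALLONS_PER_YEAR = 50e9  # hypothetical annual sales volume

          for net_cents in (2, 10):
              profit_dollars = ILLUSTRATIVE_GALLONS_PER_YEAR * net_cents / 100.0
              print(f"{net_cents} cents/gallon -> ${profit_dollars / 1e9:.1f} billion per year")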

        We always see the Oil Companies as the bad guys and, truth be told, there have been some bad guys running them at times. But why don’t we look at the real problems?? Taxation, fees, regulation, lawsuits, political pandering. Maybe someone can tell me why we have to bend over for the House of Saud and the rest when we have plenty of resources between our own reserves, Canada, and Mexico to supply our countries.

        Oil from our own country would even provide more revenue to the gubmint since they get lease and production fees from the oil companies along with the tax at the pump!! Part of the reason Obie needs more taxes is that he is ruining the revenue streams from offshore drilling and even some onshore!! We have to wonder why the gubmint WANTS to buy oil from other countries. What is hidden there?? It hurts balance of trade and loses tax and fee revenue. Anyone help me on this one??

        Now, how much of the rise in price is due to the plummet in the worth of the dollar due to insane gubmint policies over the last 25 years?? In 1970 I was buying premium gas in SoCal for $0.399. It is now $3.699. Again, I really don’t think the greedy B’s at the oil companies are the real problem.

      • kuhnkat 7/5/11, 3:36 am, SST

        You claimed that I was wrong in two specifics, that I

        stated the low sulfur move was done by executive order when only the diesel was done by executive order.

        I neither said that, nor implied it.

        And

        You were wrong when you implied that the Oil Companies were making the big bucks off of it. The oil producing countries are the only ones making a windfall off of higher per bbl prices (outside of speculators probably including Soros and Hedge fund cronies!!)

        You grossly understate my position to introduce a couple of irrelevancies. I’ll go further. The oil companies ARE making unprecedented profits off of a run-up in crude prices, amounting to a huge return from no relevant investment, an infinite ROI. This is not a windfall, meaning an unexpected increase, to any of the participants. It is predictable, contrived through monopolistic practices, made possible through intended or unintended consequences of environmentalism, through artificial accounting standards, through inflation, and actively pursued by those who gain (“profit”) from it.

        Nor, being a free market type, do I have a problem with speculators. I do have a problem with someone relatively knowledgeable on a subject relying on ugly and evil characterizations to score points. You turn a discussion into a melodrama.

      • Jeff Glassman,

        “(3) The cost of oil, and hence energy, in the US is driven by what I recall was Bill Clinton’s Executive Order that to meet air quality standards, we would henceforth import only light sweet crude, the most expensive species of crude. This, not subsidies, was a payoff to the oil companies. Under the Order, the oil companies would have to upgrade their refineries to meet the new air standards.”

        Don’t be a putz.

      • kuhnkat, 7/6/11, 9:28 pm, SST

        Where I stated that my recall was that Clinton restricted imports of light sweet crude by executive order (7/4/11, 11:36 am, repeated by you two days later), you helped me see that that was not the literal order, but only its effect. You said, EPA set low sulfur standards for automobiles and light trucks, and the Clinton Executive Order reduced the sulfur allowed in commercial vehicles. 7/4/11, 8:36 pm.

        However, I had to clarify your interpretation. I quoted one of your own references, which said the first order from EPA related to the sulfur content of gasoline, and the executive order extended that regulation to the sulfur content of diesel. 7/5/11, 12:26 am. The modified regulation applied to petroleum products, not to vehicle design. It could not, for example, be satisfied by improved catalytic converters.

        You responded by claiming that I had stated that the low sulfur move was done by executive order when only the diesel was done by executive order. 7/5/11, 3:36 am. Of course, I hadn’t used the vague words low sulfur move, and referred instead to import [of] light sweet crude. Initially, my recollection was imperfect on two grounds – the administration used a regulation plus an executive order, and neither related expressly to imports.

        The administration’s combined regulation and executive order restrict the content of motor vehicle fuels, not motor vehicles. The combined effect is the subject: the modified regulation is upon oil companies, but it means that they can profit from the tightened standards. They can simply import the most expensive and rarest crude, passing the cost on to their customers, and adding an investment-free, proportional mark-up. The amended regulation was just short of being so tough that refineries would have to be upgraded. The oil companies did not need to lodge any meaningful objections, and indeed gained from the new regulations at no cost.

        To score some point, even an irrelevant one, you have gotten hung up on how the administration framed its orders and regulations instead of addressing the subject – the net effect of these administration actions on oil company operations and profits. If you were to cite accurately, paying attention to every word (e.g., fuels vs. vehicles vs. imports vs. moves), you would gain in your understanding of the economics of the petroleum business, and be able to contribute to discussions on an unmoderated thread. You would not have to rely on juvenile, bathroom-wall scribblings.

  52. Joshua

    If you track back up this article you will see me confirm that I wished I had more explicitly referenced the corrections to the Argo saga.

    I assumed that by making a direct link to the NASA site- where the whole story was told in considerable detail in chronological order from their perspective- this would enable people to see the other side of the argument in immediate context.

    Unfortunately-to my surprise bearing in mind the nature of this audience-people appear to have read the comment then gone elsewhere to check it when all they needed to do was click on the link. (Did you do that as well? Why?)

    As I remarked , I think to Fred, there are sometimes very long URL’s and Judith quite rightly likes to incorporate them into the body of the story, so a URL becomes ‘article’ or similar. This sort of shortening is something I suppose we’ve got used to with wikipedia, but sometimes the title of the link itself tells a story.

    As far as I’m aware I’m one of the few writers that actually puts the other side of the story when that is essential to understand the context-see my co2 article for example-Gavin and Tamino certainly would have been less than forthright in citing alternative views.

    I think your last paragraph is interesting in as much that I continually go through a filter of examining opposing views whilst carrying out research for an article, so what you see is the final result, not some intermediate stage where other view points have been examined and possibly found wanting.

    You also ought to be aware of the writers background- for instance I originally started out on the AGW side of the fence as I suspect did many other commenters here. So again we have already been through a considerable filtering process that might have lasted years.
    All the best.

    tonyb

  53. Joshua

    I remarked earlier about the problems that this system of nesting can cause and I think that is partly responsible for misunderstandings - for instance your belief that I didn't reference the further story of Argo when I did. Comments get inserted all over the place and it can become very difficult to follow a linear set of comments that build one on the other, clarifying and explaining. I just glanced at my last post and saw you hadn't replied yet, then checked the latest comments section, and there are you, Kim and Layman Lurker slugging it out, all appearing way before my comment, which looked to be the latest one made on this thread.

    I'm not sure that people can always see and follow a developing discussion that easily, but of course it's what we have and I'm sure we're all grateful to Judith for providing this forum.
    Tonyb

    • I have not seen any forum that satisfies all requirements. The combination of blog-type posts and hundreds of comments has not found a good solution.

      Personally I regularly and quite extensively use a software tool that I have coded and made available (see my entry on the Denizens thread of this site). Several people have downloaded it, but not necessarily found it as useful, which may also be due to not noticing what it can do. In most cases I use it to filter the list of messages posted in a thread after some specific time. The list of author, time and link can be shown either in chronological order or in the order in which the messages appear on this site. The messages can then be viewed by clicking on the link in an almost normal browser window, which shows the whole thread but allows jumping from one new message to the next with a click, wherever they are in the full thread.

      • Pekka

        I will look at your piece of software - is there anyone here who has used it? The problems with nesting are illustrated by the fact that your discussion with Jeff seems to have become completely disconnected from his replies, which makes it impossible to follow unless you have asked to receive follow-up comments by email, which then floods your inbox.

        tonyb

      • My software has a short help file, but perhaps not detailed enough. It's just a rather simple Windows application with a limited number of features. Some of them are early attempts, superseded by others. Dropping them might have been a better choice.

        The functions that I use regularly are:
        – Copy-pasting links to the list of threads
        – Selecting threads from this list
        – Saving and restoring the list of threads
        – Removing threads from the list by Shift-Del (has to be done immediately after selecting the thread before messages are fetched.)
        – Fetching the messages of the thread
        – Selecting the order of the message links
        – Filtering message links by date and time
        – Jumping to a comment by clicking its link.
        – Sometimes it’s necessary to update the browser window using ‘F5’, because it’s not always updated when a new list of messages has been fetched.

        Using filtering by content and author’s name is useful sometimes, but I don’t use those possibilities often.

        Writing messages in the browser window doesn’t work well, but opening the full IE to write a comment is easy through right-clicking the mouse.
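
        The core of the filtering step is simple enough to sketch in a few lines of Python. This is only an illustration of the idea, not my actual Windows code, and the comment markup it assumes (a datetime attribute and ids of the form comment-NNN) is a guess that may need adjusting for the real pages:

        # Illustrative sketch only: list the comments of a WordPress-style thread
        # that were posted after a given time, oldest first. The HTML pattern below
        # is an assumption, not the markup my tool actually parses.
        import re
        import urllib.request
        from datetime import datetime

        def fetch_comments(thread_url):
            html = urllib.request.urlopen(thread_url).read().decode("utf-8", "ignore")
            # Rough pattern: capture each comment id and its datetime attribute.
            pattern = re.compile(r'id="comment-(\d+)".*?datetime="([^"]+)"', re.S)
            comments = []
            for cid, stamp in pattern.findall(html):
                when = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
                comments.append((when, thread_url + "#comment-" + cid))
            return comments

        def new_comments_since(thread_url, since):
            # Return (time, link) pairs newer than 'since', in chronological order.
            return sorted(c for c in fetch_comments(thread_url) if c[0] > since)

        Each link can then be opened in a browser window, which is roughly what the jump-to-comment function does.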

      • tonyb, 7/4/11, 3:18 am, SST

        One of the threading problems is that we quickly run out of indentations. Another is that the reader doesn't have the option of viewing comments either threaded or in chronological order.

      • With my software you can essentially do just that.

        Actual reading is done with the threaded presentation, but the chronological order is available for jumping from one message chronologically to the next by a mouse click.

        It may be a bit more cumbersome on screens of lesser resolution than my own, but it works.

  54. Joshua

    Good luck in finding credible naysayers with only 5 MPs voting against the Act. Here is one of them
    http://news.bbc.co.uk/1/hi/sci/tech/7746126.stm

    Green taxes on petrol prices have been a political issue for some time - certainly for 5 years, perhaps longer. There is also a separate green tax imposed on cars that don't meet certain emission levels (anything up to around $600 per year - very small cars are much cheaper) and a further one for people driving into London. It used to be called the Congestion zone; now it's a low emissions zone.

    The Govt makes no secret at all of wanting to impose additional green taxes on fossil fuel operators in order to encourage wind generation. The trouble is that this won't work when the wind doesn't blow, which tends to be the case during our coldest winter weather as high pressure sits over the UK.

    The net result is that I have to go to a funeral in Wales tomorrow and the cost will be so great it will affect my budgeting for the rest of the month (around 60% of the petrol cost is tax - not all of it originally imposed as green taxes.)

    Our heating/lighting bill will be so great this year that if we have another cold winter we shall likely withdraw to one room during the coldest part. We are by no means poverty stricken, but you can only allocate so much money to various parts of your expenditure.

    All of this impacts on inflation and ironically hikes up the cost of taking a train-seen as the green alternative.

    This is all coming down the line to those signing up to drastic carbon reduction-which tends to be the West. We are shooting ourselves in the foot as regards individual prosperity and the economy.

    Look forward to seeing your list of credible naysayers - I do stress the word 'credible' and let's add in the word 'influential.'

    tonyb

  55. Jeff Glassman

    As you may know, Callendar was an amateur meteorologist and was considered a bit of a nuisance by the Met Office. Many of the temperature observations and assumptions supporting his 1938 paper seem to have been carried forward by Hansen in his 1986 gridding paper. I don't know if you have ever seen Slocum's elegant rebuttal of Callendar in his 1956 paper?
    tonyb

    • tonyb, 7/4/11, 3:40 am, SST

      Slocum’s paper is familiar to me, although I record it as a 1955 paper. He focuses entirely on the content of CO2 in the atmosphere as claimed by Callendar in support of his AGW model and compared with measurements. Slocum concludes mildly that

      All this does not refute Callendar’s thesis. The available data merely fail to confirm it. The positive evidence that the CO2 has increased is inconclusive, but seems strong enough to reward further study, and the time seems ripe for new research.

      It may be hoped that the collection of standardized measurements of CO2 can be made a part of the 1957-58 International Geophysical Year program.

      Just a year later, Revelle & Suess (1957) repeated the exercise, explicitly testing Callendar’s CO2 model and coming to the same inconclusive conclusion. The data weren’t adequate, and that could be solved with IGY funding (of course).

      The R&S paper is a fine example of elegance – politically packed, but scientifically a waste. Here is where they coined the slogan

      human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.

      And here is where they formulated what came to be known as the Revelle Factor, drawing from what was then understood of carbonate chemistry. They derived their buffer factor by differentiating a simple formula for equilibrium for the air/sea flux of ACO2, additionally requiring constant alkalinity. R&S off-handedly dismiss the alkalinity effects as small, and never discuss the sensitivity of their results to real processes in the dynamic, never equilibrated, surface layer.

      The Revelle Factor was an elegant failure. When IPCC tried to resurrect the Revelle Factor, it rediscovered Henry’s Law – the dependence of uptake on temperature. When that dependence was questioned in the AR4 draft review, the editor promptly dropped the analysis, suppressing it in order not to confuse the reader. Suppressed was the evidence that Henry’s Law, an alien concept to IPCC, is viable in the dynamic, open ocean.

      Slocum’s paper is a commendable comparative bibliography relating to Callendar’s 1938 paper. It is a mundane analysis in which Callendar does not come off too badly. Keeling, Revelle’s protégé, presumably IGY-funded, would later save what is now called the Callendar Effect.

      You emphasize that Callendar (the steam engineer) was an amateur meteorologist, but not that Slocum (the meteorologist) was an amateur statistician. Slocum missed the opportunity for elegance by failing to calculate probabilities and confidence limits for the following indicated statistical tests. Do his 19th Century and 20th Century measurements come from the same population? Do his contemporary measurements from the three geographical regions (plotted in Figure 3) come from the same population? Do the data selected by Callendar from among available data reflect a bias, and if so, a significant bias relative to Callendar’s conclusions?
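
      By way of illustration only - the numbers below are placeholders, not Slocum's or Callendar's tabulated values - the first of those tests is nothing more exotic than a standard two-sample comparison:

      # Placeholder sketch of the kind of test Slocum omitted: do two sets of CO2
      # measurements come from the same population? The values are invented.
      from scipy import stats

      co2_19th_century = [290, 287, 295, 296, 301, 284, 310]   # hypothetical ppm
      co2_20th_century = [303, 322, 311, 298, 317, 309, 325]   # hypothetical ppm

      t_stat, p_value = stats.ttest_ind(co2_19th_century, co2_20th_century,
                                        equal_var=False)       # Welch's t-test
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
      # A small p-value would say the two periods differ; a large one would say the
      # data "merely fail to confirm" a difference, much as Slocum concluded.

      The same machinery, applied to the three regions of his Figure 3 and to Callendar's accepted versus rejected values, would turn his qualitative doubts into probabilities.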

      Slocum’s conclusion includes a question:

      Is the CO2 increasing? Much seems to depend on the objectivity of Callendar’s decisions as to which data to keep.

      This is a wishy-washy conclusion about subjective appearances. He should have measured Callendar’s objectivity statistically, as suggested above. At one point, Slocum relies on J. H. Coste’s critique of Callendar’s paper, where Coste asks that same question, suggesting that at the turn of the 20th Century, the atmospheric CO2 concentration had been one third higher than Callendar showed. Slocum suggests that Coste’s comments were contemporaneous with Callendar (1938), but not that they appear as the Discussion section of that paper along with those of five other reviewers.

      That Callendar was a bit of a nuisance [to] the Met Office can be read into the well-moderated, reporter-written Discussion. The process by today's standards appears gentlemanly, polite, and professionally deferential. In Callendar's day, those frilled-up comments were likely as stinging as today's AGW defenders try to be with their ad hominems and petty insults.

      Slocum has contributed nothing of value, and R&S have contributed a nuisance, a failed conjecture. On the other hand, two positive things can be taken from Callendar (1938) and the included Discussion. First, the review process was far superior to its replacement: the modern, anonymous, biased, peer-review system. Standards for professional papers still provide for a Discussion section, but now it is a monolog written by the author, and is merely a redundant extension of the Results section. The sacrifice of science and objectivity in the new professional journal environment has been the enduring impetus for the Internet.

      Secondly, Callendar agreed with his critics that Earth is not in thermal equilibrium, and he provided, at last, the lost physical reason for the climate to be stabilized: the strong negative feedback of cloud albedo. Henceforth, the name should be the Callendar Feedback.

      • Jeff

        Callendar's archives and his autobiography are interesting inasmuch as his thought processes on which CO2 readings to keep and which to discard are rather revealing. In this respect I think Slocum's comments take on a harder - yet still elegant - edge. He is in effect surely querying the veracity of Callendar's findings, and in this he agrees with Coste that readings were thought to be higher than Callendar claimed.

        I think Slocum was tougher on the statistical element than you seem to suggest. This from the paper:

        “Since Callendar, by basing his hypotheses on statistical data, has tacitly invoked the laws of statistical evidence, it is fitting to examine the validity of his procedure, that of using only the data he believed to be of the best quality available, rejecting the rest. The mathematics of statistics, and the experience of statisticians both indicate, as a general principle, that arbitrary rejection of data, without specific knowledge of their unreliability or unapplicability, is questionable.

        Although the purpose of such a procedure may be to remove an observational or sampling bias that is known to be present, selection of the data to be used will often introduce a greater source of error than that which it was intended to remove.

        At best, the omission of part of the data is not as necessary or as helpful as might appear at first thought, since it can be shown that when the means of two sets of data are compared, the presence of a given average bias in each set will not affect the difference nor the standard error of this difference, except as an added contribution to the variance of the sample. If, however, some of the data be selected to the exclusion of the rest, for the purpose, perhaps, of reducing the magnitude of the residual variance, due to crudity in some of the measurements, then, in addition to any unintentional bias that might be introduced in the comparison of the means, there might also result an underestimate of the standard error of the difference, due to the mistaken rejection of those of the extreme values which actually belong to the distribution.

        The result may be an entirely spurious accuracy in the means, which leads to unjustified conclusions. In the light of these considerations, a reexamination of the entire body of available measurements of the relative proportion of CO2 in the atmosphere may have some value.”

        Do you have an opinion as to whether the base CO2 level that Callendar claimed for the start of the 19th Century was accurate?

        tonyb

      • tonyb, 7/4/11, 5:53 pm, SST

        That passage from Slocum is a blunted warning about the hideous practice of selecting data to meet the model. Because Slocum didn't establish that Callendar's data selection was actually biased, the passage doesn't rise to the level of a pointed accusation. Overall, I thought this passage to be a qualitative criticism where a quantitative one was indicated. Slocum should have calculated the feared bias and its effect on Callendar's results.

        Was Callendar's 1900 CO2 level accurate? I don't have what is needed for an informed opinion. (I'm still struggling with the Keeling Curve.) I would always have a problem with an isolated data point, even a long term average. I'd have to study the datum in context with hundreds of nearby points.

        I also find the GCMs' initialization circa 1750 with all parameter values constant, especially CO2 concentration and surface temperature, to be a fatal modeling error. This causes estimated continuing processes to be wrongly attributed to man or to any other possible cause. The best estimate for either 1750 or 1900 has to be a higher order expression, at a minimum including first order parameter rates at t = 0. If those rates turn out to be zero, that should be a calculated error of commission, not a default.
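
        In symbols, the minimum I have in mind is retaining the first order rate at initialization, e.g. for the CO2 concentration,

        C(t_0 + \Delta t) \approx C(t_0) + \left.\frac{dC}{dt}\right|_{t_0} \Delta t

        with the analogous term for surface temperature. Setting the rate term to zero should then be an explicit, error-bounded choice, not a silent default.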

      • Jeff

        I think there is some deferential language being used, as befits the time, and it also reflects that Callendar had become something of a war hero through his work in clearing Allied airfields of fog - ironically by burning huge amounts of oil.

        However, in this context the 'accusation' is as pointed as it could be - Callendar was cherry-picking the CO2 fruit from the lower end of the available spectrum.

        tonyb

  56. Fred Moolten

    You state (on June 27, 2011) that the most recent studies on upper ocean temperature show that what you refer to as the recent “alleged cooling” was really simply a lack of warming rather than a real cooling.

    This is incorrect, Fred. Go back and read the studies more carefully.

    Loehle’s 2009 paper showed upper ocean cooling from 2003-2008 (bold face by me)
    http://www.ncasi.org/publications/Detail.aspx?id=3152

    Ocean heat content data from 2003 to 2008 (4.5 years) were evaluated for trend. A trend plus periodic (annual cycle) model fit with R2 = 0.85. The linear component of the model showed a trend of -0.35 (±0.2) x 10^22 Joules per year.

    Link to Knox + Douglass study showing ocean cooling
    http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf

    A recently published estimate of Earth’s global warming trend is 0.63 ± 0.28 W/m2, as calculated from ocean heat content anomaly data spanning 1993–2008. This value is not representative of the recent (2003–2008) warming/cooling rate because of a “flattening” that occurred around 2001–2002. Using only 2003–2008 data from Argo floats, we find by four different algorithms that the recent trend ranges from –0.010 to –0.160 W/m2 with a typical error bar of ±0.2 W/m2. These results fail to support the existence of a frequently-cited large positive computed radiative imbalance.

    Both studies report recent (albeit slight) cooling, as observed from the new ARGO measurement system.

    Just to set the record straight.

    Max

    • Max,
      I’m amazed that you keep on referring to these two papers and claiming that they show cooling.

      The Loehle paper was based on a very short period, and is outdated by the warmer observations already included in the Knox&Douglass paper.

      The Knox&Douglass paper has four different estimates, but the first one is wrong, the second and third have no statistical significance, and the fourth one is very close to zero (negative by only a quarter of the error estimate).

      There are quite bad methodological errors also in the statistical analysis of Loehle, but not as serious as in the first estimate of Knox&Douglass, which is totally false.

      • Pekka Pirilä

        Nice of you to give your personal comments to the Loehle and Knox + Douglass studies, which reported a net cooling of the upper ocean since ARGO measurements were installed in 2003.

        Personal comments are nice. But I have not seen a peer-reviewed rebuttal refuting either of the studies.

        Have you?

        Max

      • Max,
        You are competent enough to find the errors.

        I have also explained the error of Knox&Douglass at least twice, including as a comment on a comment by Douglass. He didn't even try to argue against it.

        These two papers are living proof that those journals do not have competent reviewers. It's not really worth anybody's effort to send refutations of such articles to those journals. Bad papers are not worth commenting on in that way.

      • Pekka Pirilä

        You comment (relative to Knox + Douglass):

        You are competent enough to find the errors.

        This may be true, Pekka, but K+D is a published paper, which has not (yet?) been scientifically refuted. It analyzes the ARGO data and concludes that there has been slight cooling since 2003, when the floats were deployed.

        Earlier studies (Lyman, Loehle) came to the same basic conclusion.

        In a NASA press release, team leader Josh Willis stated, "Short term ocean cooling suggests global warming speed bump".

        Prior to ARGO, the measurement was done on a spottier basis with expendable XBT devices. These were found to introduce a warming bias.

        Carson + Harrison 2006 compared 50-Year Trends from Different Analyses
        http://journals.ametsoc.org/doi/abs/10.1175/2007JCLI2002.1

        Together with major temporal and spatial sampling limitations, the previously described strong interdecadal and spatial variability of trends makes it very difficult to formally estimate uncertainty in World Ocean averages, but these results suggest that upper ocean heat content integrals and integral trends may be substantially more uncertain than has yet been acknowledged. Further exploration of uncertainties is needed.

        Gouretski + Koltermann 2008 asked, "How much is the ocean really warming?"
        http://192.111.123.246/phod/goos/meetings/2008/XBT/2006GL027834.pdf

        We use a global hydrographic dataset to study the effect of instrument-related biases on the estimates of long-term temperature changes in the global ocean since the 1950s. The largest discrepancies are found between the expendable bathythermographs (XBT) and bottle and CTD data, with XBT temperatures being positively biased by 0.2–0.4°C on average. Since the XBT data are the largest proportion of the dataset, this bias results in a significant World Ocean warming artefact when time periods before and after introduction of XBT are compared. Using bias-corrected XBT data we argue reduces the ocean heat content change since the 1950s by a factor of 0.62. Our estimate of the ocean heat content increase (0–3000 m) between 1957–66 and 1987–96 is 12.8×10^22 J. Because of imperfect sampling this estimate has an uncertainty of at least 8×10^22 J.

        So, all in all, it appears that there was some general warming on average between the late 1960s and early 2000s and some slight cooling since then. How much of these trends is a spurious signal resulting from differences in measurement methodology is uncertain, although this appears to have played a role.

        Frankly, it would not surprise me that the upper ocean warmed during the late 20th century cycle of atmospheric warming and that this warming, like that of the atmosphere, has stopped or reversed itself slightly since around 2001.

        I do not believe that either you or I are able to predict whether or not the current hiatus in global warming will continue long enough to become a significant climate trend.

        Even if there is global cooling for the next 20-30 years, it is most likely IMO that the long-term underlying warming trend of around 0.04C per decade will eventually resume, but I do not think there will be a warming trend of 0.2C per decade, as predicted by IPCC.

        But that is just my opinion.

        Max

      • My comment is that the paper does not provide evidence for cooling of the oceans, as its correct result is close to zero, far closer than the given error estimate.

        My second comment is that one of the four ways of determining the trend is seriously in error. The truth of this claim can easily be confirmed by anybody having even some competence in statistical analysis. When the error is so obvious, it’s totally pointless to state that the paper hasn’t been refuted in any publication.

        The Loehle paper cannot be used to argue for cooling, because the same method with one half-year's worth of additional data does not show cooling any more.

        There is no signal of warming either in the ARGO data that I have seen. Rather, the data is consistent with no trend, allowing a modest trend in either direction. There may be newer data that I haven't seen.

        I made these statements to counter your claim that the data shows cooling, which you based on those two papers, which I have now shown do not contain valid evidence in support of that claim.

        That's all the content on the results. In looking at these papers I have in addition noticed how easily papers with very little content (the Loehle paper is only a trivial and somewhat erroneous fit to data of others) and errors of basic methodology get published in some journals.

      • I add to my comment that the basic error of both papers is that they preprocess the time series in a way that biases the trend. In the case of Loehle the error is probably rather small. Thus my main comment on its result is that the result is based on very little data and is obsolete.

        In the case of the Knox&Douglass paper the 1st and 4th methods measure essentially the same thing. The 4th is correct, and all the difference between the two methods is due to an explicit error in the 1st. The two other cases are based on one month per year, which has far too much randomness to be of interest.
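
        To illustrate the general point with synthetic numbers (this is not the real ocean heat content data, and not a reconstruction of their method 1):

        # Generic illustration with made-up data: a strong annual cycle, a short
        # record and a true trend of zero. A trend estimate that ignores the
        # seasonal phase at the end points is badly contaminated; a joint fit of
        # line plus annual harmonic is not.
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(0, 5.5, 1 / 12)                       # 5.5 years, monthly
        y = 4.0 * np.cos(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)

        # "Trend" from the difference between the last and first values:
        endpoint_trend = (y[-1] - y[0]) / (t[-1] - t[0])

        # Trend from a least-squares fit of line + annual harmonic:
        X = np.column_stack([t, np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                             np.ones_like(t)])
        joint_trend = np.linalg.lstsq(X, y, rcond=None)[0][0]

        print(f"endpoint-difference trend: {endpoint_trend:+.2f} per year")
        print(f"line + annual-cycle fit:   {joint_trend:+.2f} per year (truth: 0)")

        With a seasonal amplitude several times the size of any plausible trend and only about five years of data, any method that does not handle the annual cycle explicitly is dominated by it.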

      • Still one more comment.

        Everyone writing or reading the Knox&Douglass paper should have been alerted by the difference between the 1st and 4th method. Therefore it’s unbelievable that it was published as it is even in a new journal that was probably short of papers to publish.

  57. Pekka Pirilä

    Sorry, but it sort of appears that you are trying to defend a position by nitpicking on methodology (without getting too specific) and insinuating that the journals were “short of papers to publish” so published “papers with very little content”.

    Yet there have been no peer-reviewed studies refuting the data, which you find objectionable.

    Face it, Pekka, a bit of upper ocean cooling is not the end of the world. It's really no big deal. And who knows how long it will last?

    Maybe it is difficult for you to accept that the ocean has cooled – even if ever so slightly – since it could raise serious questions concerning Hansen’s “hidden in the pipeline” postulation if it were to continue.

    I just find it amusing that you are going out of your way to try to discredit three independent analyses of the ARGO data, which all show slight cooling of the upper ocean.

    Max


    • Max,
      What can I do, when people are not interested enough in those papers to go and refute them?

      As I wrote before, you are not likely to have any problems in verifying that what I have written is true.

  58. Pekka,

    “Therefore it’s unbelievable that it was published as it is even in a new journal that was probably short of papers to publish.”

    Yet you would Rally around the HockeyStick.

    Sadly, in modern times we cannot depend on the fact that being ignored shows poor science; it may just be science that the Consensus doesn't accept.

    • Yet you would Rally around the HockeyStick.

      Would I?

      More seriously. The paleoclimatic reconstructions contain many weaknesses. I don't like the methodologies, and I don't know how strongly the methods suppress variations in the MWP and LIA. They are known to suppress such variations to some extent, but the strength of this suppression is still unknown as far as I am aware. That they do suppress the variations is stated explicitly also in later papers of the same authors.

      The criticism of the methods by the skeptical side has brought up real weaknesses, but it cannot tell what is the best information that can really be extracted. The McShane & Wyner paper is a demonstration of the situation. It's critical of the original analysis along the same lines as others, but it also includes its own analysis of the data, and they conclude:

      We propose our own reconstruction of Northern Hemisphere average annual land temperature over the last millennium, assess its reliability, and compare it to those from the climate science literature. Our model provides a similar reconstruction but has much wider standard errors, reflecting the weak signal and large uncertainty encountered in this setting.

      Based on the starting points that McShane & Wyner have chosen, that may be what can be extracted from the proxy series they used, but it's never possible to say that there is no valid approach that would produce more accurate results. That may require some additional assumptions, but it's conceivable that such assumptions can be justified well enough.

      Mann et al made an early attempt. They improvised in their methodology, picking elements from standard statistical methods but adding non-standard choices. Thus they contradicted standard rules of statistical analysis, but did it in a way that had some justification. It's very likely that they didn't fully understand their own analysis, but neither did its critics acknowledge all valid arguments that they had in their choice of methodology.

      As a scientific paper the Mann et al paper certainly had weaknesses, but it had also a lot of new content based on a major research effort. Therefore its publication was well justified, but much of the later criticism is justified as well.

      • Pekka Pirilä, 7/5/11, 3:35 am, SST

        The Hockey Stick problem that has yet to be recognized is not merely that The paleoclimatic reconstructions contain many weaknesses. It runs much deeper. IPCC has adopted the practice of gluing paleo and other reconstructions onto the instrument records. Good evidence exists that IPCC accomplishes this by software distortions of the relevant data records, perhaps under the guise of “calibration”, to create false effects of continuity. Even without such evidence, IPCC’s Hockey Stick graphs are incredible on their face. The practice causes complete mistrust in the records.

        This is hideous science, and is unethical.

        IPCC should adopt strict rules for the display of data. It should rely on scatter plots of data points, using different symbols from different sources, overlaid with best fit lines or curves for the separate sources and, if possible, for the ensemble. If separate axes are required, they should be scaled independently, objectively and reasonably to fit the full graph. Departures from this standard must be purposeful and explained in the accompanying text.

        IPCC uses the Hockey Stick construction for ultimate visual correlation effects, to circumvent the principle that correlation does not establish cause and effect. Visual correlation is fine as it might alert the investigator, but he should never present it as conclusive. The correct and honest technique is to quantify correlation numerically, and where the skill of the investigator permits, using not just zero-lag correlation numbers but analysis of the correlation function.
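
        For what a quantified version looks like - a toy sketch with synthetic series, not CO2 or temperature records - the lagged correlation function is only a few lines:

        # Toy sketch of quantifying correlation rather than eyeballing it: a
        # normalized correlation as a function of lag for two invented series.
        import numpy as np

        def lagged_correlation(x, y, lag):
            # Pearson correlation of the pairs (x[t], y[t + lag]).
            x = np.asarray(x, float)
            y = np.asarray(y, float)
            if lag >= 0:
                a, b = x[:len(x) - lag], y[lag:]
            else:
                a, b = x[-lag:], y[:len(y) + lag]
            return np.corrcoef(a, b)[0, 1]

        rng = np.random.default_rng(2)
        driver = rng.standard_normal(200).cumsum()                # synthetic "cause"
        response = np.roll(driver, 3) + rng.standard_normal(200)  # lagged, noisy copy
                                                                  # (crude circular shift)
        corrs = {lag: lagged_correlation(driver, response, lag) for lag in range(-10, 11)}
        best = max(corrs, key=lambda k: abs(corrs[k]))
        print(f"peak |correlation| {abs(corrs[best]):.2f} at lag {best} steps")

        The peak, its magnitude, and its lag are then numbers that can be tested, not a visual impression.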

      • Brandon Shollenberger

        Pekka Pirilä, just what do you think was the "new content" in MBH? What was the "major research effort"? It wasn't the first multiproxy study looking at historical temperatures. It didn't add any useful methodologies. In fact, the only significant differences in MBH from earlier work are the faulty methodology and its highly questionable proxy selections. Seeing as its results depended entirely upon those, and those were obviously wrong, I can't think of anything you could possibly be referring to.

        As for extracting the "best information," skeptics have shown one thing, and they have shown it conclusively. We cannot tell whether current temperatures are unprecedented in the last 1,000 years, and the "consensus" is wrong to claim we can. Even McShane and Wyner's reconstruction supports this, as its results are not inconsistent with a completely flat trend over the last 1,000 years.

        Nobody has been able to draw any meaningful conclusion about historical temperatures over the last 1,000 years. There is no indication applying more and more statistical methods to the data will change that. People like Steve McIntyre (and myself) think a much better approach would be to seek better proxy data, the very idea of which is resisted by the Team.

      • My view of the results is not very different from what you describe. The proxy data is very noisy and extracting correct signals from it is very difficult, and even if some method succeeds in getting the signal out, it may be impossible to know whether it's the correct signal.

        By making additional assumptions and developing special methods that take them into account, more efficient methods can be obtained, but at the same time it gets even more difficult to tell whether the results are true signals or just artefacts.

        The analysis of MBH had some features of this type, and produced results whose value is questionable.

        The most generic problem is, however, that the signal is destroyed by the processing. Thus the results may indicate a more constant temperature history than it has actually been – or a hockey stick with a too straight stick.

      • Brandon Shollenberger

        Pekka Pirilä, here is something you might find interesting. As you can see in that link, practically everything about MBH was irrelevant. All that mattered for it was bristlecones. The other proxies offered basically nothing more than meaningless noise, and the methodology MBH used simply gave heavy and undue weight to a small subset of data.

        There is no particular reason to assume bristlecones offer good correlation with temperature, and there is a large amount of evidence suggesting otherwise (so much so, the NAS advises against using them as temperature proxies). The problem here isn’t methodology. The problem is the data simply doesn’t show anything, but people want it to. While there is nothing wrong with trying to find a signal, people keep “finding” one where it doesn’t exist, and that’s a serious problem.

        In any event, my main issue with your comment was I see nothing redeeming in MBH. It was a bad paper with numerous fatal flaws, and it didn’t contribute anything useful. The paper can be fairly summarized as, “Bristlecone proxies show a hockeystick shape, so modern temperatures are unprecedented.”

      • Brandon,
        I'm not going to discuss further how bad the MBH paper was. In any case it was different from the two papers I criticized here.

        The Loehle paper is really short. He had to ask for the data. Thus the numbers were not available to everybody, but otherwise it contained a standard fit that takes half an hour to do – and even that a little wrongly.

        The Knox and Douglass paper contained more, but it had an error that was too obvious. It's really bizarre that they were not alarmed enough by the difference between methods 1 and 4, which were based on the same data and calculated the same parameter, to figure out the reason for the difference. It should have been obvious that their method 1 picked up a signal from the regular seasonal variation and that this signal contaminated the trend. The contaminating effect was so large because the total period was so short. If they missed that, the reviewer should have picked it up. To make things worse, they emphasized the erroneous trend by choosing it as the one to display in the figure that summarized the data. The paper contained enough material to possibly justify publication, had the first method been dropped and the single-month trend results deemphasized to indicate their very low statistical significance.

        These are both cases that every reviewer should notice. Even if the MBH paper was as bad as you describe, the reviewer's task was not as simple. That's what I wanted to say with my comments on it.

      • Brandon Shollenberger

        Pekka Pirilä, if you don’t want to discuss something, that’s obviously your call. However, you made specific claims about MBH that as far as I can tell were completely inaccurate. I’ve pointed out (part of) what about your comments I think is wrong and explained why. If you choose not to address the issue, as you are allowed to do, we’re stuck with you holding to untrue comments. You’re effectively spreading misinformation.

        You’re welcome to try to keep the discussion focused to one specific point (even if it is one I never discussed thus rendering your comments irrelevant). However, if you’re going to do so, you should try to refrain from making comments about other controversial issues. This is especially true since this fork of the discussion started with Kuhnkat saying:

        Yet you would Rally around the HockeyStick.

        Seeing as the unprovoked positive things you’ve said about the hockey stick are untrue, and you now say you won’t discuss the accuracy of your comments, it seems you’ve done exactly what Kuhnkat said you’d do.

      • I explain the logic of my own comments once more, and leave this issue at that unless something of real interest comes up later.

        My original point was directly related to the subject of this thread, and in particular countering the statement of Max (manacker) on empirical evidence of cooling. As I have discussed the papers before on this site, I didn't repeat all my earlier comments (some of them in response to an earlier comment of Max's), but made only a brief comment.

        Later it turned out to be necessary to explain in a bit more detail the weaknesses of the evidence used as reference by Max. He was referring to the fact that they were published in refereed journals, which was again a reason for me to point out how worthless the refereeing had been for these papers.

        Then Kuhnkat wrote the comment that could be anticipated and picked the standard case of the skeptics. I said that I don't personally think that the results of MBH are strong, but I also explained why I believe the paper was not weak in so simple a way that it would make its publication as obviously unjustified as the publication of the other papers (without a correction, in the case of K&D).

        That’s all.

      • Pekka,

        you claim it was a more difficult job for the reviewers to review MBH. I would think it was an impossible job with incomplete information on the methods and no data. Yet, they somehow managed. I wonder how??

        Again you try to minimize the issues with MBH and now appear to be ignoring the lack of a real review of the paper.

      • Brandon Shollenberger

        kuhnkat, personally, I don’t see what is so hard to tell about the errors in MBH. All a reviewer would need to do is ask what the verification statistics for the reconstruction were. If the R2 verification scores were provided, it would show MBH was wrong. If, on the other hand, the scores weren’t shown, the reviewer would know he didn’t have enough information to accept the paper. This is one of the most basic steps I would take as a reviewer, and it would be sufficient.
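
        To be concrete about what I mean by verification statistics - a schematic with invented series, not the MBH data or method:

        # Schematic verification check: fit a proxy-temperature relation on a
        # calibration period, then score the prediction on a withheld verification
        # period. All series below are invented for illustration only.
        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1880, 1981)
        temp = 0.005 * (years - 1880) + 0.1 * rng.standard_normal(years.size)
        proxy = 0.5 * temp + 0.1 * rng.standard_normal(years.size)   # noisy proxy

        calib = years >= 1930            # calibrate on 1930-1980
        verif = ~calib                   # verify on 1880-1929

        slope, intercept = np.polyfit(proxy[calib], temp[calib], 1)
        predicted = slope * proxy[verif] + intercept

        # Verification R^2: squared correlation of prediction with observation.
        r2 = np.corrcoef(predicted, temp[verif])[0, 1] ** 2
        print(f"verification R^2 = {r2:.2f}")
        # A verification R^2 near zero means no demonstrated skill outside the
        # calibration window, whatever the fit looks like inside it.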

        In any event, Pekka Pirilä claimed MBH “had also a lot of new content based on a major research effort.” That’s what bothered me. I asked what “new content” or “major research effort” there was. I got no answer. As far as I can tell, there is no answer to be given. That’s what I was discussing, so nothing Pekka Pirilä said was really responsive.

        Along the same lines, I was irked at him saying, “[N]either did its critics acknowledge all valid arguments that they had in their choice of methodology.” I didn’t comment on this at the time as I wanted to focus on one point at a time. However, it is a clear statement weighing in on the MBH controversy, and as such, it was going well beyond Pekka Pirilä’s original point. It’s fine for Pekka Pirilä to want to stick with discussing his original point, but when he goes beyond it and discusses other things, he has to accept people may discuss whatever else he brings into the discussion.

        Quite frankly, I don’t care to look into Pekka Pirilä’s original point. I just don’t want to see inaccurate praise for MBH passed off as fact. It’s been over ten years since that paper came out. There is no excuse for people to still be saying untrue things about it.

  59. Brandon

    I think it is interesting how many people (I'm not accusing Pekka of this) support such ludicrous material as is promoted as cutting-edge information.

    Whether it is bristlecones or buckets thrown over the side of a ship in a haphazard fashion, we get people supporting that as a valid and reliable means to accurately gauge land/sea temperatures.

    We all need to be much more objective when examining the ‘evidence’ put before us and start laughing when it is patently ludicrous. We seem to have lost that ability-uniquely-with climate science.
    tonyb

    • Brandon Shollenberger

      It isn’t unique to climate science. What is relatively unique to climate science is nobody is willing to go back and admit mistakes. MBH was garbage, and people should have seen that from the start. They didn’t, partially because they “liked” the results. That’s unfortunate, but it happens. All that needs to be done to fix things is admit the mistake and move on, but climate science won’t do that. Now then, there is strong motivation to keep getting the same result (the hockey stick is a symbol now), even if it requires making more and more garbage papers (a decade later, and all we have changed about MBH is we’ve added Tiljander). There is a certain amount of sense in this, even if it is horrible.

      But what’s really bad is the climate science community seems unable to admit any mistake, no matter what the impact of the mistake would be. Even when mistakes are admitted, they’re constantly couched in caveats about how it “doesn’t matter” or the like. Nobody seems willing to just say, “Yeah, I screwed up. Sorry, I’ll try to do better next time.”

      Mind you, people on all sides (yes, including skeptics) do the same sort of thing, in all aspects of life. There’s just usually limits on how far they’re willing to delude themselves/others. Climate science seems not to have any such limits.

      All it would take to solve the global warming debate is for people to man up and be honest, both with themselves, and with others. The day that happens is the day what may be one of the most serious issues faced by man can truly be addressed.

    • tempterrain

      Have you actually tried to compare measurements of the temperature of the ocean taken directly in the water and by the bucket method? Or at least have you any references to show that such a method is as bad as you claim it to be?

      I would expect you haven’t, and furthermore I would expect the agreement to be good.

      • Tempterrain

        Do read the article and the links and see the uncertainties that even the Met Office were expressing. These uncertainties are further expressed in the other material that is referenced.

        Then put aside your ideology and use your common sense. Trying to determine a highly accurate global sea temperature from someone haphazardly throwing a bucket over the side of a ship 150 years ago is something that - as a scientist - you shouldn't be defending.

        tonyb

      • tempterrain

        OK, I'll take that as a "no, I don't have any reliable references".

        A bucket of water would weigh say 10 kg, or be 10 litres in volume. It will take maybe 30 seconds between scooping the water from the ocean and measuring the temperature with a thermometer on the ship's deck. The difference between the air and water temperatures will be a few degrees at most - so there is really no possibility of enough heat either entering or leaving the bucket to cause significant error, particularly if metal buckets are used.

        If you don’t believe me try filling a bucket of water from the tap. Measure its temperature. Measure the outside air temperature. It will probably be a few degrees different. If you put the bucket outside it will take hours for temperatures to equalise. The change of temperature after 30 seconds will be very small indeed. Let me know if you measure anything different.

        If canvas buckets were used previously there is more of a possibility that evaporation may cool the contents but again this error should be quantifiable and correctable in the records.
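
        To put rough numbers on that (the coefficients below are plausible round figures I have assumed, not measurements):

        # Back-of-envelope check with assumed round numbers: how much can 10 litres
        # of water change temperature in 30 seconds on deck?
        water_mass = 10.0      # kg (about 10 litres)
        c_water = 4186.0       # J/(kg*K), specific heat of water
        area = 0.15            # m^2, rough exposed surface of a bucket (assumed)
        h = 15.0               # W/(m^2*K), assumed convective coefficient in a breeze
        delta_T = 3.0          # K, assumed air-water temperature difference
        t = 30.0               # seconds on deck

        heat = h * area * delta_T * t                 # Joules exchanged
        dT_bucket = heat / (water_mass * c_water)
        print(f"convective change in 30 s: about {dT_bucket:.3f} K")

        # Evaporation from a wet canvas bucket is the bigger worry:
        L_vap = 2.45e6         # J/kg, latent heat of vaporization
        evap_rate = 2e-4       # kg/(m^2*s), assumed evaporation rate in wind (rough)
        dT_evap = evap_rate * area * L_vap * t / (water_mass * c_water)
        print(f"evaporative change in 30 s: about {dT_evap:.2f} K")

        On those assumptions the conduction/convection error is of order a few thousandths of a degree, while evaporation from a wet canvas bucket is an order of magnitude larger - which is why it is the canvas case that needs the correction.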

      • Tempterrain

        Do leave aside your well known prejudices and READ how the measurements were actually taken. They do not BEGIN to conform to the nice orderly and uniform scenario you paint.

        My criticism is of pre-1950 material. Bob Tisdale would extend that to the 1980s.
        tonyb

      • Nebuchadnezzar

        tonyb,

        You should lead by example. The measurements aren’t nice and orderly or especially uniform, but dismissing them as ‘haphazard’ is equally evasive.

        You’ve presumably now read the HadSST3 papers and had a chance to digest them. It would be helpful if you acknowledged that the world has gone beyond categorising SST measurements as ‘highly accurate’ or not ‘highly accurate’.

        No, the individual measurements aren’t ‘highly accurate’ to say 0.01K, but the annual global averages of all those observations are ‘highly accurate’ to around 0.3K even back before 1950.
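
        The arithmetic behind that is just the behaviour of an average of many noisy readings. A toy illustration (not the HadSST3 error model):

        # Toy illustration only: individual observations with a large random error
        # still give a tight annual average, but a shared systematic bias
        # (e.g. a bucket effect) does not average away.
        import numpy as np

        rng = np.random.default_rng(4)
        true_anomaly = 0.0
        n_obs = 5000                     # observations contributing to a yearly mean
        random_error = 1.0               # K, per-measurement random error
        shared_bias = -0.2               # K, a systematic bucket-type bias

        obs = true_anomaly + shared_bias + random_error * rng.standard_normal(n_obs)

        print(f"per-observation scatter: {obs.std():.2f} K")
        print(f"error of the mean:       {random_error / np.sqrt(n_obs):.3f} K")
        print(f"actual mean error:       {obs.mean() - true_anomaly:+.3f} K  (bias remains)")

        The random part of the error shrinks roughly as one over the square root of the number of observations; the shared bucket-type bias does not, which is exactly why the bias adjustments, rather than the precision of individual readings, are where the argument should be.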

      • Nebuchadnezzar

        I live geographically close to the Met Office and have asked to meet John, who commented about their research on the relevant thread. I will let you know if anything transpires.

        tonyb

      • tempterrain

        Yes, there is no such thing as absolute accuracy or inaccuracy.

        It strikes me that skeptics/deniers in the climate 'debate' would be happy, almost gleeful, if it could be shown that the levels of inaccuracy in historic SST records were so high as to make them useless. Whereas those in the pro-science camp would be quite saddened by this finding. We really would like to know what the record shows, regardless of whether it's warming or cooling. It is quite right that the best possible statistical analysis should be done to extract as much information as possible from the available data.

      • tempterrain

        “pro-science camp”?

        Whodat?

        (Those scientists, with which you happen, personally, to agree?)

        Max

      • Tempterrain is confused. The sceptic camp IS the pro-science camp. It’s the alarmist establishment that is anti-science, preferring to hide declines etc etc so as to buttress preconceived conclusions.

      • Actually a couple of people have done some ad hoc experiments. May I say I find your comments ludicrous based on what they found, and especially on what little is known of the procedures and the people taking the measurements. (Do you know the grade of canvas used in all those measurements? Was it coated in any way? This issue was never even considered in all the BS.) Then there is the whole issue of totally inadequate coverage to even characterize the temperatures in the shipping lanes where most of the measurements were taken, much less to extrapolate or interpolate into other areas. This is one of those times when statements of the usefulness of the data should only be met with loud guffaws until those making the claim STOP!

    • Sometimes there is no better statement of the worth of something than hysterical laughter.

      At this point in time there would appear to be several pieces of Climate Lore that should be accompanied by gales of laughter whenever mentioned.

  60. tempterrain

    “But what’s really bad is the climate science community seems unable to admit any mistake, no matter what the impact of the mistake would be”

    Like they didn’t admit the Himalayan glaciers melt rate error? Like they didn’t admit the error on percentage of the Netherlands below sea level?

    Yes mistakes do happen and when they are made they are corrected, but you need to show why things are incorrect. It's not enough to huff and puff about MBH being "garbage". You may think that. But MBH isn't the only hockey stick, and it's quite consistent with many similar studies.

    • Brandon Shollenberger

      Like they didn’t admit the Himalayan glaciers melt rate error? Like they didn’t admit the error on percentage of the Netherlands below sea level?

      I’ll just take one of your examples, the Himalayan glacier error. Did they admit the error? No. People admitted the value given was wrong, that is true. However, the error is not simply that a wrong value was given. There is far more to it. Another part of the error is the wrong value was taken from an unacceptable source. Another part is the error was pointed out during the review process, but the reviewers willfully ignored it. Another part of the error is Pachauri, the chairman of the IPCC, insulted a person who pointed out the error by calling the person’s work “voodoo science.” Mind you, this is the same Pachauri who apparently had a financial interest in this error being ignored. And there is more.

      People didn’t admit anything until they had to. Even then, they admitted as little as possible. There was no honest admission of error or fault, nor was there anything given to indicate the people felt remorse. The honest response to this would have been to immediately look into what happened, find the reason for it, publicly discuss it and explain what steps would be taken so it never happened again. Instead of giving an honest response, the response was one you’d expect from politicians.

      Yes mistakes do happen and when they are made they are corrected, but you need to show why things are incorrect. It's not enough to huff and puff about MBH being "garbage". You may think that. But MBH isn't the only hockey stick, and it's quite consistent with many similar studies.

      This is a complete non-sequitur, and ironically, it demonstrates the exact point I’ve been making. I don’t think MBH is garbage. It is garbage. The proper response to a bad paper is to acknowledge it is a bad paper. It is not to hide behind things like, “But other papers get the same results!” So what? If it is true other papers got the same results, why defend MBH? If your position is that the errors don’t matter, why don’t you fully admit the errors?

      Of course, the truth is nobody can move onto other papers because those other papers have their own problems too. The best example of this is Mann08. A decade after his original hockey stick, Michael Mann made a new and improved hockey stick. What’s changed with his results? They’re not just based on bristlecones anymore. Instead, they’re based entirely on bristlecones and proxies which cannot be calibrated to temperature (Tiljander). That’s right, the change in Mann08 can be accurately summarized as, “Mann found a few series which cannot be calibrated to temperature, flipped a couple upside down and got a hockey stick. Clearly, this provides more evidence modern temperatures are unprecedented.”

      An honest and frank admission of mistakes and fault would greatly bolster the credibility of the IPCC and the consensus position. Until such happens, a serious discussion cannot be had.

    • Temp,

      sadly you show a typical psychological issue here. Unquestioning belief in your God.

      The other studies replicating MBH used at least 2 of these three proxies: upside-down Tiljander, strip-bark bristlecones, or, what was it, 1 magnificent tree in Yamal with a few buddies that did not nearly meet its magnificence??

      HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

      Yes, Tamino's closed mind and Radio Control minds and other apologists probably told you that dropping one of those proxies made no difference in the end. How about dropping all of those inappropriate proxies and seeing what happens?? Oh, sorry, they did not show you the result of that now, did they.

      The hockey stick I like most was McKitrick's, based on economic indicators. Matched temperature profiles better than any old stuff from MM and team!! So, would you say that was an appropriate proxy for temperature?? I wouldn't. It is no more appropriate than any number of other things that have been tried and rejected for physical reasons.

      But, you will believe what you want to believe.

      • As persuasive as it is to be screamed at by an unthinking fanatic like yourself, between science and evidence and the loon screaming “HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA,” I’m going to go with the science.

      • Then you are going to have to start to prove it.

      • Let us know when your journey to find this mythical thing is successful.

    • tempterrain, 7/6/11, 5:46 am, SST

      MBH isn’t the only hockey stick, and it’s quite consistent with many similar studies.

      See MBH's temperature hockey stick at TAR, SPM, Figure 1(b), p. 3:

      http://rocketscientistsjournal.com/2010/03/_res/TAR%20SPM%20hockey%20F1p3.jpg

      It is quite consistent with IPCC’s hockey stick constructions for CO2, CH4, N2O and SO2, TAR, SPM, Figure 2, p. 6:

      http://rocketscientistsjournal.com/2010/03/_res/TAR%20SPM%20chem%20hockey%20F2p6.jpg

      Notice the title of the latter chart: Indicators of the human influence on the atmosphere during the Industrial Era. IPCC manufactured these charts by smoothly blending proxy or ice core records into the modern instrument records. Then, without even labeling the Industrial Era, it has asked its readers to accept a conclusion that violates a precept of science, namely that correlation does not establish cause and effect.

      As to other studies, MBH (1999) is a paper published in the AGU Geophysical Research Letters, a peer-reviewed climate journal. This is a warranty that the paper will agree with others on the subject of AGW. Naomi Oreskes' highly touted study of 928 papers in such journals, claiming to have established the existence of a unanimous consensus on AGW, actually showed, as is plain on its face, that these journals never publish off-message papers.

      In this field, peer-reviewed agreement and consensus are legally cumulative evidence.

  61. SUGGESTION:
    The plural of “SST” should be written “SSTs”, not “SST’s” (ie no apostrophe).

    • tempterrain

      Yes, you’re (nearly wrote your :-) ) quite right!

      • Tempterrain

        Here is what Dr Curry wrote earlier;

        “I think SST data is very dubious before 1950. Apart from the quality and interpretation of individual measurements which Tony addresses, there is the very sparse sampling and dubious filling in of missing data using EOF patterns based upon the period 1960-1990. IMO Tony provides a useful historical perspective on how these data have been collected.

        Here is what the CRU 'experts' have to say on uncertainty in the SST record, see Brohan et al. See esp Figure 12. Do you for one minute believe that the uncertainty in global average sea surface temperature in the 19th century is 0.3C? I sure as heck don't, especially when the uncertainty estimate in the 19th century for land temps is more like 0.8C (a more realistic number but still possibly too low). A more recent essay on the challenges of determining global SST is written in this article." (The link doesn't show up in this copy.)

        I have had cause to utilise SST data, and so has Dr Curry. That is precisely why I wrote the article concerning the accuracy of the data from 1850 to 1950. There is a further question mark over them up until the 1980s, as expressed by Bob Tisdale.

        I hope to meet up with John from the Met Office to look at his research and understand his perspective. In the meantime you can perhaps clarify why you believe the historic data is so scientific, bearing in mind all the uncertainties already expressed, and that your analogy of how they took a measurement doesn't begin to bear scrutiny.

        This will enable me to understand the argument better.
        tonyb

      • tempterrain

        Although it appears you have not noticed this, NOT admitting past mistakes is endemic among the AGW “mainstream insiders”. Instead, look for stone-walling and silly rationalizations.

        A good example is the silly temperature forecast made by James E. Hansen in 1988, based on three different GHG scenarios. (This was before Hansen got wise that it is better to make long-term forecasts, which no one can check.)

        As it turned out, his estimate was off by a bit more than 2:1, even ignoring that the actual CO2 emissions grew at a rate HIGHER than Hansen's highest scenario A.

        The obvious reason is that Hansen used a model-derived 2xCO2 climate sensitivity that was exaggerated by more than 2:1.

        So did Hansen admit his mistake and adjust his assumed climate sensitivity downward?

        Of course not!

        Instead, Hansen’s buddy, Gavin Schmidt, got into the act to defend the lousy forecast by rationalizing that it wasn’t really all that bad if one compared the actually observed temperatures with those Hansen predicted for a different scenario involving a reduced rate of human GHG emissions (which did not occur in actual fact).

        Duh!

        Max

      • tempterrain

        OK. If anyone is interested in reading Gavin’s comments on these predictions, they are here:
        http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/

  62. Just keep in mind that Hansen is basically Gavin’s boss and they are members of the same religion. 8>)

  63. Tempterrain

    Nice of you to post the link to Gavin Schmidt’s bogus rationalization of James E. Hansen’s failed 1988 GH warming forecast.

    Schmidt writes in this attempted rationalization:

    In the original 1988 paper, three different scenarios were used A, B, and C. They consisted of hypothesised future concentrations of the main greenhouse gases – CO2, CH4, CFCs etc. together with a few scattered volcanic eruptions. The details varied for each scenario, but the net effect of all the changes was that Scenario A assumed exponential growth in forcings, Scenario B was roughly a linear increase in forcings, and Scenario C was similar to B, but had close to constant forcings from 2000 onwards. Scenario B and C had an ‘El Chichon’ sized volcanic eruption in 1995. Essentially, a high, middle and low estimate were chosen to bracket the set of possibilities. Hansen specifically stated that he thought the middle scenario (B) the “most plausible”.

    Let’s check that out. In the 1988 paper, Hansen writes:
    http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf

    Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth rate averages about 1.5% of current emissions, so that the net greenhouse forcing increases exponentially.

    Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level.

    Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

    The actual emission growth rate increased from 1.5% per year in the 1970s and 1980s to 1.7% per year from 1988 to today, so the rate of increase was greater than that assumed by Hansen for Scenario A. (A toy sketch of the three scenario shapes is appended at the end of this comment.)

    Obviously, Scenarios B and C are way off the mark.

    The problem is that Hansen’s Scenario A grossly overestimated the GH warming that would result. This is because the model-derived 2xCO2 climate sensitivity, which he assumed, was exaggerated by a factor of more than 2:1.
    http://farm3.static.flickr.com/2537/5738998081_b3b3e55049_b.jpg

    Actual warming turned out to be the same as Hansen’s Scenario C, based on the drastic reduction of GHG emission growth between 1990 and 2000 “such that the greenhouse climate forcing ceases to increase after 2000”.

    But this did not happen, did it?

    Nor did the “decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level”, as assumed for Scenario B, materialise.

    All in all it was a forecast that turned out to be grossly exaggerated (like all of Hansen’s “predictions”), despite Schmidt’s attempt to fog up the issue with his rationalization.

    Max
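
    PS To make the shapes of the three scenarios concrete, here is a minimal sketch in Python. The 1988 forcing level and the annual increment are assumed purely for illustration and are not Hansen’s model inputs; only the shapes (an exponentially growing, a roughly constant, and a flattening annual increment) follow the scenario descriptions quoted above:

    import numpy as np

    # Toy illustration of the three 1988 scenario *shapes* quoted above.
    # The starting forcing and annual increment are assumed for illustration;
    # they are not Hansen's actual model inputs.
    years = np.arange(1988, 2021)
    f0 = 2.0        # assumed net greenhouse forcing in 1988, W/m^2
    step = 0.04     # assumed annual forcing increment in 1988, W/m^2 per year
    growth = 0.015  # ~1.5%/yr growth of that increment (Scenario A flavour)

    # Scenario A: the annual increment itself keeps growing -> exponential forcing
    inc_a = step * (1.0 + growth) ** (years - 1988)
    scen_a = f0 + np.cumsum(inc_a)

    # Scenario B: constant annual increment -> roughly linear forcing
    scen_b = f0 + step * (years - 1988 + 1)

    # Scenario C: like B up to 2000, then the forcing stops increasing
    scen_c = np.where(years <= 2000, scen_b, scen_b[years == 2000])

    for y in (2000, 2010, 2020):
        i = int(np.argmax(years == y))
        print(y, round(float(scen_a[i]), 2), round(float(scen_b[i]), 2), round(float(scen_c[i]), 2))

    With the increment growing at only ~1.5% a year, the exponential and linear cases separate slowly at first and then pull apart, which matches the qualitative descriptions quoted from the 1988 paper.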

  64. For the sake of completeness of material to August 2011, I wanted to add these two papers from the Met Office and their Dutch counterparts confirming the lack of upper ocean warming from 2003 to 2010 (the last full year of study).

    http://www.metoffice.gov.uk/news/releases/archive/2011/ocean-warming

    tonyb

  65. I spent some 17 years doing Voluntary Weather Observations on Merchant Ships for the UK and other Met Offices. I have sailed on most of the routes depicted on the above map. The UEA CRU used our SSTs and other readings to “calculate” their SSTs for their GMTs.

    I can only describe the methods and results of their efforts as stark staring bonkers.
