On the adjustments to the HadSST3 data set

by Greg Goodman

**UPDATE at end of thread**

The adjustments introduced in the Met. Office’s HadSST3 release are compared to the original ICOADS data to evaluate their effect on the frequency content of the data. The relative merits of making a simple adjustment for the war-time glitch in ICOADS are also investigated. It is demonstrated that the various adjustments made in preparing the Hadley SST versions combine to effectively remove long-term variations from the climate record. Frequency analysis shows the adjustments generally disrupt, rather than improve, the data.

Introduction

The U.K. Meteorological Office Hadley Centre’s sea surface temperature records HadSST2 and HadSST3 are based on data from the ICOADS project, which states: “ICOADS is probably the most complete and heterogeneous collection of surface marine data in existence.” The global average monthly time series can be downloaded from the JISAO project. Temperatures are given as deviations from local seasonal averages, calculated for the period 1950-79.

These differences from the locally-averaged, monthly variations are referred to as “anomalies”.

Since the real meteorological shipping data are very non-uniform and unevenly spaced, ICOADS have done some complex interpolation and extrapolation in both time and space to provide this information as a 2×2 degree grid covering most of the global sea surface.

The Hadley Centre have reprocessed this data into a 5 x 5 degree grid, calculated a similar grid of local seasonal variations and produced a gridded database of monthly average temperature deviations from this seasonal climatology. In addition a number of adjustments are applied that aim to remove supposed systematic biases due to changes in sampling methodologies.

As well as the gridded data, HadSST2 is made available as a global mean temperature time series. The 6.6 MB HadSST3 data-set (released in July 2011) consists of 100 different “realisations” of the adjusted time series, generated by varying the parameters of some of the corrections. The median of all 100 versions is provided as a monthly, gridded dataset but no global mean time series seems to be provided as was done for HadSST2. To get a global average from HadSST3 it is necessary either to calculate the average from the gridded median data-set or to take the median value for each month from the 100 time series realisations.

The latter approach seemed the simplest and the least prone to possible differences in method and is what is used in this study.
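
As a sketch of the approach taken (array shapes and values here are hypothetical stand-ins; the real realisations are distributed as gridded files):

```python
import numpy as np

# Hypothetical stand-in for the 100 HadSST3 realisations:
# rows = realisations, columns = months of the global-mean series.
rng = np.random.default_rng(42)
realisations = 0.1 + 0.05 * rng.standard_normal((100, 24))

# Global time series used in this study: the median value of the
# 100 realisations, taken independently for each month.
median_series = np.median(realisations, axis=0)

print(median_series.shape)  # one value per month
```

This avoids having to reproduce the area-weighted averaging of the gridded median data-set, which would introduce method-dependent differences.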

War-time anomalies

One notable feature of the ICOADS data-set is a jump during the second world war. Obviously world wars do have a huge impact on shipping practices and hence on ship based data collection. During that period most merchant shipping was reduced to limited, protected convoys and the proportion of military vessels, with very different zones of activity, greatly increased. As well as the difference in shipping, the profile of the records that were preserved from that period was markedly different to the periods immediately before and after. Thompson et al [1] note:

“Between January 1942 and August 1945, ~80% of the observations are from ships of US origin and ~5% are from ships of UK origin; between late 1945 and 1949 only ~30% of the observations are of US origin and about 50% are of UK origin.”

Thousands of merchant ships were lost each year. It is perhaps surprising that the magnitude of this disturbance to shipping patterns and record-keeping did not have a larger effect on the temperature record.

The wartime glitch is clearly visible in the SST data shown in figure 1a.

Figure 1a.

A closer look at the detail shows an upward jump in December 1941 and a similar downward drop in 1946, red highlight in figure 1b.

Figure 1b.

It is accepted that these two dates correspond with the entry of the USA into the war after the attack on Pearl Harbor and the demobilisation of the US Navy after the war.

A simple way to deal with this is to subtract an offset from the data in this short interval. Such an adjustment is shown in figure 1b, above.
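
A minimal sketch of such a correction on synthetic data (the offset value and the exact interval boundaries are illustrative assumptions):

```python
import numpy as np

# Synthetic monthly series: flat except for a war-time step,
# standing in for the ICOADS global mean.
dates = np.arange(1938.0, 1950.0, 1.0 / 12.0)
sst = np.zeros_like(dates)
glitch = (dates >= 1941.9) & (dates < 1946.0)   # Dec 1941 to the 1946 drop
sst[glitch] += 0.4

# The simple correction: subtract a fixed offset inside the interval.
OFFSET = 0.4  # K
adjusted = sst.copy()
adjusted[glitch] -= OFFSET
# the step is removed and the series is flat again
```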

The Met. Office adjustments

From the papers documenting HadSST3, Kennedy et al (2011c) [3c]:

Historical records of sea-surface temperature (SST) are essential to our understanding of the earth’s climate. Data sets of SST observations are used to detect climate change and attribute the observed changes to their several causes. They are used to monitor the state of the earth’s climate and predict its future course. They are also used as a boundary condition for atmospheric reanalyses and atmosphere only general circulation models (IPCC 2007).

Clearly, any adjustments made to the SST record will have far reaching implications and must be founded on sound science.

Attempts by the Hadley Centre to remove certain supposed biases from the SST data have a long history.

Folland et al (1984) [3] notes a difference between the unadjusted sea surface temperatures (SST) and the already adjusted marine air temperatures (MAT) around December 1941, as reported in Folland and Parker (1995) [4]:

However, because Jones et al. (1986) used COADS summaries, they were unable to separate NMAT from day MAT which are affected by historically varying, on-deck solar heating: their corrections therefore differed from those of Folland et al. (1984). In both these early studies, about 0.5 °C was subtracted from MAT for 1942-5, a period of non-standard measurement practices owing to war.

[Note: NMAT refers to nocturnal marine air temperature. Emphasis added ]

Folland et al. (1984) applied corrections to NMAT to compensate for the historical increases of the average height of ship’s decks. These rose from about 6 m before 1890 to 15 m by the 1930s and 17 m by the 1980s. The corrections, based on surface layer similarity theory, removed a spurious cooling of about 0.2 °C between the late nineteenth century and 1980.

Parker and Folland (1995) [5] (emphasis added):

Folland et al. (1984) explained this as being mainly a result of a sudden but undocumented change in the methods used to collect sea water to make measurements of SST. The methods were thought to have changed from the predominant use of canvas and other uninsulated buckets to the use of engine intakes. Anecdotal evidence from sea captains in the marine section of the Meteorological Office supported this idea. However, it is known that engine intakes did provide some SST data as far back as the 1920s or before (Brooks 1928).

So the adjustments to the data were based on “anecdotal evidence” and an “undocumented change”, i.e. unfounded, hypothetical speculation.

Folland and Parker (1995) [4]:

A nineteenth-century oak ships’ bucket covered in iron bands has been studied though there is no indication that it was used for taking sea temperatures.

Yet a presumed change-over from wooden to uninsulated canvas buckets was the basis for the adjustment that reduced the late 19th century cooling trend.

Again, Parker and Folland (1995):

The correction method is based on:

(i) The observation that the earlier SSTs, expressed as anomalies from recent averages, are not only too cold relative to NMATs similarly expressed (Barnett, 1984), but also, outside the tropics, show enhanced annual cycles, presumably because more heat is lost from uninsulated buckets in winter when stronger, colder winds blow over relatively warm water (Wright, 1986; Bottomley et al., 1990);

The possibility that annual cycles may actually vary for climatic reasons is not even considered before “presuming” more bucket adjustments are needed. One could equally say: “presuming skies were generally clearer during the pre-war warming period, summers were warmer and winters colder.” But such speculative presumptions seem highly inadequate in science.

Folland and Kates (1984) improved on the uncorrected SST analysis of Paltridge and Woodruff (1981) by applying an adjustment which was 0.15 °C up till 1930 and decreased linearly to -0.1 °C in the 1970s (their reference period was 1951-60). However, comparison with NMAT suggested that the change in instrumentation took place rather suddenly around the Second World War, so Folland et al. (1984) added 0.3 °C until early 1940, 0.25 °C thereafter through 1941, and nothing subsequently.

So having diagnosed a short-term problem with NMAT which was resolved by a fixed war-time adjustment, as had been done in earlier SST studies, Folland et al decided to apply a one-sided adjustment to SST. This resulted in a significant post-war drop in temperature relative to the pre-war period that was not in the original data. This was the largest of the adjustments applied in HadSST2 and has remained in place since it was published in 1984. It is difficult to understand why the post-war drop, that was previously noted and corrected by Jones et al, was retained uncorrected.

The resulting adjustment can be seen by plotting the difference between the HadSST2 and ICOADS time series. This is shown in figure 2. Note the significant warming adjustment in the earlier half of the record that “corrects” for a change-over from wooden buckets that were never known to have been used for temperature sampling in the first place.

figure 2.

Study of the papers detailing these changes shows that the meta-data (supplementary information) concerning the types of buckets or other equipment used, and the data collection practices, is inconsistent and often missing. The absence of reliable documentation showing the periods over which the various techniques and equipment were used makes any attempt to apply a quantifiable bias adjustment highly speculative. To a large degree, all these adjustments, including the more complicated recent work, are founded on gross approximation, speculation and pure hypothesis. Parker et al (1995) has 43 occurrences of “assumed”.

Before making adjustments to the data, in any scientific study, it is necessary to have solid evidence of a bias, not just hypothesis and speculative reasoning.

It is obvious that the 1946 drop, from one monthly datum to the next, is anomalous and not of climatic origin. The following analysis shows a simple correction to this period removes anomalies in the time series, its derivatives and its frequency content. The supposed instrument biases proposed by Hadley are neither obvious in the data nor is their removal beneficial when the effects are analysed.

The resulting spurious post-war drop in temperatures is one of the reasons that climate models have difficulty in reproducing temperatures going back more than 50 or 60 years. It is also disruptive to any attempt to analyse the nature and magnitude of any natural cycles in climate: the spurious drop introduces significant frequency signals that are not there in reality and almost certainly disrupts those that are there.

The magnitude of this spurious adjustment is about half the size of the total variation over the whole period of the ICOADS record. It constitutes a significant rewriting of the climate record of the 20th century. HadSST2 is still the basis for the marine component of the combined HadCRUT3 land and sea record offered by the Met Office.

This issue finally got some recognition in Thompson 2008 [1] and an attempt was subsequently made to improve the adjustments to the data. This culminated in the release of HadSST3 in July 2011.

Figure 2b shows the difference between each HadSST version and the original data, displaying the adjustments that are applied in each case.

figure 2b.

The war-time glitch seems to be more reasonably dealt with in HadSST3. However, what is rather surprising is that the buckets-and-pipes adjustment, originally introduced to explain the 1941 increase, is retained. Despite the recognition of the post war drop that cancels the earlier rise, a further reduction is still applied and oddly ends up making the same overall adjustment as before.

All that HadSST3 does differently in this respect is to round off the edges and slide the change in over 40 years. Most of the defects and speculative assumptions included in the earlier work have been retained. This makes the post-war drop less obvious to the eye and presumably less disruptive to attempts to make climate models match the historic temperature record, though it may equally be argued that this is an attempt to make the record better match the assumptions implicit in the models.

Kennedy et al 2011c [3c] goes into some detail about how the duration of the change was determined.

If a linear switchover is assumed which started in 1954 and was 95% complete in 1969, the middle of the James and Fox study period, then the switchover would have been completed by 1970. Based on the literature reviewed here, the start of the general transition is likely to have occurred between 1954 and 1957 and the end between 1970 and 1980.

However, this assumption seems at odds with figure 1 from the same paper, which shows a significant proportion of bucket readings in 1970: a proportion that rose from 1955 to 1970 and only declined from then to the end of the record. Figure 3 reproduces figure 1 from K2011c [3c].

figure 3.
Numbers of SST observations from ships for different measurement methods (1925-2006). Buckets (dark grey), ERI and Hull Contact sensors (mid grey), unknown (light grey).

Neither does this hypothesised linear change-over from 1954 onward correspond to the bulk of the adjustment actually applied, as seen in figure 2b above, where the cooling adjustment clearly starts as early as 1920 and has already reached 2/3 of its final extent before 1954.

Steve McIntyre at Climate Audit had criticised pre-HadSST3 studies on a number of occasions. Since much of HadSST3 is based on earlier studies and the result retains much the same form (as shown in figure 2b), many of the issues he raised have not been resolved and are, unfortunately, still relevant.

Further, it was noted in a detailed study of the available meta data by Kent et al (2006) [10] that as late as 1970 fully 90% of temperatures, where the meta-data stated the nature of the measurement, were still done by bucket. Yet the Hadley correction is fully applied by this date assuming, incorrectly, that bucket sampling had been phased out by this time. Figure 3b shows figure 2(f) excerpted from that paper.

[EDIT] As noted by John Kennedy in comments, the switch-over referred to in the paper was from canvas buckets to insulated buckets, not buckets to ERI, so this particular criticism was incorrect. Apologies to Kennedy et al for the error.

figure 3b.

From K2011c, [3c] section 4.2 (emphasis added):

It is likely that many ships that are listed as using buckets actually used the ERI method (see end Section 3.2). To reflect the uncertainty arising from this, 30 ± 10% of bucket observations were reassigned as ERI observations. For example a grid box with 100% bucket observations was reassigned to have, say, 70% bucket and 30% ERI.

Some observations could not be associated with a measurement method. These were randomly assigned to be either bucket or ERI measurements. The relative fractions were derived from a randomly-generated AR(1) time series as above but with range 0 to 1 and applied globally.

Clearly the timing and magnitude of the Hadley correction is determined by something other than recorded data.

Method: data processing.

Fourier and other frequency analysis techniques were performed on several climatic temperature data series, both on the time series and on the first and second derivatives. Derivatives can be helpful in identifying data sampling and data processing errors and secular changes, and in isolating periodic variations. For example, an exponential rise in temperature will remain an exponential rise of the same duration in the derivative. However, a 64y cycle will shift back 16y and a 200y cycle will shift 50y; a linear trend will become a constant offset. The dip in temperature due to a volcanic event will become a dip followed by a positive peak. This means that different effects which are superimposed and confound one another in a time series may be distinguishable by studying the derivatives.
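
The quarter-period shift can be checked numerically (a toy sketch, not part of the actual processing): differentiating a 64y cosine moves its peak back by 16y.

```python
import numpy as np

T = 64.0                                  # cycle period, years
t = np.arange(0.0, 256.0, 1.0 / 12.0)     # monthly sampling
y = np.cos(2 * np.pi * t / T)
dy = np.gradient(y, t)                    # numerical first derivative

# locate one peak of each curve in the same window
win = (t >= 40.0) & (t <= 90.0)
t_peak_y = t[win][np.argmax(y[win])]      # cosine peaks near t = 64
t_peak_dy = t[win][np.argmax(dy[win])]    # derivative peaks near t = 48

shift = t_peak_y - t_peak_dy              # ~16 y = T/4
```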

The monthly climate data were downloaded from the sources cited in the reference section. HadSST3 was processed by taking the median value of the 100 realisations for each month in the published data. This was then used as the global average time series for comparison with the other datasets.

The HadSST3 data runs from 1850-2006; ICOADS from 1845-2008 and HadSST2 from 1850-2011. For the frequency analysis, the data were truncated to a common end-date (Dec. 2006) to give a more direct visual comparison of results. Data are published as a list of monthly averages in the form “yyyy month”; this was converted to a decimal date based on the middle of the month (i.e. January = +0.5/12; Feb = +1.5/12; etc.)
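
The date conversion described above can be written as a trivial helper (the function name is illustrative):

```python
def decimal_date(year: int, month: int) -> float:
    """Mid-month decimal year: Jan -> +0.5/12, Feb -> +1.5/12, ..."""
    return year + (month - 0.5) / 12.0

print(decimal_date(1850, 1))   # mid-January 1850
```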

Time derivatives were calculated on a month-to-month basis by dividing the incremental change in temperature by the time period. Derivatives are indicated on the graphs by labels such as “-diff1”, meaning the point differential taken over one data interval (typically one month). Each point is logged with the date of the middle of the interval to avoid introducing a time shift.
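
A sketch of this differentiation step (the helper name is illustrative):

```python
import numpy as np

def first_diff(dates, values):
    """Point differential over one data interval, logged at the
    mid-point of the interval to avoid introducing a time shift."""
    dates = np.asarray(dates, dtype=float)
    values = np.asarray(values, dtype=float)
    dt = np.diff(dates)
    dTdt = np.diff(values) / dt
    mid = dates[:-1] + dt / 2.0
    return mid, dTdt

# a linear trend of 0.05 K/y becomes a constant 0.05 in dT/dt
t = np.arange(1900.0, 1910.0, 1.0 / 12.0)
mid, d = first_diff(t, 0.05 * (t - 1900.0))
```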

Gaussian frequency filtering, where required, was achieved by calculating a weighted mean of neighbouring points for each point in the record. A three-sigma gaussian filter was used and is indicated in the graphs by its sigma value: “gauss-24m” signifies a three-sigma gaussian filter of sigma = 24 months. This method requires a full window of neighbouring points and thus shortens the records by three-sigma data points at each end. Each result is logged with the date of the middle of the sample to avoid introducing a time shift.
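
A sketch of the filter as described, with the window truncated at three sigma (the helper name is illustrative):

```python
import numpy as np

def gauss_filter(values, sigma):
    """Gaussian low-pass: weighted mean over a +/- 3*sigma window
    (sigma in samples). The output is shortened by 3*sigma points
    at each end, as noted in the text."""
    half = int(3 * sigma)
    k = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (k / sigma) ** 2)
    w /= w.sum()                          # normalise to a weighted mean
    return np.convolve(values, w, mode="valid")

# a constant series passes through unchanged, minus the end points
out = gauss_filter(np.full(50, 5.0), sigma=2)
```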

War-time adjustments.

The dates of the step changes during the war-time period can be determined by inspection. A simple correction for the war-time glitch would be to subtract a fixed amount from the monthly averages over that period. A range of values between -0.2 and -0.5K was tested. A value of 0.4 was found to best remove the disruption seen in the first and second derivatives. Considering the level of noise and variability in the signal, the probable range of the offset was determined to be 0.45 +/- 0.05K. This is an approximate solution, since the war in Europe started in Sept 1939 and global maritime traffic would already have been disrupted before the U.S. became involved in Dec 1941. This probably accounts for the early rise being less sharply defined than the later fall: part of the change had already occurred before 1941. This results in a small negative then positive glitch between ’39 and ’41, but this is considered insignificant in this context. A value of 0.5K seemed to be slightly better in removing disruption from the FFT periodogram. A value of 0.4K was retained as the best overall correction.
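
The selection of an offset can be illustrated on an idealised, noise-free series (the roughness metric below is an assumed stand-in for the derivative-disruption criterion described in the text):

```python
import numpy as np

# Idealised stand-in: flat series with an exact 0.4 K war-time step.
dates = np.arange(1930.0, 1960.0, 1.0 / 12.0)
war = (dates >= 1941.9) & (dates < 1946.0)
sst = np.where(war, 0.4, 0.0)

def roughness(series):
    """Sum of squared first differences; a residual step inflates this."""
    return float(np.sum(np.diff(series) ** 2))

# scan trial offsets over the -0.2 to -0.5 K range mentioned in the text
offsets = np.arange(0.20, 0.501, 0.05)
scores = [roughness(sst - off * war) for off in offsets]
best = offsets[int(np.argmin(scores))]   # the step is exactly removed at 0.4
```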

A better, more graduated correction could be made with a more rigorous examination of the precise causes but, in view of the data quality, the stated level of precision seems appropriate.

The effects of this correction on the time series and its time derivatives were examined both directly and in the frequency domain by Fourier analysis. Similar consideration was given to the correction to this period applied in HadSST3. The two were compared.

Fourier analysis.

The Fourier analysis was done with software using the well-reputed, open-source FFTW discrete Fourier transform library. The method was checked against other open-source software with its own FFT implementation. Results were identical to within calculation error limits.

There are well-known problems with the discontinuity created when the FFT effectively joins the start and end of the data in a continuous loop. A common solution, standard practice in engineering and digital signal processing, is to multiply the data by a windowing function that progressively reduces both ends to zero. However, this will notably affect the longer periods and could give the impression of a long-term cycle where one does not exist in the original data. Here the use of the FFT is restricted to dT/dt, where the difference in level at the two ends is less marked than in the temperature time series. However, there is a rising trend, and this is seen in the residual non-cyclic term. Further study of the profile of this part of the FFT would give useful information about the nature of the non-cyclic rise, but that is not explored here.
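
A minimal sketch of a windowed FFT applied to a derivative series (synthetic 60y cycle; the study itself uses the FFTW library rather than NumPy):

```python
import numpy as np

dt = 1.0 / 12.0                           # monthly sampling, years
t = np.arange(0.0, 180.0, dt)             # 180 y synthetic record
dTdt = np.sin(2 * np.pi * t / 60.0)       # a pure 60 y cycle in dT/dt

# taper both ends to zero to soften the wrap-around discontinuity
win = np.hanning(dTdt.size)
spec = np.abs(np.fft.rfft(dTdt * win))
freqs = np.fft.rfftfreq(dTdt.size, d=dt)  # cycles per year

peak_period = 1.0 / freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
```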

Since a discrete Fourier transform only gives discrete frequencies that correspond to sub-multiples of the data window, it is necessary to repeat the FFT with different window lengths in order to get detail at longer periods. This technique can also be used to get several evaluations of intermediate frequencies when different sub-multiple results cover the same period. For example, the 50y period component can be evaluated by looking at the last 50, 100 or 150 years of data. If the data has a reasonably homogeneous frequency structure and little error, the three results will be close; if not, comparison can give information about how the more recent section of the data compares to the fuller, longer windows. Dividing the period by the number of the sub-multiple allows these different evaluations to be overlaid for direct visual comparison. Where this has been done it is indicated in the graph’s title and the individual plot lines are labelled “half-window period” in the legend.

This approach allows more detail but is also affected by the different sub-section of the data being used at each point. A peak or dip can be due to local changes at the window end as well as true frequency patterns. Punctual disturbances, such as major volcanoes or data collection changes, can be seen to cause short-range anomalies. These can be usefully identified by comparing the full-window and half-window plots, since such disturbances will affect all traces at the same point and will not scale as cyclic variations will. Other, longer, non-cyclic variations such as long-term secular trends and data collection errors will also manifest in this way.
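
The idea of evaluating one period from several window lengths can be sketched as follows (idealised, homogeneous data, so all three evaluations agree exactly):

```python
import numpy as np

dt = 1.0 / 12.0
t = np.arange(0.0, 150.0, dt)
y = np.cos(2 * np.pi * t / 50.0)          # homogeneous 50 y cycle

# evaluate the 50 y component from the last 50, 100 and 150 y of data
amps = []
for window_years in (50, 100, 150):
    n = int(round(window_years / dt))
    seg = y[-n:]                          # most recent `window_years` of data
    k = window_years // 50                # sub-multiple bin holding the 50 y period
    amps.append(2.0 * abs(np.fft.rfft(seg)[k]) / n)

# homogeneous structure: the three estimates coincide
```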

In ideal data with clearly defined, stable frequencies, any periodic cycles would be represented by well-defined peaks. In noisy data from a largely chaotic system, peaks will be spread and may even change from one part of the sample period to another. Here it is only possible to determine a broader range of frequency where power of the spectrum is concentrated. Other more advanced frequency analysis techniques that tolerate such changes in the data structure, such as wavelet or entropy methods, may give more precise results. However, this approach will allow some useful insight into the effects of the various adjustments on frequency and provide an informative comparison of the global datasets. It is thus possible to evaluate whether an adjustment improves the dataset or disrupts it.

Since cyclic terms will be largely unchanged in dT/dt, a comparison with the frequency spectrum of the derivative is equally useful in separating secular and cyclic variations. There is the added advantage that a linear trend will become a constant in dT/dt and the wrap-round discontinuity is less marked. As a consequence, the distortion due to any windowing function will be less pronounced. There has been a recent focus in climatology on ocean heat content, i.e. the total thermal energy. In the same way that the temperature reflects the thermal energy of a body, the rate of change of temperature reflects the power entering or leaving a body. Inspection of dT/dt may give useful insight into the net power entering or leaving the ocean surface layer.

Curve fitting.

Curve fitting was done using Gnuplot, an open source plotting utility. It uses an implementation of the non-linear least-squares (NLLS) Marquardt-Levenberg algorithm. Unlike linear least squares, NLLS methods require initial parameter values be supplied as a starting point. If the initial values are poorly chosen it may fail to converge on a result. If this is the case, it is indicated by the software. Where the model function is successfully fitted detailed statistics on the quality and confidence of result are given.

The data from 1940 to 1946 were given zero weighting in model fitting, so the results are not perturbed by the unreliable war-time period. This was done identically in analysing both the original and the adjusted versions.
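
A Python analogue of this procedure (the study itself uses gnuplot’s Marquardt-Levenberg fit; scipy’s `curve_fit`, the single-cosine model form and the starting values here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic series: one long cosine plus a war-time glitch.
t = np.arange(1900.0, 2000.0, 1.0 / 12.0)
y = 0.3 * np.cos(2 * np.pi * (t - 1945.0) / 64.0) + 0.1
war = (t >= 1940.0) & (t < 1946.0)
y[war] += 0.4                              # glitch to be excluded from the fit

def model(t, a, T, t0, c):
    return a * np.cos(2 * np.pi * (t - t0) / T) + c

# NLLS needs starting values; poor choices may fail to converge.
p0 = (0.2, 60.0, 1940.0, 0.0)

# zero weighting of 1940-46 implemented here by excluding those points
popt, pcov = curve_fit(model, t[~war], y[~war], p0=p0)
# popt recovers the underlying cosine despite the glitch
```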

Analysis

The original ICOADS global mean sea surface temperature is shown in figure 1 along with the simple war-time adjustment analysed in this study. This is similar to the 0.5 K adjustment applied by Jones et al (1986), as previously noted above. This adjustment was used prior to the opening of the Hadley Centre in 1990.

Expanded detail of the war-time period showing an abrupt upward jump in December 1941 and a similar downward drop in 1946 (red) can be seen in figure 1b.

Figure 2 shows the magnitude of the adjustments applied by HadSST2 and HadSST3 to the original data (note these are not the adjusted time series themselves but adjustments that are applied in each case). It can be seen that HadSST3 fully retains the step change of SST2 but phases it in over an extended period and smooths the transitions. The adjustments are mostly neutral after 1970.

The overall form of the adjustment is rather surprising in view of its supposed origin in changes to data collection practice. There seems to be a strong long-term cyclic element in the adjustment. Its general character can be modelled by fitting two cosine functions. This raises the question of whether the HadSST3 processing is inserting or attenuating long-term cycles in the original ICOADS data. The fitted curve is shown in Figure 4.

figure 4

Figure 5 overlays the Hadley adjustment on the original data.

figure 5

It can be seen that the adjustment, which is deemed to result from the study of changes in data collection methods, is remarkably close in form to the variations in the original data itself. Its magnitude is about 50% of the variations before 1920 and around 67% between 1920 and 1970. The only variation that is not attenuated is the post-1980 rise (which actually gets gently increased towards the end). In effect, a major proportion of the long-term variations over the majority of the period of available data is being removed on the basis of “correcting” the hypothesised data sampling biases.

The magnitude of the adjustment is comparable in size to the total warming of the 20th century, i.e. the “correction” deemed necessary is almost as big as the effect being observed.

Figure 6 shows the Fourier analysis of the rate of change of temperature in the ICOADS data (24 month gaussian low-pass filtered) with the simple war-time correction. The x-axis shows the length of the window of data used for each point. The subset of data used always includes the most recent data and increases in length backwards in time. Thus this is a periodogram (period plot) rather than a frequency plot.

figure 6

The energy in the spectrum is concentrated around 60 years, around 10 years, and towards the longest periods available in this data, all three being of roughly equal magnitude. The presence of a longer periodic signal running strongly to the end of the graph, where it converges with the non-cyclic residual, shows there is a cyclic variation of more than 160 years in duration. [The marked trough in the blue line is due to the short window coinciding with a marked change around 1975. Analysis of the sub-20 year frequencies by a conventional FFT frequency plot shows this is not a frequency feature. The full window plot better represents these frequencies.]

It is noted that there is a strong similarity between the blue and green lines. The blue line represents the shorter periods found to have two cycles in the window, the green one, those with a period equal to the window. Their similarity demonstrates a consistent distribution of energy between the more recent half of the data and the record as a whole. This would strongly argue against the need for substantial corrections to the data.

The red line shows the residual, non-cyclic component. The regular occurrence of small peaks every ten to eleven years reflects the window’s transition through the periodic variations commonly attributed to the “solar cycle”. As the length of data used in the FFT window increases, their impact on the data decreases. Also, with window length greater than 80 years, the longer term variations begin to be detected and hence the residual generally decreases. This shows how selectively restricting any analysis to only the most recent portion of the available data opens up the likelihood of confounding cyclic and non-cyclic trends leading to false diagnosis and attribution.

A similar frequency analysis of HadSST3, shown in Figure 7, shows that the long-term cyclic variation, greater than 100 years, has been severely attenuated. Also, the scaled secondary frequency plot (blue) is no longer in agreement with the green line, indicating that the latter half of the record is no longer in accord with the full record, as it was when applying a trivial correction. Removal of the supposed biases has destroyed the homogeneity of the data. On the positive side, the two lines agree more closely at window lengths around 60y, where the window end falls within the problematic war years. This suggests that HadSST3 is dealing with the detail of the war-time glitch more precisely than the simple adjustment.

figure 7

A lot more can be gained from this kind of analysis, and that will be the subject of further study, but these two main features show the ways in which, and the extent to which, the Hadley adjustments fundamentally change the nature of the data.

Figures 8, 9 and 10 show the time series and the first and second time differentials for the original ICOADS data, HadSST3 and the simple 0.4K adjustment respectively. Due to short-term variation and noise being amplified by differentiation, a 24 month gaussian filter has been applied after differentiation (48m in the case of the 2nd differential). Without this measure the noise dominates and obscures the rest of the signal. Because the filtering reduces the length of the available data, the fitting period was started at 1860 to make the results more directly comparable. Since the recent data is a lot less noisy and more certain, the fit period runs to the end of the data in each case. The fitted data is shown in dark blue.

Figure 8 ICOADS triple plot

Figure 9 HadSST3 triple plot

Figure 10 ICOADS_adjusted triple plot

It is noted that the simple adjustment is consistent in the time derivative: the two primary components detected in the time series remain similar in period, phase (reference year) and magnitude in the first derivative (see appendix). Equally, the disruption of the war-time glitch is clearly seen in all three plots of the original data but it does not prevent similar values for the cosine fit.

It seems unlikely that any error due to sampling methods and unrelated to climate would introduce a cyclic variation that is consistently found in the time derivative. Errors and biases would likely be non-cyclic and become more obvious by comparison with the derivative. In contrast, the century-scale component in HadSST3 changes completely in period, phase and magnitude in the derivative. This is not necessarily wrong in itself, because non-cyclic variations will be different in the derivative, but it indicates that a fundamental change is being made to the structure of the data. A time series that has a strong cyclic feature loses that quality as a result of a correction deemed to simply correct data sampling errors.

It was also noted that the cosine model fitted to the simply adjusted ICOADS data, closely follows the mid-points of the oscillations in dT/dt and does not have any post-war anomalies. There seems to be a more natural coherence to the data globally, whereas HadSST3 seems more chaotic. This needs quantifying mathematically.

The broad, positive war-time peak in dT/dt of HadSST3 is unique in the record. This feature seems anomalous in both duration and form.

Also, studying the second differential of the simply adjusted ICOADS record shows solar cycles over the last century to be regularly grouped in pairs. Since the pseudo-11-year cycle is known to be the two halves of the circa 22-year magnetic cycle, this pairing appears to be a coherent feature and lends credence to the data quality. It is destroyed by the Hadley processing, although it would be worth investigating whether some of the individual realisations preserve it.

Figure 11. d2T/dt2 : HadSST3 vs ICOADS_adjusted (animation)

Figure 12 shows part of the Wang et al [11] Total Solar Irradiance reconstruction overlaid on the d2T/dt2 plot from figure 10(c).

Figure 12 comparing d2T/dt2 to TSI

It is not the object of this study to suggest or refute any particular link between climate and TSI, nor to suggest that TSI would necessarily be the appropriate solar parameter to study. However, there is more than a coincidental similarity in the timing of variations of the two quantities over the last century, and TSI can at least be regarded as an indicator of solar activity. It seems improbable that sampling bias could erroneously introduce a signal with such a similarity. Any data processing that removes or distorts such a signal must therefore be rejected as flawed.

Removing climate signals from SST data which is used as a primary reference for climate study will not aid scientific understanding of climate. In fact it will confound it.

Studying the second differential reveals other important changes. Notably, the non-cyclic constant term ‘c’ in the fitted parameters represents an accelerating increase of temperature with time. The fitted values are shown as annotations on the triple plots in figures 8(c), 9(c) and 10(c):

ICOADS v2.5 : 1.26 K/c/c
HadSST3 : 6.63 K/c/c
ICOADS WWII adjusted : 1.19 K/c/c

The idea that temperature changes are accelerating at over 6 K per century per century is beyond even the extreme end of the IPCC projections, yet this is what is found in the Hadley-modified data set. The unmodified and the simply corrected ICOADS data show a more reasonable 1.2 K/c/c.
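
To put these figures in context (my arithmetic, not the author's): a constant acceleration c, in K per century per century, adds an extra 0.5·c·n² K after n centuries on top of any linear trend, so the HadSST3 value implies over 3 K of accelerated warming in a single century:

```python
# Extra warming (K) accumulated after one century (n = 1) from a constant
# acceleration c [K/century^2], beyond any linear trend: 0.5 * c * n**2
fitted_c = {"ICOADS v2.5": 1.26, "HadSST3": 6.63, "ICOADS WWII adjusted": 1.19}
extra_after_1c = {name: 0.5 * c * 1.0**2 for name, c in fitted_c.items()}
for name, dT in extra_after_1c.items():
    print(f"{name}: {dT:.2f} K beyond the linear trend after one century")
```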

While such a simple analysis is not intended to fit a definitive model to the data, it again shows how HadSST3 fundamentally changes the nature of the original data.

The convergence statistics for the various cosine fits show that the asymptotic standard errors of the fitted parameters are generally very good, though notably higher for HadSST3 than for the simply adjusted ICOADS dataset (see appendix). The equivalent sine functions fitted to dT/dt are equally certain, and the pairs of results are consistent: the period and reference year derived in each case agree to within (or close to) the statistical fitting error reported by the fitting algorithm. The one exception is the longer cycle’s period, which shortens from circa 200 years in the time series to around 150 years in the derivative. This is likely due to non-cyclic variation in the time series that is not accounted for in the pure cosine model; the long periods found in dT/dt differ because a linear trend is accommodated there by the constant rate of change ‘c’ in the model.
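
This kind of fit can be reproduced with standard non-linear least squares. The sketch below uses synthetic data, a single cosine term and illustrative starting values (all assumptions of this example, not the study's actual script) to show how the asymptotic standard errors reported in Table 1 fall out of the covariance matrix returned by the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosine_model(t, p1, a1, yz1, c):
    """One term of the fitted model: period p1 (years), amplitude a1,
    reference year yz1, plus a constant c. The article fits two or
    three such terms simultaneously."""
    return a1 * np.cos(2.0 * np.pi * (t - yz1) / p1) + c

# Synthetic monthly anomalies with known parameters, for illustration only
rng = np.random.default_rng(1)
t = 1860 + np.arange(12 * 150) / 12.0
y = cosine_model(t, 200.0, 0.46, 2001.0, -0.19) + 0.05 * rng.standard_normal(t.size)

popt, pcov = curve_fit(cosine_model, t, y, p0=[180.0, 0.4, 1995.0, 0.0])
perr = np.sqrt(np.diag(pcov))          # asymptotic standard error of each parameter
pct_err = 100.0 * np.abs(perr / popt)  # percentage errors, as reported in Table 1
```

Note that, as in the article, the fit recovers a circa 200-year period even though the record spans only about 150 years, which is exactly the situation in which a non-cyclic trend can masquerade as part of a long cycle.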

All the analyses find the reference year of this cycle to be around 1995. This is a peak for the temperature time series and the transition from positive to negative rate of change in dT/dt.

They are also all (including HadSST3) in general agreement about a circa 11-year period peaking in 2001/2002, a circa 21-year cycle peaking in 2005/2006, and a circa 64-year cycle peaking in 2010-2016 (more likely toward the end of that interval).

In contrast, the longer period fitted to HadSST3 is twice that found in the original data (circa 350 years), with a somewhat larger uncertainty and parameters incompatible with the results for ICOADS. In the derivative the NLLS algorithm converges to a much shorter, incompatible period of circa 120 years, indicating that the longer cycle was a fortuitous fit to a non-cyclic variation and probably does not reflect a true cycle in the data. The parameters for the short-term periods are similar to those in the original data and are consistent in the time derivative.

This confirms what was seen in the FFT analysis: HadSST3 is removing (or totally disrupting) the clear, long term variation found in the climate record.

Similar processing of the difference of the adjusted and non-adjusted time series (HadSST3 − ICOADS), i.e. the modifications being applied, shows the adjustment itself has a cycle close to the circa 165-year and 64-year cycles found in the data. The values are annotated in figure 4.
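
Constructing the difference series itself is straightforward: subtract the unadjusted from the adjusted global mean, month by month, over the months common to both series. A minimal sketch, with made-up values rather than real HadSST3/ICOADS numbers:

```python
def adjustment_series(adjusted, original):
    """Return the net modification (adjusted - original) on the months
    present in both series. Each series maps (year, month) -> anomaly in K."""
    common = sorted(set(adjusted) & set(original))
    return common, [adjusted[k] - original[k] for k in common]

# Toy example: only (1940, 1) appears in both series
hadsst3 = {(1940, 1): -0.05, (1940, 2): -0.02}
icoads  = {(1940, 1):  0.35, (1940, 3):  0.30}
months, diff = adjustment_series(hadsst3, icoads)  # net adjustment of about -0.4 K
```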

However complex the methodology claims to be, the result is surprisingly simple. HadSST3 selectively removes the majority of the long-term variations from the pre-1960 part of the record, i.e. it removes the majority of the climate variation from the majority of the climate record. Examination of the time differentials also shows it is distorting rather than improving the data.

Comparison to non-SST measurements

The Gomez Dome is an ice dome at the base of the Antarctic Peninsula whose climate is dominated by the surrounding ocean. A study by Thomas, Dennis et al. (2009) [8] derived a high-resolution temperature proxy record from oxygen isotope ratios in the ice core. This oxygen isotope ratio is generally regarded as a reliable proxy for the temperature of the water at the time of evaporation (i.e., in this case, SST in the Bellingshausen Sea). Figure 13 is the δ18O graph excerpted from that paper.

Figure 13 Gomez Dome δ18O

Authors’ original caption:

Gomez annual average δ18O (blue), running decadal mean (red) and nonlinear trend (black). The running decadal mean is derived using an 11-point Gaussian window filter.

The paper explains the derivation of the non-linear trends thus:

The EMD approach is used to extract physically meaningful modes and nonlinear trends from nonlinear and non-stationary time series that cannot be captured by a linear least-square fit.

The figure shows a non-linear trend that appears to peak around 2000 AD with a trough around 1890. This is consistent with a circa 200-year cycle similar in period and phase to that found in ICOADS SST in the present study.

Camp and Tung [12] recently reported finding a correlation of 0.64 between surface air temperature and the solar TSI index, with a magnitude of 0.18 ± 0.08 K per W/m2 linked to the nominal 11-year cycle. The present study found that evidence of a similar correlation with SST was disrupted by the HadSST3 processing.

Conclusion

HadSST3 contains a series of adjustments. With the exception of the war-time glitch, they are not obvious from study of the record; their existence is based on speculation and hypothesis. Calculation of the biases involves inverting a significant portion of the written record’s metadata for the period of the principal adjustment and ignoring detailed studies on the proportion and timing of changes in data sampling methods, as well as speculation as to the magnitude of the various effects.

The principal effect of these adjustments is to selectively remove the majority of the long-term variation from the earlier two-thirds of the data record and to disrupt the circa 10-11 year patterns clearly visible in the data. These changes fundamentally alter the character of the original data.

The strong similarity in form between the variations in the original ICOADS data and the corrections deemed necessary to correct sampling biases is remarkable. All the more so in view of the lack of documentary information on which to base the estimated magnitude and timing of the adjustments.

The analysis presented here indicates that, outside the immediate war-time period, these adjustments are distorting and degrading the data rather than improving it.

A number of different analyses suggest that a simple correction to the war-time period (as was used before the creation of the Hadley Centre) provides a more coherent and credible result.

Comparison to studies of non-SST data suggests that much of the variation in ICOADS is quite possibly due to real climate signals, not instrument bias. These variations require proper investigation, not a priori removal from the climate record.

Data sources

[EDIT] The ICOADS data provided by JISAO, used in this study, was v2.4, not version 2.5 as stated. There is little change between the two, and this does not make a material change to the analysis and arguments presented in this study.

ICOADS v2.5 project http://jisao.washington.edu/data/global_sstanomts

ICOADS download http://jisao.washington.edu/data/global_sstanomts/sstglobalanom18452008

ICOADS website

UEA data page http://www.cru.uea.ac.uk/cru/data/
HadSST2 download http://www.cru.uea.ac.uk/cru/data/temperature/hadsst2gl

HadSST3 presentation http://www.metoffice.gov.uk/hadobs/hadsst3
HadSST3 download http://www.metoffice.gov.uk/hadobs/hadsst3/data/TS_all_realisations.zip

SORCE project page http://lasp.colorado.edu/sorce/data/tsi_data.htm
TSI reconstruction data http://lasp.colorado.edu/sorce/tsi_data/TSI_TIM_Reconstruction.txt

References

[1]
Thompson, D.W.J., Kennedy, J.J., Wallace, J.M. & Jones, P.D. (2008) A large discontinuity in the mid-twentieth century in observed global-mean surface temperature. Nature 453, 646-649
http://www.atmos.colostate.edu/ao/ThompsonPapers/Thompson_etal_Nature2008.pdf

[2]
Chris E. Forest and Richard W. Reynolds (2008)
Hot questions of temperature bias Nature 453, 601-602
http://www.atmos.colostate.edu/ao/ThompsonPapers/Thompson_etal_Nature2008.pdf

[3a]
Kennedy, J., R. Smith, and N. Rayner (2011a), Using AATSR data to assess the quality of in situ sea surface temperature observations for climate studies, Remote Sens. Environ. (in press) http://www.metoffice.gov.uk/hadobs/hadsst3/RSE_Kennedy_et_al_2011.doc

[3b]
Kennedy J.J., Rayner, N.A., Smith, R.O., Saunby, M. and Parker, D.E. (2011b). Reassessing biases and other uncertainties in sea-surface temperature observations since 1850 part 1: measurement and sampling errors. in press JGR Atmospheres http://www.metoffice.gov.uk/hadobs/hadsst3/part_1_figinline.pdf

[3c]
Kennedy J.J., Rayner, N.A., Smith, R.O., Saunby, M. and Parker, D.E. (2011c). Reassessing biases and other uncertainties in sea-surface temperature observations since 1850 part 2: biases and homogenisation. in press JGR Atmospheres http://www.metoffice.gov.uk/hadobs/hadsst3/part_2_figinline.pdf

[3]
Folland, C. K., D. E. Parker, and F. E. Kates. 1984. Worldwide marine temperature fluctuations 1856-1981. Nature 310, no. 5979: 670-673.
http://adsabs.harvard.edu/abs/1984Natur.310..670F

[4]
Folland, C. K. and Parker, D. E. (1995) Correction of instrumental biases in historical sea surface temperature data.
Q. J. R. Meteorol. Soc. 121, 319-367
ftp://podaac.jpl.nasa.gov/allData/gosta_plus/retired/L2/hdf/docs/papers/1-crrt/1-CRRT.HTM

[5]
Parker, D. E., Folland, C. K. and Jackson, M. (1995)
Marine surface temperature: observed variations and data requirements.
Climatic Change 31: 559-600, 1995. Kluwer Academic Publishers.
ftp://podaac.jpl.nasa.gov/allData/gosta_plus/retired/L2/binary/docs/document/papers/3_clmchg/3_clmchg.htm

[6]
Folland, C. K., Reynolds, R. W. and Parker, D. E. (2003)
A Study of Six Operational Sea Surface Temperature Analyses
http://www.socratesparadox.com/Temperatures/Follland_et_al1993.pdf

[7] Steve McIntyre, Climate Audit
(a) http://climateaudit.org/2011/07/12/hadsst3/
(b) http://climateaudit.org/2008/05/29/lost-at-sea-the-search-for-windowed-marine-de-trending/
(c) http://climateaudit.org/2008/05/31/lost-at-sea-the-search-party/
(d) http://climateaudit.org/2008/05/28/nature-discovers-another-climate-audit-finding/
(e) http://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/

[8] Thomas, E. R., Dennis, P. F., et al. (2009)
Ice core evidence for significant 100-year regional warming on the Antarctic Peninsula
Geophysical Research Letters, Vol. 36, L20704, doi:10.1029/2009GL040104

[10] Kent, E. C., Woodruff, S. D. and Berry, D. I. (2007)
Metadata from WMO Publication No. 47 and an Assessment of Voluntary Observing Ship Observation Heights in ICOADS
Journal of Atmospheric and Oceanic Technology, vol. 24.
http://dss.ucar.edu/datasets/ds540.0/docs/WMO-Pub47_jtech07-1.pdf

[11] Wang, Y.-M., Lean, J. L. and Sheeley, N. R., Jr. (2005)
Modeling the Sun’s Magnetic Field and Irradiance since 1713
The Astrophysical Journal, 625:522-538, 2005 May 20
DOI: 10.1086/429689
http://adsabs.harvard.edu/abs/2005ApJ...625..522W

[12] Camp, C. D. and Tung, K. K. (2007), Surface warming by the solar cycle as revealed by the composite mean difference projection,
Geophys. Res. Lett., 34, L14703, doi:10.1029/2007GL030207.
http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/CompositeMean.pdf

Appendix

Table 1. Sample of fitting errors for cosine model.

hadSST3_median time series results
Final set of parameters Asymptotic Standard Error
======================= ==========================
p1 = 344.302 +/- 44.7 (12.98%)
p2 = 66.0627 +/- 0.8661 (1.311%)
a1 = 0.361004 +/- 0.05316 (14.73%)
a2 = 0.127095 +/- 0.004566 (3.593%)
yz1 = 2052.23 +/- 19.29 (0.94%)
yz2 = 2010.63 +/- 1.169 (0.05814%)
p3 = 21.3518 +/- 0.1659 (0.7772%)
a3 = 0.0415534 +/- 0.004699 (11.31%)
yz3 = 2006.16 +/- 0.7091 (0.03535%)
c = 0.0107957 +/- 0.05011 (464.2%)

icoads_monthly_adj0_34 time series results
Final set of parameters Asymptotic Standard Error
======================= ==========================

p1 = 198.429 +/- 4.095 (2.064%)
p2 = 61.6491 +/- 0.7408 (1.202%)
a1 = 0.463026 +/- 0.006512 (1.406%)
a2 = 0.138481 +/- 0.004394 (3.173%)
yz1 = 2001.13 +/- 1.72 (0.08596%)
yz2 = 2011.54 +/- 1.013 (0.05035%)
p3 = 21.1679 +/- 0.2235 (1.056%)
a3 = 0.0293529 +/- 0.004655 (15.86%)
yz3 = 2004.16 +/- 0.9152 (0.04566%)
c = -0.189096 +/- 0.007575 (4.006%)

 

Biosketch:  The author has a graduate degree in applied physics, professional experience in spectroscopy, electronics and software engineering, including 3-D computer modelling of scattering of e-m radiation in the Earth’s atmosphere.

JC comment:  HadSST is generally regarded as the best of the global SST data sets.  The substantial improvements in HadSST3 were discussed on this previous post, which included comments from John Kennedy.  I am particularly interested in this mid-century period, since it is an important period in the context of understanding 20th century climate change attribution.

I have been discussing this topic with Greg for several months, and I invited him to do a guest post on this topic.  I did some light editing and suggested some shortening.  The views expressed in this post are those of GG, and not my own.

Moderation note:  this is a technical thread that will be moderated strictly for relevance.  Apologies for the glitches in the first version of the post, a few comments were lost.

UPDATE 4/12/12:

The author has had some quite interesting discussions with John Kennedy
(lead author of the three papers in the reference section above that present the HadSST3 dataset).

As a result we were able to agree on the main points raised in the article:
1. That HadSST3 removed the majority of the variation from the majority of the record.
2. That these adjustments are based on hypothesis rather than being scientifically proven.

The following link will help locate the salient part of the discussion:
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-188137
Otherwise, use the browser search facility to locate either “Goodman” or “Kennedy” (usually Ctrl-F, or “Find” on the Edit menu).

We also discussed something that I had not entered into in the article because I wished to remain focused on the central issue: the question of the claimed “validation” of HadSST3 adjustments by other studies.
I maintain that the claimed validation by comparison to computer model outputs and extremely geographically limited data is totally inappropriate and neither validates nor disproves anything. John did not agree with my position on that but was not able to show I was wrong. He did provide some useful and interesting information on how the models are optimised to the period 1960-1990, which underlines the problems with their use to “validate” the earlier adjustments.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-188237

He suggested that I should compare the time series of the Hadley adjustment to the residual “corrected climate” as well as comparing it to the original data as I did in the article. I agreed it would be interesting, and the result can be seen here: http://i39.tinypic.com/a2gjv8.png

The two plots are on the same scale, and this confirms my initial observation that the Hadley adjustment is basically cutting the variation of the data in half before 1910, i.e. they are suggesting that the bias is equal to the “true” climate signal and varies in a similar way over this period. I find such a result surprising and improbable, and it is worth seeing this clearly displayed since it is not presented in this way in the papers. In a similar way it can be noted that the 1920-1940 trend in the adjustment is very similar to the trend in the residual.

John was also good enough to send me the ICOADS data after the remapping to their 5×5 degree gridding but before the application of the bias adjustments. This should be useful in examining which aspect of the Hadley processing is responsible for the various changes I noted. I am still looking at that in more detail.

I would like to take this opportunity to thank John for taking the time to discuss some of the issues raised in this article and to engage in serious dialogue. We are continuing discussions by email, and if anything significant comes out of this I hope to post further updates.

Greg Goodman.

381 responses to “On the adjustments to the HadSST3 data set”

  1. Sorry Dr. Curry, still getting 404 errors.

  2. Oh sorry. Links are inline now. Ooops!

  3. Sounds like yet another climate-related miracle…HadSST3 selectively removes the majority of the long term variations from the pre-1960 part of the record. ie. it removes the majority of the climate variation……that cannot be attributed to anthropogenic global warming!

  4. Hurry up Muller and Mosher!

  5. Heckuva analysis. A question:

    In Figs 8 and 10 relating to ICOADS, the raw data (blue line) appears to go through the 2008 La Nina. In Fig 9 relating to HADSST the blue line appears to stop short of this. Why, and how much does this affect the NLLS fit/projection?

    • @ billc | March 15, 2012 at 9:46 pm

      Bill, there is no raw data, all data is cooked. Uneven distribution for monitoring places = is already cooked. One monitoring place in Oceania has more influence than 500 monitoring places in Europe. B] monitoring on ”few places” on the first 2m of the troposphere and disregarding the anomaly in the 29km + 998m above, of the rest of the troposphere = is data precooked. C] Antarctic or Greenland are large as USA – only on few places clustered monitoring there… if the weather bureau was saying on the TV box every evening: those ”few places in southern Florida represent ALL the temperature in USA…?

      2] Next time when they give the weather report; remember Stefan said: compare if when in southern Florida temp goes up, does all over USA goes UP by that much’ if not, their ”SMOCKING GUN”

      3] they collect temp data, ONLY for the hottest minute in the 24h; what about the regular anomaly in the other 1439 minutes?! Usually when day temp for that time of the year is higher than last year – night temperature is colder ( because of clear sky) but if is cloudy – hottest minute temp is cooler; but at least 750 other minutes in the 24h are warmer!!! Don’t those so many minutes count? Cooked and recooked data is exclusively for brainwashing the Warmist from the lower genera and IQ + ALL of the fake Skeptics. Because the ”secular Skeptics” don’t buy that doo-doo from the politburo

  6. Roger Andrews

    HADSST2 and HAD SST3 are classic examples of how climate data get tweaked to fit the theory rather than the other way round. Details here for anyone who might be interested:

    http://tallbloke.files.wordpress.com/2011/02/final-sst.pdf

    • Greg Goodman

      From Tallbloke’s pdf: “The coincidence of these three spikes raises the question of whether the spike in the SST series was even caused by bucket-intake changes, since such changes obviously could not have caused the spikes in the MAT and cloud cover series. ”

      This was much my conclusion looking at ICOADS, that it was data collection differences rather than physical sampling method that caused the glitch.

      I was unaware that there was also cloud cover data showing the same glitch. That is a very strong indication that this has nothing (or little) to do with buckets.

      My gut feel is that this adjustment is contrived. That information puts some flesh on the bones. Many thanks.

      • Roger Andrews

        You’re welcome.

        And there’s nothing “gut” about your feeling that the bucket adjustments are contrived. They’re there simply to make the SST record fit the NMAT record. In fact I’ve often wondered why they don’t just keep the NMATs and throw the SSTs out altogether.

      • Morning Greg,

        The war time period is different for a number of reasons. We have fewer data for a start and they come from a much less diverse data bank than data at other times. With fewer ships, drawn from a small selection of countries, any biases (in SST, or NMAT, or cloud cover) are likely to be more pronounced at that time.

        There was also a systematic change in the time of day at which observations were being made. If my memory serves me right, there was regular sampling before and after the war (every 6 hours or so), but during the war there’s a shift to taking observations at 8am, 8pm and noon local times. We discussed this in part 2 of the HadSST3 paper.

        Best regards,

        John

      • Greg Goodman

        Good day John.

        I’m aware that there are a whole bunch of issues around the war, most of which do not involve buckets :) Most of them are likely to be unquantifiable.

        An ad hoc adjustment to this short period, based on minimising the disruption, allows frequency analysis of the whole record and assessment of the effects of HadSST3, which was the point of this study.

        Over the weekend, I have replied to a number of the points you raised, starting here.
        http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-186075

      • __RE: “There was also a systematic change in the time of day at which observations were being made.”

        I wonder whether that is of any significance at a time when the taking of SST was very different from the non-war years.

        For example: The personnel of the US Air force and US Navy weather services increased from 2500 in 1941 to 25,000 in 1945, with many hundreds of trained and well-equipped weather observers doing their service on board merchant and naval ships (Bates et al. 1986).

        Of a much higher relevance is the fact of convoying. The allies completed 300,000 Atlantic voyages during the war (Winton, 1983) in roughly 8,000 convoys. In each convoy about 80% of all participating ships (20 to 50) sailed in the wake of one or up to 8 other ships. Any SST data taken by them is anything but “normal” SST.
        http://www.oceanclimate.de/English/Atlantic_SST_1998.pdf ; see also below: tonyb March 17, 2012 at 12:50 pm

        Is it so difficult to accept what TonyB suggested: “that the war year measurements should be forgotten”? TonyB March 18, 2012 at 12:08 pm

  7. And when Girma can do that sort of analysis, or Mosher can do that depth of analyses on his Canadian data, we’ll have something well worth looking at.

    • Yet Goodman’s analysis is still largely empty of any significance. Sure it’s leaps and bounds over anything that Girma can produce, but it is limited by the small dynamic range of the data.

      Personally, I would not do straight time-series analysis of temperature data in the current era unless it contained a larger dynamic range.

      From my perspective, what works much better is take paleoclimate data from say Vostok, and do something beyond straight time-series. For example, I just finished a GnuPlot of Temperature and CO2 correlation for Vostok over the last evening.

      http://theoilconundrum.blogspot.com/2012/03/co2-outgassing-model.html
      take a look at this chart in particular, which maps out the Temperature/CO2 concentration phase space via a random walk model of CO2 outgassing mixed with GHG feedback.
      GnuPlot contour fit for αβ model of Vostok data

      IMO, this is the kind of angular perspective that we ought to pursue. Be creative with respect to the data that shows more dynamic range. YMMV.

      • What do you consider the margin of error to be for the paleoclimate data?

      • Ringo, Didn’t you pay attention to what I just said? The dynamic range is much better in the paleoclimate data and that is what enables us to actually fight off the uncertainty (monster).

        There is evidently enough discrimination between the CO2 and Temperature signals to be able to tell that Temperature leads CO2. The dynamic range is about 100 PPM in CO2, 10 degrees in temperature, and hundreds of thousands of years of time-series data with an acceptable level of granularity.

        By definition, there are two sources of error in the data. There is the epistemic error of the accuracy in measurements tied in with the validity and smearing of the proxy data, and there is an aleatoric uncertainty in the randomness in the climate data variation. For this study, I was totally focused on how to devise a model-constrained random walk that travels through the state space. The epistemic error is all in the absolute value of the temperature, which does not matter as much for the relative nature of a random walk model.

        All told, I think this is a great model because it appears to require a significant GHG feedback effect from the CO2. If that CO2-induced positive feedback on temperature is not in the model, then the temperature and CO2 would not show as much of a variation. If this model is correct, and it should be as it only relies on well-understood physical principles, it is another Holy Grail signature of CO2 on climate variation.

      • Web, I thought you were moving on to thermal diffusivity? Remember? Diffusivity ~ thermal conductivity/(density times specific heat content) It is highly non-linear for CO2 and would be a great feather for your hat.

        Then you could add the T and P parts to Arrhenius’ greenhouse equation.

      • Webbyboy- You once again failed to answer the simple question. I asked what you believe to be the margin of error is using the paleo data. You could have answered for the temperature or the CO2, but you chose to answer for neither.

        Typical of your incomplete forms of analysis

      • I have a general thermal diffusivity model written up as a paper and it is going through the review process right now, before I try submitting it somewhere. That one is more general than just for applying to climate.

        Whatever you think is beyond first-order for CO2, you may as well lay it out the best you can. Of course as CO2 goes through phase transitions, its thermodynamic properties will change.

      • Web, the Antarctic cooling seems to indicate than the -78.5C at 1 bar and -89C at .4 bar are providing a tighter limit on CO2 forcing than estimated. CO2 vibration states must be changing significantly. Comparing the MSU data to the southern south American proxy data, there seems to be a great deal of SH cooling. I was thinking about trying to set up the 3-d model, but my laptop and I don’t see eye to eye.

      • “I was thinking about trying to set up the 3-d model, but my laptop and I don’t see eye to eye.”

        Nice excuse there, Skip.

        The dog never ate my homework because I never owned a dog.

      • “Webbyboy- You once again failed to answer the simple question. I asked what you believe to be the margin of error is using the paleo data. You could have answered for the temperature or the CO2, but you chose to answer for neither.

        Typical of your incomplete forms of analysis”

        From the list of fallacious arguments, I notice that you use the technique of demanding impossible perfection mixed with the old reliable strawman.

        So it goes. The way I see it, if all the temperature and CO2 proxy measurements had some uncertainty level, it is likely near the same uncertainty across the range. Since I am plotting all these values on a 2D surface plot, whatever kind of uncertainty you would see would only get smeared to the same extent as the diffusional model automatically shows smearing. That’s exactly what a diffusional random walk simulation does — it demonstrates and quantifies the smearing and uncertainty in the path of a trajectory, in this case it is a constrained T and CO2 diffusional process.

        Try it yourself, put a +/-20% or higher error margin on all the values (relative to the base excursion) and then replace the data on the contour plot. It really won’t show much difference to the eye, as the data will still follow the contour.

        I didn’t include this uncertainty in the plots because it is best to keep the data in its raw state while doing exploratory work. So the model is our best understanding of the aleatoric uncertainty in the trajectory of the temperature and CO2 excursions, and it provides margins for where the data may diverge from our understanding.

        If you think the data needs extra epistemic uncertainty bars, why don’t you try it yourself. I have the source code right there and you can go and get the Vostok data and try to reproduce the results.

        Otherwise, I think you might be having problems with relating to scientific characterization techniques. Have you ever done industrial strength characterization of diffusional processes, like we have to do in a semiconductor fab?

      • Blue screens and laziness are the only excuse I have.

        http://i122.photobucket.com/albums/o252/captdallas2/UAHMSUSHallyears.png

        The surface station data shows the Antarctic is warming despite the Steig O’Donnell dust-up. The satellite data in the chart above shows cooling; the RSS data is about the same. You remember the satellite animation of the global CO2, where it was stable in the SH and very variable in the NH.

        There are not that many things that can cause all of those circumstantial happenstances. Add the mid-troposphere not-so-hot spot and you have about 0.8 degrees per doubling instead of 1.2. Based on the engineering toolbox, CO2 specific heat is 36 @ 30 C and 2 @ -20 C; thermal conductivity is .07 @ 30 and 0.115 at -20 C. That allows for a pretty big change. While you may believe conductivity and specific heat changes are negligible with respect to convection, conduction or radiant transfer dominates at thermodynamic boundaries. Convection and latent heat move energy from point to point, but conduction and radiation move it through boundary layers. Increased turbulence just increases opportunities to transfer heat.

        in any case, not much has changed in the Antarctic, except for CO2, and it appears to have been cooling for the entire satellite era.

      • Isn’t it a signature of Milankovitch and a co2 lag?

      • Web, if there were another long-term record to compare with the Antarctic cores, I could see being able to reduce the uncertainty. You have looked at the GRIP and noted it has much larger fluctuations, so large that the CO2 data is unreliable. The only other long-term CO2 reconstructions are plant stomata that I am aware of. There may be an ice core or two at the university of Ohio, but I don’t have access to them. So yes, that range of error pretty much means the Antarctic cores are novelties without verification.

        If there were no alternate theories for the Ice Ages, such as magnetic field reversal, the cores might carry more weight.

        BTW, with the radiation imbalance at 0.5 +/- a touch, and the entire increase of CO2 forcing per doubling at 1% of the total, second- and third-order effects are a real possibility. Would not the Antarctic conditions indicate that something not in the first-order estimates may exist?

        http://www.mpimet.mpg.de/fileadmin/staff/stevensbjorn/Documents/StevensSchwartz2012.pdf

        Interesting reading.

      • Typical of your incomplete forms of analysis

        Which in turn is typical of the denier side of the debate, which substitutes bitter judgmental sarcasm for calmly reasoned nonjudgmental logic. Obviously that approach can’t sway scientists. The question is what proportion of the lay public accepts sarcasm as a legitimate alternative to discussion and logic.

        Those who treat science as a war cannot win it in the scientific arena, even if they can in the public arena.

      • WHT

        I should speak to the “technique of demanding impossible perfection”; you are not wrong to condemn it.

        However, I believe your best condemnation of the technique is your perpetual habit of exceeding impossible perfection. You develop elegant approaches to Gordian knots, and slice through to clear solutions.

        And you don’t settle with just improving on what was; you go further by improving on your own breakthroughs.

        I use the less impressive response to the problem of impossible obstacles by asking, “Do I need to cross that minefield?”

        While BEST has moved the land portion of the global temperature record forward immensely, to the point where it is likely sufficient for most purposes current at the time it was begun, I still wonder: why bother with temperature at all?

        CO2 rise itself is far closer to undeniable than temperature rise can ever get; its reasoning is far clearer and simpler, and the obvious human signature in this rise is itself sufficient and ample cause to act to reverse the influence, on the principle of leaving the world no worse than we found it.

        Arguments of benefit of CO2 are wildly dyskeptic, unproven, unbalanced and insufficient, by any standard falling below the claims of GHE, or of GHI, or of AGW, or of CAGW, or of any single paleo reconstruction such as tree ring proxies or even stomata. Heck, they don’t even match the quality of tonyb’s work.

        Given the clarity of the CO2 more nearly perfect argument, temperature is superfluous.

      • Rob, looks like one of us ticked him off :) I have been trying for a while to get him to notice a few things. His dynamic range of 100 ppm with 10 ppm per degree is a bit of a puzzle if you look at the satellite data for the current CO2 distribution. It can vary by 50 ppm in the northern hemisphere and is pretty rock-solid stable at 370 ppm in the southern part of the southern hemisphere. That could indicate that a 5-degree fluctuation in the northern hemisphere is perfectly normal. So based on the current satellite data, I would have to say +/-5 degrees and +/-50 ppm are the minimum margins of error.

        Now he can ignore the satellite data and simplify the problem. After all, 5 degrees would probably fall just outside his 95% confidence interval, pretty much like anything over 0.2 degrees falls outside of his confidence interval. Which makes Dr. Spencer’s post on the seasonal precipitation oscillation of 0.5 degrees fairly interesting.

        If he does look at the thermal diffusivity issue I mentioned, he might find that the Antarctic would be the first location on Earth to reach local thermal equilibrium, along with the tropopause. That would make both pretty good heat sinks. It may also indicate that, since the Antarctic appears to have reached some relative equilibrium state around 1995, there is a shift in one or more internal oscillations. That would be interesting if someone wanted to fine-tune some prediction of climate for the next decade or so. It could even mean that the Southern South American Oscillation will be the new climate buzzword.

        Then again, I could be wrong :)

      • “Then again, I could be wrong :)”

        I would suggest that you are misguided because you have built up an argument based on sand. There is no foundation for your second-order effects to rest on. There is no context for any math to plug into, even if you had a mathematical model to begin with.

        “So based on the current satellite data, I would have to say the +/-5 degrees and +/-50PPM is the minimum margin of error.”

        If the temperature anomaly were zero and you gave it +/-5 degrees of uncertainty, there goes the entire dynamic range. So then I take it you also don’t believe that temperature leads CO2 during warming periods? You would have to leave that ambiguous as well, because that much uncertainty wouldn’t allow you to discriminate the signal from the noise.

      • cd, you can’t compare paleo data with satellite data because the time constants differ by orders of magnitude. It takes at least 18 months for the SH to feel the effects of changes in the NH because the Hadley cells are decoupled between those two hemispheres. Furthermore the ocean acts as a capacitor filtering out the high frequencies, an effect that is very visible in the satellite data but not at all in the ice core data because of the huge difference in rates of change.

      • The question was a reasonable one if someone is forming conclusions based upon evaluating the paleo record of temperature and/or CO2. I simply asked webby what he/she considers the margin of error to be in the historical record.

      • Vaughan Pratt said, “cd, you can’t compare paleo data with satellite data because the time constants differ by orders of magnitude. It takes at least 18 months for the SH to feel the effects of changes in the NH because the Hadley cells are decoupled between those two hemispheres.”

        Comparing paleo to satellite doesn’t provide much information. Paleo does have an average. Not much to work with, right? But it is the only thing you can have a great deal of confidence in. Since circa 1400, the SSA reconstruction has been trying to return to average. It was depressed considerably around 1400. So there is not a great deal of information, just a fairly good indication that growing conditions for those trees were not as good. That could be due to temperature, precipitation and a variety of other causes, but temperature and precipitation are at least half of the impact on the growth rate.

        Comparing the satellite data to the paleo average just gives a reasonable ballpark of where the modern era is with respect to a fairly long-term average. There is no solid indication of the magnitude of the excursions in the paleo as far as temperature or precipitation, only that things changed fairly abruptly and are recovering.

        Comparing the satellite data to the surface is different. The satellite indicates cooling from the start of the series to 1995 then leveling off. The surface indicates warming. Most of the satellite data agrees pretty well with the surface data, except for the Antarctic. It is an anomaly that I think has not been properly explored.

        With your AGU poster, did you compare global to hemispherical to latitudinal? If you did, you should have found the warming response to CO2 concentration greatest in the northern extent and weakest in the southern extent. Which brings me to Web.

        His Vostok ice core analysis is good but not complete. Using the GRIP data there are significantly different results. Comparing the satellite measurement of atmospheric CO2 to Mauna Loa, there is an indication that CO2 is not that well mixed. It should be, so what would cause it not to be? A cooling Antarctic.

        Then if you compare the warming by latitude from 1900, you see that CO2 forcing is non-linear IF you assume too much impact from CO2. If you factor in land-use change, which amplifies CO2 forcing, you see that it is likely a significant factor. Then if you happen to really get curious, you would consider the possibility that CO2 impacts heat transfer in other ways than just grabbing more photons. It is likely to mess with phonons too. My estimate is that the conductive (or sensible, if you like) impact is between 27% and 33%, with a radiant impact of 33% to 47%. That is a large range, but non-linear conductivity with sufficient time has a significant impact.

        As I told Web, convection moves heat around; it doesn’t transfer heat through thermal boundary layers. That requires conduction or radiation. If you look at the changes in thermal diffusivity you can get a better feel for the magnitude of individual changes in the properties of a mixed-gas environment than by just obsessing over one factor.

      • Rob Starkey | March 16, 2012 at 12:36 pm

        + stefan: What is the margin of error +/- for EVERY phony GLOBAL warming and GLOBAL Ice Age in the past?!?! That should be answered by the fakes also, as an antidote for the fanatic/fundamentalist sickness in both camps

      • Web, the Ar spike in your last figure (11 or 11) can only be caused by a change in atmospheric Ar, caused by ocean warming. Ar is non-biotic and partitions between the atmosphere and oceans based on the absolute temperatures of the two bulk phases.
        That you do not see a CO2 spike at the same time, but may see a CH4 spike, shows that changes in the CO2 partition coefficient are not the biggest driver of atmospheric CO2 changes.

      • “Web, the Ar spike in your last figure (11 or 11) can only be caused by a change in atmospheric Ar, caused by ocean warming. ”

        Nice time-waster there Doc. Apparently you didn’t read this graph carefully enough. Since those charts are proxy measurements, you have to treat the differences as relative levels. The spike that you see in Argon needs to be compressed so that it fits the same dynamic range as the CO2 signal. In the following figure, I did a graphical compression of the Ar signal so it is the same level as the CO2 signal.
        http://img141.imageshack.us/img141/1543/co2argon.gif

        Looks a lot different now, eh?

        That’s OK. You can waste my time now, because I would rather waste it at an early stage than at a later stage where I have invested more effort. That is how one takes advantage of open-access science.

      • ‘Looks a lot different now, eh?’

        Er, no. Try the three-point drop: trough : peak : trough.

      • Not good enough Doc. You will have to get the entire set of Argon data and correlate it with the entire set of CO2 data. Then you would have to set the partial pressure activation energy levels of CO2 and Ar properly so that you can determine the relative scaling.

        It looks as if argon has around half the activation energy of CO2, so that, considering two comparable peaks at low temperatures, a temperature increase will create a much larger relative CO2 peak, scaled to first order by the ratio of their activation energies. That’s how these physical processes work out.

        Strange that you would believe that Argon partial pressure has a strong temperature dependence yet CO2 has some other mechanism? Is this that iron-influenced biotic process that you have been toying with?

        BTW, the sinusoidal ripple on the famous Mauna Loa CO2 curve is likely completely due to seasonal changes in average global temperature and the effect that has on CO2 partial pressure (and some due to seasonal biotic changes in CO2 uptake and release). The fact that it is so sensitively seen in relatively short-duration data is remarkable. Actually, you just gave me a good idea for another characterization I can do based on this behavior.
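        The first-order scaling argument above can be made concrete with a couple of lines of arithmetic. This is only a sketch: the absolute activation energies below are hypothetical, with only the 2:1 CO2-to-Ar ratio taken from the comment.

        ```python
        import math

        k = 8.617e-5  # Boltzmann constant in eV/K

        def boltzmann(E_A, T):
            """Arrhenius/Boltzmann factor exp(-E_A / kT)."""
            return math.exp(-E_A / (k * T))

        # Hypothetical activation energies (eV); Ar at half of CO2's, per the comment.
        E_co2, E_ar = 0.30, 0.15
        T1, T2 = 270.0, 280.0  # a modest warming step, in kelvins

        gain_co2 = boltzmann(E_co2, T2) / boltzmann(E_co2, T1)
        gain_ar = boltzmann(E_ar, T2) / boltzmann(E_ar, T1)
        # With twice the activation energy, CO2's relative response is the
        # square of Ar's, so the same warming produces a larger CO2 peak.
        print(gain_co2, gain_ar)
        ```

        With E_co2 = 2 × E_ar, the CO2 gain is exactly the square of the Ar gain, which is the "scaled by the ratio of their activation energies" point.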

      • Excuse me, but activation energy is a measure of the likelihood of chemical reactions. What we have with gases is simple solubility, dependent on partial pressure and temperature. Activation energies are irrelevant.

        The other influence of temperature on glacial/interglacial cycles is albedo. A bit hard to miss, really.

        So Webby’s so-called analysis is fantastical nonsense as usual. I am not sure why I waste my 5 minutes even thinking about it, except that Webby is such an obnoxious little troll – it is worthwhile knowing where he gets it wrong again and again.

        Best regards
        Captain Kangaroo

      • “Excuse me but activation energy is a measure of the likelihood of chemical reactions. What we have with gases is simple solubility dependent on partial pressure and temperature. Activation energies are irrelevant.”

        Nice teaching moment. All of statistical mechanics is based on Boltzmann activation energies. Molecules from the liquid phase are activated into the gas phase by exceeding an energy barrier. That is why these curves are universal and look the same independent of the underlying physical process.

        e^{-E_A/kT}

        “So Webby’s so-called analysis is fantastical nonsense as usual. I am not sure why I waste my 5 minutes even thinking about it, except that Webby is such an obnoxious little troll – it is worthwhile knowing where he gets it wrong again and again.

        Best regards
        Captain Kangaroo”

        Notice the lack of an actual argument, replaced by vague assertions of wrongness.

        Do people want to try to understand the aleatory uncertainty? I assume so, and that is why these foundational models are so essential.

      • Nothing vague about it Webby. Fundamental conceptual problems leading to a grossly misleading formulation.

        Still less do changes of state have anything to do with CO2 – which is gaseous at any pressure and temperature to be found on Earth. The solubility curve is hyperbolic – so what? Why not just use the empirical curves and calculate the volume outgassed with temperature? You will find that CO2 is a multi-compartment problem rather than one of simple solubility in ocean water.
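        The empirical-curve route suggested here is straightforward; a minimal sketch using the van ’t Hoff form of Henry’s law, with commonly tabulated constants for CO2 in water (the constants are standard reference values, not numbers from this thread):

        ```python
        import math

        def henry_kH(T, kH_ref=3.4e-2, C=2400.0, T_ref=298.15):
            """Henry's law constant for CO2 in water, in mol/(L*atm),
            using the van 't Hoff temperature correction."""
            return kH_ref * math.exp(C * (1.0 / T - 1.0 / T_ref))

        # Dissolved CO2 concentration at a fixed partial pressure of ~390 ppm:
        p_co2 = 3.9e-4  # atm
        cold = henry_kH(275.15) * p_co2  # 2 C water
        warm = henry_kH(298.15) * p_co2  # 25 C water
        # Warmer water holds less dissolved CO2, so warming outgasses CO2
        # without any appeal to activation energies.
        print(cold, warm)
        ```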

        Again with the fundamental conceptual problems. Sometimes I wonder if you are hoaxing or simply delusional. Perhaps Occam’s razor suggests the latter.

        Best regards
        Captain Kangaroo

      • “Again with the fundamental conceptual problems. Sometimes I wonder if you are hoaxing or simply delusional. Perhaps Occam’s razor suggests the latter.

        Best regards
        Captain Kangaroo”

        Notice how the guy shows anger over a simple applied math exercise which worked out a canonical set of differential equations. He is bitter that I could work it out as a closed-form analytical expression and ends with the typical “so what” marginalization.

        If that is all you got, I am a happy camper.

      • Web, Keeling shows that CO2 peaks in May and is at its lowest in October. Should sea temperature be the driving force, then we should expect the average ocean temperature (not the anomaly) to be greatest in May and lowest in October; it isn’t.
        The Keeling curve does not show a distortion in the recorded SST either, the big heat pulse between 1996 and 2000 is missing.

      • “The Keeling curve does not show a distortion in the recorded SST either, the big heat pulse between 1996 and 2000 is missing.”

        I did work out an analysis of correlating recent CO2 data against global temperature data, and it does show the “heat pulse” between 1996 and 2000.
        http://theoilconundrum.blogspot.com/2011/09/sensitivity-of-global-temperature-to.html
        In particular, take a look at this T + CO2 correlation that I mapped:

        “Web, Keeling shows that CO2 peaks in May and is at its lowest in October. Should sea temperature be the driving force, the we should expect the average ocean temperature (not anomaly) to be greatest in May and lowest in Oct; it isn’t.”

        As I showed, the temperature change is proportional to the slope of the CO2 concentration during the year, as well as to transients in anthro FF over the course of several years (see above chart). This is a lag factor, as the CO2 is trying to keep up with the temperature but transiently lags behind it (same on the cooling side as well). The greatest slope is at the beginning of the year, which corresponds to southern hemisphere summer. Since there is more ocean in the south, this makes perfect sense.

        All the pieces really fit well together, so I am not really sure what you are getting at. I think you have preconceived notions as to what is happening, and since you haven’t actually methodically looked at the volume of data, you are letting those preconceptions lead you astray. It is fortunate that you have guys like Steven Mosher, Tamino, and Nick Stokes (and me) who can look at the data with at least some level of expertise.

        Greg Goodman can also do this kind of analysis, but my recommendation, as I said earlier, is for him not to look at straight time series but to cross-correlate against other factors. That is what helps improve our understanding.

        Again, I want to point out that it is not primarily about trying to do trendology of the current temperature time series, à la that Girma clown, but about getting at the mechanisms, and those will help reveal the uncertainties. Straight trendology is chasing phantoms (Tamino is the somewhat convincing exception, but I wouldn’t touch that approach yet) and it doesn’t help at all with our understanding.

      • Why don’t you try responding to the issues raised – the fundamental error of ignoring albedo in glacials/interglacials – obviously nonsense – and formulating gas solubility questions in terms of activation energy, for which you use odd units of kelvins equivalent to electron volts, typically used in plasma physics. Not that any of it is relevant at all – because you are talking solubility. It is, at any rate, entirely irrelevant because the carbon cycle is not a single-compartment problem in any feasible conceptualisation.

        There are a number of other problems of a similarly fundamental formulation variety, and yet you continue to respond only with hand-waving.

        I stand by my comment – it is either a hoax or delusional.

      • The units of kelvins are OK to use. All it does is transform the activation energy into units of temperature by dividing E by k (Boltzmann’s constant). Many theorists like this approach because they don’t have to drag k around everywhere. It is very similar to setting k=1, as Baez describes here:
        http://johncarlosbaez.wordpress.com/category/information-and-entropy/

        So lots of choices for units for energy.
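        The conversion being described is nothing more than dividing an energy by Boltzmann’s constant; a one-liner makes the point:

        ```python
        k_B = 8.617333e-5  # Boltzmann constant in eV/K

        def ev_to_kelvin(E_eV):
            """Express an activation energy as an equivalent temperature, E/k."""
            return E_eV / k_B

        # One electron-volt corresponds to roughly 11,600 K.
        print(ev_to_kelvin(1.0))
        ```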

        Where does albedo go into this formulation? For now it goes into the climate sensitivity factor, as that can contain feedback.

      • So you concentrate on the units? Which I said were used in plasma physics? Rather than the utter inapplicability of anything other than gas solubility? The multiple dimensions of the carbon cycle?

        http://en.wikipedia.org/wiki/Ice_age – so we get to the core of the problem? The reduction of climate to a single degree of freedom.

      • Captain Kangaroo: So Webby’s so-called analysis is fantastical nonsense as usual. I am not sure why I waste my 5 minutes even thinking about it except that Webby is such an obnoxious little troll – it is worthwhile knowing where he gets it wrong again and again.

        JC: Moderation note: this is a technical thread that will be moderated strictly for relevance. Apologies for the glitches in the first version of the post, a few comments were lost.

        Judith, why do you never keep your promise? Anyone can say “Webby is such an obnoxious little troll,” but unless there is clear evidence of this on Webby’s part, this is nothing but Captain Kangaroo working off his anger. God knows where he got his anger from, but it has no relevance whatsoever to the question at hand.

        As long as the allegedly “obnoxious little troll” is sticking to the point, insults like CK’s are completely irrelevant and should be moderated. You’ve been promising for a year that you’d “moderate for relevance” on technical posts. When are you going to start actually doing so?

      • “You’ve been promising for a year that you’d “moderate for relevance” on technical posts. When are you going to start actually doing so?”

        Um, maybe she’s got better things to do than break up these regular pissing contests? She might have her own kids you know…

      • Discussing interesting time-series analyses, I forgot to add Vaughan’s approach, which is to boil the numbers down to their essence. Vaughan presented at the AGU last year (the link to that presentation is broken), but this prior little gem is indicative of the salient features:
        http://thue.stanford.edu/killamo.pdf

        Note what you can do with an analysis.

        In Vaughan’s case, it is to extract by filtering only the underlying TREND, and then to show how the logarithmic climate sensitivity plays into that trend using a best estimate of the increasing CO2 concentration. The sensitivity he finds is relatively modest in AGW terms, but that’s what it shows.

        OTOH, in Girma’s case, the approach is to do whatever it takes to hide the trend, and instead extract only the FLUCTUATIONS.

        What Tamino does is to model both the TREND and the FLUCTUATIONS. It takes a lot of bookkeeping and the number of independent variables is beyond my capacity to justify, but that’s the ultimate goal.

        What Goodman does is to question whether the removal of the systemic errors contributing to the fluctuations and to the trend is valid.

      • Web, here are some quick plots.

        Firstly, this is the monthly average of Keeling’s CO2 and the absolute SST, from Jan 1982 to Dec 2011.

        http://i179.photobucket.com/albums/w318/DocMartyn/keelingandSSTmonthly.jpg

        (SST here :- http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?ctlfile=monoiv2.ctl&varlist=on&psfile=on&new_window=on&ptype=ts&dir=)

        Now you can see that the line shape of Keeling’s CO2 (which is from one location) is quite different from GLOBAL SST.

        Next, plot SST and [CO2]. We take the monthly values away from the previous year’s data point for both data sets. Both sets have been treated identically, and we are not introducing biases into only one data set.

        http://i179.photobucket.com/albums/w318/DocMartyn/1982to2012sstandco2.jpg

        Looking at the 1998 spike you sort of think there is a 9-month lag in CO2. So, subtract the mean of each series and then divide by the SD; both data sets now have a common mean (zero) and a common SD (one). Then plot:

        http://i179.photobucket.com/albums/w318/DocMartyn/keelingandSSTmonth9monthlaganszee.jpg

        Now you can see that it is quite crap. We can prove it is quite crap by looking at the z-scored series plotted against each other.

        http://i179.photobucket.com/albums/w318/DocMartyn/keeling9monandSSTslope.jpg

        Both axes are in SDs; the r2 value of <0.3 tells you that, even though we have messed around with the data, the correlation is crap.

        A global CO2 dataset measured in the same areas and at the same sampling rates as ocean temperature might show a better correlation, but I doubt it.

        Web, the annual variation of CO2 is not due to outgassing of the surface layer. However, the outgassing of Ar and other inert gases will be.
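        The recipe in this comment (year-over-year differencing, z-scoring both series, applying a 9-month lag, then reading off r²) can be sketched in a few functions. The synthetic demo at the end is mine, not DocMartyn’s data: it just shows the machinery recovering a built-in 9-month lag.

        ```python
        import math

        def zscore(xs):
            """Subtract the mean and divide by the (population) SD."""
            n = len(xs)
            mean = sum(xs) / n
            sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
            return [(x - mean) / sd for x in xs]

        def year_over_year(xs):
            """Difference each monthly value from the value 12 months earlier."""
            return [xs[i] - xs[i - 12] for i in range(12, len(xs))]

        def r_squared(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sxx = sum((x - mx) ** 2 for x in xs)
            syy = sum((y - my) ** 2 for y in ys)
            return sxy * sxy / (sxx * syy)

        def lagged_r2(sst, co2, lag):
            """r^2 between z-scored year-over-year series, CO2 lagged `lag` months."""
            a = zscore(year_over_year(sst))
            b = zscore(year_over_year(co2))
            return r_squared(a[: len(a) - lag] if lag else a, b[lag:])

        # Synthetic demo: a "CO2" series that is just the "SST" series delayed 9 months.
        base = [math.sin(2 * math.pi * i / 60.0) for i in range(140)]
        sst, co2 = base[9:], base[:-9]
        print(lagged_r2(sst, co2, 9), lagged_r2(sst, co2, 0))
        ```

        With real, noisy data the lagged r² would of course be far lower, which is exactly DocMartyn’s point.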


      • No Bart. What I said was: ‘Thus climate is entirely stochastic and is as likely as not to end up as something entirely improbable. A duck or a watermelon for instance.’ It was something nonsensical to do with a Bose–Einstein condensate. I wish people would pick me up on these things – perhaps suggest that this applies only near absolute zero. Sheesh.

        Regardless, the use of the word watermelon in itself is not racist. Here, for instance, are the top 10 4th of July traditions: http://www.examiner.com/northside-family-parenting-in-atlanta/top-10-fourth-of-july-traditions-to-celebrate-america-s-birthday – Please, you are an opportunist nincompoop climate warrior who would say or do anything if you thought it would provide a rhetorical advantage. The racist taunt is offensive and false – and made on utterly ridiculous grounds.

        The other taunts are simply laughable doublespeak coming from a tax-and-be-damned radical.

      • CK

        BEC is a technical topic, I’ll grant. A red herring, obviously, as you demonstrate by trolling for responses to off-topic ideas, and then switching tacks again when caught out.

        I can hardly see either Albert Einstein or Satyendra Bose being entirely comfortable with the use of their ideas to support your Moncktonish watermelon double entendres, whether you imply them out of a misplaced hatred of the tired and spent political ideas of socialism, or any other misplaced hatred, or simply because you do not know how your idiom sounds in America. No amount of websearching can make you American, CK, and that’s not entirely a bad thing, but it means you may never grasp how you sound over here. I warned you of how you were sounding, in as friendly a manner as possible and without seeking to prejudice, and I feel hurt now by your baseless reproofs and rebuffs.

        Your ironic preference for serving as factotum on politburo-style committees, to tell the collective under your control how to live their lives, only further perplexes. In the past, your dyslexic interpretations of Hayek seemed perhaps humoresque, but I think now Vaughan Pratt is not entirely wrong in his intuition that something darker is indicated.

        I need no rhetorical advantage but the data, and sound understanding of technical issues. I come here for those. You come here to obscure them.

      • me “Excuse me but activation energy is a measure of the likelihood of chemical reactions. What we have with gases is simple solubility dependent on partial pressure and temperature. Activation energies are irrelevant.”

        Webby ‘Nice teaching moment. All of statistical mechanics is based on Boltzmann activation energies. Molecules from the liquid phase are activated into the gas phase by exceeding an energy barrier. That is why these curves are universal and look the same independent of the underlying physical process.’

        We have a one-degree-of-freedom – CO2 – model of the paleoclimate. Everything else is a feedback. The lag of CO2 behind temperature is explained conceptually as an outgassing phenomenon – but we are only talking about the gas phase. He doesn’t really care about the physical processes, though.

      • Webby – “Again, I want to point out it is not primarily about trying to do trendology of the current temperature time series, ala that Girma clown, but getting at the mechanisms, and those will help reveal the uncertainties.”

        Obnoxious trollery at work? Hypocrisy?

        Bart,

        We have been moderated here before for pursuing irrelevancies that you introduce and doggedly continue to gnaw at. To stay relevant: I am supremely indifferent to bucket adjustments given the lack of instrumental data on the oceans.

        An ironic comment on stochasticity involving ducks and watermelons is neither here nor there. Watermelon is a word used every day by Americans – and whether I intended it as a reference to green politics is irrelevant.

        My preference is for the market to respond to conditions independently of government to the degree possible. In such fundamental areas of energy production the interventionist ideas – tax and dividend – have essential problems of the lack of knowledge of particular consequences.

        I have never suggested anything but that limiting the continued increase in greenhouse gases was a desirable objective. However, the preferred solutions don’t include cap-and-trade and tax-and-be-damned, as these simply artificially raise the prices of essential production. In the context of global needs for food and energy, it amounts to genocide. In the context of a liberal – in the sense of Hayek – it is the road to serfdom. Surely such an enlightened one as yourself has read ‘The Road to Serfdom’.

        “From the saintly and single-minded idealist to the fanatic is often but a step.”
        One of the famous quotes on liberty, informed by a century and more of socialism. But I am sure that with your facility for doublespeak – slavery is freedom – it shouldn’t be a problem. You are 100% rhetoric, Bart – I know you can do it.

        What really matters is what the temperature does in the next 10 or 30 years. I am afraid the peer-reviewed decadal forecasts are looking pretty dismal for your tribe. We are not expecting any warming for another decade or three. I think that’s pretty funny – don’t you?

        Best regards
        Captain Kangaroo

      • CK

        The last thing I wish is to burden our good host with more moderation headaches.

        Given your stated market preference, and that you’re one of the brightest people on the planet, I wouldn’t mind your views on http://prezi.com/jpced0jg1chv/edit/#0_5732647, where I hope I make plain the distinction between taxing and pricing even to you.

        It could certainly stand some critical review.

        And as I still hold that it’s the CO2 level, not temperature, that matters in all this, I hope to hear your view on why you focus on the mess of temperature over the clarity of CO2.

      • CK

        Sorry, your website’s a bit cluttered for me to follow. I’d advise trimming it down a bit, for better visual appeal.

        While I’m not surprised that from your lofty orbit you’d find my presentation pedestrian, I’m surprised you think it over intellectualised. That’s high praise from you.

        I could tighten and reword the text to hit a more plebeian level, appealing to the man on the street, and were my purpose ever to convince the multitude rather than to help clarify my own thinking, I might do so. I take it from your comments that you believe this is the right next step.

        Thank you for the positive and useful feedback. If you ever need me to help you out in a similar circumstance, you know I’ll be glad to help.

      • Doc,
        Granted, there is likely some seasonal biotic fraction to the CO2 changes. But that said, let’s look in detail at what you have done.

        If you take the 0 to 90 latitudes for the NH and the 0 to -90 latitudes for the SH, you get the following charts (where you can see that they are obviously half a cycle out of sync).
        http://img685.imageshack.us/img685/1079/southernnortherntemp.gif

        Those are obviously sine waves.

        The CO2 ripple is also a sine wave, with an extra harmonic whose amplitude is less than a third of the seasonal term’s. This was a pretty good fit I did a while back:
        2.78 cos(2πt − θ1) + 0.8 cos(4πt − θ2)
        The second term is the harmonic.
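        For reference, that two-harmonic fit and its derivative (the quantity correlated against temperature below) can be written out directly; the phases are left as free parameters, since their fitted values are not given in the comment:

        ```python
        import math

        def co2_ripple(t, theta1=0.0, theta2=0.0):
            """Two-harmonic model of the seasonal CO2 ripple (ppm), t in years."""
            return (2.78 * math.cos(2 * math.pi * t - theta1)
                    + 0.80 * math.cos(4 * math.pi * t - theta2))

        def d_co2_ripple(t, theta1=0.0, theta2=0.0):
            """Analytic time derivative of the ripple model."""
            return (-2.78 * 2 * math.pi * math.sin(2 * math.pi * t - theta1)
                    - 0.80 * 4 * math.pi * math.sin(4 * math.pi * t - theta2))
        ```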

        If you have sine waves describing the SST seasonal variations and sine waves describing the CO2 seasonal ripple, these should match.

        And lo and behold, they match extremely well if you apply the derivative of CO2, with a phase shift included.
        http://img836.imageshack.us/img836/1332/co2withphaseshift.gif

        That derivative is very important as that is part of the lagged response that is seen over several decades:
        http://theoilconundrum.blogspot.com/2011/09/sensitivity-of-global-temperature-to.html

        I find it odd that you somehow find a way to exclude the possibility that CO2 outgasses seasonally as the water warms. And it is also odd how you can totally mess up this analysis by doing an amateur job on the signal processing. It is almost as if you are tampering with the data on purpose to try to diminish the importance of the incredible correlation between CO2 concentrations and temperature.

        So what’s up, Doc?

      • Bart,
        Chief must have some knowledge of American idioms, but you are right in that he comes across pretentiously rough-hewn. He must realize that Captain Kangaroo is an ancient American kiddie clown show as well.

        Other than that, one of the all time best threads I have been involved in.
        Much of that is thanks to DocMartyn for pointing out that NOAA SST site, which I wasn’t aware of. I also thank him for providing that averaged SST yearly cycle.

        Doc essentially provided good boomerang material, as the Mauna Loa CO2 data fits that temperature profile remarkably well.

        Again, the important teaching moment is realizing that there are fundamental macroscopic processes at work here. These all have to fit together tightly, and if you characterize them by applying tested physical models, they come out of the wash as robust as ever.

        I don’t pretend to understand why climate scientists don’t fly the “simplicity” flag more often. As Bart said, and I agree totally, the CO2 signature is the important aspect — this has the big dynamic range and low noise, and thus everything else follows. I suspect the climate scientists are going after the models in greater depth and can’t really be bothered with first-order analyses, and so it is up to us amateur sleuths to come up with the interesting perspectives.

      • CK you relentless flatterer!

        Are you trying to butter me up to ask me for some favor?

        That’s twice you’ve implied the word intellect is somehow associated with me, and now you’re allowing — however indirectly — that I write well enough to be a pamphleteer like the great Thomas Paine.

        I may faint from a surge of modest vapors.

        You .. aren’t writing a book you need proofread, are you?

      • Web, please take a little time to have a look at this thesis by one of Keeling’s graduate students, Dr. Tegan Woodward Blaine.

        http://bluemoon.ucsd.edu/publications/tegan/Blaine_thesis.pdf

        What I would like is for you to have a look at chapters 4 & 5 (the whole thing is worth a read).
        Figure 4.15., page 135, shows the average diurnal cycle of atmospheric CO2 and δ(Ar/N2) over 14 months. The difference between the biotic and chemically rich CO2 and the chemically poor and non-biotic argon is striking.
        In Chapter 5 there are steady state measurements of Ar/N2, CO2/N2 and O2/N2; along with photosynthetic radiation flux.
        One can observe in Figure 5.1a. (149) and b (150) that the changes in primary biotic processes, the increase in atmospheric O2, are far larger than the change in atmospheric CO2. This indicates huge CO2 buffering capacity. Argon is unbuffered and responds to surface temperature.
        In Figure 5.7 (167), Dr. Blaine has a go at modeling the annual δ(O2/N2) cycle due to land biology, to the air-sea heat flux, and to changes in ocean biology. The difference between biology and chemistry is marked.
        Finally, have a look at how the models of ocean heating compare to the actual heating Ar/N2 signal; Figure 5.6. (162); epic fail.
        Web, modeling the outgassing of CO2 by temperature changes from atmospheric CO2 and SST is not possible. The biotic component is more than an order of magnitude greater than the physico-chemical signal.
        Argon fluxes are an internal control for the measurements of the partition of biotic gases; the profiles of the changes in the daily and seasonal levels of biotic gases do not match this internal, inert control.

      • “Finally, have a look at how the models of ocean heating compare to the actual heating Ar/N2 signal; Figure 5.6. (162); epic fail.”

        Doc, it looks like you are the one experiencing an epic fail. Those are all second-order effects (typical for a PhD thesis as they tend to be thoroughly researched). Yet the huge first order effect is the Boltzmann activation with temperature. Some of that will be due to the physical ocean outgassing and of course some of that is due to biotic processes. Both of these are thermally activated.

        In broader terms, there is no need to delineate between the thermal activation of CO2 outgassing in the ocean and the analogous effects in the biota. They are both positive feedbacks in the alpha-beta model, showing strong latent lags with respect to temperature.
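The “Boltzmann activation” referred to here is the usual Arrhenius temperature dependence. As a minimal sketch (the 0.3 eV activation energy and the reference temperature below are placeholders for illustration, not fitted values):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_scale(e_act_ev, t_ref_k, t_k):
    """Factor by which a thermally activated rate with activation energy
    e_act_ev (eV) changes when temperature moves from t_ref_k to t_k."""
    return math.exp(-(e_act_ev / K_B) * (1.0 / t_k - 1.0 / t_ref_k))
```

A rate with this dependence speeds up when the temperature rises above the reference and slows down below it, regardless of whether the underlying process is physical outgassing or biotic.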

        Ironic that I annotated your graph with the clear and obvious match of CO2 with SST and you turn a blind eye to it:
        http://img836.imageshack.us/img836/1332/co2withphaseshift.gif
        It’s the red and the green curves, not the red and blue curves, Doc. That green curve I added to your plot demonstrates the inherent temporal lag within the system. The way you pull out the lags is by differentiating signals with respect to time. I notice that you don’t want to do this rather obvious signal processing step. Not surprising, in that skeptics typically won’t do the math that reduces the FUD.
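A toy version of that lag-extraction step can be sketched like this. Everything here is invented for illustration (pure sinusoids, a three-month lag); real series would need detrending and the seasonal cycle isolated first.

```python
import numpy as np

def estimate_lag(x, y, dt):
    """Estimate how much y lags x (in time units) from the peak of the
    cross-correlation of the two mean-removed signals."""
    x = x - x.mean()
    y = y - y.mean()
    corr = np.correlate(y, x, mode="full")
    shift = int(np.argmax(corr)) - (len(x) - 1)
    return shift * dt

# Toy signals: a sinusoidal "temperature" and a "CO2" response that lags
# it by a quarter of a year (all numbers made up for the sketch).
dt = 1.0 / 12.0                         # monthly sampling, in years
t = np.arange(0.0, 20.0, dt)
temp = np.cos(2 * np.pi * t)
co2 = np.cos(2 * np.pi * (t - 0.25))    # lags temp by 0.25 years
```

Differentiating a sinusoid advances its phase by a quarter cycle, which is why comparing d(CO2)/dt against temperature shifts the apparent alignment; the cross-correlation peak above recovers the underlying lag directly.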

        I also recall that you very recently pointed out the phase space relationship of CO2 and temperature for the Antarctic data, and then I remarked that I beat you to the punch. I can’t find that blog comment right now because Google hasn’t indexed this site recently, but I will dig it out if you decide to argue.

        I think you are just doing the sour grapes thing because your biotic thesis based on Iron space dust is falling apart. Say, whatever happened to that Part 2 you were promising? :) :) :)
        I guess Iron is part of irony. Is that the actual epic fail, and this discussion is all inadequacy-based projection on your part?

        Sorry to be harshing on you, but with jerks like the Dingo-man on my case constantly, a boomerang is the best weapon to use. It’s not what I say, it’s what your own data says, and you have to deal with it when it comes flying back at you.

      • Web, at what point would you begin to consider second-order effects? Most people that I know use an adjusted Arrhenius rate equation. If you consider the specific heat of CO2, Cp = 36.4 at 303 K and 1.94 at 243 K, that is a fairly large range for 60 degrees. If you use Rspecific instead of K, what is the difference at the extremes?

      • Web, I pulled up the wrong table. http://www.ceere.org/beep/docs/FY2003/thermal%20properties.pdf

        CO2 Cp increases with temperature linearly, N2 decreases with temperature non-linearly. That should have a significant impact on the activation energy of CO2 in a mixed gas environment.

      • richardscourtney

        Captain Kangaroo says;
        “Still less do changes in state have anything to do with C02 – which is gaseous at any pressure and temperature to be found on Earth”

        No. CO2 is liquid at the temperatures and pressures of the bottom of deep oceans. However, no pools of liquid CO2 have been observed at ocean bottoms.

        Please note that this correction is not a trivial nit-pick, because almost all CO2 “to be found on Earth” exists as ions dissolved in the deep ocean. And nobody knows the exchange rates of CO2 between the deep ocean and upper ocean layers. But these exchanges are critically important for assessment of e.g. atmospheric CO2 budgets. And the behaviour of CO2 in the oceans is critically important to the objective of obtaining an understanding of the carbon cycle.

        Hence, the apparent absence of liquid CO2 at ocean floors is worthy of consideration, and statements such as “C02 – which is gaseous at any pressure and temperature to be found on Earth” should not go uncontested.

        Richard

      • “Most people that I know used an adjusted Arrhenius rate equation.”

        There are second-order adjustments to the thermal rate laws based on physical considerations, and there are obviously adjustments based on heterogeneous and statistical considerations. The latter are important for spatial variations, and one can use superstatistical techniques to smear the distribution. I do this kind of stuff all the time wrt dispersion, but apply it only after I first understand the first-order effects as a foundation. Again, exactly what foundation are you pinning these to?

        As far as that specific heat number, are you talking about liquid or dry ice phases of CO2? You might be off in left field on this topic.

      • Yeah, I pulled up the wrong table. What I am getting at is that the conductive impact of CO2 increases with decreasing temperature. As you know, conductivity is considered negligible in the atmosphere, but I just don’t believe it is. The Antarctic tends to agree with me. I would expect the conductive impact to be evident in the rate of CO2 uptake in the Antarctic, which would explain the lower CO2 concentration shown by the satellite data. That would have to be a second-order effect, which I believe is what Doc was getting at.

      • MattStat/MatthewRMarler

      WebHubTelescope: Yet Goodman’s analysis is still largely empty of any significance. Sure it’s leaps and bounds over anything that Girma can produce, but is limited by the small dynamic range of the data.

        Personally, I would not do straight time-series analysis of temperature data in the current era unless it contained a larger dynamic range.

      I disagree with your criticisms. Goodman’s analysis is important because it shows that the particular adjustment procedure affects estimates of periodicity in the data, where the data are both relatively recent and pertinent to the warming or not of the Earth’s surface in the period most affected by anthropogenic CO2. Granting some looseness in your phrase “straight time-series”, I think you are mistaken in your disdain for the limited dynamic range in these data. That’s the nature of the phenomenon over the range of greatest interest for assessing any potential effects of anthropogenic CO2; it is valuable to do the best possible work on the most relevant data. If the Vostok data are better for the purpose, it has to be because they are some combination of more accurate and more representative of global temperature than the data that Goodman has worked with, not merely that they have a greater dynamic range.

      • WebHubTelescope: Yet Goodman’s analysis is still largely empty of any significance. Sure it’s leaps and bounds over anything that Girma can produce, but is limited by the small dynamic range of the data.

        Personally, I would not do straight time-series analysis of temperature data in the current era unless it contained a larger dynamic range.

        “I disagree with your criticisms.”

        How can you disagree with my personal decision? I am not delusional enough to think that I can make any sense out of about a 1 degree C change over the course of a century. If you think you can do it, then fine, go ahead. I just won’t do it, and will look for other measures to try to gain insight.
        (Take for example oil depletion, which has a huge dynamic range, yet no one looks at those numbers, which I find kind of odd.)

  8. What an excellent post, kudos.

    It seems pretty damning that the adjustments break the natural cycles, but I was also wondering whether anyone actually went on a ship and tested canvas vs wood vs intake. Surely a few ships that used all three methods for a year would be enough to establish a ballpark for the error?

    It led me to this paper, which also has a pretty detailed history of the adjustments. It seems the original idea that canvas buckets would be a problem came from the sudden change in 1941 itself, which was assumed to be the “result of a sudden but undocumented change in the methods used to collect sea water” (e.g. canvas buckets). Instead of wondering whether the use of canvas also suddenly stopped in 1945, that assumption was then used to adjust pre-war temperatures as well. (Hey, when you’re on a roll…)

    ftp://podaac.jpl.nasa.gov/allData/gosta_plus/retired/L2/hdf/docs/papers/1-crrt/1-CRRT.HTM

    It is striking how rigorous the physics and math look when calculating the heat loss of a bucket, and yet how casually assumptions are made about the way things were measured by sailors, and how the actual use of buckets vs intake is basically a guess. It’s like they round the integer portion of a number, and then calculate the decimals to 20 places.

    Thanks for the post, looking forward to reading it a few more times : ).

    • “It is striking how rigorous the physics and math look when calculating the heat loss of a bucket, and yet how casually assumptions are made about the way things were measured by sailors, and how the actual use of buckets vs intake is basically a guess. It’s like they round the integer portion of a number, and then calculate the decimals to 20 places.”

      Perfect, Robin. Climate science in one paragraph.

      On blogs like Dr. Curry’s I continually see learned, and heated, arguments over the meaning of fluctuations in the ‘annual temperature of the earth’ in the hundredths of a degree range (sometimes thousandths), with data plotted over hundreds or thousands of years, while noticing that there doesn’t seem to be a DEFINITION of the ‘Annual Temperature of the Earth’ and that the climate science community, collectively, would be hard pressed to provide me with an ‘Annual Temperature of Bob’s House’ with a credible and defensible resolution and precision of +/- .01 degree, using an instrumentation system of their choice. Their ATOBH would be especially suspect if, like in the real world, it were based on readings of my living room thermostat as reported by my seven year old grandson who was instructed to look at the thermostat when he got home from school every day and write the temperature in the ATOBH Notebook, for analysis later. Which is roughly analogous to the sailor/bucket/thermometer data collection system.

      Thank you.

    • Hi Robin,

      The changeover was documented in various places, but as I understand it Folland and Parker had access to only a limited amount of metadata at the time. In the latest version of ICOADS there are more records with measurement method indicators. We have instructions given to the crews who were making the measurements which reflect the change in practice. Putting that information together supports Folland and Parker’s statement.

      The Folland and Parker adjustments stopped in 1941 because there was no strong evidence at the time of biases post-1941. Reading through this discussion, it’s generally agreed that where there is no strong evidence for the need for adjustments, they ought not to be applied. With ICOADS 2.5 we have access to much more information than Folland and Parker had so it was possible to revisit that assumption.

      Best regards,
      John

      • Hi John,

        I really appreciate you taking the time to respond here; following your conversation with Greg has been most interesting. I think a rational debate with open minds, focused on the science, is exactly what almost everyone has been hoping for. It is clearly a hard problem; zig-zagging and missteps are to be expected on the path to improvement.

        I think the problem for most ‘skeptics’ has been more about trust; the actions of others elsewhere have really eroded that for climate science. A conversation like the one in this thread goes a long way to restoring the tone that trust depends on. I hope we can see more people like you and Greg step forward and eventually take the (public) lead in this science.

        Respectfully,
        Robin

  9. The introduction of convoys by the RN, then the US, did a number of things. Firstly, convoys travel at the speed of the slowest vessel; there were three speeds chosen for different convoys (the older coal burners were also slow). Convoys would go much further north, to gain as much air cover as possible. Finally, convoys raced towards bad weather. They loved rain, fog and big waves. Rain reduced visibility and hid smoke, fog hides everything, and large waves swamp periscopes.
    In the Pacific things were both much simpler and much more complex; the whole of SE Asia came under the control of Japan, and British and US shipping didn’t return until 1945.

  10. My familiarity with the literature details on SST adjustments and their rationale is limited, but the change from HadSST2 to HadSST3 does appear to have implications regarding factors responsible for some of the temperature changes. The following relationships appear to hold.

    1. The HadSST2 and HadSST3 curves differ mainly in the magnitude of the peaks and valleys, but in most cases these features appear in the same places in the record, even if much reduced around 1940-1945 in HadSST3.

    2. SST peaks and valleys (both late 19th century and in the 1940-1950 interval) are seen also in the land temperature data. This would seem to eliminate the possibility that they exclusively represent SST measurement artifacts.

    3. In HadSST2, the SST fluctuations were larger than the land fluctuations. By smoothing these out to some extent in HadSST3, the relationship is reversed – land changes exceed ocean changes.

    Based on the above, the following tentative conclusions seem justified.

    First, as stated in the post, we still face uncertainties in trying to eliminate contamination of real data by measurement artifacts.

    Second, the fluctuations are real, even if their real magnitude is somewhat uncertain.

    Third, the adjustments leading from SST2 to SST3 are not hiding a forced climate signal, either anthropogenic or solar. Forced changes mediate greater land than ocean temperature fluctuations due to the thermal inertia of the oceans, the moderating effect of evaporation, and probably other factors. In terms of mid-century fluctuations, the HadSST3 data are consistent with a larger ratio of forced to unforced variation than is suggested by the HadSST2 data, which are not what one would expect from solar or anthropogenic changes.

    Conversely, if the HadSST2 data reflected climate signals that are partially obscured in HadSST3, these would most likely be internal climate modes of the ENSO, AMO, and PDO type. In fact, prior to HadSST3, these modes had been invoked as a partial explanation for some of the mid-century observations, along with measurement artifacts as additional contributors.

    It is entirely possible that unforced climate modes, persistent forcing, and measurement errors all contributed to the recorded temperatures in the mid-century record, although anthropogenically forced variation appears to have dominated after 1950.

    • Red said: “Third, the adjustments leading from SST2 to SST3 are not hiding a forced climate signal, either anthropogenic or solar.”

      GG said: “It is not the object of this study to suggest or refute any particular link between climate and TSI…However, there is more than a coincidental similarity…It seems improbable that an error with such a similarity could be erroneously introduced by the sampling bias.”

      Fred, I don’t see how your elaboration of your third tentative conclusion vitiates GG’s observation that the observed similarity is improbable.

      • Ooops…not “Red said” but rather “Fred said”…and I swear to god that wasn’t Freudian or anything.

      • I was looking all over for Red!

        • NW – In my analysis, I gave Greg Goodman credit for not suggesting that the adjustments were made for the deliberate purpose of creating a false impression. There can be disagreements about whether HadSST3 is an improvement on HadSST2, but attributing a dishonorable motive to its authors would, I think, discredit the person who suggests it, and would be more typical of fierce blogosphere partisanship than rational discourse. My more important point is that the adjustments change little in terms of global trends, and in particular, don’t throw additional light on putative cycles or other oscillations based on solar variation or other forcings. The latter would provide no rational motive for wanting to see the SST data adjusted, since the land data are the more critical ones in terms of those forcings.

        There is a back-and-forth at ClimateAudit vs a slightly earlier post at RealClimate on the topic. Questions are raised about the value of the adjustments but, commendably, not about their honesty. In comparing the actual global curves, the lack of dramatic differences is what strikes me, but others may have a different perspective.

    • Fred
      If you are right that it is mainly internal climate variation that is partially obscured in HadSST3, that could still have a large effect on the optimal fingerprinting methods favoured by many climate scientists, and the IPCC, for climate change detection/attribution and estimation of key climate parameters. The inverted correlation matrices reflecting patterns of natural internal climate variability (“noise”) are very important in these methods.
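To see why the assumed internal-variability covariance matters so much, here is a minimal sketch of the generalised-least-squares scaling-factor estimate at the heart of optimal fingerprinting. All arrays are invented for illustration; real detection/attribution studies work with large spatio-temporal fields and regularised covariance estimates.

```python
import numpy as np

def scaling_factor(fingerprint, obs, noise_cov):
    """GLS estimate of beta in obs ~= beta * fingerprint + noise,
    weighting by the inverse of the internal-variability covariance."""
    c_inv = np.linalg.inv(noise_cov)
    x = np.asarray(fingerprint, float)
    y = np.asarray(obs, float)
    return float((x @ c_inv @ y) / (x @ c_inv @ x))

# Illustrative numbers only: with white noise this reduces to ordinary
# least squares; a different assumed covariance changes the answer.
x = np.array([0.1, 0.2, 0.4, 0.7])        # model-predicted pattern
y = 1.5 * x                               # "observations", noise-free
beta_white = scaling_factor(x, y, np.eye(4))
```

The inverted covariance matrix is where the estimated pattern of natural variability enters, which is why an SST revision that changes that pattern can propagate into the attribution results.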

      • Nic – I agree with you that we need to get internal variability right for accurate attribution. I don’t know whether HadSST3 obscures it or HadSST2 exaggerated it, or both. Currently, we have additional metrics to help us (e.g., ocean heat uptake data), but we may never be able to go back and get a completely accurate fix on the 1940s and earlier.

      • Can’t we get, like, some astrophysicist to estimate the SW radiation from the Earth reflected off some object, like, 35 light-years away?

      • billc | March 16, 2012 at 12:17 pm said: Can’t we get, like, some astrophysicist to estimate the SW radiation from the Earth reflected off some object, like, 35 light-years away?

        Yes BillC, we can get astrophysicists; they have been multiplying as fast as climatologists for the last two decades… faster than Queensland’s cane toads… Problem is: ”astrophysicists and climatologists” are two of the three oldest professions… money + power talks / it’s written in the stars. Astrophysicists are regularly telling us that they discovered ”another star with a planet to sustain life, 50 light years away” – yes, they have telescopes… standby for when they start telling us ”how many chickens in his backyard that alien has on that or another planet”. Because for you it is cheaper to believe them than to go there and see for yourself that they are lying – and they know that.

        All they know is that some star appears to be wobbling; could be because of mirage, or too much ethanol or weed consumption; or both. That’s irrelevant; usually the discoveries happen weeks before the taxpayer’s cash is distributed by the ”honest” publicity-seeking politicians. BillC, many of the fundamentalists involved in climatology are ”astrophysicists”; their genes fit the contemporary phony GLOBAL warming. The Nostradamus gene.

    • MattStat/MatthewRMarler

      Fred Moolten: Third, the adjustments leading from SST2 to SST3 are not hiding a forced climate signal, either anthropogenic or solar. Forced changes mediate greater land than ocean temperature fluctuations due to the thermal inertia of the oceans, the moderating effect of evaporation, and probably other factors. In terms of mid-century fluctuations, the HadSST3 data are consistent with a larger ratio of forced to unforced variation than is suggested by the HadSST2 data, which are not what one would expect from solar or anthropogenic changes.

      I think you took a step beyond what is justified by the data at hand to date. It only makes sense if you know for sure that all the solar and other forcings are completely known qualitatively and quantitatively, and if you know exactly how the water/earth balances are affected by all forcings.

      All you can say is that the adjustments leading from SST2 to SST3 may be hiding evidence of a process that needs to be better understood if it is there.

  11. Hi Greg,

    Were sea surface temperatures immune to H-bomb tests in the Pacific?

    http://news.bbc.co.uk/onthisday/hi/dates/stories/march/1/newsid_2781000/2781419.stm

  12. Is this the official implementation of “containing the 1940s blip”?

  13. There is something odd about late 1960s:
    http://www.vukcevic.talktalk.net/GT-AMO.htm
    (graph 2)

    • Greg Goodman

      Vuk, how about doing your FFT of the last two graphs on that page on ICOADS rather than Hadley-contaminated HadCRUT data? This was exactly one of the things I found: as shown in fig 11, Hadley processing is disrupting the 10y cycles and perhaps favouring 20y.

      These relationships need investigating *before* Hadley modify the data.

      It would be interesting to see the same plots done on the 0.4K adjusted ICOADS. Please post a comment if you do that.
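For anyone wanting to try that kind of check, a bare-bones periodicity picker might look like this. The series below is synthetic (a 10-year cycle plus a weaker 20-year cycle), standing in for an annual-mean SST series; it is not the ICOADS data.

```python
import numpy as np

def dominant_periods(series, dt=1.0, n=2):
    """Return the n periods (in years, for dt in years) with the most
    power in a plain FFT periodogram of a mean-removed series."""
    v = np.asarray(series, float)
    v = v - v.mean()
    power = np.abs(np.fft.rfft(v)) ** 2
    freqs = np.fft.rfftfreq(v.size, d=dt)
    order = np.argsort(power[1:])[::-1] + 1   # skip the DC bin
    return [1.0 / freqs[k] for k in order[:n]]

# Synthetic annual series: a 10-year cycle plus a weaker 20-year cycle.
t = np.arange(160.0)
v = np.sin(2 * np.pi * t / 10.0) + 0.5 * np.sin(2 * np.pi * t / 20.0)
```

On real, noisy records one would window and detrend first, but even this crude version is enough to compare where the spectral power sits before and after an adjustment is applied.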

      • Can do, if you post links to the relevant data.

      • Greg Goodman

        What I meant was: redo the AMO spectral analysis using ICOADS − 0.4K WWII instead of what I guess was a HadSST subset for the AMO. The source for ICOADS is in the post.

        If WP does not mess this up, here are the dates over which I apply the simple correction;
        [sourcecode]
        if (($1>1941.71) && ($1<1946.12)){
            print $1, ($2-correction);
        }
        [/sourcecode]
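For reference, the same flat war-time adjustment can be sketched in Python, assuming a series of (decimal year, anomaly) pairs and the 0.4 K offset and window from the awk snippet above:

```python
def apply_wwii_correction(rows, correction=0.4, start=1941.71, end=1946.12):
    """Subtract a flat correction from anomalies inside the war-time
    window; rows outside the window pass through unchanged.  `rows` is
    an iterable of (decimal_year, anomaly) pairs."""
    return [(yr, anom - correction) if start < yr < end else (yr, anom)
            for yr, anom in rows]
```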

      • Greg
        for the AMO I used: http://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.data
        If you apply your mod and send the ‘new’ annual data to the email shown at the top right on this graph
        http://www.vukcevic.talktalk.net/CET-NAP-SSN.htm
        I’ll do a spectrum graph.

      • Sounds like a Goldilocks argument for interpolation by stepwise refinement.

        The observations are too hot, as verified by comparison with other data; the adjustments are too cold, validated against stringent analyses.

        Split the difference and repeat, until adjustments no longer produce significant differences on validation.

        It’s still adjusted data, which reflects the poor quality of not just the part of the dataset it lies within but also the impoverished quantity and quality of SST data overall. It may be gilding a turd, but at least it isn’t giving up a job half-done.

      • Steven Mosher

        sugar coating a turd is easier than gliding

      • gilding. though you’re probably also right.

      • gelding

      • gilding?

    • Gentlemen
      When you call at the guest room of a fine lady (as our hostess is), for science or other conversation, it would be good manners to leave your vulgarities at home.

      • vukcevic

        Said like a gentleman.

        Which is a timely reminder, as I must admit to having not a trace of personal experience in these practices, and would be resigned to defer to the obviously great expertise of others.

      • Very good said, Vuk; ”gentlemen” for a start don’t tell lies – and if one does, when proven wrong, he apologizes publicly and avoids telling lies again. Can you imagine the Warmist B/S distributors, or the Fakes, apologizing for every lie they tell – they wouldn’t have any time left for taking a breath. We don’t want them to suffocate without oxygen. Life without the Swindlers and their servants / the Fakes, would have been boring. Same as in the early 40’s: without war, life would have been boring… they didn’t even have TV to watch. Children from both camps: Sesame Street is on TV; stop telling fairy-tales on the net. The planet doesn’t need cooling; the past GLOBAL warmings and brimstone falling were never GLOBAL. The B/S merchants will apologize – if they are gentlemen…

  14. Greg
    Very interesting post; thank you for putting in so much effort.

    I read the post before the graphics were fixed, so I may not yet have fully absorbed it. But one initial comment. You cite the Gomez Dome d18O ice core derived proxy record as displaying a long term non-linear trend with similarities to the ICOADS SST record, stating:

    “This oxygen isotope ratio is generally regarded as being a reliable proxy for temperature of the water at the time of evaporation (ie. in this case SST in Bellingshausen Sea).”

    A high correlation with the 60+ year long instrumental temperature record at Faraday station, further up the Antarctic peninsula, was used as evidence that the Gomez d18O ice core proxy reconstruction was a valid representation of temperatures in the vicinity of the peninsula. I have analysed the Gomez Dome d18O data, kindly provided to me by Paul Dennis, and found that the high correlation with the Faraday record arises only during its first half; the correlation is pretty low in the last few decades. This does not appear to be due to varying effects of the non-negligible distance between Faraday and Gomez. There is no significant correlation between the Gomez d18O record and the post-1981 cloud-masked gridded AVHRR satellite surface temperature data for peninsula grid cells.

    These fluctuating correlations suggest to me that the Gomez d18O record only reflects local temperatures (e.g., in the Bellinghausen Sea) some of the time. Paul Dennis may be able to throw further light on this issue. But might the assumption that the rest of the time the Gomez data reflects SSTs from further afield be more consistent with your observation, about similarities between the Gomez record and the ICOADS SST data, than if the Gomez record always reflected only local SSTs?
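The sub-period correlation behaviour described here can be checked with a sliding window. The arrays below are invented (two toy series that track each other in the first half and diverge in the second, loosely the pattern described for Faraday vs Gomez); the window length is a free choice.

```python
import numpy as np

def rolling_correlation(x, y, window):
    """Pearson correlation of x and y over every sliding window of the
    given length; returns one value per window position."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])

# Toy series: correlated in the first half, anti-correlated in the second.
a = np.concatenate([np.linspace(0, 1, 30), np.linspace(1, 0, 30)])
b = np.linspace(0, 2, 60)
r = rolling_correlation(a, b, 20)
```

A plot of `r` against window start date makes a fluctuating correlation of the kind described immediately visible, which a single full-record correlation coefficient would hide.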

  15. Greg Goodman

    Nic, interesting point. You were lucky to get the data; my request just got ignored. I wanted to run the same processing on Gomez but the data is not published.

  16. Peter Davies

    The series seems to be corrupted from the start. Statistics need to be separated from any agenda! Full stop.

  17. Judith,

    Temperature data should hold only perhaps 2% of the overall area of climate science. It cannot predict weather events, nor should it be used for prediction of any future events. It should be used strictly as a reference to the past.

    • Not sure I agree there, Joe’s World. If the temperature had followed some periodic pattern such as a sine wave, square wave, triangular wave, or whatever, for ten cycles straight with extreme accuracy, and there was a bet on what would happen next, I would gladly bet that the eleventh cycle would happen, against anyone who claimed otherwise. I’d even settle for worse than even odds.

    • Joe’s World | March 16, 2012 at 7:08 am said: temp records / climate science cannot predict weather events, nor should be used for prediction of any future events. It should be strictly used as reference of the past.

      Joe, buddy; all the past references should be used with a pinch of salt! The past has been much more falsified than what they can do now. Finding rhino bones in Spain and declaring GLOBAL warming by many degrees is like finding Siberian caviar in Al Gore’s stomach – and declaring 2012 a GLOBAL ice age.

      The reason Warmists are winning with lies is Ian Plimer’s collected crap from around the planet – presented by him and his zombies as REAL evidence. On Australian TV, 3 days ago, Michael Mann stated that ”1000 y ago, the PLANET was warmer by 1C”. He is using what the Fake Skeptics promote to cover up his shame = that makes half of the commenters on this blog (the Fake Skeptics) ”Mann’s fig leafs”…. Or they are doing the Warmists’ dirty jobs = ?!

  18. Greg

    Very good stuff

    I commented on the unreliability of the historic SST record in my article here;

    http://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/#more-3817

    We simply should not be using historic data collected in such unscientific ways, which are then used as the basis for models that will help to determine government policy.

    The war years are problematic for a number of reasons and I exchanged considerable correspondence with John Kennedy on this subject.

    I believe pre-war material is for the most part pretty worthless (with some localised exceptions) and that we need to see what we can salvage from post-1950 observations. There may be enough data collected in a proper ‘scientific’ manner (i.e. non-bucket, from the same depth, at frequent intervals, from the same location, etc.) to enable us to determine what was happening in selected areas of the globe (well-used shipping lanes) from 1950 onwards.

    However, until less emphasis is put on the belief that we have a global SST record back to 1850 or so, sufficient emphasis will not be put on salvaging more modern data that might be able to tell us something useful.
    tonyb

    • It seems that many issues exist as far as the raw data is concerned.

    • tonyb

      “We simply should not be using historic data collected in such unscientific ways ..”

      Words to live by, man. Words to live by.

      Even BEST, an effort of years (and still ongoing), including some excellent minds and so much of the raw data as is easy to assemble, validate and verify, may barely qualify pending completion of competent review as a sound dataset on which to draw conclusions.

      SST records? Orders of magnitude less data, and much more difficult to validate with added complications in verifying.

      The situation is not dissimilar with most paleo, reconstructed or otherwise, with a few notable exceptions in the area of the CO2 record from ice cores.. and even that has some dispute still ongoing which ought be recognized.

      What are we left with, but general principles supported somewhat by observation?

      How are we to leap, when the look we have is so astigmatic?

      • ceteris non paribus


        How are we to leap, when the look we have is so astigmatic?

        Once you see it, the rapidly approaching Mack truck will provide the required incentive to move away from the middle of the road.

      • cnp

        Your view is not so different from mine, except that I believe we don’t see the Mack truck until seventeen years after it has hit us.

        The incentive therefore is to stay off the road, build speed bumps, put bigger horns and headlights on Mack trucks, and buy a lot of insurance, rather than for instance to argue about the ten-year wide spot on the road in the opposite direction the Mack truck is coming from.

    • Steven Mosher

      For somebody who puts faith in anecdotal accounts from centuries ago you are awfully quick to throw out measurements. Odd, that.

      • Mosh

        I record anecdotal accounts (often instrumental records) then try to match them to scientific studies or other cross references. I hope to get the tendency of any trend but not the precision. I consistently say that.

        Historic SSTs are in large part little more than vague guesses, using water drawn at all sorts of different depths, at different times of the day, in different locations, with samples often left in the sun, using uncalibrated equipment, and then the results interpolated in order to come up with data supposedly accurate to fractions of a degree. How is that any sort of scientific measure? Do you seriously have faith in them as a means to inform Govt policy?
        tonyb

      • Steven Mosher

        You seem to have a lot of certainty about the uncertainty in the data.
        That is a blind spot.
        You also don’t understand that nobody claims accuracy to fractions of a degree. That’s a misunderstanding.
        The measure is simply our best estimate GIVEN the data we have.
        would you guess that the arctic ocean averaged 212F? no. why not?
        would you guess that the tropics averaged -2C? no. why not?

        The records are more than vague guesses. They are informed estimates with uncertainty. Now, we may argue that the uncertainty is estimated incorrectly. That requires an ARGUMENT, not an assertion.

        Our best estimate of SST is not your shoulder shrug. Our best estimate is not a litany of problems with the collection process. Our best estimate is based on the data (both temps and proxies) we have, the physics we know, and the proper application of uncertainty calculations.

        WRT government policy.

        Policy should be informed by the best evidence available. I would be more than happy to base policy on the following.

        1. It is known that GHGs warm the planet. That doubling CO2 will
        warm the surface by approx 1.2C
        2. It is estimated from Paleo studies of LGM that climate sensitivity is
        around 3C.

        If you want to plan, plan around that, and put more research into narrowing the estimates of sensitivity.
        Uncertainties about SST are a wheel that doesn’t turn much in that argument. Facts about the LIA could disappear and that argument still holds.

      • Mosh said

        ‘The records are more than vague guesses. They are informed estimates with uncertainty. Now, we may argue that the uncertainty is estimated incorrectly. That requires an ARGUMENT not an assertion.’

        IF the data are informed estimates with uncertainty, that range of uncertainty (in this instance many degrees) renders them pointless as a scientific measure that underwrites govt policy. The sheer variability in methodology and the huge gaps in the data (large parts of the globe were not measured on a regular basis, let alone methodically) surely mean we should not be using them without warning notices writ very large. In my dictionary that is the definition of vague.

        tonyb

      • I’m forced to tread a middle ground.

        On the one hand, Steven Mosher is absolutely right, and what he says in large part overlaps tonyb’s precepts too, making tonyb absolutely right, about a particular perspective and a particular portion of the issue.

        Without taking into account so much of the historical record as tonyb ably does, the detailed narrative of what actually happened surrounding the measurements, if you will a provenance of the dataset, one’s interpretation might be impoverished.

        However, one’s interpretation ought be of recorded observations made using as reliable a scientific measure as available, not of the narratives only, or one is dealing strictly in speculation. (Perhaps better-or-worse-founded speculation, if cunning-enough reconstruction can be done from facts, but still speculation.)

        Pekka made an excellent point illustrating just this point of the value of historical narrative, in underscoring how thoroughgoing and diligent many were in validating and verifying and improving the instrumental record at sea, so far as they were able.

        Also, the opposite excellent point has been made about those cases where the record is unreliable, and this is another benefit of historical narrative, to tell us where we are looking not at original observations but either adjustments or wholesale falsification (for whatever original purpose, a hindrance to us now).

        On the other hand, tonyb appears to be either unaware of or unimpressed by the many statistical and analytic tools available to deliver information about the reliability of a dataset from the data itself.

        Steven Mosher is right, obviously, to value the numbers more highly than the narrative tonyb implies is “king of the lab” (but wrong if, for example, he tries to reconstruct Canada’s temperature record without knowing which sites used the same candy thermometer as they employed in making maple syrup).

        Further, we have to recall that all of this data is improvised, re-purposed from weather-recordings and not terribly suitable to climatology. The repurposing is an intellectual exercise that has produced surprising utility through the cleverness of the BEST team, that is true.

        Speaking of cleverness, let’s look at WebHub’s amazing and insightful approaches to taking advantage of poorer resolution data. It’s a special form of brilliance, and I often find myself feeling immensely dim in comparison while examining his methods and results.

        I think Greg Goodman’s contribution has lent considerable meat to the discussion of the SST and its meaning, and I think John Kennedy’s very kind explanations and his attention to this work of Greg Goodman show that this dataset, while fraught with challenges, is being handled in a sober and competent manner.

        This is the sort of discussion I came to Climate Etc. for. Thank you Dr. Curry.

      • @ tonyb | March 16, 2012 at 6:30 pm

        Well Tony Brown; should the users as you / vuk; of vague data as factual, be declared as ”Fakes” with warming sign; for occasional visitors to any blog discussing climate; or phony ”GLOBAL warming”

        Yesterday Michael Mann said to an Australian reporter: the planet is warmer by 1C now than 1000y ago. The only reason he can get away with lies and destructions is thanks to the Fake Skeptics. Mann is using the Fakes’ lies to continue and cover up for his lies. It WAS WARMER by ONE DEGREE in 1012 than today…?!?! Saying it with confidence, because you, the Fake Skeptics, are doing Michael’s dirty job – he must be saving himself millions of $$$ on loo paper….

      • BartR

        We need you in the diplomatic service!

        You said;

        “On the other hand, tonyb appears to be either unaware of or unimpressed by the many statistical and analytic tools available to deliver information about the reliability of a dataset from the data itself.”

        It is the second: I am unimpressed, because the basic data is so random, dirty, noisy and missing, and liable to so many factors that could render one reading meaningless compared to another, and you can put it through as much torture as you like but that basic fact remains.

        Don’t get me wrong John Kennedy is a genuinely impressive scientist but the record he seeks to defend is indefensible for such large parts of the record and in so many places.

        I invite him or Mosh to apply even the basic standards of global land based temperatures, that is to say that in this context two readings are taken (min/max) in the same location on the same day at the same depth every day, all year, every year, using the same calibrated equipment, which is then immediately read by a trained person AND recorded.

        The variance of two daily temperatures is pretty dependent on latitude and month (very limited change at this time of the year in the NH but considerable differences in the warmer months), and also dependent on the amount of sunshine. It is highly dependent on the depth the sample is taken at and when it is read. If the sample is left in the sun or shade for more than a minute or two the temperature will become very different.

        In all this I am primarily referring to bucket measurements which were not a precise science except on scientific expeditions or on ships where the results were recognised as being of scientific importance.

        Of course in some places more readings were taken than others and there might be a genuine record, but for most of the world back to 1850 our knowledge is rudimentary.

        Perhaps Mosh or John would care to produce a chart using all the criteria I mention above and we can all see how much of it can be considered reliable to a fraction of a degree?
        tonyb

      • You also don’t understand that nobody claims accuracy to fractions of a degree.

        Nobody???

        I sure do.

        Maybe you mean “nobody that counts.”

      • Policy should be informed by the best evidence available. I would be more than happy to base policy on the following.

        1. It is known that GHGs warm the planet. That doubling CO2 will
        warm the surface by approx 1.2C
        2. It is estimated from Paleo studies of LGM that climate sensitivity is
        around 3C.

        Steven, you’re practicing caveman science. You have no supportable basis for your 1.2C number. And the time constants for paleo are so many orders of magnitude different from modern climate change as to be totally irrelevant to our current circumstances.

      • Tony,

        You are certainly right that the data from bucket measurements has innumerable faults. That may mean that no precise estimates of past SST can be made, but the relationship between the problems of any individual measurement and the accuracy of an average calculated from a large number of such measurements, when the best possible methods are used, is so complex that simple guesses have little value. What is ultimately possible can be known only after very much effort has been spent in studying the data.

        The main reason that rather accurate estimates of SST may be possible is the fact that a large number of individually inaccurate measurements may allow for calculation of an accurate average. That’s possible if systematic errors can be corrected for and the remaining errors of the data points are random with an expectation value known to be small. Therefore all the real difficulties concern estimating the systematic errors. Combining various data sets may allow for estimation of the systematic errors. That’s not easy, but it may be possible.

        It’s clear that John Kennedy and other scientists who have worked on the issue have learned much, but it’s likely that there remain opportunities for further science that will improve the understanding and provide better estimates both for SST and in particular for the related uncertainties.

        You cannot substantiate the strength of your conclusions. You list real issues, but you don’t make them quantitative. It appears likely that the issues have been resolved better than you imply, but I’m not an expert to say more on that.

      • Tony b,

        As I’ve said before, the question is not are surface temperature records reliable? The question is, how reliable are surface temperature records? Your figure for the uncertainty range of “many degrees” simply does not tally with what we see in the data themselves.

        Estimates of biases from the literature are typically of order 0.1degC and random measurement errors for ship data are typically estimated to be around 1 degC. The random measurement errors matter much less than the biases for large scale averages.

        Even if SST measurements were subject to random errors of even 10 degrees, which they are not, then we would still be able to make a reasonable estimate of global annual average SST because individual random errors tend to cancel in the aggregate.

        For more detail see here:
        http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html

        Best regards,

        John

      • John Kennedy said to me

        ‘Even if SST measurements were subject to random errors of even 10 degrees, which they are not, then we would still be able to make a reasonable estimate of global annual average SST because individual random errors tend to cancel in the aggregate.’

        Random suggests very infrequent and haphazard. You cite 10 degrees as still being within the range that would enable you to retrieve a reasonable global average.

        What % of the record base would be able to exhibit these ‘random’ errors yet still enable you to overcome this sort of degree of inaccuracy?

        Half a percent? 20%?

        I don’t want to put words in your mouth so perhaps you can tell me.

        tonyb

      • Hi Tonyb,

        Thanks for not putting words in my mouth. The scenario I was imagining was that each observation had a number added to it that was drawn randomly from a normal distribution with a standard deviation of 10 degrees.

        In that case, if we have 10,000 observations in a year, the uncertainty on the annual average would be roughly 10 divided by the square root of 10,000, which is 0.1degC.
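        The back-of-envelope calculation above is easy to check with a quick simulation. The sketch below uses the hypothetical values from the comment (not real SST data): 10,000 observations, each perturbed by a normally distributed error with standard deviation 10 degC, compared against the theoretical standard error of 10/sqrt(10000) = 0.1 degC.

```python
import random
import statistics

random.seed(42)

true_sst = 15.0          # hypothetical "true" annual-mean SST, degC
n_obs = 10_000           # observations in the year
error_sd = 10.0          # standard deviation of each random error, degC

# Each observation is the true value plus an independent random error.
obs = [true_sst + random.gauss(0.0, error_sd) for _ in range(n_obs)]

mean_error = statistics.mean(obs) - true_sst
expected_se = error_sd / n_obs ** 0.5   # 10 / sqrt(10000) = 0.1 degC

print(f"error of the annual mean: {mean_error:+.3f} degC")
print(f"theoretical standard error: {expected_se:.3f} degC")
```

        Running this, the error of the mean comes out at roughly a tenth of a degree, despite each individual observation being wrong by around 10 degC.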

        One could argue that real SST measurements aren’t quite so well-behaved, but it is possible to show (see Figure 11 of the HadSST2 paper, Rayner et al. 2006 for details) that the standard deviation of grid box averages falls roughly as one over the square root of the number of contributing observations, and that the standard deviation for grid-box averages based on a single observation is a lot less than 10 degrees. Outside of the western boundary currents and frontal regions the combined measurement and sampling uncertainty is typically less than 2 degC for a single observation.

        There are caveats to this, and refinements which are described in the HadSST3 paper.

        Best regards,

        John

      • John

        You said

        ‘Outside of the western boundary currents and frontal regions the combined measurement and sampling uncertainty is typically less than 2 degC for a single observation.’

        So are you saying that single observations can be incorrect by UP to 2 degrees?

        Bearing in mind the paucity of data in many parts of the world the further back in time you go, the potential to ‘double check’ a single observation with many other observations from the same grid/date/conditions in order to validate it and minimise the uncertainty becomes limited.
        tonyb

      • Tony,

        You continue to worry about single measurements and write: “the potential to ‘double check’ a single observation”, but that’s not the idea. Single measurements may, however, be left as they are, when we look at the average.

        Cross checking measurements is useful for improving the error estimates, but not always necessary for getting a rather accurate average.

      • Pekka

        You say

        ‘You continue to worry about single measurements and write: “the potential to ‘double check’ a single observation”, but that’s not the idea. Single measurements may, however, be left as they are, when we look at the average.’

        A single measurement within tens of thousands of other similar ones will of course get averaged out. I understand all this of course, but sometimes (back in history) we only have a single measurement, and it concerns me that it can’t be cross checked as often there is no other reading from the same time and place. Taking another measurement from another time and place that is totally unrelated and averaging it out is worrying when there is often so little data in the first place and the quality of that data is highly suspect.

        We have become oversophisticated if we think that the wildly different ways in which the readings were taken don’t matter as it will all average out in the end. Statistical analysis can only work if the raw data is good enough to begin with, as applying all the corrections in the world to dubious data doesn’t get away from the fact that it is dubious data.

        tonyb

      • Hi Tonyb,

        So are you saying that single observations can be incorrect by UP to 2 degrees?

        No. The diagram in the paper shows the combined measurement and sampling uncertainty for a grid-box average. If we estimate the SST anomaly in a grid box using a single observation then there will inevitably be some error. Because we only have that one observation we cannot know what the error was. The uncertainty can be thought of as a probability distribution of the possible errors. The width of that distribution (if we assume it is normal) is 2 degrees, which means that a grid-box average in the Gulf Stream region based on a single observation can be in error by around 2 degrees: possibly less, possibly more.

        In the open oceans – most of the oceans – this uncertainty is generally much smaller because temperature gradients are smaller. In regions where there are plentiful observations, the uncertainty is much smaller. In the global average, the uncertainty is much, much smaller. At larger scales, other types of errors become more important.

        Bearing in mind the paucity of data in many parts of the world the further back in time you go, the potential to ‘double check’ a single observation with many other observations from the same grid/date/conditions in order to validate it and minimise the uncertainty becomes limited

        Our ability to estimate regional changes is limited by the quality of single observations. If you want to know the precise sea-surface temperature at a depth of approximately 30cm at 38.9 degrees north and 73.7 degrees west at noon on the 21st August 1850, then we have to rely on the report of the Alabama, which reported an air temperature of 20 degC, a sea-surface temperature of 22.2 degC and a pressure of 1012 hPa.

        As you have pointed out, there are any number of reasons why that number might be inaccurate and nothing we can do can greatly reduce that inaccuracy. [In this case, I suspect the original report was in Fahrenheit so something might be gained there.]

        However, if we want to know the global average SST for 1850 then that observation is aggregated with thousands of others. Whatever its peculiarities, they will largely be averaged out by circumstances peculiar to the many other observations contributing to the aggregate. What will be left are the systematic errors, the things that don’t cancel because they are common to all the measurements. For example the fact that every ship was using buckets to make measurements. The studies of these systematic effects indicate that they are typically far smaller than the errors on individual measurements.
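        The distinction drawn above between random errors, which largely cancel in the aggregate, and systematic errors, which do not, can be illustrated with a small simulation. The bias and error magnitudes below are illustrative assumptions for the sketch, not estimates taken from the HadSST papers.

```python
import random
import statistics

random.seed(0)

true_sst = 20.0      # hypothetical true temperature, degC
n_obs = 5000         # number of contributing observations
random_sd = 1.0      # per-observation random error, degC
bucket_bias = -0.3   # illustrative systematic cold bias shared by every bucket

# Every observation carries the same systematic bias plus its own random error.
obs = [true_sst + bucket_bias + random.gauss(0.0, random_sd)
       for _ in range(n_obs)]

avg_error = statistics.mean(obs) - true_sst
# The random component shrinks to roughly 1/sqrt(5000) ~ 0.014 degC,
# so the remaining error of the aggregate is essentially the shared bias.
print(f"error of the aggregate: {avg_error:+.3f} degC")
```

        The error of the average lands very close to the assumed shared bias: averaging has removed the noise but cannot touch what is common to every measurement, which is why the bias adjustments are the contentious part.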

        Best regards,

        John

      • John

        You said to me;

        “Our ability to estimate regional changes is limited by the quality of single observations. If you want to know the precise sea-surface temperature at a depth of approximately 30cm at 38.9 degrees north and 73.7 degrees west at noon on the 21st August 1850, then we have to rely on the report of the Alabama, which reported an air temperature of 20 degC, a sea-surface temperature of 22.2 degC and a pressure of 1012 hPa.”

        From the Met office library I have borrowed the books ‘Improved understanding of Past Climatic Variability from early Daily European Instrumental sources’ By Camuffo and Jones plus ‘ History and Climate’ by Jones et al. Heaven knows that the land surface record is inadequate enough (although I don’t disagree with the general upwards direction of travel from the depths of the LIA).

        In these books the authors try to quantify – by physical observation of the current circumstances and historic records – the accuracy or otherwise of the temperatures in a specific city. They report, roughly, that there seems to be a warming bias in the observations according to their models and that the observations have therefore been adjusted to fit the models.

        We seem to want to take data and try to hammer it into the shape we require no matter how unpromising or unlikely the original data is, or what it tells us. So I will reissue my challenge for you (or Greg or Mosh) to produce an updated consistent SST record that is at least comparable to the land record. This would range from 1850 to the 1940 wartime break point, with measurements taken under the following circumstances.

        1) Each grid box should have a MINIMUM of two readings per day (max and min) taken throughout that period.* No interpolation is permitted; they have to be actual physical readings or else the grid box should be omitted.
        2) Each reading used (in each grid) should be taken at the same consistent depth (say 30cm) using broadly similar buckets.
        3) Each sample taken should be known to have been read immediately by a competent person using calibrated good quality equipment that has not been biased by the external ambient (for example by a thermometer that wasn’t hanging on a hook in the sun)
        4) Those readings should have been immediately transposed to a permanent record by a competent person.
        5) That permanent record should note the air temperature and pressure/sea/wind conditions at the time. Ideally it should record sun hours prior to the reading. Sea conditions are especially important.
        6) They should note if they were travelling in convoy, with all that implies in perhaps churning up water, plus any oddities that might have a material impact.
        7) We need to know the location of the reading within a grid box, as one taken close to land is likely to be different to one in the deep ocean.

        Note: *A single reading per day is acceptable during ‘winter’ months in either hemisphere as SSTs are likely to be more stable due to lack of solar gain.

        best regards
        PS (this has reminded me that the books are due back this week; they are a challenging read!)
        Tonyb

      • Tony,

        Why should anybody use your criteria?

        Much less is sufficient for calculating global average SST with good accuracy. I’m not making claims on the accuracy of the present estimates, I claim only that your criteria are not justified.

      • Pekka

        Ok, what do you think are reasonable criteria that would enable us to come up with consistent and worthwhile SST data?

        We need to bear in mind that generally land based records were being taken twice a day by people who had been trained (especially in the early days). They used science-grade thermometers. Temperatures were taken from a static platform and at the same height. Even then the records can be very uncertain for reasons I wrote about here;

        http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%e2%80%93-history-and-reliability-2/

        We need at least the same minimum criteria to be applied to SST’s, hence the depth stipulation and that the varying ocean temperature during the day is taken into account.

        Thermometers (which were many and varied) could potentially be broken so needed proper storage and it is not unreasonable to want them to be calibrated once in a while.

        Similarly the criteria for water readings to be taken immediately recognises that a small sample of water left in the sun quickly escalates in temperature and that a thermometer kept on a hook in the sun takes time to reflect the ambient temperature of the water (of course some buckets eventually contained an integral thermometer)

        The criterion of writing down the reading immediately reflects that this may not happen when a reading is taken in a gale, or at night when the officer is asleep. John just cited the Alabama, which appears to be a steamship, so engine water outflow needs to be taken into account. This ship was used to transport prospectors to the California gold rush during that period. Do I believe that in these circumstances a merchant vessel obtained comprehensive and consistent scientific readings that can be used 150 years later within a global record that is used by govts to initiate policy? No I don’t.

        Over to you for your criteria.
        tonyb

      • Tony,

        It’s easy to say that your criteria are far more stringent than what is required of the coverage of the data, but there are no unique answers for what’s required at the minimum, because the answer depends both on the actual data and on all the methods applied in the analysis.

        When the whole analysis has been completed it’s possible to estimate the accuracy of the results obtained. Some preliminary estimates can be made when both the data set and the methods have been characterized in sufficient detail. Having major fractions of the sea surface totally uncovered or very poorly covered introduces some limits for the obtainable accuracy, but the uncertainty introduced by that depends not only on the surface area but also on other properties of the poorly covered regions.

      • Hi Tonyb,

        What you are describing is a ship’s log book.
        https://s3.amazonaws.com/oldweather/ADM53-78592/0066_1.jpg

        Taken from:
        http://www.oldweather.org/

        There are elements there – measured 6 times a day – for wind speed and direction, present weather, pressure, air temperature, wet bulb temperature, sea temperature, position and other remarks.

        The Oldweather website is fascinating. The discussion forums and blog come up with some marvellous historical tidbits recorded in the logs. I particularly liked these from a compilation of things “lost overboard by an idiot… (…or any other reason!)”

        http://forum.oldweather.org/index.php?topic=2155.0

        HMS Bramble 6 July 1919 Muscat:
        “Hands washing down. Lost overboard 1 thermometer No. 10860 and guard. New thermometer in use No. 2816”

        HMS Woodlark 26 Feb 1919 Changsha:
        “Canvas bucket (1 Gallon) lost overboard by accident”

        Best regards,
        John

      • John

        Yes, that’s a great record. I think I have used it in one of my articles before. I especially liked the ‘lost overboard by an idiot’. There has to be a good story behind that comment!

        It will be very useful when all this material is digitised, reading handwriting is not always easy.

        Mind you, I was doing research on the Mannheim Palatine (hence Camuffo’s book): not only are the records handwritten but they are in Latin. For all I know Camuffo could have made everything up (although I’m sure he didn’t!)

        Bearing in mind we have all this information, you should find it easy to respond to my challenge that Pekka is trying his best to water down (pun intended).

        All the best and thanks for your time.
        tonyb

    • The war years are problematic for a number of reasons and I exchanged considerable correspondence with John Kennedy on this subject.

      Whoa. Suddenly you have my full attention. ;)

      • Vaughan

        Following my article John Kennedy and I had a long private exchange relating to the SST records and the data during the war years was part of it. I also forwarded him some material from a third party.

        People send me stuff because I am discreet, which then results in a frank discussion, so I don’t intend to divulge a private conversation here, but I have asked permission from the third party to place some wartime information on this thread. It’s probably in the public domain anyway but it’s polite to ask. Keep tuned but don’t get too excited, it’s not that revealing :)

        tonyb

      • Dr. Pratt currently an Americanised Ozzie would only know of ‘the’ John Kennedy, I know yet another ‘John Kennedy’.

      • Vaughan

        My 5.00am

        Here is the most easily extractable part of the wartime SST discussion, some from an agreeable third party; there was much else but it would need sifting or is private:

        ” The War Years and lack of reliable data

        I have deliberately excluded the wartime period and used this as a natural break between the modern and historic records, as there is considerable data sparsity (and other concerns) during the war period, as acknowledged by Phil Jones;

        http://www.cru.uea.ac.uk/cru/data/temperature/

        With regards to the war time period I received this from Dr Arnd Bernaerts who agreed I could forward it.

        —From Dr Bernaerts;
        “ Your SST paper is great and I agree with your conclusion: ‘ Historic Sea Surface Temperatures in particular are highly uncertain and should not be considered as any sort of reliable measure.’, particularly as far as these data are used in climatic research.

        My experience concerning SST measurements, covering the time period from about 1955 to 1964, is that they may have well served preparing weather forecasts for the next few days, and that even when all care and precautions had been taken, the result was at best plus/minus 0.5°C. “Correcting” this data for ‘climate-change research’ is dubious to say the least. That applies particularly for the time period when the measuring methods changed around the mid of the last century. That was the time of WWII, which is subject of two papers I wrote a long time ago:

        ____ (1997); ‘Reliability of sea-surface temperature data taken during wartime in the Pacific’, presented at Symposium on Resource Development, August 8-9, 1997, Hong Kong, in: PACON 97 Proceedings, pp. 240-250. (www.oceanclimate.de, Previous Essays).

        ____ (1998); “How useful are Atlantic sea-surface temperature measurements taken during World War II”, paper submitted at the Oceanology International 1998 Conference, “The Global Ocean”, 10-13 March 1998, Brighton/UK; published in Conference Proceedings Vol. 1, p 121-130. (www.oceanclimate.de, Previous Essays).”

        These papers are referenced in the next two links;
        http://www.oceanclimate.de/English/Atlantic_SST_1998.pdf
        http://www.oceanclimate.de/English/Pacific_SST_1997.pdf
        —-
        This latter paper is especially interesting as it goes into the history of SSTs and the attempts at corrections in order to change the collected data into a matrix that enabled the measurement of ‘global warming’. Both studies are taken from Dr Bernaerts’ web site;

        http://www.oceanclimate.de/
        —–
        This war time period is also dealt with by, amongst others, Folland and Parker;

        ftp://podaac.jpl.nasa.gov/allData/gosta_plus/retired/L2/hdf/docs/papers/1-crrt/1-CRRT.HTM

        It is reasonable to observe that, in reality, the war time data from the period commencing 1940 is so potentially flawed and sparse that it seems best to ignore it, and instead accept that there are two separate databases of differing character that lie on each side of the war. The ‘modern’ era starts in 1951 (a date set by Phil Jones), where the data is thought to be four times more accurate than the first ‘historic’ database commencing 1850, with increasing confidence in accuracy as we move towards the modern era.

        This separation and exclusion of the war time period also reflects that the two databases are not constructed on a like-for-like basis, and that the modern era data potentially has a greater chance of having somewhat more meaning than the data collected prior to 1940, although the accuracy levels claimed by Phil Jones (above) stretch credulity somewhat.

        tonyb

      • Vaughan Pratt | March 17, 2012 at 4:31 am |

        Hoping the threading isn’t affecting my aim too badly.

        Have we already disproven the hypothesis that sensitivity is dynamically dependent on CO2 level and other factors?

        It appears to me the sensitivity question is insoluble, and pretty much any value in the solution space may be valid for a given combination of surface ice, sea, solar, particulate and so forth conditions, some of which are feedbacks.

        No?

  19. Thus Spaketh the goddess Graphitquik to The Climate Science Elders: (in resonant booming godlike voice)

    “I give you this gift of post facto data adjustment. Use it wisely to save the world.”

    Andrew

    • Dan 12:4 But you, Daniel, keep this prophecy a secret; seal up the book until the time of the end, when many will rush here and there, and knowledge will increase.”

  20. Greg, nice work. Please note correct reference to ICOADS web site is
    http://icoads.noaa.gov/

    Thanks, John

    • Hi John, I’ve added the ICOADS web site under data sources.

      • Greg Goodman

        The link I gave to JISAO has the ICOADS link bang in the middle of the page.

      • Greg, do you know which version of ICOADS is presented on the JISAO page? It was last updated October 2008, which I think predates ICOADS 2.5.

      • Dipl. Ing. Limburg Michael

        Hi Greg,
        a very interesting article. I am coming to a similar result, although on a broader scope. I would like to get in contact with you in order to share our results. Perhaps Judy can give you my email address.
        best regards
        Michael Limburg

  21. HadSST3 selectively removes the majority of the long-term variations from the pre-1960 part of the record, i.e. it removes the majority of the climate variation from the majority of the climate record.

    They love the hockey stick. At the moment its handle is a bit kinky around the 1940s. They want to remove that as they removed the MWP.

    Sad.

  22. Greg Goodman

    Thanks for a very useful post.

    I would appreciate your comment on the validity of the following plot of mine.

    http://bit.ly/FO7Xhi

    Thanks in advance.

    • Greg Goodman

      Well, for the reasons stated in the article, I would not waste time analysing data that have been distorted. Why look for trends, cycles or anything else in a record that has been selectively distorted?

      Long and medium-term variations have been messed with; what use is it to analyse them?

      • Greg Goodman

        Are you then supporting the following Lindzen statement?

        Obsessing on the details of this record is more akin to a
        spectator sport (or tea leaf reading) than a serious contributor
        to scientific efforts – at least so far.

      • Greg Goodman

        As I understood it from the paper I quoted, Jones’ 1984 adjustment was -0.5C over a similar period. I found 0.5 gave best results in FFT analysis but left some disruption particularly in dT/dt. I think a more refined adjustment with something done between ’39 and ’42 would be better but I’m really not interested in that kind of detail here.

  23. Dr. Curry,

    Perhaps you could help Mr. Goodman find a co-author to assist in whipping this into a peer-reviewed publishable paper.

    Maybe one of the denizens here, with experience in the subject matter, would volunteer.

  24. How does this exchange between Wigley and Jones play into this analysis?
    From Climategate email 1254108338.txt

    “Here are some speculations on correcting SSTs to partly
    explain the 1940s warming blip.

    If you look at the attached plot you will see that the
    land also shows the 1940s blip (as I’m sure you know).

    So, if we could reduce the ocean blip by, say, 0.15 degC,
    then this would be significant for the global mean — but
    we’d still have to explain the land blip.

    I’ve chosen 0.15 here deliberately. This still leaves an
    ocean blip, and i think one needs to have some form of
    ocean blip to explain the land blip (via either some common
    forcing, or ocean forcing land, or vice versa, or all of
    these). When you look at other blips, the land blips are
    1.5 to 2 times (roughly) the ocean blips — higher sensitivity
    plus thermal inertia effects. My 0.15 adjustment leaves things
    consistent with this, so you can see where I am coming from.

    Removing ENSO does not affect this.

    It would be good to remove at least part of the 1940s blip,
    but we are still left with “why the blip”.

    • Spot on that man!
      It shows that the fiddlers fiddled again!

    • Jim S,
      This shows a deliberate bad faith effort by the team to play with the numbers.

    • Jim – The Wigley speculations you cite are similar to points I made in my comment above. It’s necessary for any adjusted SST data, if it is to accurately reflect reality, to be reconcilable with land data that also show a 1940s peak and dip. Wigley was trying to determine what adjustments might do this, but note that he wasn’t recommending that any adjustments be made arbitrarily. In fact, he wasn’t recommending anything. Rather, he was asking questions about how an adjustment would affect the overall trends.

      You only quoted part of his speculations. A more complete citation is as follows:

      “Here are some speculations on correcting SSTs to partly explain the 1940s warming blip. If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip. I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips — higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from. Removing ENSO does not affect this. It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”. Let me go further. If you look at NH vs SH and the aerosol effect (qualitatively or with MAGICC) then with a reduced ocean blip we get continuous warming in the SH, and a cooling in the NH — just as one would expect with mainly NH aerosols. The other interesting thing is (as Foukal et al. note — from MAGICC) that the 1910-40 warming cannot be solar. The Sun can get at most 10% of this with Wang et al solar, less with Foukal solar. So this may well be NADW, as Sarah and I noted in 1987 (and also Schlesinger later). A reduced SST blip in the 1940s makes the 1910-40 warming larger than the SH (which it currently is not) — but not really enough. So … why was the SH so cold around 1910? Another SST problem? (SH/NH data also attached.) This stuff is in a report I am writing for EPRI, so I’d appreciate any comments you (and Ben) might have.”
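For what it's worth, the arithmetic behind the quoted speculation is easy to check with a toy calculation. Only the 0.15 degC adjustment and the rough 1.5-2x land/ocean blip ratio come from the email; the blip magnitudes below are invented for illustration:

```python
# Toy check of the quoted 0.15 degC speculation. Only the 0.15
# adjustment and the 1.5-2x land/ocean ratio come from the email;
# the blip magnitudes are invented for illustration.
land_blip = 0.30           # hypothetical 1940s land blip peak, degC
ocean_blip = 0.30          # hypothetical unadjusted ocean blip, degC

adjusted_ocean = ocean_blip - 0.15
ratio = land_blip / adjusted_ocean

# A 0.15 degC reduction leaves the land blip about twice the ocean
# blip, i.e. inside the 1.5-2x range cited for other blips.
assert 1.5 <= ratio <= 2.0
```

With these invented magnitudes the adjusted ratio lands exactly at 2.0, the top of the cited range, which is presumably the "you can see where I am coming from" part of the email.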

  25. If SST measurements are problematic, what about the pH record?

    • Latimer Alder

      The pH record is extremely limited in scope. The best I have seen was 114 data points covering two non-contiguous six year periods in Hawaii.

      And from that, they somehow decide that all the oceans of the world are ‘acidifying’ and that all marine life is under severe threat.

      http://hahana.soest.hawaii.edu/hot/trends/trends.html

      Yet again, the climate theoreticians and modellers come up with a doomsday theory. And yet again they fail to do any work to confirm it in reality before bruiting it from the rooftops.

      This ain’t the way science is supposed to be done guys. You need to do some experimental stuff as well.

      • While I don’t believe models are completely useless, the more I think about it the more much of the modeling research looks like government welfare for academics.

  26. Hi Greg,

    It’s nice to have the opportunity to discuss the various ins and outs of the sea-surface temperature record. I agree with your statement that “much of the variation in ICOADS is quite possibly due to real climate signals, not instrument bias” although I’m sure we differ on the exact interpretation. We also agree, I think, that there are systematic instrumental errors (biases) in the raw data that can confound the extraction of the real climate signals and that “before making adjustments to the data, in any scientific study, it is necessary to have solid evidence of a bias”. Where we disagree, clearly, is in how the effects of those biases might best be assessed and, if necessary, adjusted.

    I would strongly encourage anyone who has read your article to also read the papers that describe the creation of the HadSST3 data set. Copies of the paper are freely available for download from:
    http://www.metoffice.gov.uk/hadobs/hadsst3/

    There is also a broader discussion of the uncertainties in SST data here:
    http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html

    There is also a recent review paper on SST biases:
    http://wires.wiley.com/WileyCDA/WiresArticle/wisId-WCC55.html
    (copy here http://www.knmi.nl/~koek/Publicaties/10.1002_wcc.55.pdf )

    In putting together the HadSST3 analysis, my approach was to read the historical literature on sea-surface temperature measurements to understand how the measurements were made and the systematic biases associated with different methods. As well as scientific papers from the late nineteenth century to the present, there are instruction manuals given to sailors which tell them, in varying degrees of detail, how measurements ought to be made. We also have a near complete record of a WMO publication (no 47 http://icoads.noaa.gov/metadata/wmo47/ ) which lists the ships recruited into the voluntary observing fleet. WMO pub 47 contains information about how the ship’s crew made measurements of various meteorological elements, including sea-surface temperature. The individual reports in ICOADS also contain an indicator to say how the measurement was made, although that information isn’t available for every report.

    Altogether we have a lot of information concerning how measurements were made. It isn’t perfect and therefore there are uncertainties in the record. Our method for understanding this uncertainty was to test the sensitivity of the analysis to a range of different assumptions. The 100 realisations of the data set (available from http://www.metoffice.gov.uk/hadobs/hadsst3/data/download.html ) span a wide range of reasonable assumptions. For example, if no metadata is available for observations they can be either bucket or engine intake measurements. In evaluating the reliability of the data record we have to accept the possibility that the metadata are not perfect. There’s nothing special about metadata, they have uncertainties just the same as regular data do. Therefore, we also explore the possible effects of incorrect metadata by reassigning some observations.

    The 100 realisations have different long term evolutions. Taking the median, as you have, gives a restricted view of what we were trying to achieve in our study. As you acknowledge: “it would be worth investigating whether some of the individual realisations preserve [the cycle in d2T/dt2]”.
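The point about the ensemble can be made concrete with a short sketch. The array below is a synthetic stand-in for the 100 realisations (shape and values invented; the real files are gridded monthly fields), showing how a per-month median collapses the spread that the individual realisations carry:

```python
import numpy as np

# Synthetic stand-in for the 100 realisations: 100 series of global
# monthly anomalies. Values are random here, purely for illustration.
rng = np.random.default_rng(0)
n_real, n_months = 100, 1800                  # ~150 years of months
realisations = rng.normal(0.0, 0.1, size=(n_real, n_months))

# Per-month median across the ensemble gives one "central" series,
# but discards the differing long-term evolutions of the members.
median_series = np.median(realisations, axis=0)

# The 5th-95th percentile envelope retains that spread.
lo, hi = np.percentile(realisations, [5, 95], axis=0)
```

Plotting the envelope alongside the median would show how much (or how little) of the long-term behaviour survives in any single summary series.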

    Using the metadata we can split the dataset into contributions from buckets and from engine intake measurements (See Part 2 Figure 7, 8 and 9 http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html ). Comparing global and hemispheric averages from these two components shows a clear relative bias with bucket measurements typically cooler than ERI (Engine Room Intake) measurements. This is consistent with the Folland and Parker hypothesis that buckets are generally cooler than intake measurements (see also Kent and Kaplan 2006 for a more modern assessment of bucket cool biases). The relative difference between buckets and engine intake measurements is largest in the earlier period which is consistent with the use of uninsulated buckets in the 1950s transitioning to insulated buckets in the 1970s as documented in the literature.

    Regardless of your position on the reliability of our adjustments it is clear that combining these two independent subsets of the data in time-varying proportions without adjustment – as happens in the ICOADS data base – will lead to variability in the global SST series that is not climatic in origin.
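The effect described here (spurious variability from combining two biased subsets in time-varying proportions) is easy to demonstrate with a toy simulation. The 0.3 degC offset and the ramp in sampling fractions below are invented for illustration, not taken from the HadSST3 analysis:

```python
import numpy as np

months = np.arange(480)                  # 40 years, monthly
true_signal = np.zeros(months.size)      # flat "real" climate

bucket = true_signal - 0.3               # buckets read cool (invented 0.3 degC offset)
eri = true_signal                        # ERI taken as the reference here

# Fraction of ERI reports grows over time (illustrative ramp only).
f_eri = np.linspace(0.1, 0.9, months.size)
mixed = f_eri * eri + (1.0 - f_eri) * bucket

# The blended series warms by ~0.24 degC even though the true signal
# is flat: a trend that is purely an artefact of the changing mix.
drift = mixed[-1] - mixed[0]
```

Whatever one thinks of the magnitude of the real bucket/ERI offset, the mechanism itself is just this arithmetic.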

    In your conclusions you state “HadSST3 contains a series of adjustments. With the exception of the war-time glitch, they are not obvious from study of the record.” This is the difficulty of assessing biases in the data: they are not necessarily obvious from perusing the global average time series. Slowly varying biases are easily mistaken for real climate signals. It is necessary to look at individual measurements and work out how they were made. When this is done, the biases are obvious in the raw data as in the example of separating the buckets from the ERI measurements shown in our paper. The existence of biases is based on documentary evidence and estimates of the magnitudes of the biases are based on the scientific literature. Perhaps most importantly of all, they are clearly present in the raw data.

    Best regards,
    John

    • John, thanks very much for your comments here.

    • Hi John,
      I’m curious whether anyone has reproduced the various methods of measurement in the field. For example, take an SST measurement via a canvas bag in a range of locations and compare it with a well-calibrated thermometer submerged at the location of the sample. Wouldn’t this give you a better idea of the systematic bias of the various methods than a statistical analysis?

      • Latimer Alder

        Whoa…hold on there, Jim. That sounds perilously close to an experiment!

        And we know that climatology shuns and despises any such things. All work must be done completely from theory and statistics and with no practical input whatsoever. Otherwise the conclusions of imminent doom will get polluted by a dose of reality.

        And then what sort of a mess would we be in?

      • @ Latimer Alder | March 16, 2012 at 4:58 pm

        +1 + all the honest / secular people (from both camps) on the street

      • A simple Google search using words like canvas bucket insulated reveals immediately a few articles that discuss such measurements that have been done by numerous groups since 1966 at least.

        It’s quite depressing that so many think repeatedly that scientists are idiots who would not realize that such measurements must be done or that they would not do them. About 99% of all the proposals presented have been thought of before and implemented if found applicable.

        As one example we can read from the discussion chapter of this paper

        http://journals.ametsoc.org/doi/full/10.1175/JTECH1845.1

        references to several empirical studies but also discussion that reveals that even the best experiments cannot tell accurately what the errors have been. It’s clear that a bias has been introduced and its approximate size is known, but knowledge at a level that would allow for fully accurate corrections is likely to remain unachievable.

      • Latimer Alder

        @pekka

        I look forward to your numerous references to show:

        1. The number of climate models that were bold enough to make testable predictions of the global temperature behaviour over the decade 2000-2010

        2. The stunningly good agreement between the models and reality (or not)

        3. The incredibly long list of excuses as to why expecting a climate model to accurately forecast the f****g climate ‘

        ‘is very unfair to the poor climate modellers who are doing their best, and anyway only people with a doctorate in Radiative Physics know enough about the climate to be able to read a thermometer, so it’s no business of the public to even think of examining our great works. Or expect it to be useful or anything. And we’ll get it spot on right for 100 years hence. Just give us another supercomputer, then shut up and stop asking difficult questions’

        Even the article you link to doesn’t discuss any actual experiments but just another model to supposedly make corrections. I didn’t see the bit where they tested the model by trying it on the water anywhere either.

      • 3. The incredibly long list of excuses as to why expecting a climate model to accurately forecast the f****g climate ‘

        Recommending anger management to Latimer is a waste of time. After every such outburst he denies needing it. My forecast: in 2013 Latimer will be just as angry as in 2012.

      • Latimer Alder

        @vaughan

        See my comment below

        http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-186166

        which seems to have been misplaced.

        And are you seriously suggesting that it is unreasonable for us poor Joe and Jenny Publics – who pay for the modellers, their models and their infrastructure – to expect a useful return on our ‘investment’?

        That we should just hose our money at them and allow then to play in their sandpits to their heart’s content? Sticking their heads above the parapet just enough to bellow ‘you are all going to fry, evil deniers’ like Hansen before returning to their burrows?

        If so, then I fear that we have a profound and probably unbridgeable chasm between us on this one. But, I guess that is always the case when ivory-towered academia meets reality.

      • Well, Pekka. It took a small army of volunteers to survey the land stations in the US, thanks to Anthony Watts. My question isn’t as stupid as you imply. I was quite depressed to find scientists hadn’t already done that.

      • But what did WUWT’s survey actually accomplish, Jim2, besides the obvious fact that different land stations were at different temperatures? A land station at the top of Mt. Everest will presumably consistently register colder temperatures than one on a hot airport tarmac at sea level.

        The whole point of the anomaly concept is to measure the dependence of temperature at each station on time, not on altitude above sea level or proximity to hot objects. Running around pointing out that not all thermometers are created equal is an exercise in futility since this is well known.
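The anomaly concept described above can be sketched in a few lines: subtract each station's own monthly climatology, and fixed offsets such as altitude or a hot tarmac cancel out. All numbers below are invented, not real station data:

```python
import numpy as np

# Two hypothetical stations, 50 degC apart in absolute terms but
# sharing the same seasonal cycle and the same small warming trend.
months = np.arange(120)                          # ten years, monthly
season = 10.0 * np.sin(2.0 * np.pi * months / 12.0)
trend = 0.5 * months / months.size
mountain = -20.0 + season + trend                # cold site
tarmac = 30.0 + season + trend                   # hot site

def anomalies(series):
    """Subtract the station's own mean for each calendar month."""
    out = series.astype(float)
    for m in range(12):
        out[m::12] -= series[m::12].mean()
    return out

# The 50 degC offset cancels: in anomaly terms the stations agree.
max_diff = np.abs(anomalies(mountain) - anomalies(tarmac)).max()
```

This is why a station's absolute siting matters much less than changes in its siting over time: a constant bias drops out of the anomaly, a time-varying one does not.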

      • Surface Station Project. Nice propaganda exercise. Very little science.

      • Again you may go back to scientific papers of past decades when the issue of land based observations was studied by the scientists as there was not yet much knowledge on the suitability of the available data for calculating averages of the temperature change. The scientists did publish papers where the issue was analyzed and both the severity of the problem and ways to reduce biases were studied. That work did result in the conclusion that such problems that Watts has emphasized are not large. Different groups chose different approaches to handle the issue, but the final results were rather similar. The remaining differences in the temperature development are mainly due to differing areal coverages. (And in the case of satellite measurements to the fact that satellites do not measure the surface temperature but something else that can be defined precisely only as the result that their analysis provides and that has contributions from a wide range of altitudes in a complex fashion.)

        Recently we have seen the outcome of the BEST group. That confirms that the earlier analyses were not in error in their judgment.

        As usual, the scientists have thought about most of the issues much earlier than others. It happens that non-scientists realize something significant not yet analyzed by the scientists but that’s an exception, not the rule.

      • Latimer Alder

        @vaughan

        You ask:

        ‘But what did WUWT’s survey actually accomplish, Jim2, besides the obvious fact that different land stations were at different temperatures?’

        It showed that, despite all the money thrown at ‘climatology’, none of them had bothered to check whether their primary means of data collection …the basis for all of their work and theories… was based on solid ground. It took a bunch of amateurs with cameras to show that it wasn’t.

        You can do all the post-hoc ‘rationalisation’ and ‘homogenisation’ and ‘adjustments’ you like. If you haven’t collected the data correctly in the first place, you are doing bad and careless science. And that this simple truth has been so neglected by so many for so long about something so fundamental really only adds to my conviction that climatology stinks.

        The rational man, before he did anything to do with worrying about whether the climate was warming or cooling, would take steps to ensure that they had a robust and effective way of collecting the raw data. But instead, they missed out this vital step in the rush to demonstrate ‘global warming’. Sloppy work, sloppy science from second-rate people.

      • LA

        No it didn’t show that. WUWT made such claims but they were unfounded. Necessary checks had been made. They were not done in the same way WUWT community did their checks, but they were done well enough.

        BEST analysis was initialized to find out, whether there was something essential left undone, but nothing substantial came out. It’s always possible to improve on earlier work, but the improvements were inconsequential and that was to be expected, because the earlier work had been good enough.

        It’s easy to make unsubstantiated accusations, but it’s wise to stop repeating them when it has been shown that they were without basis.

      • Latimer Alder

        @pekka

        Ok. Happy to believe that the checks had been made. Please just show me where the checks are documented, both in principle and in practice.

        Something like a record of their annual ‘health check’. So that there is a complete lifetime record of the provenance of all the data and any external or internal changes that may affect them.

        Where do I go and look for these?

      • LA

        You must make some web searches. I have done that perhaps one year ago, but haven’t saved the links.

        One difficulty is that many issues have been studied so long ago that finding links is not as easy as for more recent research.

        The IPCC reports are a good place to search for references, but even they are sometimes of limited help when the questions were studied early enough. At the minimum it’s then necessary to check the earlier reports, as AR4 covers mainly research done after the TAR.

      • The WUWT survey showed that most of the stations were not well sited. When you say scientists had considered that before, do you mean by using the NASA night lights compensation? Could you be specific?

      • Latimer Alder

        @pekka

        So there is no single centrally documented and consistent place that I can go and look for the history of any weather station used in climatology? It may have a log, it may not. It doesn’t have a standard format nor a standard reporting period. It may be that it has never been recalibrated after its initial installation…and we would have absolutely no way of knowing.

        All of which is bad news.

        But what is even worse is that those who rely on this data don’t even seem to care. They appear to be happy to have any old junk chucked into their digital mincing machine, confident that the answer will always be ‘global warming’, and ‘worse than we thought’.

        At the risk of sounding like my grandfather, when I were a lad we were taught that making and recording accurate observations was the foundation of all science. That without accurate data, there can be no accurate theories.

        What has changed in the last 40 years? Why is climatology immune from this fundamental truth? How can you draw any good conclusions from bum data? Or have they suddenly discovered the Philosopher’s Stone and can turn base metal data into gold?

      • “So there is no single centrally documented and consistent place that I can go and look for the history of any weather station used in climatology?”

        WUWT didn’t produce any such thing. All it produced was a snap-shot of stations.

        Scientists already knew the station siting issues WUWT raised had no biasing impact on the global mean temperature anomaly.

      • If there’s anything like one central place to look at in any science, the IPCC reports are closer to that than you can easily find in other fields. I have proposed some times that a continuing cumulative process might be better than the present one of one report every 5-7 years. That would leave many of the issues that can be raised concerning the role of IPCC, but that could at least provide a real central depository, where old papers could be located as easily as new ones.

        That would help in finding out, what the state of science is also in areas which may be common knowledge for specialists but raise continuously question among other people who would like to check the issues themselves.

        It could really be valuable for the public discussion to have such a central depository of publications indexed in a way that would make it possible to find relevant publications.

        My idea as I have presented it before, is that the depository would accept publications setting only minimal requirements for the quality but that it would contain also evaluations that would help in finding publications considered to be of high scientific quality. Such judgments could be contested which should help in reducing the weight of questionable subjective evaluations.

      • Latimer Alder

        @lolwot

        As you so rightly say, WUWT produced just a snapshot. And nobody, certainly not me, has ever suggested that they did anything else. Your ‘point’ is meaningless.

        But that such a snapshot was even needed is a great indictment of climatology and climatologists. For – in any other serious scientific/engineering endeavour – a detailed, available and up to date knowledge of the state of all your measuring instruments, their service history and calibration, their external and internal surroundings and anything else that could potentially affect their readings is a prereq to making sure that you are actually collecting valid data. And can make appropriate corrections if needed.

        Letting the whole measurement estate fall into rack and ruin until it all goes t**s up and a band of volunteers is needed to record the shambles, then making a post-hoc rationalisation that it didn’t matter anyway, is not the textbook recommended way to work on ‘the most important problem facing humanity’

        But it is, I fear, only symptomatic of the lackadaisical and seriously unprofessional approach of so much of ‘climatology’.

      • Latimer Alder

        @vaughan

        You’ve lived in politically correct California too long, mon brave. That a wee bit of bad language should shake your tree so hard! You could hear far worse than that in the Dog and Duck any day of the week, and it is only my cool and calming influence that, as ever, keeps the conversations reasonably civil.

        But I am expecting that even my diplomacy will be stretched for this afternoon’s Rugby when Ireland play England at Twickenham on the St Patrick’s Day weekend… there may indeed be the odd touch of profanity involved…

        But by then Wales will have won the Grand Slam and the Championship, so it will all be irrelevant.

        And yes, I am annoyed with the frigging climatologists. They seem to be hellbent on ignoring every scientific principle that I was taught to adhere to. Maybe, as an academic, you have been too close to the action to appreciate the gradual decline of these principles into a mush of activo-propaganda.

        But for me, returning to look at it after 30 years doing other stuff, the difference is profound and shocking. That’s why even my normal equanimity gets disturbed by the antics of these pseudoscientists.

      • Jim2

        Yes I have, but only on a very experimental scale. During and after the writing of my own article on SSTs I took measurements of SSTs from my rowing boat on the ocean and also from a convenient pier. The intention on my part was to see the difference in temperatures in slightly changed locations and also at different depths (differences of as much as 3 degrees C).
        The thing I was most interested in however was to ascertain how quickly the temperature in the bucket changed once it was brought to the surface and exposed to shade/sun.

        Whilst this would all be very different in the winter months, I found that unless the reading was taken within thirty seconds, the water sample left in the (warm) sun would increase by 1 degree C after that 30 seconds and be as much as 5 C higher within five to ten minutes. A shade sample would react more slowly but could drop by 1 C in the same time. After ten minutes or so the modest and chilly bucket sample was distinctly tepid.

        In the real world, bucket samples were gathered at a variety of depths and measurements taken at various times after the bucket was brought on board. These fundamentally affect the record (even assuming a sample was ever taken in the first place, which for most of the world back to 1850 didn’t happen).

        I’m not presenting this as science, just practical observations which have some bearing on the subject
        tonyb
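For what it's worth, Tony's timings are roughly consistent with simple Newtonian warming of the sample toward air temperature. A toy relaxation model follows; every parameter below (the SST, the air temperature, the e-folding time) is invented for illustration, none comes from Tony's measurements:

```python
import math

def bucket_sample_temp(t_seconds, sst=15.0, air=20.0, tau=150.0):
    """Newtonian relaxation of a bucket sample toward air temperature.

    sst, air and tau (the e-folding time in seconds) are invented
    for illustration only.
    """
    return air + (sst - air) * math.exp(-t_seconds / tau)

# Warming of the sample relative to the true SST at Tony's timings:
# roughly 1 degC after 30 s, approaching 5 degC after 5-10 minutes.
warming_30s = bucket_sample_temp(30.0) - 15.0
warming_10min = bucket_sample_temp(600.0) - 15.0
```

The broader point stands either way: the error depends strongly on the delay before reading, which is exactly the quantity that was never recorded.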

      • Thanks, Tony. Of course, ocean going vessels are 1) much higher from the ocean surface than a row boat and 2) moving at a faster speed. There may be other variables like heat from the engines and equipment, etc. My question was if such measurements have been conducted in the past, not necessarily by you personally.

      • jim2

        Absolutely appreciate your point. As I say, the main aim was to quantify the real-world taking of the temperature in the bucket, and it was an ancillary matter to see how temperature varied at depth/location.

        I am sure someone has measured it in the past in order to see the correlation but I am not aware of any published studies
        tonyb

      • Greg Goodman

        Read the papers linked to the article. Folland and Parker, from memory, document some wind tunnel tests and one student ocean test in the US. Not the final word on the story, but considerably more rigorous than Tony’s rowing boat.

        Having done all that it still comes down to gross speculation as to deployment.

    • Dr. Kennedy
      I have read 2 or 3 papers of yours on the SST, have occasional exchange of views with TonyB, but on this one we don’t agree since I think that the AMO data is most likely good, except for two short periods (around 1920 and 1950).
      In an article I wrote, the N.A. SST is compared with the directly measured atmospheric pressure at the Stykkisholmur weather station / Reykjavik, going back to the 1860s. I see no reason to suspect its accuracy, since it was measured at a single location by professionals.
      To my surprise, except for the two short periods mentioned above, I found extremely high correlation with the AMO, which makes me think that the AMO data must be OK.
      Relevant section can be found at page 8 by following this link:
      http://www.vukcevic.talktalk.net/theAMO-NAO.htm
      If you wish to make a private comment, my email is at the top of page 1.

    • John, do you know if the Met office used the weather reports from German U-boats and weather ships to reconstruct the SST?
      The RN and then the USN intercepted weather data from German weather ships (initially sent without ENIGMA) and from U-boats (sent with ENIGMA).
      I know that the weather reports were used as a crib to get settings for the decryption, as they generally had the same format, so the code breakers had a very good idea what half the message would be.
      The U-boats would give location and weather, ‘possibly’ including water temperature.
      Those records must exist somewhere. The U-boats would all have used the same temperature manifold measurements, and would have had a greater coverage of the oceans than did the allied convoys.
      The USN also intercepted Japanese signals and temperature was a huge issue in Japanese submarine habitability (they had no air conditioning).
      Have you tried to get Axis weather data?

      • Hi DocMartyn,

        There are some near-surface sub-surface measurements in ICOADS that came from the World Ocean Database. I think the metadata associated with these from the 1930s and 40s indicates that they were mostly made using Mechanical BathyThermographs (MBTs) and CTDs. There was no mention of submarines and there’s no mention of submarines in the latest version of the WOD documentation that I could see.

        Best regards,

        John

  27. Hi Greg,

    I might have misunderstood your study, but there are some aspects of the discussion of Kennedy et al. 2011c that seem not to reflect accurately the work that we did.

    1.
    Where you quote Kennedy et al. 2011c
    Kennedy et al 2011c [3c] goes into some detail about how the duration of the change was determined.

    If a linear switchover is assumed which started in 1954 and was 95% complete in 1969, the middle of the James and Fox study period, then the switchover would have been completed by 1970. Based on the literature reviewed here, the start of the general transition is likely to have occurred between 1954 and 1957 and the end between 1970 and 1980.

    In the context you give it seems as if the switchover referred to is the switch from buckets to engine intakes, when in fact the switchover being described is from uninsulated canvas buckets to better insulated models on those ships which used buckets. The switch from canvas to insulated buckets occurred concurrently with changes in the balance of engine intake and bucket measurements. The statement is therefore consistent with Figure 1 in the paper (http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html) and with Kent et al. 2006. Note that in the Kent et al. 2006 diagram a large fraction of the observations are indicated as having unknown measurement method. By using information about the country of recruitment of ships we were able to assign metadata to a greater number of observations than Kent et al. 2006.

    2.
    Your statement that “Further, it was noted in a detailed study of the available meta data by Kent et al (2006) [10] that as late as 1970 fully 90% of temperatures, where the meta-data stated the nature of the measurement, were still done by bucket. Yet the Hadley correction is fully applied by this date assuming, incorrectly, that bucket sampling had been phased out by this time.” is likewise incorrect as is clearly shown in Part 2 Figure 2 (http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html) where buckets are shown contributing to the SST series throughout the entire HadSST3 record.

    3.
    For HadSST3 we used version 2.5 of ICOADS. HadSST2 was based on version 2.0. Earlier versions of ICOADS contain fewer data than later versions, and the balance of different measurement types and of ships from different countries has changed. Comparing HadSST3 to a different version of ICOADS, processed in a different way, does not give the clearest indication of the adjustments made to the data.

    Part 2 Figure 3 (http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html) shows the adjustments applied to the HadSST3 data and their estimated uncertainties. Part 2, Figure 4 shows how the adjustments vary geographically reflecting the different histories of shipping and measurement practice around the world.

    The HadSST2 paper, Rayner et al. 2006, shows the adjustments applied to the HadSST2 data set in Figure 8 and how they compare to the Folland and Parker adjustments in Figure 6. A copy of the paper can be found here: http://www.metoffice.gov.uk/hadobs/hadsst2/

    Best regards,
    John

    • John Kennedy,
      Thanks for your posts here. Whatever adjustments are made to any temperature construction, I hope the raw data is always available for comparison. I have a general question about adjustments and confirmation bias. I read many skeptics’ comments, and even articles, which allege or show what appears to be a systematic bias to adjust recent temperatures upward, adjust past temperatures downward, or smooth temperature fluctuations so as to reduce natural variability. Such adjustments, many skeptics allege, are made to make recent warming seem more pronounced and more predominantly a GHG forcing. These comments usually refer to GISTEMP reconstructions, but sometimes NOAA or HadCRUT. Are you aware of these allegations, and do you agree that most of the adjustments to the temperature record have had the effect of making global warming appear more pronounced, as the skeptics allege? I wonder if confirmation bias may not be at work in some of the adjustments, and also in the quickness with which some skeptics cry foul.

      • Hi Doug,

        Adjustments applied to the data are intended to minimise the influence of non-climatic effects such as changes in instrumentation or observing practice. Which direction they go will depend on the nature of the biases. In HadSST3, for example, the adjustments for buckets in the early period tend to reduce the long-term warming relative to the unadjusted observations. On the other hand, the adjustments for the transition from ships to buoys over the past twenty years have slightly warmed the record, though not by a significant amount.

        I read this blog and others and I’m aware of a very wide range of skeptical opinions concerning climate data. I’ve never made a list of all the adjustments applied to all the different data sets and tallied up how many warm the record and how many cool it. I don’t think it would be worth the effort because all it would tell us is how many adjustments warm the record and how many cool it.

        The more interesting question (for me anyway) is whether the methods are effective at reducing non-climatic influences, whether they can be improved, and what uncertainties remain.

        The individual observations we use come from ICOADS. This is an incredible resource and is publicly and freely available. It contains individual marine reports in all their glory.

        http://icoads.noaa.gov/

        As well as providing HadSST3, the Met Office also provides the gridded data before adjustments have been applied:

        http://www.metoffice.gov.uk/hadobs/hadsst3/data/download.html

        Best regards,

        John

    • Greg Goodman

      re. “2”: John, you are correct, I was misreading which adjustment that comment related to. That explains the apparent contradiction in the paper that I noted. Apologies for an unwarranted criticism.

      re. “3”: I have checked back at the JISAO page and in the downloaded file, and there is no indication of which version of ICOADS they are providing. I recall having quite a runaround trying to find a global average of that dataset. It seems that when I found the JISAO download I assumed it was up to date and was the v2.5 that I had seen everywhere else, but only in gridded format.

      From the similarity of HadSST2 and HadSST3 it would seem that there is not a huge difference between the versions. However, I agree it would be more rigorous to use the same version when comparing to HadSST3. I can quickly re-run the analysis if you (or anyone) are aware of a link to a global mean for ICOADS v2.5.

      • Hi Greg,

        re “2”: is it possible for you to correct the article? Such small mistakes tend to take on a life of their own.

        re “3”: I could calculate global time series of the unadjusted data if that would be useful to you.

        John

  28. Try telling a molecular biologist that you know that all of your measurements are wrong, but you have removed the errors with ‘adjustments,’ and now you will analyze your new ‘data.’ Just don’t stand too close to them, to avoid the spittle that comes out of their mouths as they explode into laughter. Seriously, folks, massaged data is not data. Why not just be honest and admit that you can’t do anything with bad data?

  29. Greg Goodman

    John, thanks for the detailed comments. I will have to take time to go through them in detail before commenting. I’m sure I may have got a few things out of context etc. However, despite the criticisms of methodology, my two main concerns are about the results.

    In view of the gross uncertainty and necessarily approximate nature of making adjustments in the absence of metadata, or even, as you say, when you need to regard significant proportions of the metadata as wrong, I find it surprising how similar the final adjustment is to the uncorrected data.

    One does get the impression that the arguments and the timing were tweaked to fit: that the answer was known in advance and an explanation sought. Indeed, Folland and Parker make frequent references to using the data itself to detect the error, for lack of independent data from which to derive it.

    Only half the late 19th c. cooling is deemed to be a bias. Presumably you are accepting the rest as climate. Does it not seem odd that the bias of buckets, deck heights etc would produce a variation similar to half the climate signal over the same period?

    Do you have any comment on the difference of the periodograms? I find it a little hard to conceive that the disruption caused to the homogeneity of the data indicates an improvement.

    Best regards. Greg.

    • Greg,
      This is an excellent article. Others have suggested that you seek a way to put this into the peer review process, and I would endorse that advice. I believe many areas of life benefit from ‘fresh eyes’ on long term problems. You seem to be fulfilling that role in a positive and admirable way.
      A general observation that your article, and the defense of the status quo, both raise is that whatever is going on in the climate, it is barely, if at all, detectable outside the range of noise/uncertainty/accuracy/historic variability. Your comments regarding this observation would be most appreciated.

    • Greg,

      I look forward to your response to my more detailed comments. It would be good to straighten out any misunderstandings I have about your analysis and vice versa.

      In that spirit…

      I’m not sure how to interpret your periodogram analysis or your analysis of the derivatives. You may be assuming more knowledge on my part concerning their interpretation than I possess. It would be helpful if you could clarify or expand on a few points.

      You say: “A simple correction for the war-time glitch would be to subtract a fixed amount from the monthly averages over that period. A range of values between -0.2 and -0.5K were tested. A value of 0.4 was found to best remove the disruption seen in the first and second derivatives.” I’d be interested to know how you decide which value best removes the disruption and what the correct forms of the first and second derivatives should be.

      I’d also be grateful if you could expand a little on this comment: “Removal of the supposed biases has destroyed the homogeneity of the data.” On what physical basis would you expect the blue and green lines to agree with each other?

      An aside on the Folland and Parker corrections borrowed from an earlier essay (http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html ). The reliability of Folland and Parker is founded on more than just the 1995 paper:

      “The validity of the bias adjustments and their uncertainties can be assessed via other means. SSTs adjusted using the scheme of Folland and Parker (1995) were used by Folland (2005) to drive an atmosphere only GCM. The modelled air temperatures over land were compared to land station data and the adjusted SST data were found to give a significantly better agreement with the observed land temperatures. Folland et al. (2003) compared the adjusted SST to air temperatures on Pacific Islands. Hanawa et al. (2000) showed good agreement with independent SST data from Japanese coastal stations.”

      Thanks,
      John

      • Greg Goodman

        You’ve made quite a lot of points so I’ll address them piecemeal.

        John says:
        You say: “A simple correction for the war-time glitch would be to subtract a fixed amount from the monthly averages over that period. A range of values between -0.2 and -0.5K were tested. A value of 0.4 was found to best remove the disruption seen in the first and second derivatives.” I’d be interested to know how you decide which value best removes the disruption and what the correct forms of the first and second derivatives should be.

        I think there is agreement that the sudden war-time rise and fall is not of climatic origin. My criterion was a simple one. Remove/minimise the obvious anomaly. Often dT/dt and d2T/dt2 can help highlight a variation / anomaly that is less obvious in the TS.

        I worked on the expectation that the form of changes in this period should be similar to those immediately surrounding it and not show any unique features not seen in the rest of the data.

        The value of 0.4 seemed to best fit this criterion for both the first and second differences. This could perhaps be analysed mathematically, but visual inspection is sufficient to say that 0.4 provides a better result than 0.35 or 0.5, which is a sufficient degree of precision for this data.

        Where I see HadSST3 failing here is that it produces a pronounced, extended positive lobe that is a prominent and unique feature in the entire record. An attempt to remove one anomaly is leaving another.
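        The offset-selection procedure described above can be sketched numerically. This is a minimal, hypothetical illustration with purely synthetic data; the scoring function (size of the largest spikes left in the first and second differences) merely stands in for the visual inspection actually used:

```python
import numpy as np

def glitch_score(temps, war_mask, offset):
    """Subtract `offset` (K) from the war-time months, then score the
    result by the largest spikes left in the first and second
    differences; a smoother join across the glitch scores lower."""
    adj = temps.copy()
    adj[war_mask] -= offset
    d1 = np.diff(adj)        # ~ dT/dt
    d2 = np.diff(adj, n=2)   # ~ d2T/dt2
    return np.abs(d1).max() + np.abs(d2).max()

# Synthetic smooth monthly series with a +0.4 K step standing in for the glitch
n = 240
t = 0.1 * np.sin(2 * np.pi * np.arange(n) / 60.0)
war = np.zeros(n, dtype=bool)
war[100:148] = True
t[war] += 0.4

scores = {off: glitch_score(t, war, off) for off in (0.2, 0.3, 0.4, 0.5)}
best = min(scores, key=scores.get)
```

        With the step fully removed, the differenced series retains no residual spike at the window boundaries, so the true step size scores lowest; any other offset leaves a jump that the derivatives highlight.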

      • Thanks Greg,

        To me your frequency and derivative analysis falls into the category of interesting but ambiguous. The problem is that, as you say, there is no a priori reason that spectra should match. Neither is there an a priori reason why the derivatives should take a particular form. It rests on generalising an observation from one part of the data set to the whole. Clearly, that’s not to say that it is wrong, only that it is inconclusive.

        There are a couple of problems I could see with your proposed alternative adjustment.

        The first is that it is ad hoc. You picked a feature that seemed ‘unusual’ and you minimised it. This forces the data to conform to a particular idea of what the data should look like. There’s no explanation for why such an adjustment is necessary, nor what it is adjusting for, other than that it makes the data look less ‘unusual’.

        Second, Kennedy et al. and earlier papers show that there are biases in the data throughout the record. There is no explanation in your analysis as to why those biases do not affect the global temperature record at any time other than WWII. Your own statement thus applies to your preferred adjustments:

        Calculation of the biases involves … ignoring detailed studies on the proportion and timing of changes in data sampling methods, as well as speculation as to the magnitude of the various effects.

        A third, more minor point is that you are adjusting a global number, whereas global average SST is the aggregate of local measurements. The HadSST3 analysis works from the bottom up, so that there are consistent adjustments at all scales, whereas your method, being top down, does not specify what the adjustments would be at smaller scales, limiting their usefulness and also the possibility of validating them against independent data.

        Best regards,

        John

      • Greg Goodman

        John says: “It rests on generalising an observation from one part of the data set to the whole. Clearly, that’s not to say that it is wrong, only that it is inconclusive.”

        No, I would say it’s just the opposite. It is generalising FROM the whole record to infer the likely adjustment to a small 4 year period in 160 years of data.

        You are perhaps misunderstanding the purpose of this adjustment. As you note, it is a global adjustment, not something that could be applied instead of HadSST3 at local temporal and spatial resolutions.

        The point of the exercise was to remove the disruption caused by this glitch in order to examine the overall frequency content of the data. Having done so, far from being chaotic, there is notable order in the data that can be identified.

        What was even more surprising to me was the similarity of the spectral analysis comparing the later 80 years to the whole record.
        This shows a homogeneity in the data which I did not expect.

        While there is no reason to expect that true climate *should* be homogeneous on that time scale, it is a notable result when it is found. Such order is unlikely to be the result of sampling error.
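        A homogeneity check of this kind can be sketched by comparing the periodogram peak of a whole record with that of its later portion. The data here are entirely synthetic; the ~9-year oscillation is an illustrative stand-in, not a claimed climate period:

```python
import numpy as np

def dominant_period(x, fs=12.0):
    """Period (in years) of the strongest spectral peak;
    fs = 12 samples per year for monthly data."""
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return 1.0 / freq[1:][np.argmax(power[1:])]  # skip the zero frequency

# Synthetic 160-year monthly series: a ~9-year oscillation plus noise
rng = np.random.default_rng(1)
months = np.arange(160 * 12)
x = 0.2 * np.sin(2 * np.pi * months / (9 * 12)) + 0.05 * rng.normal(size=months.size)

whole = dominant_period(x)            # peak from the full record
late = dominant_period(x[-80 * 12:])  # peak from the last 80 years only
```

        Agreement between the two peaks is the kind of spectral homogeneity being referred to; disagreement after an adjustment is the kind of disruption being alleged.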

        Before doing this study, I was inclined to think that the global SST record was too full of uncertainty, error and bias, and that any hope that errors would average out well enough to leave anything meaningful was just wishful thinking.

        I was surprised to find this sort of order in the data. I do not believe that that sort of order is likely to be created as the cumulative result of a multitude of biases. This leads me to the conclusion that the bias adjustments that are destroying that homogeneity are not correcting the data but are further disrupting it.

        While the various studies have identified possible sources of bias, the lack of documentary evidence for the changes and the degree of speculation concerning the geographical extent, duration and timing of the changes make the adjustments no less ad hoc than the simple adjustment.

        I find that acceptable for correcting a 4-year glitch; I do not think it is a valid basis for reorganising 2/3 of the data record.

      • Greg Goodman

        PS, I should add that I don’t think ICOADS is without problems or that HadSST3 is all bad. I noted elsewhere that it does seem to handle the WWII period better than the simple adjustment does.

        However, overall examination of the effects on the global mean time series suggests that, in its present form, it is disrupting the data more than correcting it.

        Best regards, Greg.

      • Hi Greg,

        I think we might have to agree to disagree about the implications of your analysis.

        The metadata are far more complete than your description of them allows. They show changes in the observing system with time scales measured from years to decades. I think that the ‘structure’ you see in the raw data is contaminated by artificial biases in the data, not just during the 4 year ‘glitch’, although that is their most obvious manifestation, but in the whole of the record.

        The raw data can be thought of as a true climate signal (T) plus a bias term (B), i.e. Raw data = T+B (for the sake of temporary simplification I’ll discount spatial sampling and other measurement errors although obviously they are important). What you see in the raw data is variations arising from T and from B. You say that the similarity between T+B and B “seems to be pushing the bounds of coincidental similarity”, but I think that by definition there ought to be a similarity. The proper comparison would be between T and B.

        Unfortunately, we don’t have T, and we have only estimates of B. The estimated B’s from HadSST3 (remember there are more than one) don’t look like the estimated T’s.

        I can understand your uneasiness with the adjustments to the data: the adjustments are large and there are uncertainties. That is why I have been trying to persuade other SST data set developers to address the problem and why I’m keen to see how the Berkeley team approach the problem.

        Best regards,

        John

      • Greg Goodman

        “You say that the similarity between T+B and B ‘seems to be pushing the bounds of coincidental similarity’, but I think that by definition there ought to be a similarity. ”

        By definition? Why do you think that?

        The point is, I am not working with T and B, both of which remain unknown. What I have is T(had) and T(icoads); B(had) being the difference.

        Please see my reply to your other comment where I explain why the similarity, far from being expected, is problematic.

        “I think that the ‘structure’ you see in the raw data is contaminated by artificial biases in the data, not just during the 4 year ‘glitch’, although that is their most obvious manifestation, but in the whole of the record.”

        There will be bias in all parts of the record to one degree or another. The point I have made several times is that it seems unlikely that bias and error would insert order and structure into the data that was not there to start with. You seem to have missed the point of that argument.

        I would be happier if you could indicate you have at least understood the points raised before agreeing to differ on them.

        best regards, Greg.

      • Hi Greg,

        I would be happier if you could indicate you have at least understood the points raised before agreeing to differ on them.

        I could say that I’ve understood them, but that wouldn’t be definitive proof that I have. All I can say is that I think I’ve understood your points. If it’s clear that I haven’t, I apologise and hope that I haven’t drained your reserves of patience to their final drop.

        “You say that the similarity between T+B and B ‘seems to be pushing the bounds of coincidental similarity’, but I think that by definition there ought to be a similarity.” By definition? Why do you think that?

        Say you have two arbitrary time series A and B which are uncorrelated, then even so there will be a correlation between A and A+B or B and A+B.

        That’s all.
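        This construction effect is easy to verify numerically. A minimal sketch with two independent synthetic series (for equal-variance independent series, the correlation of either one with the sum is about 1/sqrt(2)):

```python
import numpy as np

# Two independent, equal-variance series: essentially uncorrelated with
# each other, yet each correlates ~0.71 with their sum by construction.
rng = np.random.default_rng(42)
a = rng.normal(size=100_000)
b = rng.normal(size=100_000)

r_ab = np.corrcoef(a, b)[0, 1]         # near 0
r_a_sum = np.corrcoef(a, a + b)[0, 1]  # near 1/sqrt(2) ~ 0.707
```

        So a correlation between B and T+B, on its own, cannot distinguish a correct bias estimate from an incorrect one.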

        The point is, I am not working with T and B , both of which remain unknown. What I have is T(had) and T(icoads) ; B(had) being the difference.

        I may have worded that badly. T is what actually happened. It is the global average SST time series we would have if history had granted us perfect observations at all points and all times, the one True time series.

        Sadly, the observations we do have have biases in them. B is the time series of biases in the global average.

        T and B are what actually happened, but are essentially unknowable. All we know is T+B, which is what was measured.

        ICOADS is T+B, i.e. the true evolution of the SST plus the influence of any biases. Let’s denote observed values by O, so:

        O(icoads) = T + B

        In HadSST3, we estimated the biases B(had) and subtracted them from ICOADS so we have

        O(had) = T + B – B(had)

        If B(had) is exactly equal to B then we recover T. I’m not saying that’s the case, but if we had got it right then B(had) would be correlated with T+B. It would be correlated with the raw ICOADS.

        The biases would look like the raw data because the raw data have those biases in them.

        In your argument, the biases look like the raw data even though the biases are incorrect.

        Logically, if two hypotheses lead to the same observation, that observation cannot be used to decide which is correct.

        There will be bias in all parts of the record to one degree or another. The point I have made several times is that it seems unlikely that bias and error would insert order and structure into the data that was not there to start with. You seem to have missed the point of that argument.

        I didn’t miss the argument. I grant you it’s an interesting observation. What I’m saying is that there is likely to be structure in the bias (if not the other errors) and that you need to discount that possibility before you can draw the conclusions that you have drawn.

        Best regards,

        John

      • Greg Goodman

        “I didn’t miss the argument. I grant you it’s an interesting observation. ”
        Yes, it’s unlikely that there would be a (true) bias that is creating a false appearance of homogeneity in the sampled data. So if removing a supposed bias breaks the homogeneity of the data it is not formally wrong but has to be regarded as highly likely to be flawed.

        “What I’m saying is that there is likely to be structure in the bias (if not the other errors) and that you need to discount that possibility before you can draw the conclusions that you have drawn.”
        I have nothing against structure in a supposed bias. It could have a positive effect on the data if it is close to the true bias. I don’t see why I have to discount structure in the bias. What I do is analyse the effect of removing the supposed bias using a variety of techniques that look at different properties of the data. That is what I presented here.

        I don’t expect you to jump to any conclusions but I’m glad you think it is an interesting approach. I would suggest that the apparent lack of a thorough evaluation of the effects of the supposed bias corrections on the data is a serious omission in the volume of published work in recent decades.

        “… but if we had got it right then B(had) would be correlated with T+B. It would be correlated with the raw ICOADS.”

        Especially if B(had) > T! It’s a good point; it would have been worth plotting the HadSST3 adjustment compared to the resulting HadSST3 time series. So let’s have a look:

        http://i39.tinypic.com/a2gjv8.png

        This is indeed revealing.
        There is clearly a strong similarity before 1910. Here the supposed bias correction is very similar to the residual “climate”. This is precisely the scenario that I described earlier: http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-187027 . That there should be a true climate signal hiding behind a bias of the same form, duration and amplitude over a period of 70 years seems improbable. It seems that the largely speculative and ad hoc nature of determining the bias has resulted in it neatly removing half the variation in the original data. That is problematic.

        There is no obvious similarity from 1910 to 1940, so this observation does not apply there. It argues neither for nor against that part of the adjustment. However, referring back to the article, in this period you are removing fully 2/3 of the variation from the pre-war warming period. Again the supposed bias adjustment has a variation that closely follows that in the data, and uses methods that contain a high degree of speculation and hypothesis. Again, the “correction” results in removing a very significant part of the variation in the original data. Here the presumed bias swamps the remaining signal. In view of the claimed accuracy, that is problematic.

        There are some localised correlations in the post-war adjustment, but I regard that as a subsidiary issue in this context. So the correlation observation is also neutral for this period. Nonetheless, substantial changes are being made to the record, and in this case we have the issue of rewriting the metadata which are “presumed” to be wrong and using random numbers to arbitrarily assign others.

        "To reflect the uncertainty arising from this, 30 ± 10% of bucket observations were reassigned as ERI observations." [Kennedy 2011c]

        Declaring the metadata unreliable implies an added *uncertainty* of the size of the ERI-bucket adjustment in the data. An uncertainty that is not removed by arbitrary reassignment. I do not see this reflected in your uncertainty estimates.

        You made the reasonable comment that the simple war-time adjustment was ad hoc. Insofar as it is based on the assumption that the background variations in the ’42-46 period would be similar to those in the rest of the climate record, that is true, although that would not seem to be a particularly contentious assumption. As I explained, the magnitude was determined to remove the disruption caused by the glitch and allow study of the rest of the record. Indeed, it’s quite close to the Hadley adjustment for those years (HadSST3 is probably better in the detail). Modifying 4 of 160 years on the basis of the remaining 156 years does not render the rest of the analysis ad hoc.

        Figure 3 from the presentation linked by ArndB shows that WWII north-east Pacific SST does not show a war-time glitch. This would seem to cast doubt on the ERI assumption.
        http://www.oceanclimate.de/English/Pacific_SST_1997.pdf

        My fundamental criticism, that you are removing the majority of the variation from the majority of the record using adjustments based largely on speculation and hypothesis, seems to go unchallenged.

        On the basis of my analysis and our subsequent discussion I don’t think I can conclude that HadSST3 is a more accurate record than ICOADS, with the exception of the WWII period.

        Best regards, Greg Goodman.

      • Greg

        Once again thank you for your excellent paper, which I think that John - to his great credit - has tried to answer fairly. I think he has shown himself to be a fine example of a scientist prepared to debate his work in an open and transparent manner.

        I hope he will take up my challenge to issue a new chart according to the criteria I set out on 20th March at 8.41am. It follows much the same general guidelines as used by the land surface record - which in itself isn’t a shining example of a scientific database.

        As you can see from my contributions and original article, I have grave concerns about the provenance of the raw material, believing that most of it was collected in so flawed a manner that using it as the basis for highly detailed analysis 150 years after the sailor first threw the bucket overboard to gather a sample gives it a scientific credibility it does not warrant. We see this misplaced scientific certainty about certain proxies in too many walks of life - for example tree rings.

        John has made some relevant points, as have you, and I wondered if you were intending to revise and reissue your paper to take into account the comments made, thereby enabling us to have an up-to-date reference point.
        tonyb

      • Hi Greg,

        Yes, it’s unlikely that there would be a (true) bias that is creating a false appearance of homogeneity in the sampled data.

        You cannot say this when you have no idea what form the biases take. You assert that the biases have no structure. Everything we know about the biases suggests that they do.

        So if removing a supposed bias breaks the homogeneity of the data it is not formally wrong but has to be regarded as highly likely to be flawed.

        As you admit, there is no a priori reason to suppose that your analysis shows homogeneity either before or after any corrections are made to the data, therefore statements of probability such as ‘likely’ are unfounded.

        I don’t see why I have to discount structure in the bias.

        You are saying there is structure in ICOADS and less structure (by some ad hoc measure) in HadSST3 therefore the HadSST3 bias estimates are wrong. I’m saying that, even if you do not think that the biases are correct, the fact remains that there is long-term structure in the form of the metadata. There are well documented biases between different measurement types. You need to show that these have no effect on the structure of the data before you can assert that the structure in ICOADS is ‘homogeneous’.

        “… but if we had got it right then B(had) would be correlated with T+B. It would be correlated with the raw ICOADS.”

        It would have been worth plotting the HadSST3 adjustment compared to the resulting HadSST3 time series. So let’s have a look: this is indeed revealing. There is clearly a strong similarity before 1910. Here the supposed bias correction is very similar to the residual "climate". This is precisely the scenario that I described earlier. That there should be a true climate signal hiding behind a bias of the same form, duration and amplitude over a period of 70 years seems improbable. It seems that the largely speculative and ad hoc nature of determining the bias has resulted in it neatly removing half the variation in the original data. That is problematic.

        If you plot the whole series, the picture looks quite different. It’s already very different in the one half you did plot. As you note “There is no obvious similarity from 1910 -1940, so this observation does not apply there“. After the early 1940s, the evolution of SST and biases are unrelated.

        As I’ve said before, your assertions of improbability are just assertions. You can’t argue from your own personal disbelief. Two different methods (Folland and Parker, and Smith and Reynolds) give very similar estimates of the biases over this period. The Folland and Parker bias estimates have been shown to compare well to coastal land temperature stations and used to drive atmosphere only GCMs that have reproduced the land temperature variations over large areas of the world (see Folland et al. 2005 for more details, copy here: http://www.metoffice.gov.uk/hadobs/hadsst3/references.html).

        They might have been based on a variety of assumptions, but the actual adjustments have been shown to be sound by a variety of analyses.

        There are some localised correlations in the post-war adjustment but I regard that as a subsidiary issue in this context. So the correlation observation is also neutral for this period. Nonetheless, substantial changes are being made to the record and in this case we have the issue of rewriting the metadata which are “presumed” to be wrong and using random numbers to arbitrarily reassign others.

        The metadata are not rewritten. We test the sensitivity of the analysis to the very real possibility that the metadata are not perfect. This is included in the uncertainty analysis.

        Declaring the metadata unreliable implies an added *uncertainty* of the size of the ERI-bucket adjustment in the data, an uncertainty that is not removed by arbitrary reassignment. I do not see this reflected in your uncertainty estimates.

        The size of the uncertainty is a fraction of the ERI-bucket difference (if 10% of the metadata are wrong then we should see an effect that’s about 10% of the difference), not the whole amount, and it is incorporated in the uncertainty analysis. If no adjustments were applied, then the uncertainty would be as large as you claim. Smith et al (I think) did this in their SST uncertainty analysis. Their uncertainty estimates are actually comparable to those in HadSST3, but their mean value is biased because they make no adjustments.
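        As a toy illustration of the scaling described here (the numbers below are assumptions for illustration only, not values from the HadSST3 papers):

```python
# Toy sketch: if a fraction f of reports labelled "bucket" are really ERI,
# the residual bias after adjustment scales with f, not with the full
# ERI-bucket difference. Both numbers below are illustrative assumptions.
eri_bucket_diff = 0.3    # assumed ERI-bucket offset in degC (hypothetical)
f_mislabelled = 0.10     # assumed fraction of metadata that is wrong

residual_bias = f_mislabelled * eri_bucket_diff
print(f"residual bias ~ {residual_bias:.2f} degC, "
      f"full difference {eri_bucket_diff:.2f} degC")
```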

        Modifying 4/160 years on the basis of the remaining 156 years does not render the rest of the analysis ad hoc

        Agree. Your adjustment is ad hoc but this has no effect on the rest of your analysis. The rest of your analysis is ad hoc for other reasons as you freely admit.

        Figure 3 from the presentation linked by ArndB shows that WWII north east Pacific SST does not show a war-time glitch. This would seem to cast doubt on the ERI assumption.

        Figure 3 from the ArndB analysis says it is according to Wright 1986; it shows only a fraction of the whole record and, as far as I can tell, it is based on a much less complete database of observations. It certainly is not ICOADS 2.5.

        Given that you do not know what adjustments were applied in the northeast Pacific in HadSST3, nor what differences there are in the raw data between the Wright 1986 analysis and ICOADS 2.5, this cannot possibly be an argument against the effectiveness of our bias adjustments. In fact, the adjustments are relatively small in the north east Pacific, particularly in comparison to the large variability (the scale in the ArndB paper runs from -2 to +2 degC); the adjustments are around 0.2 to 0.4 degC (depending on season) up to 1940 and typically less than 0.1 degC after that.

        My fundamental criticism that you are removing the majority of the variation from the majority of the record using adjustments largely speculation and hypothesis seems to go unchallenged.

        Your fundamental criticism has been challenged on every single point.

        Best regards,

        John

      • Hi Tonyb,

        I hope he will take up my challenge to issue a new chart according to the criteria I set out on 20th March at 8.41am. It follows much the same general guidelines as used by the land surface record, which in itself isn't a shining example of a scientific database.

        Your complete list of criteria is impossible to fulfil. How does one at this remove assess the competence of a sailor from the late nineteenth century?

        We have to work within the constraints of the data that we do have – and work to understand its limitations – rather than wishing for better data that we don’t have.

        Best regards,

        John

      • And paintings created during medieval times are so much better evidence.

      • Webhub telescope.

        I suggest that your knowledge of, and appreciation of, historical climatology is somewhat lacking. It is a perfectly well respected branch of climate science and there are vast databases of documentary evidence of past weather/climate held in a variety of locations. I am not inventing the field. Did you think I was?

        The Met Office library themselves have books giving examples of historical events which include the works of Bruegel. I will be delighted if you therefore want to label them ‘crackpots’.

        I do not claim for a second that a single painting provides evidence of temperatures that can be relied on to fractions of a degree, merely that it well describes the changes that were taking place at the time, and such pieces have received a great deal of academic investigation, some of which has been quoted to you. I don’t begin to understand your point and suggest you look at some of the archives of historical climatology instead of dismissing it out of hand.

        Undoubtedly it is not as fashionable a branch of climate science these days as modelling or using inappropriate proxies such as tree rings, but there are many top names who have recently been involved in the field, such as Jones and Camuffo.

        In order to try to satisfy your totally illogical blind spot on this subject, can I suggest you read ‘Climate, History and the Modern World’ by Hubert Lamb or ‘History and Climate’ by Phil Jones/A Ogilvie/T Davies and Keith Briffa?

        Why are you so scathing about using documented records of the ever-changing climate of the past? Does it not fit in with your belief, which appears to be that the hockey stick accurately represents the pinnacle of climate science?
        tonyb

      • John

        It is always a pleasure to debate with you and your involvement here does you great credit, as I said to Greg in my message to him at 5.43am.

        He is interested in the analysis whereas I am interested in the provenance of the basic raw data and how it was obtained. My point is that the historical record is being used to create a degree of accuracy which is impossible to reconstruct, as there are so many variables and issues, which we have discussed at length privately. The listing I gave was long, but not unreasonable, bearing in mind the SST record is used as such a key metric of climate change. As such it assumes an importance that warrants very detailed analysis, and scepticism that the original basic data was of sufficient merit that it can be adjusted to come up with a meaningful and highly accurate ‘scientific’ figure.

        OK, so what criteria do you think it would be reasonable to apply, bearing in mind that consistency of depth and an immediate reading of the sample with reliable equipment are basic requirements, as is the need to have a meaningful number of readings within a grid square, i.e. at least one daily reading.
        all the best
        Tonyb

      • Hi Tonyb,

        It’s a pleasure to discuss these things with you too.

        My point is that the historical record is being used to create a degree of accuracy which is impossible to reconstruct, as there are so many variables and issues

        OK, so what criteria do you think it would be reasonable to apply, bearing in mind that consistency of depth and an immediate reading of the sample with reliable equipment are basic requirements, as is the need to have a meaningful number of readings within a grid square, i.e. at least one daily reading.

        When you say that the “historical record is being used to create a degree of accuracy which is impossible to reconstruct” your statement isn’t based on any kind of numerical analysis. It isn’t quantified so it cannot be verified.

        There are variables and issues; that much is obvious to anyone who has looked at the raw data in ICOADS (freely available from http://icoads.noaa.gov/). At every step in our analysis of SST – or any serious analysis of SST – choices are made which minimise or quantify the effects of those ‘variables and issues’.

        I’m not sure there is much to be gained by me rehashing the arguments concerning uncertainties here again. They can be found here:
        http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html

        The criteria we apply to SST observations can be found in the papers describing the construction of the data sets. See:
        http://www.metoffice.gov.uk/hadobs/hadsst2/
        http://www.metoffice.gov.uk/hadobs/hadsst3/

        Best regards,
        John

      • Greg Goodman

        Hi John, you are starting to attribute things to me that I have not said. It would be helpful to the continuation of the reasonable discussion we have been able to have so far (which I sincerely appreciate) if you would limit attributions to what I have actually said. There is some loose paraphrasing that is misrepresenting my position.

        “You assert that the biases have no structure. Everything we know about the biases suggests that they do.”

        That is twice that you have made this claim, despite my last reply restating that I have no objection to the idea that there is structure in that data. I think you are still failing to grasp my basic point, but please don’t misrepresent me.

        “As you admit, there is no a priori reason to suppose that your analysis shows homogeneity either before or after any corrections are made to the data, therefore statements of probability are such as ‘likely’ are unfounded.”

        No. What I actually said was that there was no a priori reason that the raw data should be homogeneous in its frequency content. So please do not rewrite what I said and then attribute it to me with “as you admit”.

        If you wish to question the validity of the periodogram as a metric that is a separate issue. So far you have not done so.
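        For reference, the periodogram used as a metric here is essentially the squared magnitude of the discrete Fourier transform of the series. A minimal plain-Python sketch (not the exact processing used in the article, which also involves filtering and scaling of the data):

```python
import cmath
import math

def periodogram(x):
    """Squared-magnitude DFT of a series: a minimal sketch of the kind of
    frequency-content metric discussed here (real analyses typically
    detrend and window the data first)."""
    n = len(x)
    mean = sum(x) / n
    xd = [v - mean for v in x]            # remove the zero-frequency term
    power = []
    for k in range(1, n // 2 + 1):        # harmonics 1 .. n/2
        s = sum(xd[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        power.append(abs(s) ** 2 / n)
    return power

# A pure 12-sample cycle sampled 48 times puts its power at k = 4,
# i.e. at period n/k = 12 samples.
x = [math.sin(2 * math.pi * t / 12) for t in range(48)]
p = periodogram(x)
```

The point of such a metric is that it is computed from the data as given; it does not depend on any prior model of the biases.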

        “You need to show that these have no effect on the structure of the data before you can assert that the structure in ICOADS is ‘homogeneous’.”

        It is not necessary to analyse the biases, real or hypothesised, before examining the data. The spectrum, or any other metric of the data, is what it is, irrespective of any errors or bias we may wish to examine later.

        “If you plot the whole series, the picture looks quite different. It’s already very different in the one half you did plot. ”

        I zoomed in on 100y/150y; this is not half but the two thirds of the record which I had highlighted as having more than half of its variation removed by Hadley processing. The text of my comment explicitly covered the remaining third and said that it bore little large-scale similarity to the variations in ICOADS. So please don’t pretend that I was somehow misrepresenting things by showing the section referred to.

        You avoid commenting on the striking similarity between the adjustment and the remaining variation, which you presumably conclude to be a climate signal. That was the initial point of discussion, which you seem to have lost track of.

        “The metadata are not rewritten. We test the sensitivity of the analysis to the very real possibility that the metadata are not perfect. This is included in the uncertainty analysis.”

        You clearly state in your paper that you “reassign” data positively marked as bucket measurements to be “in fact” ERI measurements. Dress this up however you like in different language: you are rewriting sections of the record here. The number and location of records that you amend is based on conformity to some supposed norm of the average proportion of records that “should be” ERI according to your simplified hypothesis of the changeover. That is about as ad hoc as you can get. (This is not the part of the adjustment I find the most worrying, but the criticism of method still stands.)

        “Agree. Your adjustment is ad hoc but this has no effect on the rest of your analysis. The rest of your analysis is ad hoc for other reasons as you freely admit.”

        Again, I think you are putting words in my mouth here. I do not recall having “admitted” that the rest of my analysis is ad hoc. Perhaps you could explain where you see that claimed “admission”.

        “The Folland and Parker bias estimates have been shown to compare well to coastal land temperature stations and used to drive atmosphere only GCMs that have reproduced the land temperature variations over large areas of the world”.

        I already pointed out, when you brought this up last time, that using climate models developed for their ability to reproduce the *adjusted* climate record as a means of “verifying” the adjustments was circular logic. The positive result is programmed into the method. I don’t recall your having replied to that.

        >>
        “My fundamental criticism that you are removing the majority of the variation from the majority of the record using adjustments largely speculation and hypothesis seems to go unchallenged.”

        Your fundamental criticism has been challenged on every single point.
        >>

        Sorry, there is a disconnection here. You seem to be replying to something else somewhere else. I’ll try again.

        Hadley SST3 adjustment is removing the majority (more than half and up to two thirds) of the variation from the majority (two thirds) of the record ….. [you have not contested this observation so far] ….. using largely speculation and hypothesis. Neither have you disagreed with my observation that, despite the detailed study of buckets of water, the derivation of the duration, timing and magnitude of the adjustments is not soundly based on rigorous scientific findings but on speculation and hypothesis. This is not new; it is clearly documented in the relevant papers.

        So are you claiming to have already challenged my evaluation of how much of the record is affected? If so please point me to what I missed.

        Are you saying you have challenged that there is a large degree of speculation and hypothesis involved in arriving at the bias adjustments?

        If you can answer these two points we may be able to see where we are diverging.

        I thank you for giving these issues your serious consideration. The rather disrupted formatting of this thread is not helpful but I think we are managing to keep track of each other.

        Best regards, Greg Goodman.

      • Hi Greg,

        Apologies if you feel I misrepresented what you said.

        I noticed, in rereading the article, that you have corrected part of the text. Thanks for doing that. There remains some text that is incorrect, however. I think that the portion from “Kennedy et al 2011c [3c] goes into some detail about how the duration of the change was determined.” to “where the cooling adjustment clearly starts as early as 1920 and has already achieved 2/3 of its final extent before 1954.” inaccurately reflects the paper. As I pointed out before, the switchover referred to is from one type of bucket to another and not from bucket to ERI.

        In re-reading your analysis I remembered something that I mentioned before. The ICOADS time series you have has been processed differently to the HadSST3 time series. I think the native ICOADS grid is 2 degrees whereas the HadSST3 grid is 5 degrees. Neither is necessarily better than the other, but when computing global averages from the two, you will get different answers from the same data because 2×2 and 5×5 grids give different weights to the data points. You will tend to see more variability due to geographic sampling in the 2×2 grid. In addition there will be differences between the climatologies used and between the exact quality control applied.
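        The effect described here, that the same reports can give different global averages when binned at 2 degrees versus 5 degrees, can be sketched as follows (hypothetical point data; the real ICOADS/HadSST3 processing is of course far more involved):

```python
import math

def global_mean(obs, cell_deg):
    """Area-weighted global mean of point anomalies binned onto a
    cell_deg x cell_deg grid, with cells weighted by cos(latitude).
    obs is a list of (lat, lon, anomaly) tuples. Illustrative sketch only."""
    cells = {}
    for lat, lon, a in obs:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells.setdefault(key, []).append(a)
    num = den = 0.0
    for (ilat, _), vals in cells.items():
        lat_mid = (ilat + 0.5) * cell_deg      # cell-centre latitude
        w = math.cos(math.radians(lat_mid))
        num += w * sum(vals) / len(vals)       # each cell counts once
        den += w
    return num / den

# Patchy, clustered sampling (hypothetical values): the same reports give
# different global means depending on bin size, because each grid box
# counts once regardless of how many reports fall inside it.
obs = [(50.5, 0.5, 1.0), (51.5, 1.5, 1.2), (52.5, 2.5, 1.4), (10.5, 30.5, -0.2)]
print(global_mean(obs, 2), global_mean(obs, 5))
```

With the 2-degree binning the three clustered reports occupy two boxes; with 5-degree binning they collapse into one, so the isolated tropical report carries relatively more weight and the global mean changes.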

        When I try and replicate the diagrams in your analysis using ICOADS and HadSST3 processed in the same way, I find that Figure 4 and Figure 5 equivalents look very different from those that you have.

        I’m not sure if inline images work here, but a summary diagram is here: http://www.metoffice.gov.uk/hadobs/hadsst3/figures/fdiff.png

        I haven’t applied all the different filters or scalings that you applied, but presumably it is close to the starting point of your analysis and it should give you an idea of the size of the effect that the different processing chains might have on your comparison.

        If you email me, I can send you the global time series I calculated from ICOADS 2.5 before the HadSST3 adjustments have been applied.

        John

      • Greg Goodman

        Hi John,

        I think if you apply the techniques I described in the text you will get similar results; what you have done is quite different. I will give you more explicit help if you need it.

        However, before delving into other issues, I would appreciate it if you would reply to the two simple questions I asked in my last post that you have side-stepped.

        You made what I believe is an inaccurate claim to have challenged my basic criticism of the adjustments. I reiterate those two questions here so that you can clarify your position on this.

        Hadley SST3 adjustment is removing the majority (more than half and up to two thirds) of the variation from the majority (two thirds) of the record ….. [you have not contested this observation so far] ….. using largely speculation and hypothesis. Neither have you disagreed with my observation that, despite the detailed study of buckets of water, the derivation of the duration, timing and magnitude of the adjustments is not soundly based on rigorous scientific findings but on speculation and hypothesis. This is not new; it is clearly documented in the relevant papers.

        Are you claiming to have already challenged my evaluation of how much of the record is affected? If so please point me to what I missed.

        Are you saying you have already challenged that there is a large degree of speculation and hypothesis involved in arriving at the bias adjustments?

      • Hi Greg,

        I think if you apply the techniques I described in the text you will get similar results; what you have done is quite different. I will give you more explicit help if you need it.

        I calculated the most basic starting diagrams from your analysis and they differed. The first panel of my plot contained the information from your Figures 4 and 5. Compare the green line in the top panel of my plot to your Figure 4. There’s a lot more variation in your Figure 4 than the variation due to biases. I have explained why – the ICOADS time series has been processed in a different way – and I offered to send you the series processed identically.

        These are my attempts to replicate your initial analysis with comparison to the correctly processed ICOADS2.5:
        http://www.metoffice.gov.uk/hadobs/hadsst3/figures/fdiff2.png

        And your derivative analysis:
        http://www.metoffice.gov.uk/hadobs/hadsst3/figures/diagno.png

        Red lines are ICOADS2.5 processed identically to HadSST3. Blue lines are the version of ICOADS available from the JISAO website you linked to. Black lines are HadSST3.

        Look at the dT/dt and second derivative curves (diagno.png). From the 1850s to the 1930s, the ICOADS2.5 data and HadSST3 agree better than do the JISAO curve and ICOADS2.5. From 1930 to 1955 the bias adjustments are large so HadSST3 differs from both series. After 1955, the differences are generally smaller. A part – a large part by the looks of things – of the differences you were seeing arose from basic differences in data preparation. I think you need to redo your analysis to see what difference that makes.
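        The derivative curves referred to here can be approximated with simple centred differences; a minimal sketch with toy numbers (not the actual series, and without the filtering a real analysis of noisy monthly data would need first):

```python
def derivative(series, dt=1.0):
    """Centred first difference of a regularly sampled series: a sketch of
    the dT/dt-style diagnostic discussed here. Noisy data would normally
    be low-pass filtered before differentiating."""
    return [(series[i + 1] - series[i - 1]) / (2 * dt)
            for i in range(1, len(series) - 1)]

t = [0.0, 0.1, 0.4, 0.9, 1.6]    # toy anomaly series
d1 = derivative(t)               # first derivative, dT/dt
d2 = derivative(d1)              # second derivative
```

Each application of the centred difference shortens the series by two points, which is why the derivative diagnostics cover slightly less of the record than the raw series.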

        Are you claiming to have already challenged my evaluation of how much of the record is affected? If so please point me to what I missed.

        Are you saying you have already challenged that there is a large degree of speculation and hypothesis involved in arriving at the bias adjustments?

        The first statement that the adjustments remove the majority of the variation from the majority of the record is not one I contest although I disagree with you about what that means. The variation in the ICOADS series you use arises from a mix of changing biases, random errors, geographical sampling problems and the climate signal. The HadSST3 adjustments minimise the effects of the biases and attempt to quantify the uncertainties associated with the estimation of the biases and the other two confounding effects, in order to better understand the climate signal. The biases are large, as can be readily seen by splitting the data into bucket and engine-room samples. That is a reason for caution, but I’m not sure why you think it is a criticism. (Note, I also dealt with the subsidiary criticism that the form of the biases and the form of the raw time series are similar in my comment on March 21, 2012 at 10:59 am.)

        Your characterisation of the assumptions made in the analysis as “speculation and hypothesis” is your choice of words. I would say that hypothesis is a fair description. The analysis is based on hypotheses that come from examining the literature, the data and the metadata. As with any scientific hypotheses they ought to be open to criticism, but not, I should think, the criticism that they are hypotheses. If you think a particular hypothesis is unfounded you need to work through what the consequences of that would be. For example, in the HadSST3 paper we described the effects of not using the NMAT to constrain the SST adjustments. Such changes in assumptions have observable consequences.

        The Folland and Parker part of the adjustments has been verified using comparisons to land temperatures and, in the Folland paper, by using them to drive the atmosphere model. You have described this more than once as circular reasoning, but it isn’t. Your earliest explanation was that the land temperatures were used to bias adjust the SST data (Greg Goodman: March 20, 2012 at 4:06 am), but that was based on a misunderstanding concerning early adjustments by Jones that are no longer used. Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect. Firstly, it does not apply to the direct comparison with land temperature stations. Secondly, the Folland paper uses an atmosphere only model (rather than a fully coupled model) with a boundary forcing set by the SSTs before and after correction.

        The background to the hypotheses and the initial checks of the HadSST3 data set are given in the HadSST3 paper as is the uncertainty analysis associated with difficulties in estimating the biases.

        Best regards,

        John

      • Greg Goodman

        Good day John,

        “The first statement that the adjustments remove the majority of the variation from the majority of the record is not one I contest although I disagree with you about what that means. ”

        OK, so we are agreed on my first point about the extent of the changes.

        “Your characterisation of the assumptions made in the analysis as “speculation and hypothesis” is your choice of words. I would say that hypothesis is a fair description. ”

        We are also agreed that it is hypothesis and you chose not to agree on “speculation”.

        So your earlier claim that you *had* “challenged” my basic criticism was either totally inaccurate or referring to something other than the text it was supposedly replying to. It now seems we are in general agreement on the basics, which is a good start.

        “As with any scientific hypotheses they ought to be open to criticism, but not, I should think, the criticism that they are hypotheses.”

        That never was my criticism. My objection was the removal of the majority of the variation on the basis of such hypothesis and speculation.

        “Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect. Firstly, it does not apply to the direct comparison with land temperature stations. Secondly, the Folland paper uses an atmosphere only model (rather than a fully coupled model) with a boundary forcing set by the SSTs before and after correction.”

        This is interesting, I was surprised that you had not commented on the circular reasoning problem before.

        You are saying that the fully coupled models are tuned to reproduce the adjusted climate history but not the atmosphere only model. This would appear to be a false distinction.

        Presumably (please correct me if this is incorrect), the physical model of the atmosphere is essentially the same in both the A.O. and coupled models, but the GCMs also model the oceans while the A.O. models have SST imposed as boundary conditions.

        Since this “validation” seems to be given credence by yourself and others I think it is important to consider if it is well founded.

        Could you confirm two points please.
        1. The GCMs are tuned to reproduce the bias-corrected historical climate.
        2. The A.O. models inherit the same physical model of the atmosphere but are driven by historical SST instead of modelled data.

        I will come back to the reproduction of the graphs separately so as not to try to run two discussions in parallel.

        best regards, Greg.

      • Hi Greg,

        So your earlier claim that you *had* “challenged” my basic criticism was either totally inaccurate or referring to something other than the text it was supposedly replying to. It now seems we are in general agreement on the basics, which is a good start.

        That never was my criticism. My objection was the removal of the majority of the variation on the basis of such hypothesis and speculation.

        The biases are large and assumptions need to be made to adjust for them. I’m still not sure why you think this is a criticism. I think we ought to agree to disagree on this point.

        You have made some criticisms (such as those based on your derivative and periodogram analysis) that we can perhaps fruitfully discuss in detailed terms and I look forward to that.

        This is interesting, I was surprised that you had not commented on the circular reasoning problem before.

        See my comment on March 20, 2012 at 5:37 am concerning the disconnect between land temperatures and SST in the ‘circle’.

        I hadn’t addressed your comment about climate models supposedly being tuned to observations because I hadn’t noticed it in your argument till recently. I’m sorry if I missed it before. I’m not an expert on climate models, I’m afraid. You’ll have to get someone else to help you out on that point. My understanding is that there is limited tuning to improve the climatological average and short-term variability, but not to ‘fit’ GCM output to global average temperatures. The references for the particular model used in the Folland paper can be found in the paper:

        Copy here: http://www.metoffice.gov.uk/hadobs/hadsst3/references.html

      • Greg Goodman

        OK, you said my interpretation was incorrect so I assumed you were familiar with the subject. Apparently not.

        “Tuned” was your term, it may not be the best.

        I said that models are developed and evolve on their ability to reproduce the adjusted historic climate record. So, as a direct result, whatever atmospheric model is used will reproduce conditions that are in accord with that version of the climate.

        Seeding an A.O. model with “corrected” SST such as those used in model development will, hopefully, induce land temps reasonably close to measured land station records if the models are successful. Indeed this is one test of their skill.

        It is a direct corollary of this that if you constrain the model with SSTs that contain climate variations significantly different from the climate they are designed to replicate, they will produce different output.

        As my analysis shows there are fundamental differences. We are agreed that the changes are large over the first 2/3 of the record and considerably less over the remainder. Thus they represent a different climate.

        Now, without diverting into whether the bias adjustments are “right” or not, it is clear that projected land temps when running the same model constrained with uncorrected SST *will* produce land temps that are significantly different to those produced when using bias adjusted SST.

        If the model is successful and the adjusted SST run produces something close to the actual land station data, then the unadjusted run *will* produce “worse” results.

        If it did not you could probably throw out the model and start again.

        This test does not prove anything except that you have made significant structural changes to the climate data in applying the bias correction (irrespective of whether it is a good or bad one).

        The idea that this test in some way “verifies” the data is simply mistaken and unfounded.

        regards, Greg.

        PS I was cc’d on the email you got from Todd at JISCOA, you have my address if you’d like to send on your reprocessing of icoads v2.5

      • Hi Greg,

        Your description is based on your understanding of the situation. My understanding differs from yours, but as it is not my area of expertise I do not intend to comment further.

        Best regards,

        John

      • Greg Goodman

        “I’m not an expert on climate models, I’m afraid. You’ll have to get someone else to help you out on that point. ”

        “My understanding differs from yours, but as it is not my area of expertise I do not intend to comment further.”

        So your earlier categorical statement that I was incorrect was rather misplaced.

        “Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect.”

        If you don’t feel that you have enough knowledge on the subject, it would perhaps be better not to tell me I am wrong, based on your lack of knowledge.

        The other claimed validations of F&P are equally tenuous if one starts to dig into the literature:

        Hanawa et al 2000 studies 9 coastal SST records in Japan, four of which are largely coincident in area. They eliminate 4/9 as “unsuitable” for assessing F&P adjustments. Oddly, these are the four where F&P does not work so well. 3/9 have a mean residual anomaly over 30y that is close to the mean anomaly in F&P; in 2/9 the F&P adjustment leaves about half the residual mean. The 4/9 “unsuitable” ones are much less impressive but not reported. Neither do they report on how well F&P follows the coastal record, just the magnitude of the 30y mean anomaly. Three of the five retained records are geographically coincident. On examination, this agreement with F&P, selective as it is, is so localised it is merely anecdotal. It is not the “validation” it is claimed to be. Ironically, the Japanese measurements were also made by buckets; Japanese buckets are implicitly assumed not to have any bias.

        The Smith and Reynolds study was more rigorous and realistic in acknowledging its assumptions. They clearly state that they use the Bottomley deck height scheme D and adjusted NMAT before starting their study.

        “… our bias corrections will be influenced by the Folland and Parker (1995) corrections.”

        They also say that:
        “When we do not adjust NMAT as discussed in section 2, our computed 1854-1941 bias correction is about constant”.

        So S&R confirms that large scale adjustments made in hadSST3 have their origins in NMAT and deck height adjustments. They do not validate, or even examine the validity of, those adjustments. They simply confirm that if they start with
        the same assumptions they get similar results. Indeed they confirm the linkage between the bias adjustment and the
        underlying climate that I have drawn attention to here.

        These three non-validations are what supposedly validates the speculative assumptions of the Hadley adjustments and is claimed to show they are “sound”. It seems these claims do not bear closer examination.

        None of this proves the assumed biases are totally wrong, there likely will be some biases in the data. However, the adjustments do remove the majority of the variation from the earlier 2/3 of the record and remain highly speculative and lacking any proper validation.

        It seems that much of this work has avoided _critical_ examination for the last 3 decades.

      • Greg Goodman

        John, thanks for sending your reworked ICOADS v2.5; it will be useful to see how different steps in the Hadley processing affect the results.

        “These are my attempts to replicate your initial analysis with comparison to the correctly processed ICOADS2.5:
        http://www.metoffice.gov.uk/hadobs/hadsst3/figures/fdiff2.png”

        The links you posted are getting a 404 now, so I’ve uploaded a copy I saved earlier here:
        http://i39.tinypic.com/3532ltz.png
        http://i42.tinypic.com/1zwn81d.png

        I don’t understand why you claim that these are “correctly processed” and I should somehow redo my analysis.

        If part of your processing is to rework ICOADS into a different grid format, do your own QC and smoothing, fine, but you have to recognise this is part of your processing.

        It is possible that part of the differences I observed was due to remapping to 5×5 and the associated smoothing. I’ll see if I can find anything specific from what you sent.

        Best regards, Greg.

      • Hi Greg,

        I said it’s not my area of expertise. I did not say I am unfamiliar with the arguments for and against a process such as you described. I just have to draw a line somewhere, otherwise we could end up talking about anything under the climate sun and I’m not interested in doing that.

        For what it’s worth, I have talked to model developers at length about this and I’ve been involved in improving the way that observations are used to challenge models. The process of model development involves getting the climatological mean right and trying to improve the representation of short-term variability such as ENSO and monsoon processes. Once they are happy with this, the model is run as is.

        There is concern that models in general have been gently converging on the global average temperature series. But the global average temperature is rather a different beast from regional temperatures and from regional sea-surface temperatures. Folland showed that there is agreement between land temperatures and the AOGCM forced with bias adjusted SST right down to the regional scale, but not with the unadjusted SST. Furthermore, the AOGCM run by Folland did not include any time varying forcings so that the only forcing was coming from the SSTs.

        If you have proof that the process you described was applied to the particular AOGCM used by Folland, then cite it.

        Regarding Hanawa, you are right to bring up the local scale (the comparison is local) but wrong to dismiss it as anecdotal: think what would have happened if the Folland and Parker adjustments had not worked for any of the stations. In fact, the Hanawa et al. abstract says:

        As a result, it is found that the data of five CSST stations among nine stations are suitable for comparison. When Folland and Parker’s correction is adopted to the historical SST data, the systematic biases in monthly mean SST anomalies have been corrected almost perfectly at three stations, and the biases at the other two stations have been reduced by 40-50%.

        Regarding Smith and Reynolds, the first quote in full was
        Thus, for part of the nineteenth century our bias corrections will be influenced by the Folland and Parker (1995) corrections.

        The other quote “When we do not adjust NMAT as discussed in section 2, our computed 1854-1941 bias correction is about constant” is interesting and perhaps more interesting again in full:

        When we do not adjust NMAT as discussed in section 2, our computed 1854–1941 bias correction is about constant, and the annual and 60S and 60N average is similar to the FPK84 average

        FPK84 has a large and constant bias adjustment of about 0.3degC. In the HadSST3 paper, we ask the question, what happens if the NMAT data are not used to constrain the fractions of wooden and canvas buckets in the 19th century and early 20th century? We consider the limiting cases (all wooden and all canvas) and it’s an extra uncertainty which is discussed in the paper.

        In general I think that further tests of the bias adjustments are needed and always will be. The existing analyses show what they show. They are not perfect, but nothing is. One should not see them as proof that the bias adjustments are exactly correct only that significant discrepancies have not yet been found.

        Work in this area is ongoing and the reason I engaged on this thread was that my interest was piqued by your derivative and frequency analyses.

        You said “I don’t understand why you claim that these are “correctly processed” and I should somehow redo my analysis.”

        There are a whole set of steps in the process of going from a set of point observations to a global average. If you want to draw conclusions about the effects of the bias adjustments (“Removal of the supposed biases has destroyed the homogeneity of the data“) it makes sense to isolate the differences that are due purely to the bias adjustments from those due to other factors. The observations are correctly processed in the sense that the only difference between them and HadSST3 is the bias adjustments. I thought that might help strengthen your analysis.

        One could argue separately about the merits of calculating the global average one particular way, or another, of one QC scheme vs another, but those really are separate arguments.

        Best regards,
        John

      • Greg Goodman

        Hi John, thanks for your comments again.

        It seemed from your earlier comments that you were saying my processing was incorrect and I needed to redo my analysis because of that.

        If you are saying that using icoads remapped to 5×5 grid is a good way to isolate the bias adjustment from the rest of the Hadley processing effects, we are in agreement. I said that in my last post and thanked you for sending the data.

        “The process of model development involves getting the climatological mean right and trying to improve the representation of short-term variability such as ENSO and monsoon processes. Once they are happy with this, the model is run as is.”

        Thanks for giving some background on this, but this simply underlines my point. If the models are designed to mimic a climate defined by the bias-adjusted record, right down to climatology-level detail, it is a foregone conclusion that they will produce temperatures that are significantly different when forced by significantly different SSTs. Even more so if other forcings were kept constant.

        I’m not saying that this shows the biases are right or wrong, but that such a result is a foregone conclusion and can not be taken as a validation that the biases are correct.

        “If you have proof that the process you described was applied to the particular AOGCM used by Folland, then cite it.”

        The argument is totally general and mathematical; it does not depend on the model beyond the fact that the model has been built to reproduce the adjusted climate. You have confirmed that is the case. Getting bogged down in the detail is unnecessary and likely to distract from the key issue. It is easy when working on complex and sophisticated models to miss such a logical flaw.

        The proof you ask for is in the result itself: you could also double the bias corrections (or invert them), and in each case the model land temps will be further from the land station records.
        This will not show anything other than the fact that the climate built into the model is one that matches the bias-corrected SSTs to its other reference input, the land record.

        Clearly any model developed to reproduce the adjusted climatology can not be used to verify the adjustment. It’s self-referential. I’m surprised that this is even a point of argument once it’s been pointed out.

        I would say it is incumbent on those claiming such a method is a validation to demonstrate this is not happening. I do not see that addressed in the paper. Indeed I do not see that the possibility has even been considered.

        Re Hanawa et al 2000, here is their graph showing all CSST stations, the excluded “unsuitable” ones are in the shaded portions: http://i39.tinypic.com/2lly1oj.png

        It can be seen that the “unsuitable” records were those where the 1912-41 mean anomaly was close to the reference period, ie CSST was similar in the two periods. This does not seem to indicate that they are bad, quite the contrary. They report that these were rejected because they have lower correlation (to anything) despite two of them not being substantially lower than the three, geographically coincident ones that were retained.

        It is not stated why the correlation was a valid criterion for rejection (They are also the ones where F&P gets nowhere near). This is quite simply a case of cherry picking. As is claiming that such a geographically limited study is of any relevance to validation without also looking at a large sample of other cases in different regions.

        In isolation, as it is, this becomes no more than anecdotal as I said (even if the result were not twisted by excluding the inconvenient records).

        I only dug into this because you specifically pointed out F&P claimed the bias adjustments were “validated”. Sadly, the deeper I dig the more fragile it all gets.

        “Work in this area is ongoing and the reason I engaged on this thread was that my interest was piqued by your derivative and frequency analyses.”

        Yes, as you noted earlier, looking at the problem from as many different angles as possible could bring up new information. I will see if I can pull anything more out of it now that you have helped separate the bias adjustment from the other processing effects.

        best regards, Greg.

      • Hi Greg,

        I hope I didn’t give you the impression that your processing was incorrect. As you can see from the diagrams I posted, I could replicate at least the first part of your analysis. The point is there are several steps in the processing from the underlying data set that occur prior to your analysis that might affect your conclusions. That was and is my concern.

        The climatological averages used for model development are typically from the modern period, or say 1961-1990 which is another widely used period. The SST climatology used is therefore likely to have been largely free from the biases studied in the Folland paper and it most likely wasn’t adjusted either. The ability to approximate an average field for one period is not the same as tuning to all possible variations of SST and land temperature so I don’t think your argument holds.

        It is not stated why the correlation was a valid criterion for rejection (They are also the ones where F&P gets nowhere near). This is quite simply a case of cherry picking.

        You could ask the authors. If you don’t know what the criterion was, you don’t know if it was cherry picking. My understanding was that if they don’t have the same variability it’s not clear that they are a useful comparison with each other. I know that around the US there are coastal moorings and stations in river mouths and harbours which report water surface temperatures that aren’t comparable to nearby open ocean temperature measurements used in SST datasets.

        Best regards,

        John

      • Greg Goodman

        Re Hanawa et al:
        “You could ask the authors. If you don’t know what the criterion was, you don’t know if it was cherry picking.”

        I _do_ know what the criterion was: correlation. The problem is that they do not say why they saw it to be pertinent. That is not for you or me (the reader) to guess; it is incumbent on the authors to explain. Anyway, since this study is so localised, it cannot be taken as “validation” without a large sample of similar studies, as I have already stated. I see no merit in discussing the detail of H2000 any further. My criticism is of F&P’s claim that this is a validation. It is not.

        “The point is there are several steps in the processing from the underlying data set that occur prior to your analysis that might affect your conclusions. That was and is my concern.”

        My conclusions were made on the Hadley processing globally, not just the bias adjustment, so I don’t see this really changing my conclusion. It seems more likely that the biases are responsible for the long term changes rather than the remapping, but I have not had a chance to investigate that distinction yet.

        Re. model based “validation”.
        “The climatological averages used for model development are typically from the modern period, or say 1961-1990 which is another widely used period. The SST climatology used is therefore likely to have been largely free from the biases studied in the Folland paper and it most likely wasn’t adjusted either. The ability to approximate an average field for one period is not the same as tuning to all possible variations of SST and land temperature so I don’t think your argument holds.”

        Thank you for the further details. This does not negate the principle of my argument but does change its implications quite a bit. In fact it makes them more interesting.

        This means that F&P is being judged on its similarity to the climate contained in a model optimised on 1961-90: a later, relatively short period. That is a considerable hind-cast for a model optimised to a short period.

        Indeed, such a model will, almost by definition, not account for any century-scale component (other than representing it as a linear trend) since it would not be sufficiently defined in the calibration period. Similarly, if there were a 60y periodic component in the true climate, it would be unlikely to be captured correctly, if at all.

        So what the “validation” test is actually showing is that driving the AO model with an SST record with a reduced variation, produces a result that is closer to the land record than driving it with a larger variation. Such a result is perhaps not surprising for a model optimised on such a period so short it would be unlikely to reflect longer periodic changes, even if they were present.

        Calling this a validation of the bias adjustment raises two problems:

        There is an implicit assumption that the model not only matches the 30y period temperatures but also correctly captures the climate that is producing those changes. The two are not necessarily the same.

        There is also an implicit assumption that the model captures long term climate sufficiently well for it to reliably project back to the much earlier period of the F&P bias adjustments and be used as an arbiter to determine whether F&P is “better” than the uncorrected data.

        To suggest that this may be taken as a validation of F&P requires rigorous validation of these two assumptions and a formal error estimate for the uncertainty of the hindcast to 1850 showing it to be substantially smaller than the F&P bias that is being evaluated.

        I think you will agree that is not the case, so I have to conclude that the “validation” is not valid.

        So what comes out of this test is that the AO climate model driven by a bias-adjusted SST record with reduced variations, produces better land temp estimates than when it is driven by “uncorrected” SST containing stronger long term variability.

        Irrespective of which of the historic records is the more accurate, this observation tells us something about the model: it does not work well with significant long term variability in the SST driving it. ie the climate contained in the model does not have significant long term variability.

        Now the circular logic comes back to bite us. Having supposedly validated the F&P adjustments with the 1961-90 model, they become the new “correct” historic climate reference against which model hind-casts are verified. The process becomes self referential again.

        There is a real methodological problem here. This has to be a one-way process.

        PS which paper covers the detailed regional-level comparison you referred to?

        Thanks, Greg.

      • Greg Goodman

        Hi John,

        I have tried running my analysis with the remapped 5×5 degree icoads v2.5 you sent me, however, the hadSST3-icoads difference is not as tight as the graph you posted. There is a fair bit more noise.

        Could you detail how you derived the HadSST3 average that you compared to the ICOADS 5×5?

        Thanks, Greg.

      • Greg Goodman

        John says:
        I’d also be grateful if you could expand a little on this comment: “Removal of the supposed biases has destroyed the homogeneity of the data.” On what physical basis would you expect the blue and green lines to agree with each other?

        The blue and green lines being similar shows the data has a homogeneous frequency content throughout the record.

        I would not say, a priori, that the spectrum of the latter half *should* be the same as the first half and hence the ensemble. However, if the true climate has a notably different spectrum in the last 80 years when compared to the whole, it would seem improbable that a series of arbitrary, non-climatic biases would artificially create such a tidy result.

        It is not impossible, but biases and errors tend to be disruptive rather than creating order where there was none.

        Equally, if I was removing some spurious errors and the data seemed to have less structure afterwards than it did before, I would wonder if I had hit the mark.

      • Greg Goodman

        John says: The modelled air temperatures over land were compared to land station data and the adjusted SST data were found to give a significantly better agreement with the observed land temperatures.

        NMAT is corrected, in part, by comparing to coastal land stations; SST is corrected to conform to NMAT; then SST drives a model to recreate air temps … I think this fails as an independent test. The “significant” improvement would seem to be programmed into the method.

      • Greg Goodman

        P.S. A similar problem arises since the evaluation and development of the models is based in part on their ability to reproduce the historical data. If you then start to “correct” the historical data based on how well they agree with model output, there is a logical short circuit.

      • Latimer Alder

        You’ll perhaps forgive me if I note that, for many sceptics like myself, this seems to be the standard method of climatology….if models and observations disagree, then change the observations.

        I had a very fierce professor who took great delight in loudly denouncing such practice as cheating. But she was a chemist, not a climatologist.

      • The problem of less than perfect data is common to most areas of science. It’s known that some part of the data has errors, either random or systematic, but getting better data is either impossible or too costly.

        Leaving the errors uncorrected is certain to lead to wrong results while right corrections are usually not knowable. Assuming that the data is still good and extensive enough to have real value the best compromise is to make either some corrections to the data or to drop suspect data points, but both of these choices may lead to biases.

        The final conclusion is that something is done to the data, but all operations are documented and published, and the additional uncertainties due to the data manipulation are also estimated and reported.

      • PP,
        What does imperfect data have to do with arbitrary systematic adjustments of data to fit the desired goal?

      • Greg Goodman

        Pekka: “… the additional uncertainties due to the data manipulation are also estimated and reported.”

        One of the problems with the new Hadley method is that they have declared the meta-data unreliable to the point where it can be ignored or inverted, which invokes a change to that data point of the full magnitude of the supposed bucket-EIR difference: ~0.5K.

        That therefore ADDS an uncertainty of that magnitude to the data. I don’t see that uncertainty being accounted for in the papers.

      • Greg,

        I have tried to keep my comments on a general level, avoiding anything specific to some particular issue, because I know too little to comment on the details. Thus I have discussed more the issue of what might be possible based on fragmentary data, where individual measurements are subject to many sources of error, than the quality of the present knowledge.

        Judging the significance of the specific point that you make here would require knowledge of the whole calculational process that I don’t have.

      • Hi Greg,

        A couple of points.

        We do not ignore or invert the metadata. What we do is consider the possibility that some of the metadata are not perfect. We account for that uncertainty in the papers, by showing the range of SST series created in this way.

        The HadSST3 papers are here:
        http://www.metoffice.gov.uk/hadobs/hadsst3/

        Best regards,

        John

      • Hi Greg,

        The threading here is getting complicated so I’ll try and quote the bits I’m responding to:

        NMAT is corrected, in part, by comparing to coastal land stations; SST is corrected to conform to NMAT; then SST drives a model to recreate air temps … I think this fails as an independent test. The “significant” improvement would seem to be programmed into the method.

        The first thing to say is that if we had perfect data, we would expect there to be consistency between SST, NMAT and land temperatures for the simple reason that they are physically inter-related. When we get persistent easterly winds over the UK in winter, we see below-average land surface air temperature anomalies and a plume of below-average NMAT anomalies extending over the oceans which fades as it moves over the warmer waters of the north Atlantic. With some lag we also tend to see a drop in SST.

        The independence of SST, NMAT and land surface air temperature data sets is an interesting point. I raised the question of SST/NMAT independence in the HadSST3 paper because it largely depends on what aspect of the climate we want to understand.

        If we want to understand changes in marine temperature in general then combining information from SST and NMAT reduces the estimated uncertainty at the expense of interdependence. On the other hand, one could make a more NMAT-independent SST adjustment with a greater spread of uncertainties if that is required.

        If a set of SST bias adjustments was created entirely independently of land temperatures and NMAT, the first thing that people would do – quite reasonably – is compare it to land temperatures and NMAT and where there were obvious physical inconsistencies it would lead to questions about the reliability of one or the other or both of the components being compared.

        One way out of this loop is to make new comparisons or to find new ways to look at the data, which is why I find your analysis interesting.

        Another way is to try to find new data sources. One thing to do is to make better measurements now. Some marine observations are improving in quality and number – SST is particularly well served by a range of different satellite instruments, the moored buoy arrays and a large number of surface drifters – while others are declining or being maintained.

        There are also ongoing efforts to find, catalogue and digitise weather observations from around the world. It has been estimated that as much pre-1950 data remains undigitised as has already been digitised. ICOADS has been a focal point for gathering newly digitised marine data, but a lot of that work has been put on hold in the current financial situation. There are still projects ongoing, such as http://www.oldweather.org/ and there are still lots of data out there.

        Best regards,
        John

      • Greg Goodman

        Hi John, you quoted me a specific case of how the adjustments were not just made from hypothesis but back-calibrated by comparing to models.

        I criticised this method as a logical error that was almost sure to return the result you (and F&P) suggest validates the adjustment.

        Your somewhat lengthy reply does not seem to address the point you chose to highlight nor my criticism of the method.

        Should I take it that you agree with my point about such a methodology?

        regards,

      • Hi Greg,

        Your description of the process is a paraphrase that misses out some important details. It’s not as simple as you make out.

        Firstly, the degree of independence varies through time. In the nineteenth century there are exposure bias problems with the NMAT data which are adjusted by comparison with SST, and the SSTs are adjusted using NMAT information. Because of this, Folland and Parker were careful only to use NMAT data which hadn’t previously been adjusted using SST data in order to constrain the SST adjustments in the period 1856-1920 for their 1995 paper. From 1920-1941, the adjustments are based on the bucket models and not on strict conformity with NMAT. In HadSST2 and earlier Met Office data sets, that’s where the connection between NMAT and SST ended.

        [The SST bias adjustments after 1941 (from HadSST3) are based on intercomparisons of different SST measurements and on the metadata. NMAT isn’t involved.]

        After the 19th century, the NMAT adjustments are based on information about deck heights and vertical temperature profiles, or, during WWII, on comparisons between DMAT (day marine air temperatures) and NMAT. They are not forced to conform to land temperature stations, nor are they forced to conform to SST.

        So NMAT and SST data sets were strongly linked in the 19th Century, less so until 1920 and then almost completely independent thereafter. Given that biases in bucket measurements depend on the air-sea temperature difference, any more detailed corrections would involve using both MAT and SST together.

        The adjusted SSTs were used to drive the atmosphere only GCM and this reproduced land temperature trends. The adjusted SSTs were compared to land temperature stations in certain regions, demonstrating their general soundness.

        Best regards,

        John

      • Greg Goodman

        John , thanks for the detailed reply.

        What I was referring to was this from F&P 1995:
        “This largely removed the jump from time series of global and hemispheric SST relative to corresponding series of night marine air temperatures (NMAT) measured by ships and independently corrected by Folland et al. (1984). Jones et al. (1986) used screen air temperatures in coastal locations to correct SST data up to 1945. This assumed that the land air temperatures were free from time-varying biases such as urbanization. ”

        You say “They [NMAT] are not forced to conform to land temperature stations, nor are they forced to conform to SST.”

        Is that just a difference of degree, that they are adjusted using coastal land stations but not “forced to conform” or is the Jones adjustment not used now?

        “Because of this, Folland and Parker were careful only to use NMAT data which hadn’t previously been adjusted using SST data in order to constrain the SST adjustments in the period 1856-1920 for their 1995 paper. From 1920-1941, the adjustments are based on the bucket models and not on strict conformity with NMAT. ”

        IIRC, the NMAT-SST discrepancy ran up to 1941 with a relatively constant magnitude. So why does the NMAT adjustment get dropped in 1920 and switched to a bucket adjustment?

        This is exactly the kind of selective adjustment strategy that makes me twitch a bit. Especially noting that the adjustment changes direction at about this time, in keeping with change in direction in the original data.

        I re-ran the comparison of the HadSST3 adjustment vs ICOADS using v2.5 (your comment noted). It remains essentially the same.

        http://i44.tinypic.com/149o081.png

        In view of the uncertainty of the timing and extent of all the changes making up the multiple adjustments (six I think I counted) I find the similarity in form surprising.

        Even if the biases were real and most of the historic variation was error rather than climate, the chance of getting it to match that well on the basis of assumptions and speculation seems slim without using the historic record as a guide to what “needs correcting”, and when.

        The switch from NMAT to bucket that you brought up seems to highlight this kind of issue.

        Best regards, Greg.

      • Hi Greg,

        This threading continues to baffle me. With luck, this will end up in the right place.

        The Jones adjustment for SST isn’t used anymore. It was superseded by Folland and Parker 1995.

        IIRC, the NMAT-SST discrepancy ran up to 1941 with a relatively constant magnitude. So why does the NMAT adjustment get dropped in 1920 and switched to a bucket adjustment?

        It’s buckets all the way from 1850 to 1941. The NMAT time series was used to estimate the fraction of different bucket types in the Folland and Parker 1995 adjustments in the period 1856-1920. In 1920, the buckets were assumed to be all canvas so the correction depends only on the climatological average NMAT used to estimate the heat loss from the buckets. So, from 1920, the SST adjustment is not influenced by the time varying evolution of NMAT.

        It is widely understood that there are deficiencies in the Folland and Parker 1995 approach. As you point out, assumptions had to be made to derive the model. Rayner et al. 2006 took that model and used a range of different, but nonetheless plausible, assumptions to estimate the uncertainty in the adjustments.

        However, there are limits to this approach and it doesn’t touch on the underlying assumptions that are harder to test without redoing the analysis from scratch. To this end, Smith and Reynolds 2002 came up with a very different method for adjusting the SST data, which also uses NMAT. The adjustments they derived were broadly similar to the Folland and Parker adjustments, but there are differences in detail: the geographical variations as well as the evolution of the long term trend.

        Even if the biases were real and most of the historic variation was error rather than climate, the chance of getting it to match that well on the basis of assumptions and speculation seems slim without using the historic record as a guide to what “needs correcting”, and when.

        I’m not sure I understand this completely, so apologies if what follows misses the point.

        If you are saying that there are clues to the existence of bias in the SST data in the SST data themselves, then I agree. There are also clues in their relationship to other data sets and in the attendant metadata.

        I think there are biases throughout the SST record and that analysing the raw SST data without accounting for these can give misleading or incorrect results. I would add the caveat that we do not know precisely how large the biases are, nor do we have a perfect understanding of the uncertainties. In order to derive adjustments – as in any scientific analysis – assumptions need to be made. Where possible in our uncertainty analysis we have tested the effects of making different assumptions.

        Will this span the full uncertainty range? I remain hopeful, but experience suggests that it won’t. If one looks at analyses of other climatological variables, it is invariably the case that one team’s uncertainty range will not overlap with another’s.

        I have argued – in the HadSST3 papers, and others, as well as at meetings, workshops and conferences over the past few years – that more attention needs to be given to the SST biases so that we can really get to the bottom of these things.

        Best regards,

        John

      • Greg Goodman

        John, yes the threading of comments seems to have got badly mangled. I just search the page for “March 20” or whatever. So far we seem to be following each other.

        Thanks for your explanations about F&P1995. However, I think we need to avoid getting bogged down in detailed discussions of individual papers. There are 30 years of work and dozens of papers, and the format of this sort of site is necessarily too terse to go into that. The reason I posted quotes in the article was to highlight the approximative nature of the adjustments, not to take issue with any particular method in detail.

        You earlier asked a couple of good questions about my interpretation of the analysis I presented. I provided answers here:
        http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-186660

        In this part of the discussion my main objection is the overall effect of the accumulated bias adjustments, not the individual papers.

        My original title for this article was “Is HadSST3 removing climate variations?”. That was the question I set out to examine.

        Could you comment on the similarity in the time dependence of the original ICOADS data and the net adjustment being applied as a result of all the Hadley bias adjustments?

        As I noted elsewhere, if you are prepared to accept, after all the corrections, that the remaining variation is genuine climate variation, it seems a remarkable result that the TS of the adjustment is so similar to the TS of the uncorrected data.

        To try to summarise in one phrase, it seems you are concluding for the pre-1970 data that half the variation in the original data is bias, the other half is climate.

        It is the similarity of the two that raises a red flag for me.

      • Hi Greg,

        When putting the analysis together, most attention initially was given to understanding the metadata shown in Part 2 Figure 2 (http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html). It’s clear from that figure that there are large recorded changes in measurement method and there are relatively large biases between measurement methods. The net effect of these two factors has been imprinted on the raw data, so I’m not completely surprised that the form of the adjustments can be discerned in them.

        I think too much emphasis gets placed on the global average temperature. Nothing measures ‘globalaveragetemperature’; it’s an aggregate of local measurements with local problems. If you use the metadata to separate out the bucket measurements from the ERI measurements and calculate global averages from each one (part 2 figures 7, 8 and 9), you can see this.

        In the raw ICOADS data, these two strands are blended in varying proportions, which leads to non-climatic variations in the global average.
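        John's blending point can be illustrated with a toy calculation. The bias values and the linear change in mixing fraction below are invented for the demonstration; they are not the actual ICOADS bucket/ERI biases or sampling history:

```python
# Toy illustration of the blending effect: two measurement streams with
# different fixed biases, mixed in time-varying proportions, produce a
# spurious drift even though the true signal is constant. The bias values
# are invented for the demo, not the actual ICOADS bucket/ERI biases.
true_sst = 0.0            # constant true anomaly (deg C)
bucket_bias = -0.3        # hypothetical cold bias of bucket measurements
eri_bias = +0.1           # hypothetical warm bias of engine-room intakes

n = 100
blended = []
for t in range(n):
    f_eri = t / (n - 1)   # fraction of ERI reports grows from 0 to 1
    avg = (1 - f_eri) * (true_sst + bucket_bias) + f_eri * (true_sst + eri_bias)
    blended.append(avg)

drift = blended[-1] - blended[0]
print(round(drift, 2))    # prints 0.4: non-climatic "warming" from blending
```

        The 0.4 °C drift here is pure measurement artefact: nothing in the "climate" of this toy world changed at all.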

        Best regards,
        John

      • Greg Goodman

        Hi John,

        you still seem not to be grasping my basic point here. I’ll try to make it clearer.

        Let’s suppose I found there were some defects in the thermometers used during a particular period, estimated the duration and amplitude of the effect and came up with a correction. I find that indeed it matches a bump in the data so I apply the correction. It reduces the bump to about half what it was before. This raises three possibilities:

        1. I underestimated the effect; it should have been twice as big, leaving this part of the data flat.
        2. There just happened to be a true bump in the climate at the same time and of the same duration, *as well as* the error that I corrected. Rather a coincidence but not impossible.
        3. I got my bias calculation wrong because of a lack of accurate metadata; it should have been applied 20 years earlier, or not at all.

        It seems in the case of the Hadley bias ensemble we are seeing case 2, not for a bump, but over 2/3 of the record.

        Now it’s an oversimplification to say the HadSST3 adjustment is a tidy half or 2/3 of the variation in ICOADS, but the similarity in form is remarkable. This seems to be pushing the bounds of coincidental similarity beyond what seems reasonable.

        The coincidence in time of each of the bias adjustments with a tendency in the data over the same period is worrying.

      • Greg Goodman

        That last phrase is not quite correct. I’m referring to the ensemble of the adjustments, not each one in isolation. I also note that there is a brief period from 1910-1925 when the bias correction is in the opposite direction to the tendency in ICOADS.

        I would also agree that the Hadley Centre is making a more realistic effort to address the issues of uncertainty in the data. That is clearly a step in the right direction.

      • John Kennedy: Thanks for commenting. One of the topics raised by the post was World War II adjustments. About a year and a half ago, I started a post but never got around to finishing it. It included an animation of the HADSST2 SST anomalies for 1939 through 1947, using maps of 12-month average data to reduce the seasonal component and weather noise. One of the periods that looked very odd was the boreal winter of 1943/44. Example: the 12-month period, ending in June 1944.
        http://i41.tinypic.com/17sck7.jpg
        And here’s the HADSST3 map for the same period:
        http://i42.tinypic.com/f3dk4m.jpg

        There appears to be a strong La Niña pattern. The SPCZ and KOE have relatively high positive anomalies. The only thing missing is the strongly depressed SST anomalies in the cold tongue. It looks like there’s a cold spot in the eastern equatorial Pacific, about 120W, but I would expect a stronger signal in the NINO3.4 region with the SPCZ that high. Are we missing a La Niña in the equatorial Pacific data around that time?

        Here’s the HADSST2 animation in its entirety:
        http://i41.tinypic.com/34papf9.jpg

        Regards

      • Thanks Bob,

        It looks like the way that you have plotted the data is partly obscuring the grid boxes containing the lowest anomalies. The area of cooler than average temperatures is larger than it appears. Another factor is that there aren’t huge numbers of observations in that region at that time so I would expect a certain amount of noise from measurement errors.

        The time series for Nino 3.4 in the HadSST3 paper (which should average out some of the noise) shows that it was colder than average around then, though not by a large amount.

        Best regards,
        John

      • John and Greg,

        Why don’t you two get together and collaborate on a paper that studies the issues raised by Greg and if/where he is wrong, put the issues to bed, and, conversely, if/where he is right, his criticism will have produced a valuable improvement to the data set.

        I don’t suppose that Greg is entirely wrong on all points or that John is right on all points–though one has to allow the possibility. Hashing it out in a properly done paper that really looks at each issue will advance the science — even if Greg is entirely wrong on everything at least then these issues won’t have to be looked at again in the future.

        This is the true test of scientific integrity, if two scientists with opposing/differing views can work together to improve our knowledge of a topic.

      • Kip,
        And this is a refreshing difference in comparison to the “team’s” position that data should not be shared with those who just want to criticize.

  30. Please ignore. This is a test of the WordPress login scheme that appears to have changed.

  31. Phil Jones in 2009 on SST:

    For much of the SH between 40 and 60S the normals are mostly made up as there is very little ship data there.

    http://www.ecowho.com/foia.php?file=2729.txt&search=made+up

  32. Paul Vaughan

    Ugly naivety & deception have NO place in a mature politics of deterrence. Sensible nature-respecting people needn’t spend another minute listening to goofy & creepy advocates of nonsensical record vandalism.


    Suggestions for Greg:
    1) Caution: Ignorance of spatial dimension structure leads to nonsensical musing about temporal chaos.
    2) With wavelets you could tighten presentation to 3 incising graphs & a few tactical sentences.
    3) Stern Caution: Ignorance of EOP coding leaves investigators with SEVERE climate misconceptions.


    Regards.

  33. Bucket presumption hypothesis, I like it.
    Tony b, I also like your recorded conversation with someone who served in the British Navy in the 1940’s/50’s when bucket readings were still common.

    Hmm…irreducible imprecisions, sort of like reading tea leaves?

    • Beth, don’t forget in the 40s and 50s there were still grog rations :)

      • I think they still get ’em in the Royal Navy. I had a student who was on a US frigate during the first gulf war, and I think he said they always enjoyed a little social visit to H.M.s ships, since it meant drinks.

      • they always enjoyed a little social visit to H.M.s ships, since it meant drinks.

        You could have blown me down with a feather, NW. I had no idea the USN never recognized repeal of the 18th amendment. ;)

  34. Who on the other camp is going to start to say:

    Regarding Man-made global warming, the “Emperor has no Clothes”?

    • @ selti1 | March 16, 2012 at 7:15 pm |

      Mate, I have being saying from day one that ”the king is naked” unfortunately, the ”fake Skeptics” cannot say that about the Warmist – because they would have exposed that: ” the Emperes / themselves” don’t even have a fig-leaf on. Warmist are lying that is: 90% possibility of a GLOBAL warming in 100years. The Fake Skeptics have being constantly lying that is 101% accuracy of smaller GLOBAL warming in 100 years ++ the localized warmings / ice ages in the past were all GLOBAL for them.

      Michael Mann was last night on Australian TV; he was asked: -what about avoiding ”the Fake’s GLOBAL warming in medieval ages?” His reply was: it was warming only in Europe’ we found after that, at that time in the tropics was cooler” Checkmate Fakes!!!!!!!!!! The biggest liar on the planet, Mann is using your tactics / lies, to cover up for his lies

      Truth: Mann doesn’t need to check in the tropics, to say that he did. It’s enough for him to know that both camps are lying – competing with each other who is going to say bigger lies. He knows that he is safe, because if the Fake Skeptics say: Warmist don’t have even 0,0000000000001% of the data ESSENTIAL, for knowing what is the temp; would have exposed that: ”their lies about past phony GLOBAL warmings have even less data”. That makes the Fake Skeptics bigger part of the crime. { if policeman assists the criminal – makes the cop in the eyes of the law, as bigger part of the crime}

      So, with confidence, Man’s conclusion was:” now is 1C warmer planet than 1000y ago” Ian Plimer’s Zombies cannot say that: -”data for 1012AD is almost non-existent – because at that time the earth was flat – 70% of the GLOBAL surface area didn’t exist”… Because the ”Fake Fundamentalist” have being constantly lying that ”at that time THE WHOLE PLANET was warmer, Mann and the rest of the Swindlers can rub Fake’s noses… on innocent people’s expenses. Thanks to Ian Plimer’s sick EGOTISM

      Secular Warmist / Skeptics believers on the street, which are 80% of the people; would like to know the truth – unfortunately the media get their informations from the Fundamentalist in both camps. They call me extreme, for saying and PROVING that: warmings and ice ages are never GLOBAL

    • Who on the other camp is going to start to say: Regarding Man-made global warming, the “Emperor has no Clothes”?

      Selti1, there are two such emperors, and both camps have been saying this for so long that it’s no longer meaningful to say it anymore. It’s like “you’re an idiot,” “no I’m not, you are,” “no, you’re the idiot,” ad infinitum.

      On this blog we get this all the time from Captain Kangaroo, Chief Hydrologist, Latimer Adler, hunter, and others who have nothing to contribute but their anger at those expressing concern about AGW. All we’re missing is Jack Nicholson to make this whole blog One Flew Over the Cuckoo’s Nest.

      Though to look on the bright side we don’t seem to have a Nurse Ratched.

  35. Capt.d @ 7.10 pm

    Guess life on deck could get pretty chilly, a tot of rum to keep out the cold?

    Man overboard! :-)

  36. Fascinating! I had been thinking about a random walk model of warming and this seems closely related.

    • Steven Mosher

      warming cannot be the result of a random walk since there are boundaries that cannot physically be crossed. So, scratch that idea

        Mosh is absolutely correct. The only thing that can grow with a pure random walk is the variance and higher order moments. The mean always stays put. For certain kinds of random walks such as the Ornstein-Uhlenbeck process, the variance goes to an asymptotic level. That is effectively a variance barrier that prevents large excursions, forming a boundary the random walk can’t cross.
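        The bounded-variance behaviour described above can be sketched with a minimal discrete-time Ornstein-Uhlenbeck simulation. The parameters here are purely illustrative, not a climate model:

```python
import random

# Minimal discrete-time Ornstein-Uhlenbeck sketch. Parameters are
# illustrative only: theta is the mean-reversion rate, mu the long-term
# mean, sigma the noise amplitude, dt the time step.
random.seed(42)
theta, mu, sigma, dt = 0.5, 0.0, 1.0, 0.01

x = 0.0
samples = []
for _ in range(200_000):
    # Euler-Maruyama step: drift back toward mu plus Gaussian noise
    x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
    samples.append(x)

# After the transient, the sample variance settles near the theoretical
# asymptote sigma^2 / (2 * theta) instead of growing without bound,
# unlike a pure random walk whose variance grows linearly in time.
tail = samples[50_000:]
var = sum((v - mu) ** 2 for v in tail) / len(tail)
print(round(var, 2), sigma ** 2 / (2 * theta))  # sample variance stays near 1.0
```

        In a pure random walk the variance after N steps would be N·sigma²·dt and keep growing; the mean-reversion term is what caps it.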

      • Yes – we know what you did Webby.

        ‘In mathematics, the Ornstein–Uhlenbeck process (named after Leonard Ornstein and George Eugene Uhlenbeck), is a stochastic process that, roughly speaking, describes the velocity of a massive Brownian particle under the influence of friction. The process is stationary, Gaussian, and Markovian, and is the only nontrivial process that satisfies these three conditions, up to allowing linear transformations of the space and time variables. Over time, the process tends to drift towards its long-term mean: such a process is called mean-reverting.

        The process can be considered to be a modification of the random walk in continuous time, or Wiener process, in which the properties of the process have been changed so that there is a tendency of the walk to move back towards a central location, with a greater attraction when the process is further away from the centre.’

        The problem is that climate is non-stationary and non-gaussian.

      • That should be non-Gaussian of course.

      • capt. dallas 0.8 +/-0.2 | March 18, 2012 at 9:18 pm |

        “David’s comment I thought was a bit of a joke. I was thinking David was referring to Doug’s random walk from reality.”

        David has one heck of a dry sense of humor.

      • Steven Mosher

        thanks WHT.
        It astounds me that people do not get this idea. They propose models for understanding data where the functional form of the model implies physical impossibilities. I’ve seen you avoid this type of error and it makes me smile

      • Instead he creates models where the functional form is conceptually irrelevant.

      • MattStat/MatthewRMarler

        Steven Mosher: warming cannot be the result of a random walk since there are boundaries that cannot physically be crossed. So, scratch that Idea

        That criticism applies equally to all applications of random walks, does it not, such as Einstein’s analysis of Brownian motion and Perrin’s experiments based on it?

        Presumably, you are going to tell us that Brownian motors also can not work?

        Put differently, a process observed over a finite time may be well-modeled by a random walk. It can’t be accurately forecast far into the future, but most models for most things can’t be trusted to make accurate forecasts far into the future.

    • David Wojick

      I acknowledge that you are a natural and accomplished expert in cataloguing observations.

      It’s what you choose to do with the observations afterwards that leaves me scratching my head.

      You have deep wells of expertise to draw on for legitimate learning about random walks, and you go instead to Doug Cotton? (No offense intended, Doug. I’m sure you’re as mystified by David’s choice as are we.)

      • David’s comment I thought was a bit of a joke. I was thinking David was referring to Doug’s random walk from reality.

        I have been waiting for Doug to post his reward for proof of a radiant effect after everyone started redesigning “Greenhouse Effect” experiments.

        One of the simplest is the radiant barriers used in home construction.

        http://www.greenbuildingadvisor.com/blogs/dept/musings/radiant-barriers-solution-search-problem That is one Doug should read. Here is the money quote, “Fundamentals, a vertical 3/4-inch air space has an R-value of about R-1 — assuming that the heat-emitting surface adjacent to the air space has an emissivity of 0.82. If the same air space is faced with a radiant barrier with an emissivity of 0.05, the R-value of the air space is boosted from R-1 to about R-3.”

        And just like in our atmosphere, “Radiant barrier fanatics have also experimented with horizontal radiant barriers on the top side of attic floor insulation. There are two problems with such radiant barriers:

        – Once the radiant barrier gets dusty, it’s no longer a low-e surface. Radiant barriers have to stay shiny to work.

        – Unless the radiant barrier is perforated, it acts as a vapor barrier. During the winter, condensation will form on the underside of the radiant barrier.”

        CO2 does change the e-value, so it impacts the R-value. To borrow a quote from Carrick: “Adding CO2 to the atmosphere tends to increase temperature. But it’s a mistake to transmogrify tends into must.”

        The question is how strong is that tendency :)
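        The quoted R-1 to R-3 jump can be roughly checked with a back-of-envelope parallel-plate calculation. The mean gap temperature, the emissivity of the facing surface (0.9) and the convective conductance below are guesses for illustration, not values from the linked article:

```python
# Back-of-envelope check of the quoted R-1 -> R-3 claim for a vertical
# air gap faced with a low-e surface. All inputs are illustrative
# assumptions: room-temperature gap, guessed convective conductance.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
T = 294.0                # assumed mean gap temperature, K (~21 deg C)
h_conv = 1.5             # assumed convective+conductive gap conductance, W/m^2/K

def r_value_us(e1, e2):
    """US R-value (hr*ft^2*F/Btu) of the air gap for facing emissivities."""
    e_eff = 1.0 / (1.0 / e1 + 1.0 / e2 - 1.0)   # parallel-plate exchange factor
    h_rad = 4.0 * SIGMA * e_eff * T ** 3        # linearized radiative conductance
    r_si = 1.0 / (h_rad + h_conv)               # m^2*K/W
    return r_si * 5.678                         # convert SI R-value to US units

plain = r_value_us(0.9, 0.82)   # ordinary surfaces: roughly R-1
low_e = r_value_us(0.9, 0.05)   # one low-e face: roughly R-3
print(round(plain, 1), round(low_e, 1))  # -> 1.0 3.2
```

        Dropping one face's emissivity from 0.82 to 0.05 cuts the radiative conductance by more than a factor of ten, which is why the overall R-value roughly triples, consistent with the quote.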

  37. In order to draw several trend lines using woodfortrees like the following:

    http://tinyurl.com/7nuyvac

    I wrote the following simple code to write the long URL that is to be entered in the URL address box at WFT. The code is in Visual Basic.

    Private Sub CmdURL_Click()
    Dim i As Integer
    Dim strDataSet As String
    Dim intFirstTrendPeriodStartYear As Integer
    Dim intLastTrendPeriodStartYear As Integer
    Dim intTrendPeriod As Integer
    Dim strURL As String

    strURL = "http://www.woodfortrees.org"
    strDataSet = "hadcrut3vgl"
    'strDataSet = "gistemp"

    'First URL
    intFirstTrendPeriodStartYear = 1850
    intLastTrendPeriodStartYear = 1915

    'Second URL
    'intFirstTrendPeriodStartYear = 1916
    'intLastTrendPeriodStartYear = 1981

    intTrendPeriod = 30

    For i = intFirstTrendPeriodStartYear To intLastTrendPeriodStartYear
    strURL = strURL & "/plot/" & strDataSet & "/from:" & _
    CStr(i) & "/to:" & CStr(i + intTrendPeriod) & "/trend"
    Next i
    txtURL.Text = strURL

    End Sub

    • I have been unconvinced thus far by the climate denial arguments because none of them have been able to show a WoodForTrees graph with 66 series. Girma’s magnificent achievement has completely turned me around: his 66 series have completely convinced me that humans are not responsible, for global warming or anything else for that matter, with Girma as the most irresponsible of them all!

  38. Yes, infuriating it is to see rounding of daily temperatures to the next whole number and forecasts of global warming 30 years hence ‘accurate’ to the third decimal place.

  39. @ Mug Wump (@Wagathon) | March 17, 2012 at 2:05 am

    + stefan. They know to one thousandth of a degree; since the 18 hundredths, to compare that is ”warmer” today. When was on less places monitored than even today. They are ”the leading authority” Psychiatrists will make lots of money from the believers in B/S.

    • BS you two are clueless. There are error bars around the data, it isn’t claimed it is known to “one thousandth of a degree”

      • @ lolwot | March 17, 2012 at 4:14 am said: BS you two are clueless. There are error bars around the data, it isn’t claimed it is known to “one thousandth of a degree”

        lolwot, one stating that: temp was warmer by 0,136C, it means to a thousandth of a degree!!! which means; it wasn’t 0,137C, or 0,134C, but it was 0,136C. If you pretend that you don’t know that the; last digit represents ”the thousandths of a degree” is same as pedophile pretending not to know that pedophilia is illegal / because children are under 18years old. Same as the digits behind the comma (,)

        Constantly is claimed on this and other blogs / by fundamentalists from both camps; with three / two digits for less than a degree. Gentleman should apologies for lying – Marxist will demand moderating / censoring when people tell the truth… If you people are scared from the truth; go to British BBC, or Australian ABC &SBS. They are faithfully censoring any truth regarding climate; which is: 1] misappropriation of taxpayer’s funds. 2} abuse of trust. 3] abuse of privileged positions. 4] doctrine of ” Doctrine of separation of power between the national broadcaster and political party is clinically dead!!!”

        Those idiots in the respective National Broadcasters have being risking long jail therms; just to promote the ”green lunacy” still you are losing ground ”as SS in 44 – 45.”

  40. Vaughan Pratt

    Are they CLIMATE TURNING POINTS that we see in the above graph in the 1880s and 1910s?

      How could anybody say anything about that, when the data has been processed as you have done?

      The more “convincing” the plots you create, the less one can learn from them.

      • Pekka

        So don’t I have to believe what you see => http://tinyurl.com/7nuyvac ?

        i.e climate turning points from warming to cooling in the 1880s and from cooling to warming in the 1910s.

        You see best in the original data what there is to see. That’s true when the data is as simple as a single temperature time series. In the case of more complex data some manipulation may be needed, but even then one must be very careful to do it in a way that keeps the real content without distorting it.

        Certainly you see there the minima and maxima, but in the original data you see them in relation to the real variability. Thus you may get some feeling on the significance of the extrema and looking at the original data you see also many other features. Your manipulation makes it totally impossible to tell anything about the significance of the plot. You hide important information and overemphasize misleadingly what is left after manipulation.

      • Girma | March 17, 2012 at 5:37 am said: Vaughan Pratt, Are they CLIMATE TURNING POINTS that we see in the above graph in the 1880s and 1910s?

        Girma!!! Vaughn, wasn’t born in 1880’s, he is a young B/S artist. 2] they are just proving that: DATA FROM THE PAST IS COMPLETELY IRRELEVANT; because is not correct = it’s just a joke. Vaughn is not programmed to tell the truth; he is strictly programmed; to tell lies and to criticize if somebody tells the truth.

        Vaughn’s crystal ball is made in East Anglia university / it’s made from
        mud. He is not even good for muddying the truth; but he is trying his best.
        Vaughn has being duped by smarter people than him – he is looking for dumber people than him – to pas the lies… don’t be the volunteer, Girma

    • Greg Goodman

      Girma, the OLS linear fit is little different from the mean slope. If you plot the slope value rather than plotting all the lines you will be plotting something close to the running mean of the slope (dT/dt). This will plot better than 66 lines, which is pretty but not very clear.

      Two problems here: the running mean is about the crappiest filter to use for smoothing data, and secondly the result (the profile that you see on the macramé plot) depends heavily on the period chosen, 30 y in your case. I have gone through this in detail using windows from 4 to 80 y and it is astounding how much the profile changes just from, say, 30 to 33 y.

      My motivation for this was to see to what extent the IPCC’s banal-sounding “over the last 50 years” type comments are in fact representative, or are falsely promoting a period that predetermines the conclusion. It’s the latter.

      This is also a point which came out of the Fourier analysis, where it was seen that looking at a shorter window of more recent data obscures the long cycles and leaves a large non-cyclic residual term.

      Oh, and the third problem as I pointed out in your first post is that this data has had the long cycles messed with so until that is addressed even a valid and rigorous examination seems pretty pointless.
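      The windowed-slope idea, and its sensitivity to window length, can be sketched as follows. The series here is synthetic (a linear trend plus a 60-sample cycle), standing in for the temperature data:

```python
import math

# Rolling OLS slope acts as a smoothed dT/dt, and the apparent "profile"
# depends heavily on the window length. Synthetic data only.

def rolling_ols_slope(y, window):
    """OLS slope of y over each length-`window` segment (units per sample)."""
    x = list(range(window))
    x_mean = sum(x) / window
    denom = sum((xi - x_mean) ** 2 for xi in x)
    slopes = []
    for start in range(len(y) - window + 1):
        seg = y[start:start + window]
        y_mean = sum(seg) / window
        num = sum((x[i] - x_mean) * (seg[i] - y_mean) for i in range(window))
        slopes.append(num / denom)
    return slopes

# Sanity check: for a purely linear series the windowed slope is exact
line = [0.005 * t for t in range(100)]
assert all(abs(s - 0.005) < 1e-12 for s in rolling_ols_slope(line, 30))

# Trend plus a 60-sample cycle: the spread of the slope profile shrinks
# substantially when the window grows from 30 to 60 samples, so the
# "story" the plot tells changes with the window choice.
y = [0.005 * t + 0.2 * math.sin(2 * math.pi * t / 60) for t in range(300)]
s30 = rolling_ols_slope(y, 30)
s60 = rolling_ols_slope(y, 60)
print(max(s30) - min(s30), max(s60) - min(s60))
```

      Note that even a window equal to the full cycle length does not null the cycle's contribution to the slope, which is part of why the profile keeps shifting as the window changes.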

      • Thanks Greg.


        If you plot the slope value rather than plotting all the lines you will be plotting something close to the running mean of the slope (dT/dt). This will plot better than 66 lines which is pretty but not very clear.

        Is there an easy application that you recommend to do the above?

        At the moment, I use the Mean value of the trend’s end values to determine the temperature curve that corresponds to the trend.

      • Greg Goodman

        Get to know gnuplot. A very powerful plotting tool with good function fitting capabilities. I don’t want to clutter this thread, so I’ve just emailed you the gnuplot script I used.

      • I used gnuplot on the fit I referenced elsewhere in this thread. It does 3d contours really well. I will definitely explore the fitting and scripting capabilities further. Thanks. (Send the script my way too, if you get a chance)

      • Greg Goodman

        WebHubTelescope, make it easy, I’m not going to spend long searching for your email. At least point me to it.

  41. Reblogged this on Climate Ponderings and commented:
    Cookin da books

  42. Greg Goodman

    Error estimation is a whole subject on its own that Judith has given a lot of attention to in the past. It is also an area that is examined in quite a lot of detail in the Kennedy et al 2011* papers that I linked at the bottom of the article. Though, like Judith, I think the errors are still being seriously underestimated.

    Please feel free to discuss the article rather than climate science in general.

  43. Web Hub T: You said:

    “I think this is a great model because it appears to require a significant GHG feedback effect from the CO2. If that CO2-induced positive feedback on temperature is not in the model, then the temperature and CO2 would not show as much of a variation. If this model is correct, and it should be as it only relies on well-understood physical principles, it is another Holy Grail signature of CO2 on climate variation”

    You are making a very bold assumption between correlation and causation. There are other potential driving forces for temperature at Vostok quite apart from CO2 and CO2-related feedback. There are obliquity-related meridional heat transport mechanisms that have a strong temperature effect throughout Antarctica. These mechanisms cause a powerful cross-equatorial heat piracy. The timing of this cross-equatorial heat transfer almost perfectly matches all of the major features of the Antarctic temperature record (for Vostok, Dome C, and Dronning Maud Land) going back more than 500,000 years. Heat is forced into / blocked out of Antarctica by changes in the insolation gradient between 30 deg Sth and 70 deg Sth. There is negligible time lag.

    The details have not been published yet. If/when that occurs there will immediately be major problems for Hansen’s use of the Antarctic temp/CO2 correlation as an indicator of “climate sensitivity”.

    Just a heads-up that you shouldn’t get over-confident that the interesting analysis you have conducted actually means very much.

    • “The details have not been published yet. If/when that occurs there will immediately be major problems for Hansen’s use of the Antarctic temp/CO2 correlation as an indicator of “climate sensitivity”.”

      and Greenland?

    • “The details have not been published yet. If/when that occurs there will immediately be major problems for Hansen’s use of the Antarctic temp/CO2 correlation as an indicator of “climate sensitivity”.”

      You have included a modifier there “if”.

      Are you worried that the details won’t be published, and that it won’t pass peer-review?

      What you can always do is simply publish the results on the internet. People can always form their own opinion.

      What you are saying, though, is that meridional heat transport mechanisms will force cooler or warmer air over the Antarctic and drag the appropriate CO2 concentration along with it.

      Something has to affect the CO2 concentration with temperature.

      That is what my calculation tries to demonstrate.

      “Just a heads-up that you shouldn’t get overconfident that the interesting analysis you have conducted actually means very much.”

      “Interesting” is all that I can ask for. Thanks.

  44. Data tortured to fit a hypothesis is useless for proper scientific analysis.
    The historical temperature data are of poor quality, and all these adjustments to the past do not help to improve them.
    In such cases, for proper scientific analysis, it is better to rely on good-quality data where available and on proxy data for historical reconstructions – for instance, satellite data and ice core data.
    It is interesting to see that there is a big effort to reconstruct past historical temperature data based on the available measurements, while this is totally ignored for CO2 concentration, which is based on ice core data.
    I would trust the ice core temperature data to reflect the temperature more than I would the CO2 concentration.
    Data tortured to fit a certain hypothesis is a big handicap for proper scientific research and for testing other hypotheses.

  45. “The average run of ‘freak’ data gives an average run of ‘freak’ results” – that is my conclusion in a 1997 paper on SST data taken in the Pacific during WWII, which tonyb | March 17, 2012 at 12:50 pm already mentioned: http://www.oceanclimate.de/English/Pacific_SST_1997.pdf – kindly mentioning also my experience in SST data collection (1955-1964), and my opinion that “correcting” this data for climate-change research is dubious, to say the least.

    Presumably any efforts in this respect are doomed to fail. TonyB and others have mentioned that
    ____”Historic SSTs are in large part little more than vague guesses using water drawn at all sorts of different depths, at different times of the day in different locations, samples often left in the sun, using uncalibrated equipment, and then the results interpolated in order to come up with data supposedly accurate to fractions of a degree.”
    Maybe one could take into account another dozen “differences”, which do not necessarily have anything to do with the immediate measurement (properly) taken. Since the mid 1950s the measurements were taken with equipment that allowed an accuracy of +/- 1°C.

    Let me add another anecdotal account. Once, one of the weathermen receiving the daily data visited our master course, and I asked him what happened to data which “looked a bit strange”. He said that they would be discarded, which annoyed me, because it happened that the temperature deviated considerably within a very short period of time even though the measurements had been taken very carefully. There never came a request to do more measurements. It seems they threw away the data which did not fit. At that time the value of SST data was short-lived, lasting hardly longer than the next few days’ weather forecasting.

    I agree fully with Lars P. | March 17, 2012 at 8:09 am |:
    “Data tortured to fit a hypothesis is useless for proper scientific analysis.” And with TonyB, March 16, 2012 at 12:48 pm: “How is that any sort of scientific measure?”

  46. Greg Goodman

    “Since the mid 1950s the measurements were taken with equipment that allowed an accuracy of +/- 1°C. ”
    What kind of equipment is that?

    • The equipment was a pot, round, diameter and height about 15 cm, well insulated, surrounded by a thick rubber coat, and in the middle, secured by metal springs, a thermometer (a rough description, but ‘more or less’)

      • Greg Goodman

        Thanks for the detail; all such info is useful. My question was: why +/- 1°C? A thermometer with one-degree graduations allows a reading with an uncertainty of +/- 0.5°C, not +/- 1°C. Also, that is the uncertainty of taking the reading and should not be confused with “accuracy”.

      • Agreed! ‘Accuracy’ should be reserved to the quality of the data. I meant the reading could be correct in the range +/- 0.5 °C in calm weather at noon, but at night, with rain, heavy wind, and the ship rolling, any precision in the range of +/- 1°C would have been excellent.

      • Greg Goodman

        That sort of reading error will be statistically rubbed out by the large number of readings: the uncertainty of the mean shrinks as 1/sqrt(N).

        It may cause residual errors if a small number of readings are projected over large areas. This may well be the case in some areas especially further back. This is seen in the variance of the early data.

        Read the Kennedy papers that are linked in the article if you want to understand these issues better.
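The 1/sqrt(N) point can be checked with a toy simulation (illustrative only; the “true” temperature, the size of the reading error, and the sample counts below are invented, not taken from the thread):

```python
import random
import statistics

random.seed(42)

def mean_of_noisy_readings(true_value, read_error, n):
    """Average n readings, each off by a uniform reading error of
    +/- read_error (e.g. 0.5 degC for one-degree graduations)."""
    return statistics.mean(true_value + random.uniform(-read_error, read_error)
                           for _ in range(n))

def spread(n, trials=2000):
    """Standard deviation of the n-reading mean across many repeats."""
    return statistics.stdev(mean_of_noisy_readings(15.0, 0.5, n)
                            for _ in range(trials))

# spread(100) comes out roughly a tenth of spread(1): the sqrt(N) reduction.
```

As the thread notes, this averaging only helps when the N readings really sample the same thing; readings spread over different grid boxes and seasons do not qualify.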

      • Greg

        They are only going to be ‘statistically rubbed out’ in a meaningful fashion if there are sufficient other readings made in a like-for-like situation. That is to say, in the same grid box and on the same time scale.

        A bunch of readings taken thousands of miles away during a different month tells us nothing useful as to what happened in the original grid at a different time of year under the circumstances Arnd describes.

        The need for readings to be taken at a similar depth also needs to be factored in. One reading being taken in a grid box in a year is no sort of measure whatsoever, and some basic criteria should be set down before we accept that the huge variability of readings, and the paucity of them, can go towards making a statistically meaningful database with all the importance that is attached to it.

        I previously suggested that the war-year measurements should be forgotten, and that we then have pre-war and post-war databases of differing character. Some elements of the post-war database may have some meaning, especially in well-travelled regions. The pre-war record is decidedly suspect for a large number of reasons, and becomes more so the further back in time one travels or the more remote the area.
        tonyb

      • Re: tonyb | March 18, 2012 at 12:08 pm | “I previously suggested that the war year measurements should be forgotten”.

        Indeed, the WWII measurements in the Pacific and in the Atlantic should be ignored completely and thoroughly for the full period from 1939 to 1945. They cannot be corrected. Any effort in this respect is only a waste of time and money, or else the correction is meant to prove something that these measurements, in my opinion, are not fit to prove.

        Extract from a paper (1998) concerning WWII North Atlantic data:
        __”SST data series for WWII were taken under circumstances widely different to what one would generally regard as voluntary merchant ship observation. These observations were anything but on a homogeneous footing, making it difficult, if not impossible, to identify particular deficiencies and to define corrective figures.” at: http://www.oceanclimate.de/English/Atlantic_SST_1998.pdf ; see also above: tonyb | March 17, 2012 at 12:50 pm

  47. Greg Goodman

    ArndB, thanks for that pdf link. Very useful to have some detailed analysis of the impact of the war on shipping practices rather than the usual “let’s assume… and probably… buckets”.

    Disruption rather than sampling method is exactly the way I interpreted the problem as can be seen in the article. You have provided some concrete evidence of that.

    In particular the lack of any noticeable glitch in the north-east Pacific casts a serious doubt on the “all American shipping used EIR” hypothesis.

  48. Thorough and devastating. A small correction:
    “The magnitude of the adjustment is comparable in size to the total warming of the 20th century, ie. the “correction” deemed necessary is almost as big as most of the effect being observed.”

    FIFY. ;)

    An alternate title, perhaps:
    “HADSST adjustments: systematic error”

    >:)

  49. Greg Goodman

    Just been looking at ICOADS v2.5 on KNMI (true temp not seasonal anomalies)
    http://i40.tinypic.com/10zo83k.png

    Interesting visual observation: there is a clear difference in the variability on a circa 60y cycle. I have not had time to plot this locally and find out exactly what the nature of the difference is, but it confirms my comment in the article that the amplitude of seasonal differences should be viewed as climatic, not as a need (a la Folland and Parker) for more bucket-related corrections.

    Clearly the late 19th c. and post-WWII cooling periods were marked by smaller variability in temperature. The same pattern has clearly reasserted itself since just before Y2K.

    The split may be more like 25/35 than a nice cosine, but the pattern is clear. The last cycle was smaller than 1880-1940, and the current cooling looks less marked.

    The deeper I dig into this, the more I’m inclined to favour studying ICOADS rather than Hadley to find climate signals.

    Before starting this study I thought there was little chance of extracting any useful climate signals from this mountain of badly sampled weather data. What became figure 6 started to change my mind. That sort of order is not an accident.

  50. Greg Goodman

    I have just been looking at ICOADS v2.5 on KNMI (true temps not anomalies).
    http://i40.tinypic.com/10zo83k.png

    I have not had time to analyse this locally yet but there is a marked difference in annual scale variability that has a clear 60y scale variation. The late 19th c. and post-WWII cooling periods were marked by notably different patterns than the early and late 20th c. warming periods.

    A similar “cooling” pattern was clearly established since just before y2k.

    Variation looks like 25/35y rather than a nice cosine, but it is a significant indication that such changes are of climatic origin (as I suggested in the article), not a need for yet more bucket-related adjustments, as was presumed by Folland and Parker.

  51. MattStat/MatthewRMarler

    I finally read through this analysis, and I find that I like it.

    However complex the methodology claims to be, the result is surprisingly simple. HadSST3 selectively removes the majority of the long term variations from the pre-1960 part of the record. ie. it removes the majority of the climate variation from the majority of the climate record.

    This won’t be the last word, but the results of your analysis, along with your documentation of the weak bases of the adjustment methods, should stimulate more analyses of the adjustment process and the results.

    For estimating the function (smoothed mean temperature series) and its derivatives, I prefer piecewise polynomial smoothing: Nonparametric Regression Methods for Longitudinal Data Analysis. Wiley Series in Probability and Statistics, by Hulin Wu, published by Wiley.
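Derivative estimation by local polynomial fitting can be sketched in a few lines. This is an illustrative stand-in, not the method from the Wu book, and the window half-width is an arbitrary choice; it exploits the fact that, for a symmetric window, the linear coefficient of a local quadratic least-squares fit reduces to a simple weighted sum:

```python
def local_quad_slope(x, y, hw=5):
    """First-derivative estimate at each interior point from a local
    quadratic least-squares fit over a centred window of 2*hw+1 points.
    With a symmetric window the odd moments vanish, so the linear
    coefficient is sum(t*y) / sum(t*t), where t = x[j] - x[i]."""
    out = [None] * len(x)
    for i in range(hw, len(x) - hw):
        t = [x[j] - x[i] for j in range(i - hw, i + hw + 1)]
        w = [y[j] for j in range(i - hw, i + hw + 1)]
        out[i] = sum(tj * wj for tj, wj in zip(t, w)) / sum(tj * tj for tj in t)
    return out

# Sanity check on a pure quadratic, where the estimate is exact:
xs = [float(i) for i in range(21)]
ys = [v * v for v in xs]           # y = x^2, so dy/dx = 2x
slopes = local_quad_slope(xs, ys)
```

Unlike differentiating a running-mean-smoothed series, the window and polynomial degree here give explicit control over how much high-frequency variation is suppressed before the slope is read off.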

  52. Thanks Greg.

    I have also bought the book:

    Gnuplot in Action
    Understanding Data with Graphs
    by Philipp K. Janert

    • Greg Goodman

      An excellent book, if that had been available when I was learning to use gnuplot I would be three months younger today.

      • MattStat/MatthewRMarler

        Greg Goodman,

        What would you think about using weighted least squares in your model fitting, with smaller weights for the WWII data in consideration of their greater uncertainty?

        This is good work. I have finally read all the comments, after reading the paper yesterday. I would like to second the suggestion above that you submit this to a journal such as Annals of Applied Statistics (my favorite), Journal of the American Statistical Association, or any journal where it seems likely that the reviewers would be familiar with estimation of derivatives and nonlinear least squares.

      • Greg Goodman

        Matt, if you reread the text you’ll see that is exactly what I did ;) I used zero weight for the war-time period; that was documented just above the triple plots. Thanks for your comments.

      • MattStat/MatthewRMarler

        Greg Goodman: Matt, if you reread the text you see that is exactly what I did

        As soon as I read that, I remembered.

        Are you going to write that up for submission, or just leave it here?

  53. Greg

    You have not answered my question directly.

    Assuming the data is valid, is my graph shown below valid? If not, why not?

    http://tinyurl.com/7p963ez

    • Greg Goodman

      I have answered; you were not paying attention. That kind of result depends heavily on the arbitrary choice of 30y. That’s why I sent you the script. Here’s what you get if you use 11 years.
      http://i39.tinypic.com/2nvar1e.png

      • GG

        Would you mind terribly adding the zero line and original curve, for clarity?

        Also, do you plot by midpoint of trend line? Endpoint?

        If I’m reading this correctly, the last time there was a negative global land temperature trend was the late 1960’s, with only two or three nearly flat trends very briefly since then? Over 45 years of unbroken warming for the first time since 1800? Over double the previous longest unbroken warming phase in the BEST record?

        Vaguely remarkable, but of course inconclusive of anything on its own, do you think?

      • Greg Goodman

        Would you mind terribly stating what you are referring to ?

      • Greg Goodman | March 19, 2012 at 12:10 pm |

        I recognize the frustrations, after your very kind consent to post and given your incredibly responsible and attentive service in supporting your thought-provoking and informative research, of off topic questions.

        Some irksome interlocutors of past ages imposed on their more famous peers, either out of mental laziness or lack of talent, or to make their reputation by seeking to stump better-trained minds. Newton wreaked epithets on Bernoulli for this practice.

        That’s not my hope here.

        However, I do disagree with part of your premise philosophically.

        It clearly isn’t entirely invalid to explore advanced application of graphical methods on datasets and fits, even — perhaps especially — when so much is suspected and suspect about the limitations and flaws of the data.

        Wring out every ounce of information from observations, test what validity it may have, verify if you can by forensic methods, and find out if even seeming invalid data can yield meaning within the limits of its utility.

        You may get something that will not be useful in your lifetime, or for some centuries to come (at the slow rate we’re coming to collect data, likely about six by my reckoning), but who knows?

        Einstein revolutionised the world with data from a teacup.

        However, on the whole, impossible to argue that you’re wrong. There’s much more appropriate analysis, the limits of the data must be acknowledged, and Girma’s approach is very unpromising.

      • Greg Goodman

        Taking into account the clear indications in the data that it is not very nice, acknowledging the limits of the tool I’m using, but hoping to clarify a bit, and I hope bring me back to topic.

        Picture. Thousand words.

        http://www.woodfortrees.org/plot/best/scale:0.000001/plot/best/mean:11/mean:13/scale:0.01/offset:0.01/plot/best/mean:59/mean:61/derivative/mean:11/mean:13

        Presents a smoothed approximation of Girma’s derivative curve by taking an actual derivative curve of 5-year smoothed data(plot/best/mean:59/mean:61/derivative/mean:11/mean:13) in blue.

        Adds a reference line representing zero (plot/best/scale:0.000001) in red to help the eye discriminate time ranges of rising and falling temperature trends.

        Adds a reference plot, scaled and offset so as not to detract from the readability of the derivative curve (plot/best/mean:11/mean:13/scale:0.01/offset:0.01).

        Looking at the derivative compared to zero, we see a clear change between the last half-century and all that went before on BEST.
        The tendency to fall below zero drops dramatically.
        This effect detected by eye warrants better formal analysis.
        It is marked.
        It is of a type that BEST has removed bias for.
        From it emerges an indication there may be a real mechanism.
        The most obvious reading is that the tendency of temperatures to fall for any appreciable span of time has dropped dramatically in the past fifty years.
        Is it real, or spurious, or significant or not?
        Does it show a bias in BEST’s method?
        That’s up to proper analysis to determine, such as you call for on SST.

        Speaking of..
        http://www.woodfortrees.org/plot/hadsst2gl/scale:0.000001/plot/hadsst2gl/mean:11/mean:13/scale:0.01/offset:0.01/plot/hadsst2gl/mean:59/mean:61/derivative/mean:11/mean:13

        The HadSST(2) derivative curve tells a similar story.. and whoa, look at that wartime glitch!

      • Greg Goodman

        Bart, your plots show that you used “mean” for smoothing. This actually means “running mean”, which is about the crappiest filter possible. I pointed this out to WFT over a year ago and provided code for him to add a real filter. Sadly he has chosen not to provide a proper filter on his site. A running mean badly distorts and is really best avoided.

        In general you should do your “smoothing” after other processing; otherwise you end up processing the filter distortions as well as the data and confounding the result.

        Other than that what you are doing is similar to what I have posted here in panel (b) of the triple plots, just without the fitted curves.

        I don’t want to deviate here into discussion of BEST, though that surely merits study.
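The running-mean distortion complained about here is easy to quantify: an N-point running mean has the amplitude response sin(πNf)/(N·sin(πf)), which is zero only at exact multiples of the window length and goes negative in between, so “filtered-out” frequencies leak back through with inverted sign. A quick check (illustrative, not from the thread):

```python
import math

def running_mean_response(n, period):
    """Amplitude response of an n-point running mean at a cycle of the
    given period (in samples): sin(pi*n*f) / (n*sin(pi*f)), f = 1/period.
    Negative values mean the surviving cycle comes out phase-inverted."""
    f = 1.0 / period
    return math.sin(math.pi * n * f) / (n * math.sin(math.pi * f))

r_window = running_mean_response(12, 12)  # cycle equal to the window: removed
r_shorter = running_mean_response(12, 8)  # shorter cycle: leaks through, inverted
```

Roughly a fifth of an 8-month cycle survives a 12-month running mean, with its sign flipped, which is exactly the kind of spurious, shape-shifting variation being objected to; a gaussian or binomial filter has no such negative lobes.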

      • Greg Goodman | March 23, 2012 at 1:16 pm |

        This is about the crappiest filter possible. … In general you should do your “smoothing” after other processing; otherwise you end up processing the filter distortions as well as the data and confounding the result.

        Absolutely agreed on this.

        And again, apologies for off-topicness. My intention is more to help with Girma’s questions than to distract you, although I find your correspondence clarifying and informative, and think it expands on context for the topic.

        Graphical analysis is an optimization problem between seeking what information is in the data of value, and seeking to recognize when there’s nothing but self-deception there.

        It’s always good for me to learn more of how people who do this well do it at all.

      • http://www.woodfortrees.org/plot/best/scale:0.000001/plot/best/mean:113/mean:119/derivative/plot/best/derivative/mean:113/mean:119

        Picture. Words. (Both filters produce the same curve, which isn’t surprising considering how aggregated the monthly data already is.)

        Every year that blue line remains above the red line, the odds the climate after the middle of the last century is the same as climate as before the middle of the last century drops.

        Not that the climate is shifting or changing. That it may as well not be the same planet at all. This isn’t meant to be dramatic or alarming.. only to suggest we might as well throw away our data before 1950 when considering climate trends since, or treat the two periods as entirely unrelated datasets.

      • Greg Goodman

        Bart,
        the point about doing filtering as late as possible is that it’s good practice, for the reason stated. Since a running mean is a straight sum operation, it likely does not matter whether you do it before or after in this case.

        The problem is using a running mean as a filter at all. It is that which produces the variations in the plot that change shape as you change the filter period. Either you get a wrong result and differentiate, or you differentiate and get the wrong result later. As you demonstrate, it’s the same wrong result.

        If you want to try a filter that does not let large lumps of stop-band frequencies get past it, you could try the following awk script.

        #!/bin/awk -f

        # pass input through 3 sigma gaussian filter where sigma, if not given, is 2 data points wide
        # usage : ./gauss.awk filename [sigma] [scale_factor]
        # optional scale_factor simply scales the output
        # sigma can be compared to the period of the -3dB point of the filter
        # result is centred, ie not shifted; dataset shortened by half window each end
        # data must be continuous and equally spaced

        # jan2011 , up to %8.6f for month precision consistency and better FFT

        BEGIN{ OFMT = "%8.6f"
        # ARGV[1]=filename; argv[0] is script name, hence ARGC>=1
        pi= 3.14159265359811668006

        if ( ARGC >3 ) {scaleby=ARGV[3];ARGV[3]=""} else {scaleby=1};
        if ( ARGC >2 ) {sigma=ARGV[2];ARGV[2]=""} else {sigma=2};

        print "filtering "ARGV[1]" with gaussian of sigma= ",sigma
        root2pi_sigma=sqrt(2*pi)*sigma;
        two_sig_sqr=2.0*sigma*sigma;

        gw=3*sigma-1; # gauss is approx zero at 3 sigma, use 3 sig window
        # eg. window=2*gw+1 : 5 pts for sigma=1; 11 pts for sigma=2; 17 pts for sigma=3

        # calculate normalised gaussian coeffs
        for (tot_wt=j=0;j<=gw;j++) {tot_wt+=gwt[-j]=gwt[j]=exp(-j*j/two_sig_sqr)/root2pi_sigma};
        tot_wt=2*tot_wt-gwt[0];
        tot_wt/=scaleby;
        for (j=-gw;j<=gw;j++) {gwt[j]/=tot_wt};

        gsfile=ARGV[1]".gauss"; # output file name (reconstructed: this line was garbled in posting)
        ln=-1;
        }

        {
        xdata[++ln]=$1;
        ydata[ln]=$2;

        if (ln>2*gw)
        {
        gauss=0;
        # centred weighted sum (loop reconstructed: "<" signs were eaten by the blog software)
        for (j=-2*gw;j<=0;j++) {gauss+=gwt[j+gw]*ydata[ln+j]};
        print xdata[ln-gw],gauss >> gsfile;
        }
        else
        {
        # print $1,$2;
        }
        }

        END {
        print "#gaussian window width = "gw+gw+1", done"
        print "#output file = "gsfile
        }

      • Greg Goodman | March 24, 2012 at 12:12 pm |

        Doesn’t a simple 12-month average work as a filter, averaged over the 12 months of a single year? It eliminates seasonal variation, but does not distort the shape of the annual curve as a 12-month running mean does.

      • Greg Goodman

        Jim, what you are suggesting is also known as decimation. You would end up with 1/12 of the number of data points. Here some are talking about 25 and 33 year filters; at that scale you would be left with 6 or 8 data points.

        The distortion is nothing to do with the annual cycle; it’s that whatever variations you have, a running mean will filter them badly and give the appearance of spurious variations that are an odd mix of whatever it let through that you naively thought you had filtered out.

        Running mean must die !

      • Maybe you misunderstood what I was saying. A 12-month average, not a 12-year average. So a 33-year record would be reduced to 33 points.

      • Greg Goodman

        No Jim, it’s you who is not understanding. Try rereading what I posted.

        The Met Office tends to use a binomial filter. This also is a *real* filter with a decent frequency response. It is calculated in a similar way to the gaussian I posted, just with slightly different coefficients.

        running mean must die!
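For reference, binomial filter coefficients are simple to generate (a generic sketch of the standard construction, not the Met Office’s actual code):

```python
from math import comb

def binomial_weights(n):
    """Coefficients of an (n+1)-point binomial filter: row n of Pascal's
    triangle divided by 2**n.  Repeated convolution of the basic [1,2,1]/4
    kernel gives the same family, which approaches a gaussian as n grows."""
    return [comb(n, k) / 2 ** n for k in range(n + 1)]

w5 = binomial_weights(4)   # the common 5-point filter: 1-4-6-4-1 over 16
```

Like the gaussian above, these weights are all positive, so the filter cannot invert the sign of any frequency component the way a running mean can.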

      • Running mean, useless though it is, is so pretty!

        See how it undulates and makes us imagine beautiful waves that don’t exist.

        Why wouldn’t any scientist trade significant amounts of information for useless pretty nonexistent things?

      • I didn’t say anything about a running mean, Greg.

      • Greg Goodman

        Jim, indeed you did not suggest a running mean. You suggested a year-long mean. That is not particularly relevant to a discussion of how to filter 25 or 30 year cycles. As I explained, the 12-month variation was not the issue.

      • GG

        Apologies if I was a bit imprecise.

        Your graph (http://i39.tinypic.com/2nvar1e.png) is missing a few things that might improve its readability. A zero line would help indicate when the temperature trend turns from rising to falling, since the principal point of discussion on this particular topic is flat or negative trend lines. The original (smoothed) temperature curve would help show a viewer what the trends are trends of, again improving readability.

        My observation from your graph is that there has been a break in the 200-year pattern of rising and falling: from the 1960s it became rising only, which had not happened over any span even half as long previously, and which does not resemble the past pattern of variability of 11-year trends.

        This isn’t particularly meaningful in and of itself, but bears remarking on.

        What do you think?

        And what would you recommend to improve interpretation of its meaning, if anything?

      • Greg Goodman

        Bart, I replied to a few posts on this OFF TOPIC to show Girma how this sort of result is highly dependent on what period you use, since concentrating on “the last 50 years” is a general trick of the IPCC.

        My point is that this is invalid. I do not wish to go into detailed discussion about what is warm or less warm in an invalid method.

        The subject of this thread is the study I posted. With the exception of John Kennedy I see precious little comment relating that subject.

      • Greg Goodman

        “And what would you recommend to improve interpretation of its meaning, if anything?”

        I would recommend that we stop fitting OLS slopes to climate data, whatever the time scale. It is nothing but misleading.

        That is why I did this in depth look at derivatives and frequency spectra.

      • Greg

        Wiki:


        Climate (from Ancient Greek klima, meaning inclination) is commonly defined as the weather averaged over a long period.[3] The standard averaging period is 30 years,[4] but other periods may be used depending on the purpose.

        So don’t you think we should consider 30-years for the climate signal and a smaller period will include some noise?

      • Greg Goodman

        So is the 11/22y solar influence climate or weather ? Noise or signal?

        I think it’s an artificial distinction, often chosen to support including/excluding certain events that don’t fit one’s worldview.

        e.g. if there’s no significant warming for 15y, you decide climate starts at 17y. (Santer?)

  54. Greg Goodman

    Following John Kennedy’s question about which version of ICOADS was provided by the JISAO project, it would appear that it is indeed v2, although they do not say which version they are providing.

    As a result I have recovered ICOADS v2.5 from KNMI climate explorer and have re-run that analysis.

    There are some small but interesting differences in the new version; in particular, the war-time glitch has changed. This will require re-calculation of the simple adjustment I applied, which cannot be done in 5 min.

    There are small differences in the FFT that tend to reinforce the points I made rather than change them.

    The unfortunate similarity between the HadSST3 adjustment and the variation in the original data still shows the main effect to be removal of the majority of the long term variation from the majority of the climate record (the pre-1970 part). The later warming is given a gentle helping hand.

    http://i44.tinypic.com/149o081.png

  55. Greg

    Could you please plot the 30-year trends for hadcrut3?

    I would love to compare your graph with mine

    Thanks in advance!

  56. Why do my posts relocate to somewhere in the middle instead of at the bottom?

  57. Greg

    Could you please plot the 30-year trends for hadcrut3?

    I would love to compare your graph with mine

    Thanks in advance!

    • Greg Goodman

      Hey, I’ve recommended the software to use, you’ve (allegedly) bought the book, and I took the time to send you the script personally, but I ain’t going to come round to your house, make you a hot drink and tuck you up in bed.
      (Even if you put it in capitals, bold and underlined.)

      • Greg

        What I am looking for is an independent verification. Does it take a long time to do? You may have the plot sitting somewhere.

        Greg, please do it. Please.

      • Greg

        you’ve (allegedly) bought the book

        I must defend myself => http://bit.ly/FS1Zza

      • Greg Goodman

        Good lad. Philipp Janert put a huge amount of work into that book. A sound investment.

        Now that you have the tools, the book and my script, you are well equipped to do all your own graphs. The script I sent you makes full use of gnuplot’s capabilities, so you should learn some advanced techniques by studying it.

        The reason I said “allegedly” was that you still seem to want me to do your donkey work rather than reading it.

        Good luck and happy plotting.

      • You know that I have to wait two weeks to receive the book. Could you please validate that graph for me, as I would like to submit an article to WUWT? You have done it for 11 years; why not for 30?

    • Steven Mosher

      Greg ain’t your data monkey.

      You also need to update your data.

  58. Rob Crawford (http://bit.ly/yAokrS) wrote in other place:

    “So, really, it’s not that the planet’s getting warmer, it’s just that history keeps getting colder.”

    Creepy, but true.

  59. GREG

    I hope you make your interpretations and conclusions widely known without fear and I wish you good luck. Please write opinion pieces in newspapers, ring radio stations and make presentations to inform the public.

    Greg, I have one point I would like to make regarding the global mean temperature anomaly. You know these temperatures are relative to the 1961-1990 BASE PERIOD average of 14 deg C. Is it not incorrect to choose a horizontal base-period line for a linearly warming globe? Is it not more appropriate to choose a warming reference line, given by the long-term trend line, to define the global mean temperature anomaly?

    It is just like saying the boy at the front walking up a ramp is always taller than those behind him irrespective of their true heights. You measure the boys’ heights from the ramp, not from an arbitrary horizontal line.

    With respect to the overall warming trend line, here is what the TRUE global mean temperature anomaly looks like.

    http://bit.ly/wBoiKo

    And it shows the previous decade is not the warmest decade of the data.

    What do you think? Is my point flawed?

    • Steven Mosher

      Yes, your point is flawed. You can choose any period you like and the trends don’t change. The 1961-1990 period is selected by CRU because it maximizes the number of stations that can be used with their method and minimizes the variance in the mean monthly figure that is then subtracted from all monthly figures. Picking different periods merely changes the offset of the line. However, if you pick a sparse period, one with fewer stations, you could introduce unwanted noise into the final answer.
      The other period that could make sense is 1954-1983, as that maximizes the number of stations in the NH; however, to maximize the number of stations in the SH you’d pick 1961-1990, which is precisely what CRU have done. A long while ago I studied this in some detail because, like you, I was skeptical of the period selected. I tested. I found my worries to be UNFOUNDED.
      I suggest you do likewise and stop asking others to do your homework.

      • It is just like saying the boy at the front walking up a ramp is always taller than those behind him irrespective of their true heights. You measure the boys’ heights from the ramp, not from an arbitrary horizontal line.

      • Steven Mosher

        Do some algebra; you do not know what you are talking about. You probably don’t even know how to calculate an anomaly. Here is a clue:
        because I use a least-squares approach when I calculate an anomaly, I can use the entire period as the baseline. Guess what?
        The answer doesn’t change. The only way it COULD change is if you pick a period where the STRUCTURE of the seasonal differences is different from all other periods. Not the trend, but the structure of the seasonality.

      • Steven

        Are you saying the anomaly will not be affected by whether you choose a horizontal or an inclined line as the reference base period?

  60. Tomas Milanovic

    Put differently, a process observed over a finite time may be well-modeled by a random walk.

    Of course you are right, this is trivial and is done all the time.
    Any physical process can be modelled as a random walk if it satisfies the requirements for a random walk over a FINITE time.
    Physics is full of models that theoretically diverge as time goes to infinity, but as nobody has any intention of predicting eternity, it is not a practical problem.

    It also works in the other direction – potentials in 1/r can’t theoretically be used for point-like sources because they diverge at 0. Yet we do it all the time and it works nicely because we don’t intend to put r at 0. Etc.

  61. Greg,

    In a reply to John Kennedy you say ‘Does it not seem odd that the bias of buckets, deck heights etc would produce a variation similar to half the climate signal over the same period?’

    I don’t understand the significance of it being similar to half. There will always be some numerical relationship between measurement biases and the “truth”. What’s special about this particular magnitude?

    Also, I’m not sure how to interpret the acceleration statistics you’ve presented. The HadSST3 data certainly doesn’t look like it’s accelerating at over 6K/c/c. I wonder if this result doesn’t account for the direction of the acceleration? In other words, HadSST3 exhibits greater acceleration under these terms simply because there is greater low frequency variability in that record(?)

    On a minor housekeeping note, I think the labels for Figure 8 and 10 are the wrong way round (or the images are).

    • Greg Goodman

      Hi Paul,

      First, yes, there is an inversion of those images. WP is not the most flexible tool and there was considerable confusion when Judith was trying to post my article. I only spotted this morning that these two got inverted. Since they are both there, anyone who is paying attention (like you) will probably suss it. Thanks for flagging it though.

      For the 50%, it was not the magnitude that I was remarking on. It was the fact that the multitude of adjustments for supposed biases just happened to generate a correction that was very close in form to the long-term variation in the signal.

      In principle, there would not be any noticeable similarity unless, perhaps, all the variation was due to bias and error, in which case it would all be removed, not half.

      That the accumulated biases, which are based on rather gross estimations, supposition and speculation owing to the lack of documentation of the changes, just happen to follow a similar time dependency as the uncorrected SST seemed to me an odd and notable result.

      When I realised this I started to examine what structural changes were being made to the data.

      6K/c/c: this was not intended to be a full model of climate, more a study of cyclic variation. I highlighted this feature as a notable difference in structure rather than a climate projection.

      However, if you look at dT/dt of the same plot you will note that it is initially trending negative and after 1950 ramps up. This would seem to be coherent with such an acceleration. There is nothing that suggests it starts off flat. The long cycles will mask the underlying acceleration in the time series. This is one of the ways in which looking at the derivatives can reveal things that are not obvious in a time series.
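The idea that differentiation can expose an acceleration masked by long cycles can be sketched with a toy series (the acceleration, amplitude and period below are illustrative numbers, not values fitted to SST):

```python
import numpy as np

# Illustrative numbers only: a 2e-4 K/yr^2 acceleration buried under a
# 0.2 K amplitude, 60-year cycle.
a = 2e-4
t = np.arange(150.0)                                 # years
temps = 0.5 * a * t**2 + 0.2 * np.sin(2 * np.pi * t / 60)

rate = np.gradient(temps, t)                         # numerical dT/dt
slope = np.polyfit(t, rate, 1)[0]                    # trend of the rate
# The linear trend of dT/dt recovers the acceleration despite the cycle
# dominating the raw time series.
assert np.isclose(slope, a, rtol=0.2)
```

In the raw series the quadratic term is swamped by the cycle; in the derivative the cycle becomes a zero-trend oscillation and the acceleration appears as a simple ramp.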

      Good questions though, thanks.
      Greg.

  62. Here is the comparison of the 25, 30 & 35-years trend period plot of the global mean temperature.

    http://tinyurl.com/8535ut2

  63. Julian Flood

    But why the WWII blip? It would be nice if climate science tried to explain things, not explain them away.

    The blip matches an FAO document’s graphs of windspeed changes — around 7 m/s in the NA, progressively less in SA, NP and SP.

    Just back from Madeira: there was a lovely smooth about three hundred miles long from abeam Portugal, and on the way back the stratocu in that area was riven by great clefts of clear sky. My initial delight was damped when I remembered that the Med surface current goes in and not out. So where did the pollution come from? Don’t know, unless sometimes the Med leaks surface water downwind. If only we had aerosol samples over features like that. I wonder what the satellite records show.

    JF
    Kriegsmarine Effect, mumble, mumble…

    • Julian

      If you look at other places in the thread you will see quite a lot about the war years.

      Basically there was a war on! This meant many readings were never taken as allied ships were not present in large chunks of the world or had more important things to do than take SSt’s.

      Basically it would be useful if we split the record into pre-1939 and post-1945, i.e. a historic and a modern record, leaving aside the war years, and stop pretending we can create a database that males any sort of sense for that period.

      tonyb

      • “that males any sense”
        Sexist! MCP! etc.
        ;)

      • Greg Goodman

        “more important things to do than take SSt’s.”
        Not at all. SST became a military secret during the war years; readings were of life-and-death importance. You just show your lack of knowledge with that comment.

        Your suggestion of splitting the record in two is based on what exactly? The war-time period is a special case and does require some special treatment, that does not mean we have to end up with two separate records. My simple adjustment shows that identifying the step in the data and making a simple adjustment can restore patterns that are coherent with the surrounding data. Closer examination of the record could enable this approach on a local scale.

        Accepting that the war disrupted many aspects of data collection, and using surrounding data to recover the probable bias, would be an alternative to trying to create some universal bucket field theory that can deal with long-term bias and war-time disruption in the same equation.

        I think attempting such a unified approach impedes the ability to address either issue.

  64. Greg, I was messing around and came up with something that might give you a grin. http://i122.photobucket.com/albums/o252/captdallas2/JustforgrinsPaleoandSatellitestotheSSTrescue.png

    I was attempting to find a reasonable way to splice instrumental to paleo in celebration of the Global Medieval Warm Period’s rebirth for a little post on my blog.

    http://redneckphysics.blogspot.com/2012/03/welcome-back-medieval-warm-period.html

  65. To John Kennedy,

    I am merely a literacy teacher. Forgive me. But are you telling me that the “literature” is not fraught with hypotheses?

    You wrote: “The analysis is based on hypotheses that come from examining the literature, the data and the metadata. As with any scientific hypotheses they ought to be open to criticism, but not, I should think, the criticism that they are hypotheses.”

    If you hadn’t read or written anything else for weeks – and then looked back at this from a longer perspective, what would your reaction be?

    • Hi Kate,

      I’m merely a scientist, so I’m not sure what you are asking. What was your reaction to it?

      I tend to get stuck in a scientific way of speaking and writing. I was using hypothesis in a technical sense, which is different from the way most folks use it. Googling scientific hypothesis turns up a whole bunch of descriptions of the term e.g.

      http://chemistry.about.com/od/chemistry101/a/lawtheory.htm

      The scientific literature on any scientific subject is chock full of hypotheses in the scientific sense. Fraught is the wrong word.

      I’m also using ‘criticism’ in a more technical sense. As well as testing an hypothesis by comparing observations to the observable consequences of the hypothesis, it’s also possible to ‘criticise’ the hypothesis by questioning its internal logic, its consistency with other better understood theories or its underlying assumptions.

      Best regards,
      John

      • As Kip Hansen wrote at judithcurry.com, long ago:
        I must be shown the following:
        1) The hypothesis, which must be falsifiable.
        2) What experiment has been done and that it has been carefully laid out well enough to falsify your null hypothesis (and thus support your original hypothesis).
        3) What you did exactly, all the nasty details, how you controlled for every possible confounding factor (or didn’t control for this one and that…and why not, and how that might affect your findings).
        4) Your conclusion and how it follows from your data (and not your beliefs, feelings, hunches, or desire to please your funding agency or university tenure board).
        Then, and only then, will I listen to your opinion about what it might mean.

        If you are a scientist, why don’t you become the first to take on this challenge?

  66. Note that all of Australia’s high temperature records were set in the 1940s, during the WWII drought:

    Question – how many of those American warships which returned the anomalous temperature readings during the war years were based in the western Pacific?

    “…As in the Federation drought, dry conditions were more or less endemic during the period 1937 through 1945 over eastern Australia….”

  67. Just a note on Greg’s update, which was dated 12 April 2012, but which I hadn’t seen until 8th October 2012.

    The first thing to note is that I do NOT agree with Greg’s two statements. He said (typos are his):

    “As a result we were able to agree on the main points raised in the article:”
    “1. That HadSST3 removed the majority of the variation from the majority of the record.
    “2. The these adjustments are based on hypothesis rather than being scientifically proven.”

    These statements describe Greg’s understanding of our discussion.

    The first statement is one I failed to contest at some point in the lengthy discussion; that’s not the same as agreeing with it. The range of the global average SST series is narrower in HadSST3 than in the version of ICOADS that Greg used. That’s obvious from a quick eyeballing of the data (and it’s shown clearly in Figure 4 of part 2 of the HadSST3 paper) but Greg’s statement is too vague and general and fails, crucially, to differentiate between real SST variability and variability due to measurement biases.

    The second statement is not one I would ever consider worth making. In a very strict sense nothing is “scientifically proven” so that part of the statement could be applied to anything. The first part refers to any systematic adjustment applied to any measurement ever made. In his article, Greg uses the words speculation, assumption and hypothesis as if they were interchangeable. They are not.

    As I’ve said (and as can be seen in the HadSST3 papers) there are good reasons for making the adjustments. The adjustments are based on documentary evidence, direct empirical comparisons of different measurement methods and wind tunnel tests amongst other things. Where there are identified uncertainties they were acknowledged and estimated. The adjustments are a set of hypotheses concerning the nature and magnitude of the biases in the data and, as with any scientific hypothesis, they are both based on evidence and liable to disproof by evidence. Greg never presented any evidence that the adjustments were incorrect and the contention that they were seems to have been based on his analysis of the time derivatives of the data and on the frequency analysis.

    Greg’s time derivative analysis combined various filters and time differencing steps. Referring to Figures 8, 9 and 10 in his article, Greg said “It seems unlikely that any error due to sampling methods and unrelated to climate would introduce a cyclic variation that is consistently found in the time derivative.” In fact, his choice of filters and differencing generates similar ‘cyclic variation’ when fed with simple AR(1) red-noise time series. Five examples are shown here:

    http://www.metoffice.gov.uk/hadobs/hadsst2/charts_files/fdiff_random.png
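The filter-response argument can also be checked analytically rather than by simulation: a first difference followed by an N-point running mean is a band-pass filter, so featureless AR(1) red noise emerges with a spectral peak, i.e. an apparent "cycle" the input never had. A sketch with assumed parameters (phi = 0.9, N = 12; not the exact filters used in Greg's article):

```python
import numpy as np

# Assumed parameters: AR(1) coefficient and running-mean length.
phi, N = 0.9, 12
w = np.linspace(1e-3, np.pi, 2000)                  # angular frequency

S_red = 1.0 / (1 - 2 * phi * np.cos(w) + phi**2)    # AR(1) power spectrum
H_diff = np.abs(1 - np.exp(-1j * w))                # first-difference response
H_mean = np.abs(np.sin(w * N / 2) / (N * np.sin(w / 2)))   # running-mean response
S_out = S_red * (H_diff * H_mean) ** 2              # spectrum after the pipeline

peak = w[np.argmax(S_out)]
# The output spectrum acquires an interior peak: the filter pipeline
# manufactures a preferred "cycle" out of broadband red noise...
assert 0.05 < peak < 0.5
# ...while the low frequencies, where the red noise had all its power,
# are suppressed by the differencing.
assert S_out[0] < 0.01 * S_out.max()
```

The differencing removes the low-frequency power and the running mean removes the high-frequency power; whatever survives in between looks quasi-periodic, which is why filtered red noise can mimic the "cyclic variation" seen in the derivative plots.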

    Furthermore, some of the differences seen between Greg’s global average ICOADS series and the global average HadSST3 series were due to data selection, quality control and the choice of whether to grid the data at 2-degree latitude/longitude resolution (as Greg’s ICOADS series was) or 5-degree resolution (as HadSST3 was) before calculating the global average. The differences between the red and blue lines in this diagram show the differences due to data selection, gridding and quality control.

    http://www.metoffice.gov.uk/hadobs/hadsst3/figures/diagno.png

    Notice that for much of the record the red and black lines, representing HadSST3 without and with adjustment respectively, follow each other more closely than either follows Greg’s version of ICOADS from JISAO (blue).
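For anyone computing a global average from either gridded product themselves, the essential step is weighting each cell by its area, which on a regular lat/lon grid scales with cos(latitude). A minimal sketch using a hypothetical uniform 5-degree grid (not the actual HadSST3 files):

```python
import numpy as np

# Hypothetical 5-degree grid with cell-centre coordinates.
res = 5.0
lats = np.arange(-90 + res / 2, 90, res)
lons = np.arange(res / 2, 360, res)

# Area weight of each cell scales with cos(latitude).
w = np.cos(np.radians(lats))[:, None] * np.ones(lons.size)

# A uniform 1 K anomaly field averages to exactly 1 K.
field = np.ones((lats.size, lons.size))
gmean = (field * w).sum() / w.sum()
assert np.isclose(gmean, 1.0)

# A field that is warm only poleward of 60 degrees contributes little:
# the polar caps cover only ~13% of the sphere's area.
field2 = np.where(np.abs(lats)[:, None] > 60, 1.0, 0.0) * np.ones(lons.size)
g2 = (field2 * w).sum() / w.sum()
assert g2 < 0.2
```

An unweighted mean over the same polar-cap field would give about a third of the cells equal weight to the tropics and badly overstate the polar contribution, which is one reason resolution and gridding choices alone can shift a global average.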

    I know that Greg thinks there are flaws in the studies that have evaluated the bias adjustments (no study is perfect) but that does not mean their conclusions are incorrect, nor does it constitute proof that the bias adjustments are wrong. In fact, the authors of the studies concluded that the adjustments were sound. Since the discussion, there has been a comparison of SST with other near-surface ocean temperature measurements which shows once again that the adjustments are not unreasonable.

    http://www.agu.org/pubs/crossref/2012/2012GL052975.shtml

    Finally, an error remains in Greg’s original article. He says

    “Kennedy et al 2011c [3c] goes into some detail about how the duration of the change was determined.”

    “If a linear switchover is assumed which started in 1954 and was 95% complete in 1969, the middle of the James and Fox study period, then the switchover would have been completed by 1970. Based on the literature reviewed here, the start of the general transition is likely to have occurred between 1954 and 1957 and the end between 1970 and 1980.”

    “However, this assumption seems at odds with figure 1 one from the same paper that shows a significant proportion of buckets readings in 1970. A proportion that rose from 1955-1970 and only declined from then to the end of the record. Figure 3 reproduces figure 1 from K2011c [3c]”

    “Neither does this hypothesised linear change-over from 1954 onward correspond to the bulk of the adjustment actually applied, as seen in figure 2b above, where the cooling adjustment clearly starts as early as 1920 and has already achieved 2/3 of it’s final extend before 1954.”

    Greg has misunderstood the paper. The linear switchover refers to a switch from one type of bucket to another and not from buckets to engine room measurements.

  68. John Kennedy

    I have seen your comment but I am not sure who else will. You know my opinion on buckets-of whatever type-so I have no need to revisit that aspect of the discussion. :)

    I was at the Met Office library on Tuesday collecting information for an article I am writing on the historic variations in Arctic ice. To that end I was also at the Scott Polar Research Institute archives in Cambridge last week.

    Are there any historic Arctic SST measurements or reports you can specifically refer me to, from 1940 and earlier? I have some measurements from Scoresby’s expedition in the 1820s but they tend to be fairly ad hoc.

    All the best
    Tonyb

  69. “Nice post. It is truly a very good article. I noticed all the important points. Thanks.”
