
Does the Aliasing Beast Feed the Uncertainty Monster?

by Richard Saumarez

Many continuous signals are sampled so that they can be manipulated digitally.  We assume that the train of samples in the time domain gives a true picture of what the underlying signal is doing, but can we be sure that this is true and the signal isn’t doing something wildly different between samples?  Can we reconstruct the signal between samples and, more important, can we tell if the signal has been incorrectly sampled and is not a true representation of the signal?

This is one of the most basic ideas in signal processing and is worth discussing because it is regularly abused. A number of us have been caught out by this problem, when it occurs rather subtly, so this is a cautionary tale. I will illustrate it using temperature records and suggest that it is not a trivial problem.

A common method of expressing climate temperature data is to take monthly averages.  This seems a perfectly reasonable thing to do; after all, an average is simply an average.  If we wanted to compare the mean July temperature in Anchorage with that in Las Vegas using conventional statistics, it presents no problems.  But when the averages are treated as a time series, or as a representation of the underlying continuous signal, problems can occur.

The number of samples required to describe a varying signal is determined by the Nyquist sampling theorem, which states that the sampling frequency must be at least twice the highest frequency in the signal (in its Fourier series representation).  If this is done, the signal is correctly sampled and, in principle, the intermediate signal between samples can be reconstructed perfectly. (In practice there are some limitations and trade-offs that stem from estimating the Fourier series of an arbitrary signal.)

However, if the signal is under-sampled, it is irretrievably corrupted and is said to be aliased.  This is well understood in many fields and is absolutely basic (Signals 101).  If one has, say, an audio signal that we wish to record digitally, we would wish to resolve ~20 kHz, the highest audible frequency, and so we would record at a sampling frequency of at least 40 kHz.  There may be higher frequencies around during the recording, although we can’t hear them, or artefacts such as clicks, so the analogue signal is either filtered with an anti-aliasing low-pass filter to remove high-frequency components before sampling, or is sampled at a very high frequency, filtered digitally to simulate the anti-aliasing filter and finally re-sampled, or “decimated”, at 40 kHz. Although this discussion is based on signals in time, the concept applies to any sampled system: images, computed tomographic imaging, temperature over the surface of the Earth and so on.
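For readers who prefer to see this as code, here is a minimal sketch of the filter-then-decimate route just described, using Python and scipy.signal; the sample rates and tone frequencies are arbitrary choices for illustration, not a recipe for any particular recorder.

```python
import numpy as np
from scipy import signal

fs_high = 192_000            # assumed high-rate capture frequency, Hz
fs_target = 48_000           # assumed target rate, Hz (>= 2 x 20 kHz audio band)

t = np.arange(0, 1.0, 1.0 / fs_high)
# an audible 1 kHz tone plus an inaudible 60 kHz component that would alias if ignored
x = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 60_000 * t)

# decimate() applies an anti-aliasing low-pass filter before downsampling
x_good = signal.decimate(x, fs_high // fs_target, ftype="fir", zero_phase=True)

# naive downsampling (no filter) folds the 60 kHz tone down to a spurious 12 kHz
x_bad = x[:: fs_high // fs_target]
```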

Difficulties arise when either one can’t do this because the measurement system doesn’t allow it or the problem of aliasing isn’t recognised. In this case one can be led, unsuspecting, up a long and tortuous garden path.

Sampling a signal is equivalent to multiplying a continuous function by an equally spaced train of impulses (figure 1).

Since the Nyquist theorem is stated in terms of frequency, we have to consider what sampling does to the spectrum of the signal; I am taking a short cut by going straight to the discussion of a sampled signal’s spectrum, rather than through the long-winded formal theory.

 Figure 1 Sampling a signal

The spectrum of a signal is represented in both positive and negative frequency. Computation of the Fourier series coefficients is a correlation of the signal with a sine wave and a cosine wave of a particular frequency, and

cos(wt) = ½ [exp(jwt) + exp(−jwt)]   and   sin(wt) = ½ [exp(jwt) − exp(−jwt)]/j

Therefore the spectrum of a cosine wave with an amplitude of 1 and frequency wf is (½, 0) at a frequency of wf and (½, 0) at a frequency of −wf. Similarly, a sine wave of the same frequency has a spectrum of (0, −½) at wf and (0, ½) at −wf, i.e. the negative-frequency component of the spectrum is the complex conjugate of the positive-frequency component.
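This conjugate symmetry is easy to verify numerically; a small check in Python (the record length and frequency are arbitrary choices):

```python
import numpy as np

N = 64                        # number of samples in the record
n = np.arange(N)
k = 5                         # cycles per record, an arbitrary choice
x = np.cos(2 * np.pi * k * n / N)

X = np.fft.fft(x) / N         # normalised DFT coefficients
print(X[k], X[N - k])         # both close to 0.5 + 0j, i.e. (1/2, 0)
print(np.allclose(X[N - k], np.conj(X[k])))   # negative-frequency bin is the conjugate
```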

The spectrum of the sampling process itself is an infinite train of impulses in the frequency domain, spaced at intervals equal to the sampling frequency (figure 1).

Since sampling is multiplication of a continuous signal by a train of impulses, we can obtain the sampled signal spectrum by convolving their two spectra.  The spectra of a correctly sampled signal and an aliased signal are shown below in figure 2:

The highest frequency that can be determined is half the sampling frequency, known as the Nyquist frequency. In the case of a monthly temperature series, the highest frequency that can be resolved is one cycle per two months, i.e. 6 cycles per year.

Although this is the theoretical minimum, in practice one generally samples at a higher rate.  In most correctly sampled signals there will therefore be a gap in the spectrum around the Nyquist frequency, as in the upper drawing of figure 2, and the spectrum is resolvable because there is no overlap between its representation at w=0 and its reflection about the sampling frequency.  In an aliased signal, high-frequency components of the signal overlap and are summed (as complex numbers) with low-frequency components, so the spectrum becomes irresolvable and the signal is corrupted.  This has two rather important implications:

a)     One cannot interpolate between samples to get the original signal.

b)    A high-frequency aliased component in the signal will appear as a lower-frequency component in the sampled signal.

Figure 2 Spectrum of a correctly sampled signal (upper) and an aliased signal (lower).
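Implication (b) is easy to demonstrate: a component above the Nyquist frequency produces exactly the same samples as a lower-frequency component. A toy example with monthly sampling (the 8 cycles/year tone is purely illustrative):

```python
import numpy as np

fs = 12.0                      # monthly sampling: 12 samples per year
f_true = 8.0                   # hypothetical component at 8 cycles/year (above the Nyquist of 6)

t = np.arange(0, 10, 1 / fs)   # ten years of monthly sample times
x = np.cos(2 * np.pi * f_true * t)

f_alias = abs(f_true - fs)     # folds to 12 - 8 = 4 cycles/year
x_alias = np.cos(2 * np.pi * f_alias * t)
print(np.allclose(x, x_alias)) # True: the two sets of samples are indistinguishable
```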

Figure 3. Mean amplitude spectrum of 10 yearly records from Bergen, Norway. The 1-year component is suppressed, as it is huge compared to the other components

Are temperature records aliased?

Out of curiosity I looked at the HADCRUT series.  I extracted intact 10-year records from the series that contained no missing data and calculated the amplitude spectrum (after trend removal and windowing) for individual stations (figure 3) and the mean of all valid records, shown in figure 4.
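The sort of processing I mean is sketched below; the detrending, Hann window and amplitude normalisation shown here are illustrative choices and may differ in detail from those used for figures 3 and 4.

```python
import numpy as np
from scipy.signal import detrend

def amplitude_spectrum(monthly, fs=12.0):
    """Amplitude spectrum of a monthly series (fs in samples per year)."""
    x = detrend(monthly)                       # remove the linear trend
    w = np.hanning(len(x))                     # taper to reduce leakage
    X = np.fft.rfft(x * w)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)  # frequency axis in cycles per year
    amp = 2 * np.abs(X) / w.sum()              # approximate amplitude scaling
    return freqs, amp
```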

Figure 4 Ensemble amplitude spectrum of 5585 10-year monthly temperature records.

These certainly look aliased at first sight, but without access to the daily records it is impossible to be sure that aliasing is occurring; the temperature might fortuitously be sampled at exactly the correct frequency.

This raises two questions:

a)     Can one explain how the temperature record has become aliased?

b)    Does it really matter?

[Or, in translation: is this simply a load of pretentious, flatulent, obfuscating, pseudo-academic navel-gazing? I will show that, yes, aliasing is likely to be present and it may matter.]

Why are temperature records aliased?

Temperature records are constructed by taking daily, or more frequent, observations and taking their average over each month.  In signal processing terms this is a grotesque operation.

Forming the average of a train of samples is a filter.  One has convolved the signal with an impulse response that consists of 30 equal weight impulses and this filter has an easily calculable frequency response, shown in figure 5:

Note that the first zero is at 1/month.  If this filter were run over every daily observation, one would get a low-pass filtered version of the daily signal.  Clearly, this does not attenuate all the high-frequency components in the daily signal.
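The response in figure 5 follows directly from the 30 equal-weight coefficients; a short calculation (the cycles-per-year axis and the plotting details are incidental choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz

h = np.ones(30) / 30.0                    # 30-day equal-weight moving average
w, H = freqz(h, worN=4096, fs=365.25)     # frequency axis in cycles per year

# first zero near 365.25/30 ≈ 12 cycles/year (1 cycle/month); the sidelobes
# beyond it are only attenuated, not removed
plt.plot(w, np.abs(H))
plt.xlabel("frequency (cycles per year)")
plt.ylabel("|H(f)|")
plt.show()
```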

The filtered daily signal is then sampled at monthly intervals, so the Nyquist frequency is 6 cycles per year (a period of two months).  If components of the signal with frequencies higher than the Nyquist frequency exist and are not attenuated by the averaging filter, these will become aliased components.

Figure 5 Frequency response of 30-day average.

To investigate this I have modelled a daily temperature record with the following components:

1)    A basic sinusoid, -cos(2 pi t), where t is in years starting on January 1st, to model the yearly cycle with an amplitude of 30°C.

2)    This modified by a random amplitude modulation of 2.0°C to simulate variability of peak summer and minimum winter temperatures.

3)    A 15-day random modulation of phase, so that spring and autumn can come a bit early or late.

4)    A heat wave during summer that can occur randomly from the beginning of June until the end of August and last a random length of between 5 and 15 days.  Its amplitude is random, between 0.5 and 5°C.  A similar “cold snap” is added during December and January.

5)    A random, normally distributed measurement error with a standard deviation of 0.25°C.

6)    Rounding of the temperature reading to the nearest 0.1°C.

One would not claim this is an exact model of a temperature station, but it contains features that are slowly and rapidly varying that would give the signal some of its properties.  Any rapidly moving components, for example short temperature excursions, will generate high frequencies, while the modulations will generate harmonics of the yearly cycle.
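One possible realisation of these ingredients in Python is sketched below; the exact forms of the modulations (Gaussian perturbations applied year by year, one heat wave and one cold snap per year) are illustrative choices rather than a definitive specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_daily(years=10, days_per_year=365):
    """Toy daily temperature record with the ingredients listed above."""
    n = years * days_per_year
    t = np.arange(n) / days_per_year                     # time in years from 1 January

    # 1) & 2) yearly cycle of ~30 degC amplitude, randomly modulated year by year
    amp = 30.0 + rng.normal(0.0, 2.0, size=years).repeat(days_per_year)
    # 3) phase modulation equivalent to ~15 days, so seasons come early or late
    phase = rng.normal(0.0, 15.0 / days_per_year, size=years).repeat(days_per_year)
    temp = -amp * np.cos(2 * np.pi * (t + phase))

    # 4) one heat wave per summer and one cold snap per winter
    for y in range(years):
        start = y * days_per_year + rng.integers(151, 243)      # June to August
        temp[start:start + rng.integers(5, 16)] += rng.uniform(0.5, 5.0)
        start = y * days_per_year + rng.integers(334, 396)      # December to January
        temp[start:start + rng.integers(5, 16)] -= rng.uniform(0.5, 5.0)

    # 5) measurement error and 6) rounding to the nearest 0.1 degC
    temp += rng.normal(0.0, 0.25, size=n)
    return np.round(temp, 1)
```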

The spectrum of a 10-year record is shown in figure 6, with the frequency response of the averaging filter.

Figure 6 Spectrum of a 10-year record of simulated temperature with the amplitude response of the averaging process superimposed upon it.

Figure 7 The spectrum after applying a 30-day averaging filter.  Note: the spectrum of this signal sampled at 1/month is obtained by reflecting this spectrum around 1/month (red line) and adding it (as a complex number) to the original.

Once this spectrum is filtered, as shown in figure 7, the spectrum of the monthly signal is obtained by convolving it with the spectrum of the monthly sampling process, which is a train of impulses spaced along the frequency axis at intervals of 1/month (the sampling frequency).

This results in the severely aliased spectrum shown in figure 8, which suggests that the HADCRUT data is aliased.
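The folding can also be checked numerically: the DFT of the monthly-sampled series is the sum of shifted copies of the filtered daily spectrum. A sketch, assuming 360-day years of twelve 30-day months so that the frequency bins line up exactly:

```python
import numpy as np

N = 3600                                      # ten 360-day years (an assumed simplification)
x = np.random.default_rng(2).normal(size=N)   # stands in for the filtered daily signal
X = np.fft.fft(x)

y = x[::30]                                   # sample the daily record once per "month"
Y = np.fft.fft(y)

# the sampled spectrum is the average of 30 copies of X shifted by multiples of the
# sampling frequency, which is exactly the folding described above
Y_folded = X.reshape(30, 120).sum(axis=0) / 30
print(np.allclose(Y, Y_folded))               # True
```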

Figure 8 The calculated spectrum after sampling at 1-month intervals.  Note that this is aliased. The simple yearly cycle has been subtracted; the components shown around 1 year⁻¹ are due to modulation.

Does it matter?

One important feature of aliasing is that it creates spurious low frequency signals that can become trends.  Using 150-year model records, this can be examined by creating yearly anomaly signals.

Figure 9 Spurious trends in the error between the true integrated yearly signal and the value obtained by taking monthly and then yearly averages.

The simulated daily signals (I have constructed them carefully to ensure that they aren’t aliased) are reduced to monthly averages and then used to form yearly errors between the processed signal and the integral of the daily signal, which is trend free.  A particularly bad example of this is shown in Figure 9.
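One way to form these yearly errors, reusing the simulate_daily sketch above, is shown below; treating a year as twelve 30-day months and ignoring the last few days is a simplification made only to keep the example short.

```python
import numpy as np

def yearly_error(daily, days_per_year=365, days_per_month=30):
    """Yearly mean via monthly averages minus the direct yearly mean of the daily record."""
    years = len(daily) // days_per_year
    err = np.empty(years)
    for y in range(years):
        yr = daily[y * days_per_year:(y + 1) * days_per_year]
        months = yr[:12 * days_per_month].reshape(12, days_per_month).mean(axis=1)
        err[y] = months.mean() - yr.mean()     # spurious anomaly introduced by averaging
    return err

err = yearly_error(simulate_daily(years=150))  # a 150-year record, as in the post
```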

Note that the magnitudes of these trends are significant in terms of the variability of temperature records and that they are artefacts created by processing records where there are no trends.

Repeating this process for 100000 records, the distribution of the magnitude and duration of these spurious trends can easily be estimated and is shown as a 2-D histogram in figure 10.  The green area is the empirical 95% limit.  The marginal distributions, shown in red, are not on the same vertical (probability) scale but merely indicate the shape of the distribution.

Figure 10 Histogram of trend magnitude and duration.

Therefore one can conclude that aliasing in temperature records may be important.

Given daily records, it is preferable to integrate them in order to get average temperatures, paying careful attention to the effects of the finite precision of a thermometer, which is probably a maximum of 1:150 over most temperature ranges, but is more typically 1:100.  To get a monthly time series, one should filter the daily record with a carefully designed filter, which will inevitably have a long impulse response, to obtain a smoothed daily record.  This can then be sampled at monthly intervals.
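A sketch of this filter-then-sample approach, using a long FIR low-pass filter designed with scipy.signal.firwin; the cutoff and filter length below are illustrative assumptions, not a recommendation of specific values.

```python
import numpy as np
from scipy import signal

def monthly_series(daily, fs=365.25, f_cut=5.0, numtaps=731):
    """Low-pass filter a daily record, then sample it at (roughly) monthly intervals.

    fs       daily sampling rate in samples per year
    f_cut    cutoff in cycles per year, kept below the monthly Nyquist of 6
    numtaps  filter length of about two years; the long impulse response is
             the price of a sharp cutoff, as noted above
    """
    h = signal.firwin(numtaps, f_cut, fs=fs)       # linear-phase FIR low-pass
    smoothed = signal.filtfilt(h, [1.0], daily)    # zero-phase filtering (the record must be
                                                   # several times longer than the filter)
    step = int(round(fs / 12.0))                   # about 30 days
    return smoothed[::step]
```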

There is a further problem in using data that may be aliased as inputs to models. The effects will clearly depend on the model, but as an example, I made a simple linear model of a system with three negative feedbacks with different feedback gains and time constants.  This is driven with low-pass filtered broadband noise (not aliased), and also with the same input decimated to create an aliased version.  The results are shown below.  The true output is shown in black; green is the result for an input sampled 25% below the Nyquist frequency, and red for an input sampled 50% below it.  If you wanted to extract the parameters of the model from the aliased input, they would of course be wrong.
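A sketch of this kind of experiment is given below: a linear system built from three first-order negative-feedback loops, driven first by a band-limited input and then by an undersampled copy of it. The gains, time constants, rates and decimation factor are all arbitrary choices, and the sample-and-hold reconstruction of the aliased input is a simplification.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

fs = 100.0                                   # samples per unit time (arbitrary)
t = np.arange(0, 50, 1 / fs)

# band-limited broadband input: white noise through a low-pass filter (not aliased)
b, a = signal.butter(4, 5.0, fs=fs)
u = signal.filtfilt(b, a, rng.normal(size=t.size))

def response(u, gains=(1.0, 0.5, 0.2), taus=(0.5, 2.0, 8.0)):
    """Sum of three first-order negative-feedback loops: dy/dt = -y/tau + g*u."""
    y = np.zeros_like(u)
    for g, tau in zip(gains, taus):
        yi = np.zeros_like(u)
        for n in range(1, len(u)):           # simple forward-Euler integration
            yi[n] = yi[n - 1] + (1 / fs) * (-yi[n - 1] / tau + g * u[n - 1])
        y += yi
    return y

y_true = response(u)

# an aliased input: keep every 12th sample with no anti-aliasing filter,
# then hold each value to rebuild a signal at the original rate
u_alias = np.repeat(u[::12], 12)[: u.size]
y_alias = response(u_alias)                  # parameters fitted to this would be wrong
```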

Predicting what would happen in more complex situations, for example using principal component analysis when some components were aliased and some were not, is difficult (even a nightmare).  However, any model that is constructed using sampled data should be viewed critically.

Figure 11 Output of a simple feedback model driven with a correctly sampled signal (black) and increasingly aliased versions as inputs (green and red).

Aliasing is a problem that rears its ugly head in many different fields.  In terms of pure analogue time-domain signal conversion, the procedure to prevent it is well-known and straightforward – anti-aliasing filters.  Problems occur when you can’t simply filter out high frequency components.  For example, this was a problem in early generation CT scanners where abrupt transitions in bone and soft tissue radio density caused aliasing because they could not be sampled adequately by the x-ray beams used to form the projections.

The key to dealing with aliasing is to recognise it; given any time series, one’s first question should be “Is it aliased?”

JC comment:  My concerns regarding aliasing relate particularly to the surface temperature data, especially how missing data is filled in for the oceans.  Further, I have found that running mean/moving average approaches can introduce aliases; I have been using a Hamming filter when I need one for a graphical display.  This whole issue of aliasing in the data sets seems to me to be an under-appreciated issue.
