Happy Feet – Filtermatics
Posted by Jeff Id on March 30, 2013
Something interesting happened at WUWT today. This isn’t the typical fare of late; it requires a bit of math skill. A post by Willis Eschenbach brought up some old memories of the days when skeptic blogs like this one were math centric. Fortunately, the math Willis discusses this time is relatively lightweight stuff, and it happens to involve the fortuitous filtering activities of Mannian filter-matics.
I highlighted an email on the topic a few weeks ago here, which contains a quote that I think belongs in Willis’s article. Michael Mann has long been interested in filtering methods that promote the “Cause,” and I have to say that Willis’s example puts a spotlight on how awkward the team has been at promoting fortuitous filters.
5 PM 10/14/2003 -0400, Michael E. Mann wrote:
To those I thought might be interested, I’ve provided an example for discussion of
smoothing conventions. It’s based on a simple matlab script which I’ve written (and
attached) that uses any one of 3 possible boundary constraints [minimum norm, minimum
slope, and minimum roughness] on the ‘late’ end of a time series (it uses the default
‘minimum norm’ constraint on the ‘early’ end of the series). Warning: you need some
matlab toolboxes for this to run…
The routine uses a simple butterworth lowpass filter, and applies the 3 lowest order
constraints in the following way:
1) minimum norm: sets mean equal to zero beyond the available data (often the default
constraint in smoothing routines)
2) minimum slope: reflects the data in x (but not y) after the last available data
point. This tends to impose a local minimum or maximum at the edge of the data.
3) minimum roughness: reflects the data in both x and y (the latter w.r.t. the y
value of the last available data point) after the last available data point. This tends
to impose a point of inflection at the edge of the data—this is most likely to
preserve a trend late in the series and is mathematically similar, though not identical,
to the more ad hoc approach of padding the series with a continuation of the trend over
the past 1/2 filter width.
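The three boundary conventions Mann describes can be sketched in Python. This is a minimal illustration, not his matlab script: scipy’s Butterworth filter stands in for his routine, and the helper names (`pad_series`, `smooth_series`) and parameter choices (filter order 3, 40-sample pad) are my own assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pad_series(y, npad, mode):
    """Extend y past its last point by npad samples, per boundary constraint.

    minimum_norm      -> pad with the series mean beyond the data
    minimum_slope     -> reflect in x only (forces a local extremum at the edge)
    minimum_roughness -> reflect in x and y about the last point
                         (forces an inflection; tends to preserve a late trend)
    """
    if mode == "minimum_norm":
        tail = np.full(npad, y.mean())
    elif mode == "minimum_slope":
        tail = y[-2:-npad - 2:-1]                 # y[n-2], y[n-3], ...
    elif mode == "minimum_roughness":
        tail = 2 * y[-1] - y[-2:-npad - 2:-1]     # point-reflect about (x_n, y_n)
    else:
        raise ValueError(f"unknown mode {mode!r}")
    return np.concatenate([y, tail])

def smooth_series(y, cutoff_years=40, npad=40, mode="minimum_roughness"):
    """40-year lowpass smooth of annual data under a chosen boundary constraint."""
    # Normalized cutoff: (1/cutoff_years cycles/yr) / (0.5 cycles/yr Nyquist)
    b, a = butter(3, 2.0 / cutoff_years, btype="low")
    ypad = pad_series(np.asarray(y, dtype=float), npad, mode)
    return filtfilt(b, a, ypad)[:len(y)]          # drop the padded tail
```

Running all three modes on the same series and plotting the last few decades makes the point of Willis’s post visible: the conventions agree in the interior and diverge only near the boundary, where minimum roughness carries the end-of-series trend forward.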
The routine returns the mean square error of the smooth with respect to the raw data. It
is reasonable to argue that the minimum mse solution is the preferable one. In the
particular example I have chosen (attached), a 40 year lowpass filtering of the CRU NH
annual mean series 1856-2003, the preference is indicated for the “minimum roughness”
solution as indicated in the plot (though the minimum slope solution is a close 2nd)…
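The selection rule in the email (prefer the smooth with minimum mean square error against the raw data) is a one-liner. A hedged sketch, where `smooths` is any mapping from constraint name to a smoothed series of the same length as the raw data:

```python
import numpy as np

def best_constraint(raw, smooths):
    """Return (name of minimum-MSE smooth, dict of all MSE values)."""
    raw = np.asarray(raw, dtype=float)
    mse = {name: float(np.mean((np.asarray(s) - raw) ** 2))
           for name, s in smooths.items()}
    return min(mse, key=mse.get), mse
```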
By the way, you may notice that the smooth is affected beyond a single filter width of
the boundary. That’s because of spectral leakage, which is unavoidable (though minimized
by e.g. multiple-taper methods).
I’m hoping this provides some food for thought/discussion, esp. for purposes of IPCC…
It never seems to end; Willis Eschenbach started noticing the “happy” filtering nonsense some time ago.