Posted by Jeff Id on May 19, 2010
I was lucky enough to get some time to spend at the ICCC today. Realizing I’m president, I simply left work and drove to Chicago, turns out nobody fired me. I had an amazing conversation with Lucia about her PhD work, which surprisingly enough I had some background in, I met Craig Loehle for the first time (he doesn’t look like I’d expected, far younger 😉 ), saw Anthony Watts briefly and I got to spend about an hour with Steve McIntyre discussing hockey stick math – how fun is that! Cool day all in all.
We don’t talk about hockey sticks enough here lately, but after my conversations over the last couple of days I think a brief discussion of the math of hockey sticks – aka paleoclimate reconstructions – is in order. I’m afraid I’m not planning anything with enough written math for some of you, but rather another attempt at a generic explanation of how hockey stick paleo reconstructions are made and how they go wrong. Perhaps those who already know this topic can help explain it to the rest, as simplifying the subject is important. Not everyone knows, or cares to know, how to perform a multivariate regression – not that it’s impossibly hard. Most of this article has to do with tree proxies, but it applies to the other varieties as well.
First, the methods of paleoclimate temperature reconstructions are all similar in one respect: they take items which are assumed (blindly) to carry a temperature signal. Blindly is a criticism, but it’s also completely real; nobody has tested whether a bristlecone pine responds to temperature in a linear fashion, or even whether it responds by growing faster at all. It just makes sense, but it’s completely unproven. Linearity would mean growing, say, 1.2 times as fast in a year 0.1 C warmer and 1.4 times as fast in a year 0.2 C warmer. Nobody has demonstrated this, but it is assumed.
Second, nobody knows how much faster a tree will grow per degree C. So if 100 trees of the same species have their ring widths measured, nobody knows whether X millimeters equals Y degrees C, and nobody knows whether each tree responds differently from the others. In paleoclimate, a different weight for each tree is the preferred solution.
Third, experts know that the standard growth rate of an unmolested (same temp, same humidity, same everything) tree is non-linear. Very ad hoc methods are used to flatten the growth curves; RCS flattens the general shape by fitting curves like exponential decays to ring widths, yet nobody has determined what the unmolested growth curve of a tree actually is. Lake sediments, mollusk shells, on and on – the other proxies are no better.
So we have unknown proxy curves with unknown calibrations having unknown linearity, the solution is to combine them with misunderstood statistics. I mean why not, none of the data is verified, NONE OF IT, so why not use whatever method might mash it together.
Let’s talk math.
Proxy reconstructions that I’ve been exposed to, a considerable number at this point, all consist of linearly weighted combinations of the data. I’m talking about this
temperature = Weight(1)*proxy(1) + Weight(2)*proxy(2) + … + Weight(n)*proxy(n)
There are complex versions of this weighting where the weights change on a data availability basis, but it doesn’t change the concept.
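To make that weighted sum concrete, here is a minimal sketch in Python. Every number and array shape below is made up purely for illustration; no real proxy data is involved:

```python
import numpy as np

# Hypothetical example: a reconstruction as a linear weighted sum of proxies.
rng = np.random.default_rng(0)
n_years, n_proxies = 100, 5
proxies = rng.normal(size=(n_years, n_proxies))   # rows = years, cols = proxies
weights = np.array([0.3, 0.1, 0.25, 0.2, 0.15])   # one weight per proxy

# temperature(t) = Weight(1)*proxy(1) + ... + Weight(n)*proxy(n), for every year t
temperature = proxies @ weights
print(temperature.shape)  # (100,)
```

The matrix product does the whole sum in one step: each year’s reconstructed value is just the dot product of that year’s proxy readings with the weight vector.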
What happens though when one of these complex methods creates a negative weight?
First, the regressions are all trying to match a positive temperature trend. If a proxy has a downslope, or an inconvenient anti-correlation to short term fluctuations in temperature, it can receive a negative weight from methods such as EIV. Mann himself has made this point in the past:
“The claim that ‘‘upside down’’ data were used is bizarre. Multivariate regression methods are insensitive to the sign of predictors.”
This reads very clearly to most here, but the predictors are the proxies, and multivariate regression is the method, which really doesn’t care if the sign is upside right or upside down. The point of the methods is to match the measured temperature curve using noisy data, so for every proxy which is used upside down, a different proxy must counter the effect. They’re all temperature after all, and they all should be warming over decadal time scales — but they don’t.
So when a temp curve is inverted with a negative weight, another temp curve must compensate.
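The sign-insensitivity is easy to demonstrate numerically. In this hypothetical sketch (synthetic data, nothing from an actual reconstruction), flipping one proxy upside down simply flips its regression weight, and the fitted curve is bit-for-bit unchanged:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
temp = np.linspace(0.0, 1.0, n)                       # calibration "temperature"
proxies = temp[:, None] + 0.2 * rng.normal(size=(n, 4))

# Ordinary least squares weights on the original proxies
w, *_ = np.linalg.lstsq(proxies, temp, rcond=None)

# Flip proxy 0 upside down; the regression just flips its weight in response
flipped = proxies.copy()
flipped[:, 0] *= -1
w_flip, *_ = np.linalg.lstsq(flipped, temp, rcond=None)

print(np.allclose(w_flip[0], -w[0]))                  # True
print(np.allclose(proxies @ w, flipped @ w_flip))     # True: identical fit
```

This is exactly why the method “doesn’t care”: the algebra absorbs the inversion, while the physical claim that the proxy responds positively to temperature is quietly discarded.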
Steve McIntyre made the point this weekend that in multivariate regression you want the predictors to be orthogonal; if they are not, your matrix is singular (or near singular) and your weights can shift to large and opposite values very quickly as they work to cancel the noise and match the temperature signal. In the case of proxy multivariate regressions, if all the proxies contain the same signal (or nearly the same), it’s only the noise which provides any orthogonality; the rest of the matrix is nearly singular. It’s a nice way to think of the problem faced by these regressions.
With near singular matrices, straight multivariate regression will create weights which are both extreme positive and extreme negative, which then combine to match the signal in the calibration range very well. However, temperature is temperature, and negative weights mean that we’re reading temperature upside down. A no-no in most circles.
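Here is a toy illustration of that blow-up, with entirely synthetic data: five “proxies” that are all the same trend plus a little independent noise, fed to ordinary least squares. The only thing distinguishing the columns is noise, so the weights wander far from the sensible equal weighting while the in-sample fit stays excellent:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
signal = np.linspace(0.0, 1.0, n)                     # the common trend
# Five near-singular proxies: same signal plus a little independent noise
proxies = signal[:, None] + 0.05 * rng.normal(size=(n, 5))

weights, *_ = np.linalg.lstsq(proxies, signal, rcond=None)
print(np.round(weights, 2))   # typically scattered far from the "fair" 0.2 each

# Yet the in-sample fit is nearly perfect
rmse = float(np.sqrt(np.mean((proxies @ weights - signal) ** 2)))
print(rmse)                   # tiny: the noise directions absorb the residual
```

The equal weighting (0.2 per proxy) is the physically sensible answer, but nothing in the least squares objective prefers it; any weight vector summing to roughly one fits almost as well, so the noise decides.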
Let’s take a moment to consider what the best possible weighting would be for near singular proxies – proxies with the same signal underlying the noise. If they were all scaled to a reasonably similar variance, and they all contain some temperature signal plus a bit of red noise (Mann assumes a very generous signal-to-noise ratio of 0.4 in his 2007 paper), then your best possible mathematical weighting would be equal weights – an average. The resulting average curve could then be scaled to temperature.
It’s hard to beat an average.
But when you begin regressing the same near singular matrix, the weights go all over the place, one side compensating for the other until values are pushed to extremes. These weights will match the temperature a reconstruction is regressing to better than a simple average – by definition – but as they do, the weights quickly become physically meaningless.
Why would you take two trees of equal variance and weight one tree 2 times and the other -1? Physically it defies the definition of a temperature proxy; it preferentially chooses the pretty shape of the 2-times tree rings and leaves us with a worse result in the all-important reconstruction period. Of course, the two series produce a far better match to actual temperature in the calibration period, but the weights are based entirely on the orthogonal information of the proxies — the noise — and any spatial information recovered from these reweighted proxy thermometers is horribly corrupted. If the historic signals were clean temperature signals, one proxy would show a historic warming of 2X and the other would show -1X. A clearly non-physical result, but Mann’s sophist answer is right: multivariate methods are insensitive to sign.
This is not what causes the historic variance loss so often discussed here. The loss or de-amplification of signal in the historic portion of reconstructions – the thing that guarantees unprecedentedness – is created through preferential noise selection. We’ll talk about that at a later time though; this post is about achieving a physically meaningful result from a regression.
Truncated least squares methods are one way that near singular data can be constrained to a more reasonable result. These methods limit the amount of information used in the regression to k PCs, giving enough degrees of freedom to achieve some variable weighting without allowing so much freedom that the full multivariate result takes over. SteveM has repeatedly written that as more PCs are added we get further from the true result (equal or near equal positive weights), allowing more of the +/- weighting to creep into the algorithm and resulting in overfitting of the remaining data. In the upcoming Antarctic paper (still pending review), a considerable amount of effort was put into finding the correct truncation parameters to prevent overfitting while ensuring that relevant temperature information was not extended across the continent.
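A sketch of the idea on synthetic data, with truncation implemented here as truncated-SVD regression (one common variant, not necessarily the exact method of any particular paper): keep only the first k principal components of the proxy matrix before inverting, and the weights stay near-equal and positive instead of blowing up:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 60, 10, 1                        # k = truncation parameter (retained PCs)
signal = np.linspace(0.0, 1.0, n)
X = signal[:, None] + 0.1 * rng.normal(size=(n, p))   # ten near-singular proxies

# Truncated-SVD regression: invert only the first k singular directions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
w_tsvd = Vt[:k].T @ ((U[:, :k].T @ signal) / s[:k])

# Full least squares for comparison
w_ols, *_ = np.linalg.lstsq(X, signal, rcond=None)

print(np.round(w_tsvd, 3))   # near-equal, all-positive weights
print(np.round(w_ols, 3))    # scattered weights, typically of both signs
```

With k = 1 the regression sees only the shared signal direction, which is exactly the near-equal positive weighting; each additional PC hands the algorithm another noise direction to overfit with, which is the truncation trade-off described above.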
The methods are complex enough that any lack of care results in non-physical results such as negative weights or, equally bad, extreme weights. Current methods such as CPS and various regressions are too often incorrectly applied in paleo-science with no concern for the physical meaning of the result. As an engineer, I have a hard time understanding why there isn’t more concern about the physical meaning of the weights, as well as the unverified data. Consider that since the proxies are unverified thermometers, extreme, negative, or near zero weights go against the original assumption that these proxies respond ‘linearly’ to temperature. The scientists ignore their own assumption.
I’m a little tired of writing now, but will try to continue this in another post. This was part of the discussion I had at the ICCC conference with SteveM – I bet you wish you could have been there 😀