Hockey Stick Explanation
Posted by Jeff Id on July 6, 2010
There is quite a bit of confusion about the nature of hockey stick temperature reconstructions. Currently, many non-paleo climate scientists seem to want to avoid the discussion altogether, yet these studies are still freely passed through review, which seems to me a very biased state of affairs. I reported here on an open review of a paper written by Ammann on a different method for scaling proxies, intended to correct for variance loss. As I read it, the method gets closer to a proper solution but does not fix the underlying problems. It does suggest that some climate scientists have fully recognized the problems with Mannian-style reconstructions and are interested in improving the results.
This has probably come about after the NAS panel's report on Mann's hockey stick, but whatever the reason, it is good news.
In blogland, people tend to see the hockey stick as a temperature graph that Steve McIntyre debunked. A commonly missed point is that the first hockey stick was the result of a mathematical error that caused the preferential selection of a certain group of high-variance proxies. Since that time, that particular error has been corrected, but current hockey sticks are created with different proxies and methods. All of the global temperature methods I have read, and there have been many, try to linearly re-weight multiple proxies to provide the best match to measured temperature. Since the proxies are noisy, very noisy, this reweighting process preferentially selects noise which happens to create better agreement with measured temperatures and down-weights the noise which doesn't agree. The result is that the signal in the measured temperature region becomes a good match (because of the noise) while the historic noise is unsorted and randomly combined. The near-guaranteed result, when the measured temperature you are matching is an upslope, is a flat pre-measurement handle and an unprecedented blade. Again, this is due to the noise and has nothing to do with the signal. Nobody even knows if there is a temperature signal in trees.
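To make the mechanism concrete, here is a minimal sketch in Python with made-up numbers, using simple correlation screening as a stand-in for the fancier re-weighting schemes. A thousand series of pure white noise are "calibrated" against an upslope; the ones that happen to match are kept and sign-flipped, and their average comes out as a textbook hockey stick, a blade that tracks temperature in the calibration window and a quieter, flat handle before it, even though there is no temperature signal anywhere in the input.

```python
import numpy as np

rng = np.random.default_rng(42)

n_years, cal_len, n_proxies = 1000, 100, 1000
# Pure white-noise "proxies": by construction they contain no temperature signal.
proxies = rng.normal(0.0, 1.0, (n_years, n_proxies))

# "Measured temperature": a standardized upslope over the final cal_len years.
temp = np.linspace(0.0, 1.0, cal_len)
temp = (temp - temp.mean()) / temp.std()

# Correlation of each proxy with temperature over the calibration window only.
cal = proxies[-cal_len:]
corr = (cal * temp[:, None]).mean(axis=0)  # proxies are ~unit variance, temp exactly so

# Keep the proxies that happen to correlate, flip the negative ones, average.
keep = np.abs(corr) > 0.2
recon = (proxies[:, keep] * np.sign(corr[keep])).mean(axis=1)

cal_corr = np.corrcoef(recon[-cal_len:], temp)[0, 1]
print("proxies kept:", keep.sum())
print("correlation with temperature in calibration window:", round(cal_corr, 2))
print("historic std:", round(recon[:-cal_len].std(), 2),
      "calibration std:", round(recon[-cal_len:].std(), 2))
```

The handle's standard deviation comes out roughly half the blade's, purely from averaging unsorted noise, which is exactly the flat-handle-plus-blade shape described above.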
Now, in the link above, ordinary least squares (OLS) is compared to the new method, which regresses one proxy at a time, using OLS to estimate residuals. It doesn't matter if you don't get that part, because it's just another way to calculate what to multiply each series by before adding them together.
Items which match better to temperature still get more heavily weighted than those which don’t.
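That weighting behavior is easy to see with plain OLS. In this sketch (Python again, with hypothetical numbers), temperature is regressed on twenty pure-noise proxies at once, and the fitted weights line up with each proxy's entirely spurious calibration correlation: the better a noise series happens to match, the more heavily it counts.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 100, 20
temp = np.linspace(0.0, 1.0, n)
temp = (temp - temp.mean()) / temp.std()

# Twenty pure-noise proxies: none carries any real signal.
proxies = rng.normal(0.0, 1.0, (n, p))

# OLS weights from regressing temperature on all proxies at once.
w, *_ = np.linalg.lstsq(proxies, temp, rcond=None)

# The weights track each proxy's (entirely spurious) calibration correlation.
corr = (proxies * temp[:, None]).mean(axis=0)
print("corr(OLS weight, calibration correlation):",
      round(float(np.corrcoef(w, corr)[0, 1]), 2))
```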
Climate scientists like to call it variance loss of the low-frequency signal; I prefer to call it variance amplification of the noise. The OLS method shown is, like many methods, completely insensitive to the sign of the proxies. In other words, a downslope proxy with an inverted temperature profile will be flipped upside down and weighted heavily. Of course the physical meaning of reading a thermometer upside down (because you like the fit better) is nonsense. I found the discussion of these effects by the climate scientists posting replies to Ammann's paper interesting, in that they acknowledge and understand that Mann's latest reconstructions will likely exhibit these characteristics, but many seem to have missed the reasons for this variance amplification problem. They discuss testing various methods against different types of noise and this sort of thing, an excellent idea, but they really seem to skirt around the root cause of the AUTOMATIC AND GUARANTEED variance differential between the calibration range and the historic range. See the hockey stick posts linked above for more.
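The sign insensitivity takes only a few lines to demonstrate. In this sketch (hypothetical numbers once more), a proxy that trends down while temperature trends up gets a large negative OLS weight: the method simply reads the thermometer upside down because the flipped version fits better.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100
temp = np.linspace(0.0, 1.0, n)

# A hypothetical proxy trending DOWN while temperature trends up.
proxy = -temp + rng.normal(0.0, 0.1, n)

# Fit temp ~ a*proxy + b by OLS: the proxy receives a negative weight,
# i.e. the series is flipped upside down because the flip fits better.
X = np.column_stack([proxy, np.ones(n)])
w, *_ = np.linalg.lstsq(X, temp, rcond=None)
print("fitted weight on the proxy:", round(w[0], 2))  # strongly negative
```

Nothing in the fit knows or cares whether a negative weight makes any physical sense for that proxy.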
If you have interest in these things, the link above and the replies in the interactive discussion are quite interesting and informative.
Here is a comment which I noted in my previous post, made by one of the reviewers in the interactive discussion.
However, it is well established in the statistical literature that traditional regression parameter estimation can lead to substantial amplitude attenuation if the predictors carry significant amounts of noise.
This has been an endless point made here, but still many people have failed to understand the difference between these methods and the original Mannian hockey stick method. It's also worth noting that these methods are applied throughout proxy-based climatology, and I've not seen a single good one. Dr. Loehle made the best real effort by averaging pre-calibrated curves, but his source data could very well be nonsense, as nobody has any proof that these proxies respond to temperature at all. Other papers using multi-proxy methods include rainfall estimates, sea ice extent, and even a coral reconstruction that was run at Climate Audit. They all seem to contain the same kinds of regressions, they keep finding unprecedented results, and they keep being passed through review despite these known MAJOR issues.
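The reviewer's point about amplitude attenuation is the textbook errors-in-variables result, and it too fits in a few lines. In this sketch (hypothetical numbers), a predictor carrying noise equal in variance to the signal yields a regression slope near 0.5 instead of 1, so the reconstruction recovers only part of the true amplitude; the historic variance comes out smaller than the truth.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 10_000
true_temp = rng.normal(0.0, 1.0, n)          # the signal we want to recover
proxy = true_temp + rng.normal(0.0, 1.0, n)  # predictor with equal-variance noise

# Regress temperature on the noisy predictor (classical errors-in-variables).
slope = np.cov(true_temp, proxy)[0, 1] / proxy.var()
recon = slope * proxy

print("fitted slope:", round(slope, 2))  # near 0.5, not 1.0
print("reconstructed std:", round(recon.std(), 2),
      "true std:", round(true_temp.std(), 2))
```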
Anyway, it seemed worthwhile to call attention to this paper again and to try once more to explain what is creating so many unprecedented paleoclimatology curves. I hope that climate scientists continue their progress toward being honest about the horrible state of paleoclimatology and step back from the "unprecedented" language. So far, there has been very little change.