# Id Goes Mythbuster on Hockey Sticks- CPS

This post was prompted by a comment by Steve McIntyre at Climate Audit. He suggested the use of CPS in my calcs and explained how it works. It is of course important to use the same methods as previous papers to demonstrate the distortion of historic temperatures created by those methods. So I did.

I used the same Northern Hemisphere reconstruction as in my previous post.

Demo of Flawed Hockey Stick Math Using Actual NH Data

A graph of the signal and the calibration range is below.

The red portion in the graph above is the calibration range used. The total graph is the signal stated by Mann08 to be the real NH historic temp within a margin of error. In this example we assume that M08 got the temperature exactly right, insert it into 10,000 red noise proxies and like hound dogs we go look for it with the same methods Mann08 used.

I generated 10,000 series of random red noise data, 2008 years long, which averages to zero just by its nature. I then added the above northern hemisphere reconstruction to the data so that the average produced the graph above. The graph is in fact an average of the 10,000 series rather than a direct plot of the original data; its accuracy can be confirmed by the 0 to 200 AD tail, which has a zero signal because the M08 paper didn’t have any data for this period.
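This setup can be sketched in a few lines. To be clear about assumptions: the ramp shape standing in for the M08 reconstruction, the AR(1) persistence of 0.3, and the reduced proxy count (2,000 instead of 10,000, so the sketch runs quickly) are illustrative choices, not the exact values behind the graphs.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 proxies in the post; fewer here so the sketch runs quickly
n_years, n_proxies = 2008, 2000
phi = 0.3  # illustrative AR(1) persistence, not the post's exact noise setting

# stand-in for the M08 NH reconstruction: zero over 0-200 AD (no data there),
# then a hypothetical slow ramp; the real series would be loaded from the paper
signal = np.zeros(n_years)
signal[200:] = np.linspace(-0.4, 0.4, n_years - 200)

# AR(1) "red" noise, one column per synthetic proxy
noise = np.zeros((n_years, n_proxies))
eps = rng.standard_normal((n_years, n_proxies))
for t in range(1, n_years):
    noise[t] = phi * noise[t - 1] + eps[t]

proxies = noise + signal[:, None]

# averaging the series cancels the noise and recovers the inserted signal,
# which is how the graph above was produced
recovered = proxies.mean(axis=1)
```

With enough proxies the noise in `recovered` averages toward zero, so the plotted average sits right on top of the inserted signal, including the flat 0 to 200 AD tail.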

In my last post, the proxy data was calibrated by matching the slope and intercept of a line fitted to each proxy to the slope and intercept of a line fitted to the calibration period. The CPS method matches the standard deviation and mean of each proxy in the calibration period to the standard deviation and mean of the targeted temperature series. This method is employed in this post.

Astute readers will note that this is also scaling and offsetting the data according to a linear method, just as my last post did. I don’t care what method you use: EIV, CPS, PCA, whatever these guys can think of. Sorting and discarding data according to a specific period guarantees distortion of the result. In the case of CPS, the distortion is of the same shape as in the other varieties of linear calibration examples I have shown.

The method for CPS is:

1. Calculate the standard deviation and mean of the (red) calibration period in the above graph, which in this case represents known temperature.

2. Calculate the r correlation value for all series (higher r means better correlation, for new readers).

3. Match the standard deviation and mean of each series in the calibration period to the standard deviation and mean of the calibration range data in the above graph.

4. Average together the proxies with r values greater than the threshold.
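The steps above can be sketched roughly as follows. This is a toy version, not my actual C++ code: the ramp-shaped target, AR(1) persistence, and proxy count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies = 2008, 2000
cal = slice(1850, 1995)  # year == array index here, so this is the red window

# hypothetical target "temperature": flat history plus a recent warming ramp
target = np.zeros(n_years)
target[1850:] = np.linspace(0.0, 0.8, n_years - 1850)

# AR(1) red-noise proxies, each carrying the target signal
phi = 0.3
noise = np.zeros((n_years, n_proxies))
eps = rng.standard_normal((n_years, n_proxies))
for t in range(1, n_years):
    noise[t] = phi * noise[t - 1] + eps[t]
proxies = noise + target[:, None]

# step 1: calibration-period statistics of the target
t_mean, t_std = target[cal].mean(), target[cal].std()

# step 2: correlation r of each proxy with the target over the calibration window
tc = target[cal] - t_mean
pc = proxies[cal] - proxies[cal].mean(axis=0)
r = (tc @ pc) / (np.linalg.norm(tc) * np.linalg.norm(pc, axis=0))

# step 3: rescale every proxy so its calibration mean/std match the target's
scaled = (proxies - proxies[cal].mean(axis=0)) / proxies[cal].std(axis=0)
scaled = scaled * t_std + t_mean

# step 4: average the proxies that pass the correlation screen
keep = r > 0.1
recon = scaled[:, keep].mean(axis=1)
```

Raising the screen toward r > 0.8 or r > 0.9 only changes `keep`; comparing `recon` against `target` outside the calibration window is where the demagnification and offset show up.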

The first graph uses the above process to sort for r values greater than 0.1; 3,399 series had r > 0.1.

The blue line is the original signal, which in this case is perfectly known to be the signal we are looking for. By sorting for all proxies with r > 0.1 and calibrating using the CPS method, we get the purple/pink line above. The graphs are clearly not the same, but what is more important to notice is that the 1850 to 1995 calibration data fits very well between the graphs, while there is clear demagnification of the older historic data. Since the true signal in the data is the blue line, we can say that the historic data is not on the same temperature scale as the calibration data.

Ok, for those familiar with correlation, your first thought might be to increase the r threshold. After all, better correlation matches the calibration period better. Let’s see what happens when we increase the cutoff to r > 0.8.

As you can see, the data in the 1850 to 1995 calibration range fits very closely. What happens to the pre-calibration historic data is an even more pronounced demagnification and offset. The true representation of temperature actually got worse.

For those who have read my earlier posts, the true zero temperature is shown in the yellow line from 0 to 200 AD. In addition to the strong demagnification of the signal, the graph has an offset of about -0.1 degrees.

What happens if we go extreme and sort to a correlation of r > 0.9?

Again while the fit to the 1850 – 1995 calibration range improved, the historic signal distortion is even worse than before. Both the offset and deamplification are greater. Higher correlation is not the answer.
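The progression across these experiments can be reproduced by screening the same synthetic proxies at increasing cutoffs. Again a toy sketch under illustrative assumptions (ramp target, AR(1) noise); the only point here is that progressively fewer series survive each screen, so the composite leans harder on noise that happens to match the calibration window.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_proxies = 2008, 2000
cal = slice(1850, 1995)

# same toy setup as before: ramp target inserted into AR(1) red noise
target = np.zeros(n_years)
target[1850:] = np.linspace(0.0, 0.8, n_years - 1850)
phi = 0.3
noise = np.zeros((n_years, n_proxies))
eps = rng.standard_normal((n_years, n_proxies))
for t in range(1, n_years):
    noise[t] = phi * noise[t - 1] + eps[t]
proxies = noise + target[:, None]

# correlation of each proxy with the target over the calibration window
tc = target[cal] - target[cal].mean()
pc = proxies[cal] - proxies[cal].mean(axis=0)
r = (tc @ pc) / (np.linalg.norm(tc) * np.linalg.norm(pc, axis=0))

# count survivors at progressively stricter screens
counts = {thr: int((r > thr).sum()) for thr in (0.1, 0.2, 0.3)}
```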

The next graph shows rather dramatically what happens if I use the same CPS methods on data with no signal whatsoever. There is no real temperature added to the red noise.

After CPS, the purple line below shows a clear rise in temperature in recent times. There is no real temperature in this purple graph; it is 100% distortion.
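The no-signal experiment can be sketched the same way (again a toy version with illustrative noise parameters, not the exact run behind the graph): screen and scale pure red noise against a ramp-shaped target, and the composite tracks the target inside the calibration window anyway.

```python
import numpy as np

rng = np.random.default_rng(4)
n_years, n_proxies = 2008, 2000
cal = slice(1850, 1995)

# screening target: flat history plus a recent warming ramp (hypothetical shape)
target = np.zeros(n_years)
target[1850:] = np.linspace(0.0, 0.8, n_years - 1850)

# pure AR(1) red noise -- no temperature signal added at all
phi = 0.3
proxies = np.zeros((n_years, n_proxies))
eps = rng.standard_normal((n_years, n_proxies))
for t in range(1, n_years):
    proxies[t] = phi * proxies[t - 1] + eps[t]

t_mean, t_std = target[cal].mean(), target[cal].std()
tc = target[cal] - t_mean
pc = proxies[cal] - proxies[cal].mean(axis=0)
r = (tc @ pc) / (np.linalg.norm(tc) * np.linalg.norm(pc, axis=0))

scaled = (proxies - proxies[cal].mean(axis=0)) / proxies[cal].std(axis=0) * t_std + t_mean
keep = r > 0.1
recon = scaled[:, keep].mean(axis=1)

# recon now resembles the target inside the calibration window even though the
# proxies contain no signal: the screen selected noise that happens to match
```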

Conclusions:

I again put a known, exact signal in the data, applied reasonable noise, and employed the CPS method to recover the signal. I was again unsuccessful in recovering the original graph.

It doesn’t matter which method you use to sort data by a trend. It is simply not possible to perform sorting-based calibration without creating distortions. This is a fundamental problem in the methodology used in paleoclimatology, and one that is well understood by serious scientists around the world.

If these guys really invented multiple magic methods for extracting signal cleanly from noisy data, why aren’t they used throughout other sciences? Also, why don’t these geniuses try their methodologies on a known set of data before they publish?

I don’t think I need to answer that.

## 11 thoughts on “Id Goes Mythbuster on Hockey Sticks- CPS”

1. Eric Anderson says:

Jeff, if your analysis is correct, this is extremely significant. I hope this issue can get the exposure it deserves. It seems like the analysis you’ve done would certainly be of interest to some journals for publication (with the caveat that your approach should be checked and rechecked and rechecked again to make sure it is correct before publishing).

Keep up the good work.

Eric

2. Chris H says:

Nice! The only argument I can think of against Jeff’s work so far is the magnitude & frequency (parameters) of the red noise he uses.

What someone needs to do is analyse all the real proxies to find the closest red noise parameters, and then either use the average of those parameters (which is easy but imperfect), or else come up with statistical measures for how the red noise parameters vary between real proxies, and then use those statistics to generate red noise with parameters that vary (between fake proxies) in the same way as they actually do.

The first (easy) method should be sufficient for most reasonable people, and I’d love to see it done here, as it’d squash any nagging doubts that I have.

3. Eric,

I am working on replicating this work in R. Right now it is in C++, so it is bulky. It may take me a week or two to put it in R because I don’t know the language. When I get it done, I will put the scripts up so interested people can try different things.

Chris,
You are right about the frequency and magnitude. I think you have read many of my other posts, so you know I have tried a dozen different frequencies and magnitudes. The red noise level does affect the magnitude of the overall distortion but it doesn’t affect whether there is a distortion.

Also after so many hours working on this, my eyes (and some early calculation) tell me that actual proxy noise is more than sufficient to make a substantial impact on the final graph shape. I believe a 0.6+/-0.2 magnification will be about right for the M08 paper. CA is close to replicating the coefficients. Should be fun.

4. Hello, Jeff,

Nice to come across you! Clearly you are deeply into this stuff in the mathematical way that Steve McIntyre is. As I wrote yesterday, I can’t emulate (or copy or check!) your work because R is outside my orbit. However, I can readily check what my type of analysis would do with the actual time series numbers that you have generated or collected, and this I am keen to do.

Do you have a link to the data? I would /much/ prefer just a few time series rather than a whole bunch of them – and the data that gave rise to the final plots you’ve displayed would be ideal. Some sort of text or CSV file would make it simplest for me. I would put the numbers into my stats package and look at what is signalled.

The question raised by correlation of proxies to real temperatures is something I often contemplate. The temperatures have to be taken as the “gold standard”, but even they are known not to be the “correct” ones, being subject to various possible systematic errors as well as to random ones. When I was reading the literature on calibration closely (back in 1979, I think) it became clear to me that even a “simple” linear calibration method generated considerable heat in the statistical community, so I opted for a fairly straightforward technique described by Davies and Goldsmith in their book “Statistical Methods in Research and Production” (Longman, 1976). By the standards of climatological statistics, though, to use anything other than the simplest regression approach would seem to be very “picky”. I am also always a bit worried by the use of r as a criterion for anything to do with prediction. It is, I suspect, not really very suited to such complex systems as those that govern climate, where numerous factors might quite substantially affect the relationships between data that come from the same (supposed) year. What if the proxy in question has some sort of lagged relationship to the temperature data, for example? Just throwing these ideas in, of course!

I’m full of admiration for your amazing command of C++, and seemingly now of R. I am a simple soul, I have to confess, and tend to think in fewer dimensions than a mathematician would. Anyway, congratulations on your expositions and the graphics, both first class and fascinating in my estimation.

Hope to read something further from you ere long.

Cheers, Robin Bromsgrove, UK

5. Hello again Mr Id.

It will not surprise you to know that I did not understand a single word of all the technical stuff. But someone will, so keep up the good work!

6. Chris H says:

Jeff,

It is the magnitude of the distortion that I am concerned about – it would be quite easy to claim that your red noise exaggerates the real effect. That’s why I’d like to see red noise that approximately matches the proxies used.

7. Chris,

I wasn’t very clear. I completely agree with you; one of the steps will have to be matching the noise of different proxy types. As you said, this is easy enough and has been done before. What I am thinking is to model the proxies with noise and frequency values greater and less than actual, just to show it doesn’t save the result.
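For anyone who wants to try the easy version, a minimal sketch of fitting noise parameters to a proxy: it assumes each proxy is modeled as AR(1), estimates the persistence by lag-1 regression on the demeaned series, and the function name is mine, not anything from the actual code.

```python
import numpy as np

def ar1_params(series):
    """Estimate AR(1) persistence and innovation std dev for one proxy,
    via lag-1 regression on the demeaned series."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    sigma = np.std(x[1:] - phi * x[:-1])
    return phi, sigma

# sanity check on synthetic data with known persistence
rng = np.random.default_rng(2)
true_phi = 0.5
x = np.zeros(5000)
eps = rng.standard_normal(5000)
for t in range(1, 5000):
    x[t] = true_phi * x[t - 1] + eps[t]

phi_hat, sigma_hat = ar1_params(x)
```

Run over every real proxy, this gives a distribution of persistence values to either average or sample from when generating the fake proxies.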

8. Robin #4,

I generated 10,000 series of red noise for this demo and inserted a single fixed signal. I would be happy to email some of the data to you, but I am unsure what it will accomplish. If you still want me to send something, I am happy to.

9. Demesure says:

“with the caveat that your approach should be checked and rechecked and rechecked again to make sure it is correct before publishing”

If the standard for checks is the same as what has been applied to Mann2008, no recheck would be necessary for Jeff to be published.
Hey, that’s climate “science”, after all!

I have seen the tutorial and have actually made a bit of progress. No time last night because I got a bit ticked about an article I read from Mann.