Simple Statistical Evidence Why Hockey Stick Temp Graphs are Bent!!

This is a continuation of my posts on selective sorting of data and why it absolutely produces a false representation of temperature. If this is your first time on this subject, start with this link: The Flaw in the Math Behind Every Hockey Stick.

Paleoclimatology uses a statistical sorting technique to generate hockey stick graphs, which demonstrate to the world that our current temperature has a steeper upslope than at any time in recent history. This is yet another demonstration of how that conclusion is false!! The data used in this post is random; where a known temperature signal is present, it has been physically added with a + sign to the random data.

As promised before but a bit late, I have worked on some red noise examples to explore how variations in signal noise affect the overall weighting of temperature in hockey stick style calibrations.

The first graph is similar to the weightings in my previous examples. Two random red noise series are plotted in grey and dark grey in the background; both, quite randomly, started at about -5. The blue line is the actual temperature: it has zero signal between 1900 and 2000 and a 1 degree hump at year 1250. The green and orange lines show the distortion of the temperature scale, spaced 1 degree apart. The bottom orange line represents the true zero degree point on the graph; the top orange line is the true 1 degree point. The vertical scale compression is 42%.

The black line shows the actual modifications created by the sorting process, which is as follows.

1. Correlate all proxies to a linear rise in temperature, discarding any negative slopes.

2. Scale and match all slopes to an assumed 0 to 1 degree temperature rise between 1900 and 2000 (output = slope * original data + offset). The slope and offset are fit to the 0 to 1 degree rise.

3. Average the high r^2 value proxies (screened at about 0.8 correlation), as sketched in the code below.
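
Here is a minimal sketch of those three steps in code (Python/NumPy). The series count, the AR(1) coefficient used for the red noise, and the 0.8 screening threshold are illustrative assumptions on my part, not the exact settings behind the graphs.

import numpy as np

# Minimal sketch of the three steps above, applied to pure red noise.
# Series count, AR(1) coefficient, and the 0.8 screen are illustrative choices.
rng = np.random.default_rng(0)
n_series, n_years = 10_000, 1000           # "years" 1001 to 2000
calib = slice(n_years - 100, n_years)      # 1900-2000 calibration window
target = np.linspace(0.0, 1.0, 100)        # assumed 0 to 1 degree rise

# Red noise proxies: AR(1) with an assumed autocorrelation of 0.95.
noise = rng.normal(size=(n_series, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = 0.95 * proxies[:, t - 1] + noise[:, t]

# Step 1: correlate each proxy with the linear temperature rise
# (negative slopes are discarded automatically by the screen below).
r = np.array([np.corrcoef(p[calib], target)[0, 1] for p in proxies])

# Step 3's screen: keep only the "favorite" high-correlation series.
kept = proxies[r > 0.8]
print(f"kept {len(kept)} of {n_series} random series")

# Step 2: scale and offset each kept proxy to the assumed 0 to 1 degree rise
# (output = slope * original data + offset, fit by least squares).
calibrated = [np.polyval(np.polyfit(p[calib], target, 1), p) for p in kept]

# Step 3: average the calibrated series into the "reconstruction" (black line).
reconstruction = np.mean(calibrated, axis=0)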

The smoothness of the line results from the high number of red noise series; even the green and orange lines are just averages of noise values with offset temperatures included. So I clearly have enough data.

What happens when higher frequency noise series are used?

Higher frequency noise means the red noise can shift more often across the graph, and a key point is that this increases the average slope of the series. So I performed the same calculations with higher frequency red noise.
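
Here is a small experiment along the same lines, again with assumed AR(1) settings rather than the exact ones behind the graphs. For each noise frequency it reports how many random series pass the correlation screen and the average step-2 scale factor, which is what ends up multiplying the pre-1900 portion of every kept series.

import numpy as np

rng = np.random.default_rng(1)
target = np.linspace(0.0, 1.0, 100)        # assumed 0 to 1 degree calibration rise

def screening_stats(phi, n_series=5000, n_years=1000, r_min=0.8):
    """Pass rate and mean step-2 scale factor for AR(1) noise with coefficient phi."""
    noise = rng.normal(size=(n_series, n_years))
    proxies = np.zeros_like(noise)
    for t in range(1, n_years):
        proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]
    windows = proxies[:, -100:]            # last 100 "years" (1900-2000)
    r = np.array([np.corrcoef(w, target)[0, 1] for w in windows])
    kept = windows[r > r_min]
    if len(kept) == 0:
        return 0.0, float("nan")
    # Step-2 scale factor for each kept series: slope of target regressed on proxy.
    scales = np.array([np.polyfit(w, target, 1)[0] for w in kept])
    return len(kept) / n_series, scales.mean()

for phi in (0.99, 0.95, 0.8, 0.5):         # very red through nearly white noise
    frac, scale = screening_stats(phi)
    print(f"phi={phi}: pass rate={frac:.3%}, mean step-2 scale={scale:.3f}")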

This is what I expected to see: a very high compression of historic values compared to actual. The blue line is again 1 degree tall and represents the true temperature signal. I was again able to extract a strong hockey stick signal simply by sorting for my ‘favorite’ high correlation series. What is different, though, is the way the green and orange lines shift upward in recovery (from right to left behind the 1900 – 2000 calibration period), created by the selective correlation. I refer to this as the recovery rate. The true ZERO temperature signal is perturbed, and it recovers the further you get from the disturbance.

This graph is very interesting to me because the black line settled on an exact value of 0.5 after the slopes and offsets were applied, producing a very high compression of 13% of full scale. This is very high slope data, however.

I decided to try very low slope data just for fun.

The green and orange lines are widely spaced, but what many of you might find interesting is that while this calculation again revealed a perfect 0 to 1 degree temperature rise from 1900 to 2000, it also produced an amplification of the historic signal of 242%!!

The recovery rate of the signal relative to the slope, as well as the overall amplification of the historic pre-1900 data, is dependent on the frequency of the data used!!

Not only is the temperature scale of every hockey stick proven completely wrong by this amazingly simple demonstration, but the Mann 08 paper also incorrectly combines proxies of different frequency content (noise levels) to create historic temperature. EVERY PROXY TYPE MODIFIES THE AMPLIFICATION DIFFERENTLY.

What happens to the signal when there is an actual temperature we are looking for?

First, let’s see a placebo graph with a medium-high r value, a compression of 18.2%, a historic offset of 0.5 C, and no temperature signal. This graph had no temperature signal in the last 100 years, as indicated by the blue line. Typical proxies are shown in the background.

The 1 degree signal at year 1250, which we all know exists because I added it, is flattened to nearly nothing in the same manner as above; a strong hockey stick upslope is created where we know none exists, and a temperature offset at year 1000 of 0.5 degrees was also produced.

The blue line, representing the temperature signal added to the random proxies, has a 1 degree upslope from 1900 to 2000. Again the compression of historic values is exactly the same at 18.2%, which is very good news for correcting the temperature scale. This value is easy to calculate for temperature proxies, so future HS papers can use it to their advantage.
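
Here is one way such a compression value could be estimated, as a rough sketch rather than the exact recipe behind the graphs: insert a known historic signal into random red noise, run the same screen/scale/average steps, and compare the recovered amplitude to the truth. The Gaussian shape of the 1 degree hump at 1250, the AR(1) coefficient, and the 0.8 threshold are all assumptions.

import numpy as np

rng = np.random.default_rng(2)
n_series, n_years = 10_000, 1000
years = np.arange(1001, 2001)

# Known truth: flat history, a 1 degree hump near 1250, 0 to 1 degree rise after 1900.
truth = np.exp(-((years - 1250) / 60.0) ** 2)          # assumed hump shape
truth[years >= 1901] += np.linspace(0.0, 1.0, 100)

# Red noise proxies plus the known signal ("physically added with a + sign").
noise = rng.normal(size=(n_series, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = 0.95 * proxies[:, t - 1] + noise[:, t]
proxies += truth

# Screen, scale, and average exactly as in the earlier sketch.
calib, target = slice(n_years - 100, n_years), np.linspace(0.0, 1.0, 100)
r = np.array([np.corrcoef(p[calib], target)[0, 1] for p in proxies])
kept = proxies[r > 0.8]
recon = np.mean([np.polyval(np.polyfit(p[calib], target, 1), p) for p in kept], axis=0)

# Compression: recovered hump height over its local baseline vs. the true 1 degree.
baseline = recon[(years >= 1050) & (years <= 1150)].mean()
hump = recon[(years >= 1200) & (years <= 1300)].max() - baseline
print(f"recovered hump of {hump:.2f} C from a true 1.00 C -> compression of about {hump:.0%}")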

The offset of the zero degree line at year 1000 shifted downward from 0.5 degrees C to 0.39 degrees C.

I added a negative temperature to the graph at 1550 just to show that the scale works. This has a scale factor slightly less than 0.5 and an offset from the actual temperature at 1000 AD of 0.34 degrees C.

Finally, by popular request, I ran the exact settings of the above curve with a negative slope in my fake temperature proxies while still looking for a positive slope. This graph has the same settings as above, but even I haven’t seen it yet.


The blue line is again the signal inserted into the data. Still, I was able to create a hockey stick in the black line. The magnification of the historic data was 0.55% with an offset of 0.8 at year 1000.

The same graph, with high frequency red noise equal to graph 2.


Again, I created a hockey stick where there was none.

There are a few more series I can play with, but there is one thing I can say for certain:

Selective sorting of data distorts the temperature scale of the historic temperature range. Until this effect is compensated for, sorting-based paleoclimatology reconstructions cannot be accepted.

5 thoughts on “Simple Statistical Evidence Why Hockey Stick Temp Graphs are Bent!!”

  1. Jeff,

    This is amazing – this sort of reminds me of some modelling I did during the middle 1970’s trying to figure out how the Geostatistical types generated the “Variogram”. I was working for a mining company and we just could not see how they arrived at their final curve. And computing was pretty simple too – we had a Cyber76 at head office, and a PDP11 at the mine lab.

    I started off with a simple 2D shape and used Matheron’s formula for computing the variogram. The end result, from lots of iterations and a methodology similar to yours, was that variables were linearly related at sample spacings less than the variogram lag, and random at sample spacings greater than the lag.

    But you would never have worked this out from the dense, jargon laden papers these guys published.

    I concur with Bishop Hill – write it up as a paper.

  2. Jeff

    Isn’t your work here related to the Science 2004 paper of von Storch et al.?

    http://www.sciencemag.org/cgi/content/abstract/306/5696/679

    In this paper they showed that “the method used in MBH98 would inherently underestimate large variations had they occurred” (quote from Wikipedia).

    Abstract:

    “Empirical reconstructions of the Northern Hemisphere (NH) temperature in the past millennium based on multiproxy records depict small-amplitude variations followed by a clear warming trend in the past two centuries. We use a coupled atmosphere-ocean model simulation of the past 1000 years as a surrogate climate to test the skill of these methods, particularly at multidecadal and centennial time scales. Idealized proxy records are represented by simulated grid-point temperature, degraded with statistical noise. The centennial variability of the NH temperature is underestimated by the regression-based methods applied here, suggesting that past variations may have been at least a factor of 2 larger than indicated by empirical reconstructions”.

    It appears Mann et al repeat the same error in 2008.

    BTW your analysis is interesting; a real eye-opener.

    I strongly recommend you publish.

  3. Geoff,

    This paper you referenced is fantastic. It is exactly what I found above. Instead of looking at a wide range of red noise, they took the time to match the temperature to existing proxies and calculated the offset. I was working on that same thing myself. They demonstrated the same curve shapes and everything; they just didn’t spend much time exploring other red noise frequencies for their effects.

    I have found some arguments against matching the data with red noise, along the lines that the red noise doesn’t match the real data and has some imparted signal. Total BS in my opinion, simply because ANY noise causes the same problem, just to different degrees. This is the argument M04 used against McIntyre’s red noise reconstructions.

    After reading this link, I think what I need to do is focus strongly on the general structure of the math and see if I can find a relationship between noise characterization factors and the response, which would allow calculation of the offset and demagnification.

    BTW, this effect applies to every single proxy reconstruction I have read, not just M08. Also, I calculated a historic (pre-calibration) amplification value of 0.62 for the Mann 08 series. I just haven’t figured out a good way to calculate the offset yet.

    Really big thanks.

  4. Jeff,

    Good work. As I understand it, what you’re saying is this:

    Take a random set of proxies and select those that show a positive correlation with the instrumental temperature data. Since the ITD increases with time, this will select proxies whose terminal (most recent) portion increases with time. If selected by just positive or negative correlation, the selected sample should be about 50% of the original set. But if you set a value of r as a hurdle that the correlation must clear to be included, less than 50% would make the cut; presumably the proportion that makes the cut would decrease with increasing r (I always prefer r^2, but obviously it doesn’t matter here).

    Now take some kind of average of the selected proxies. Obviously it goes up during the ITD period, since all the proxies have been selected to have that feature. Before that period all the proxies are random sequences (with any kind of noise; I can’t see that it matters), so with any reasonable kind of averaging the noise over that period cancels and you are left with a flat line, which becomes the hockey stick handle.

    Now, “calibrate” the proxy average by multiplying by a constant so that the terminal slope of the proxy average is the same as the slope of the ITD. Next, apply the “offset” by moving the calibrated proxy average up or down to connect with the beginning of the ITD. This gives a hockey stick shaped average proxy with excellent agreement with the ITD over the relevant period.

    Have I understood correctly?

    To me the process is equivalent to having a calibration curve for each proxy Pi of the common form Ti = Ai + Bi*Pi, where Ai and Bi are constants (different for each proxy) and Ti is the temperature predicted by proxy Pi. Doing it this way would make it appear more conventional and harder to see what is really going on.
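
    A tiny sketch of that equivalence with made-up numbers (the step-2 slope-and-offset fit is exactly the per-proxy calibration line Ti = Ai + Bi*Pi):

    import numpy as np

    # One made-up proxy series and an assumed 0 to 1 degree calibration target.
    rng = np.random.default_rng(3)
    target = np.linspace(0.0, 1.0, 100)
    Pi = np.cumsum(rng.normal(size=100))   # hypothetical proxy, calibration window only

    # Regressing temperature on the proxy gives the constants Bi (slope) and
    # Ai (offset) of the calibration curve Ti = Ai + Bi*Pi.
    Bi, Ai = np.polyfit(Pi, target, 1)
    Ti = Ai + Bi * Pi                      # temperature "predicted" by this proxy
    print(f"Ai={Ai:.3f}, Bi={Bi:.3f}")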

    dc
