Good work. As I understand it, what you’re saying is this:

Take a random set of proxies and select those that show a positive correlation with the instrumental temperature data. Since the ITD increases with time, this selects proxies whose terminal (most recent) portion increases with time. If selection is by the sign of the correlation alone, the selected sample should be about 50% of the original set; but if you set a value of r as a hurdle that the correlation must clear to be included, less than 50% will make the cut, and presumably the proportion passing decreases with increasing r (I always prefer r^2, but obviously it doesn’t matter here). Now take some kind of average of the selected proxies. Obviously it goes up during the ITD period, since all the proxies were selected to have that feature. Before that period the proxies are random sequences (with any kind of noise; I can’t see that it matters), so with any reasonable kind of averaging the noise over that period cancels and you are left with a flat line – which becomes the hockey stick handle. Now “calibrate” the proxy average by multiplying by a constant so that its terminal slope matches the slope of the ITD. Next, apply the “offset” by moving the calibrated proxy average up or down to connect with the beginning of the ITD. The result is a hockey-stick-shaped average proxy with excellent agreement with the ITD over the relevant period.
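The steps above can be sketched in a few lines of Python. This is only a toy simulation under my own assumptions – white-noise proxies, a linearly rising ITD, and arbitrary sizes and hurdle – not anyone’s actual method:

```python
# Toy version of the screening procedure: generate random "proxies",
# keep those correlating with a rising instrumental series, and average.
import numpy as np

rng = np.random.default_rng(0)

n_proxies, n_years, n_cal = 1000, 500, 100   # calibration = last 100 "years"
proxies = rng.standard_normal((n_proxies, n_years))
itd = np.linspace(0.0, 1.0, n_cal)           # rising instrumental series

# Screening: keep proxies whose calibration-period correlation clears r_min.
cal = proxies[:, -n_cal:]
r = np.array([np.corrcoef(p, itd)[0, 1] for p in cal])
r_min = 0.1
selected = proxies[r > r_min]
print(f"{len(selected)} of {n_proxies} proxies pass r > {r_min}")

# Average the survivors: flat "handle" before the calibration period,
# upturned "blade" during it.
avg = selected.mean(axis=0)
print("pre-calibration mean:", avg[:-n_cal].mean().round(3))
print("calibration-period rise:", (avg[-1] - avg[-n_cal]).round(3))
```

The pre-calibration average sits near zero (the noise cancels) while the calibration-period segment rises, exactly because that rise is what the screening selected for.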

Have I understood correctly?

To me the process is equivalent to giving each proxy Pi a calibration curve of the common form Ti = Ai + Bi*Pi, where Ai and Bi are constants (different for each proxy) and Ti is the temperature predicted by proxy Pi. Doing it this way would make the procedure look more conventional and make it harder to see what is really going on.
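That two-parameter form is just ordinary least squares of temperature on each proxy over the overlap period: Bi supplies the slope match and Ai the offset. A minimal sketch, with a single hypothetical noise proxy of my own invention:

```python
# Per-proxy calibration T_i = A_i + B_i * P_i, fitted by ordinary least
# squares of the instrumental series on the proxy over the overlap.
import numpy as np

rng = np.random.default_rng(1)
n_cal = 100
itd = np.linspace(0.0, 1.0, n_cal)      # instrumental temperature
proxy = rng.standard_normal(n_cal)      # one noise "proxy" P_i

# OLS: B_i = cov(P_i, T) / var(P_i), A_i = mean(T) - B_i * mean(P_i)
B = np.cov(proxy, itd)[0, 1] / proxy.var(ddof=1)
A = itd.mean() - B * proxy.mean()
T_hat = A + B * proxy                   # "temperature" predicted by P_i
```

By construction the fitted series has the same mean as the ITD over the overlap, which is the “offset” step, and B rescales the proxy to the ITD’s slope, which is the “calibration” step.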

dc

This paper you referenced is fantastic. It is exactly what I found above. Instead of looking at a wide range of red noise, they took the time to match the temperature to existing proxies and calculated the offset – I was working on the same thing myself. They demonstrated the same curve shapes and everything; they just didn’t spend much time exploring other red-noise frequencies for their effects.

I have found some argument against matching the data with red noise, along the lines that the red noise doesn’t match the real data and carries some imparted signal. Total BS in my opinion, simply because ANY noise causes the same problem, just to different degrees. This is the argument M04 used against McIntyre’s red-noise reconstructions.

After reading this link, I think what I need to do is focus strongly on the general sense of the math and see whether I can find a relationship between noise-characterization factors and the response that allows the offset and demagnification to be calculated.

BTW this effect applies to every single proxy reconstruction I have read, not just M08. Also, I calculated a historic (pre-calibration) amplification value of 0.62 for the Mann08 series; I just haven’t figured out a good way to calculate the offset yet.

Really big thanks.

Isn’t your work here related to the Science 2004 paper of von Storch et al.?

http://www.sciencemag.org/cgi/content/abstract/306/5696/679

In this paper they showed that “the method used in MBH98 would inherently underestimate large variations had they occurred” (quote from Wikipedia).

Abstract:

“Empirical reconstructions of the Northern Hemisphere (NH) temperature in the past millennium based on multiproxy records depict small-amplitude variations followed by a clear warming trend in the past two centuries. We use a coupled atmosphere-ocean model simulation of the past 1000 years as a surrogate climate to test the skill of these methods, particularly at multidecadal and centennial time scales. Idealized proxy records are represented by simulated grid-point temperature, degraded with statistical noise. The centennial variability of the NH temperature is underestimated by the regression-based methods applied here, suggesting that past variations may have been at least a factor of 2 larger than indicated by empirical reconstructions”.
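The attenuation described in that abstract is easy to reproduce in a toy pseudo-proxy experiment. This is only a sketch under my own assumptions – a random-walk surrogate “climate” and white observation noise, not von Storch et al.’s model setup – illustrating that a regression-based reconstruction shrinks variance by roughly r^2:

```python
# Pseudo-proxy sketch: regressing temperature on a noisy proxy shrinks
# the reconstructed variance by the squared proxy-temperature correlation.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
true_T = np.cumsum(rng.standard_normal(n)) * 0.1   # surrogate "climate"
proxy = true_T + rng.standard_normal(n)            # degraded with noise

# Calibrate T ~ a + b * proxy, then "reconstruct" from the proxy alone.
b = np.cov(proxy, true_T)[0, 1] / proxy.var(ddof=1)
a = true_T.mean() - b * proxy.mean()
recon = a + b * proxy

ratio = recon.var() / true_T.var()
r2 = np.corrcoef(proxy, true_T)[0, 1] ** 2
print(f"variance ratio = {ratio:.2f}, r^2 = {r2:.2f}")
```

The variance ratio equals r^2 exactly for this in-sample fit: the noisier the proxy, the flatter the reconstruction, which is the “factor of 2” underestimate the abstract refers to.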

It appears Mann et al. repeated the same error in 2008.

BTW your analysis is interesting; a real eye-opener.

I strongly recommend you publish.

This is amazing – it reminds me of some modelling I did in the mid-1970s, trying to figure out how the geostatistical types generated the “variogram”. I was working for a mining company and we just could not see how they arrived at their final curve. And computing was pretty simple too – we had a Cyber76 at head office, and a PDP11 at the mine lab.

I started off with a simple 2-D shape and used Matheron’s formula to compute the variogram. The end result, from lots of iterations and a methodology similar to yours, was that variables were linearly related at sample spacings less than the variogram lag, and random at sample spacings greater than the lag.
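For anyone who hasn’t met it, Matheron’s classical estimator is gamma(h) = (1/2N(h)) * sum of (z(x+h) - z(x))^2 over the N(h) pairs at lag h. A toy 1-D version, with a made-up correlated field standing in for the mining data:

```python
# Matheron's classical semivariogram estimator on a regular 1-D transect.
import numpy as np

def variogram(z, max_lag):
    """gamma(h) = mean squared increment at lag h, halved."""
    gamma = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(gamma)

rng = np.random.default_rng(3)
# Correlated field: 10-point moving average of white noise.
noise = rng.standard_normal(2000)
z = np.convolve(noise, np.ones(10) / 10, mode="valid")

g = variogram(z, 40)
print("short lags:", g[:3].round(3), "long lags:", g[-3:].round(3))
```

Below the correlation length the semivariance climbs roughly linearly; beyond it, pairs are effectively random and the curve flattens at the sill – the same behaviour described above.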

But you would never have worked this out from the dense, jargon-laden papers these guys published.

I concur with Bishop Hill – write it up as a paper.
