## Mann 08 Series Weight Per Year

Posted by Jeff Id on September 28, 2008

As many of you are aware, I was able to reconstruct Mann 08 by a back-calculation technique. I used the known output graph for the northern hemisphere as provided by Mann, together with the proxies Mann provided, to determine the weighting for each proxy, which was no easy task. But after many hours of work I came up with this graph.

It’s not a perfect reconstruction, but I expect that is due to filtering and possible mismatches between the proxies provided and the ones actually used to make the above graph. The blue line is my calculation.

It isn’t perfect, but it is close. From the data behind this graph we can determine some interesting things. First, how many curves were used in recent times and in historic times. The graph peaks at 138 proxies, which is slightly different from my previous post due to a few tweaks to the iteration. The years 0 to 1000 have under 20 proxies, which, when I look at the scaling magnitudes, is effectively about 4 to 6.
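For anyone curious what I mean by back calculation, here is a toy sketch (ordinary least squares on made-up random data; not my actual iterative code, which works on the derivative and is much messier):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 138 proxy series over 2000 "years"
n_years, n_proxies = 2000, 138
proxies = rng.standard_normal((n_years, n_proxies))

# A hidden set of weights builds the known "target" reconstruction
true_w = np.zeros(n_proxies)
true_w[:10] = rng.uniform(0.5, 1.5, 10)   # only a few proxies really matter
target = proxies @ true_w

# Back-calculate the weights from the known output by least squares
w_hat, *_ = np.linalg.lstsq(proxies, target, rcond=None)

print(np.allclose(w_hat, true_w))  # → True in this noise-free toy case
```

With real proxies the system is noisy and the target is filtered, so the recovered weights are only approximate; the noise-free toy case recovers them exactly.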

This graph is pretty interesting considering Mann’s claim in his paper that the large number of proxies allows such a good temperature reconstruction.

The next graph is the most telling though: which proxies were actually used to create the M08 curve. The vertical scale on this graph is the percentage contribution of each series group. The groups were chosen according to M08 SD1, where Mann categorized the proxies by a type number, e.g. 9000 = tree ring width.

This plot therefore depicts each year’s contribution from each type of data. The first thing I noticed is the light blue Luterbacher series, which sits at the very bottom right corner of this graph. There were 71 of these proxies used in the M08 paper, and every one was accepted by correlation to temperature, because they actually are created using temperature data. Skeptics like myself and others had expected that this instrumental data would be very heavily weighted to provide the best correlation to temp. The fun thing about science, though, is you often get to be wrong. It’s like being married. The Luterbacher group is actually weighted at less than 10 percent of the graph through most of its length. At the very tip near 2000, which isn’t visible at this scale, the graph spikes upward, with this group creating 70% of the total output at that point.

Even more interesting is the compiled tree ring width data. This is by far the most prevalent type of data in the record. Mann cut the tips off many of these proxy series and pasted on a temperature curve; in other words, he worked very hard to keep a bunch of data which didn’t appear to be temperature. But after the weightings are applied, all the tree ring data affects only the most recent 800 years.

> Our results extend previous conclusions that recent Northern Hemisphere surface temperature increases are likely anomalous in a long-term context. Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used. If tree-ring data are used, the conclusion can be extended to at least the past 1,700 years, but with additional strong caveats.

I find this interesting considering that the trees’ influence on the final result tapers off by about 1300 AD.

Another interesting item is the contribution of the Punta Laguna proxies, visible on the left side of the graph in light blue, placed directly above an ever so slightly darker blue. The vertical scale of Punta Laguna in these times is quite large. The series itself made little contribution to recent times because its values were overwhelmed by the volume of recent data. The yellow section directly above Punta Laguna is cave precipitation records and some other mixed proxies, which were type 6001 in the Mann dataset.

I averaged the contribution of these two proxy groups for the 0-1000 year period and found that they represent no less than 74 percent of the total weight of the reconstruction for this 1,000-year period.
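For the curious, the percentage contribution is nothing fancy; here is the arithmetic with made-up flat numbers (not the real proxy values):

```python
import numpy as np

# Hypothetical per-year absolute contributions for three proxy groups
years = np.arange(0, 1000)
contrib = {
    "punta_laguna": np.full(years.size, 4.0),
    "cave_precip_6001": np.full(years.size, 3.5),
    "other": np.full(years.size, 2.5),
}

total = sum(contrib.values())                     # per-year totals
share = {k: v / total for k, v in contrib.items()}  # per-year fractions

# Average share of the two dominant groups over the 0-1000 window
dominant = (share["punta_laguna"] + share["cave_precip_6001"]).mean()
print(round(dominant, 2))  # → 0.75
```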

Anyway, many of my readers are probably more familiar with the individual proxies than I am. I just want to add: every time I look closer at this paper, I find more reasons to doubt its conclusions.

## Wolfgang Flamme said

Jeff, no doubt your results are impressive. However, I am not convinced that your solution is actually the one outstanding global optimum (given there is one at all), especially for more recent times when there are so many proxies available.

I’m afraid one might obtain solutions that perform similarly well but give a different picture of proxy type loading.

It would be interesting to watch a simulated annealing optimization work on this problem. Its struggles often provide valuable hints about the robustness and straightforwardness of possible solutions.
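Something like this toy annealer is what I have in mind (a made-up one-dimensional cost with two near-equal minima; nothing to do with the real proxy problem):

```python
import math
import random

random.seed(1)

def cost(x):
    # toy cost surface with two near-equal minima, at x = +1 and x = -1
    return min((x - 1.0) ** 2, (x + 1.0) ** 2 + 0.01)

x = 0.0
best = 0.0
temp = 1.0
for _ in range(5000):
    cand = x + random.gauss(0, 0.3)
    delta = cost(cand) - cost(x)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
    if cost(x) < cost(best):
        best = x
    temp *= 0.999  # slow cooling

print(round(best, 2))  # lands near one of the two minima
```

Watching how often the search hops between the two minima at different temperatures tells you something about how unique the "solution" really is.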

## vivendi said

Although I don’t understand all of the underlying maths, I find it interesting to apply this kind of analysis to Mann’s methods. It gives another point of view on how to interpret Mann’s claims.

## Chris H said

Although very interesting, I am finding it difficult to believe that you’ve really managed to reconstruct the real weighting used by Mann: As you yourself say, your algorithm/process tends to reduce weighting to zero (if there is no correlation), when it is more likely that Mann used many low-correlation proxies that cancelled themselves out. (Bearing in mind we are talking correlation with the output graph, rather than correlations with modern temperatures.)

Still, the maths for your next upcoming post (correcting the temp scale based upon weighting) should be REALLY USEFUL when Steve McIntyre manages to recreate Mann’s actual weightings. I am thinking of this as a trial run 🙂

## Jeff Id said

Chris and Wolfgang,

You are right that my weightings aren’t perfect. This is because I don’t believe I have the perfect proxy data or the perfect NH data. The algorithm works to balance positive and negative errors, and since I did it on a digital derivative, which is highly noisy (if you have ever tried it), any minor filtering differences can cause the back calculation to have errors.

However, in interpreting my result we need to remember that the proxy data is very noisy. The fact that I am reproducing many of the features of the original curve means that the algorithm is selecting the correct proxies ahead of the others.

Another point is that each colored section of the above graph is averaged from groups of proxies. Proxies with a similar appearance are more likely to fight with each other. You might expect that two Schweingruber bristlecone series have a similar appearance. I haven’t seen similar-looking ones; they all look like noise with temp data pasted on the end, but it sounds reasonable. If one is overweighted, it might have been given greater priority than another, but all the Schweingrubers are grouped in the tree ring width category above. Therefore, through averaging, I would say that the above graph is likely more accurate than my reconstruction.

If the reconstruction makes the shape from this amount of noise, it can’t be too far out.

## BDAABAT said

Question: “Excel Stinks”??? In the combo graph above.

Bruce

## Jeff Id said

The graph was done in Excel, and I couldn’t make it give actual years, so the x axis reads (Years+1); hence “excel stinks.” Sorry about that. I worked on it for a long time and got mad when I couldn’t change the x number labels.

## Chris H said

Jeff,

I don’t intend to argue it, since I think it’s quite difficult to prove one way or another – SteveM’s eventual reconstruction will be the final arbiter. But since I try to doubt everything, my thinking was this:

If you had two proxies that were sine waves (i.e. no correlation to Mann’s graph), but one was 180 degrees out of phase with the other, then their summed values (if given equal weighting) would be zero at every point.

In that case Mann could give both of those proxies very high weighting (say 20% each for sake of argument), without affecting the shape of his graph. But I believe your algorithm/process would tend to give both proxies a weighting of zero.

The consequence of this would be that you think Mann has given much higher weighting to a few (other) proxies than is really the case. Of course, in reality no proxies are sine waves, but if you combine enough bad proxies (white/red noise generators), then they will surely cancel out in the same way that two out-of-phase sine waves would, and with exactly the same consequences for your algorithm/process.
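A quick numerical check of the cancellation (hypothetical pure sine waves, of course):

```python
import numpy as np

t = np.linspace(0, 20 * np.pi, 2000)
p1 = np.sin(t)            # proxy 1: pure sine
p2 = np.sin(t + np.pi)    # proxy 2: 180 degrees out of phase

# equal weights of any size cancel exactly, leaving the sum at zero,
# so the weights are invisible in the final graph
for w in (0.2, 5.0):
    print(np.allclose(w * p1 + w * p2, 0.0))  # → True
```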

BTW, I assume that getting the weightings wrong like this would skew your upcoming attempt to correct the temperature scale. Don’t let that deter you though, since at least this should give an upper (lower?) bound on the true temperature reconstruction.

## Jeff Id said

Chris,

Your comment is an interesting one; it raised several points I found unique. It’s a good thought experiment.

First, you are correct that the weightings are not perfect; that is why I put all the caveats about it in my article.

In the case of two perfectly cancelling sine waves with a high degree of weighting, Mann’s method would assign a high positive r to one and a high negative r to the other. Most negative-r values were thrown out at the beginning, but for certain proxy types a very few negative r’s were allowed. If the negative sine was accepted, the EIV process would then flip the negative curve before averaging, and we would get a 2X sine wave with an upslope at the end.

I compensated for this by looking up the r values and flipping any with a negative r before back calculation.

Still, consider another version of your example: sine waves with a fake temperature rise pasted on the end by some unnecessarily complex arbitrary method, say RegEM or something.

In that case the average would be a flat line with a strong upslope. My algorithm would back-calculate and give an upslope at the end of each, registering equal positive and negative error values behind the slope, resulting in a net error of zero for the majority of the sine wave. The curve would then fit the upslope perfectly with no resistance from the handle (no error in the handle), again resolving correct weightings.
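To make that concrete with toy data (pure sines plus a pasted-on ramp; none of this is the real proxy data):

```python
import numpy as np

t = np.linspace(0, 20 * np.pi, 2000)
rise = np.where(t > 18 * np.pi, t - 18 * np.pi, 0.0)  # pasted-on "temperature"

p1 = np.sin(t) + rise          # two out-of-phase sines with the same fake tail
p2 = np.sin(t + np.pi) + rise

avg = (p1 + p2) / 2
print(np.allclose(avg, rise))  # → True: the sines cancel, only the ramp is left
```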

The problem, which you are correct about, occurs when the result data doesn’t match the input data exactly. Errors in the addition which cannot be corrected are pushed out of the system as it searches for a minimum total energy (error) level. Minimum energy works great for optics; for this, though, it leaves something to be desired.

I do want to say that I won’t be surprised if my above graph is closer than people might expect. It revealed several real features which I don’t believe will change.

1. All of the data types had a strong influence in the 1850-2000 calibration range. Not surprising (now that I see it) since that is how the data was sorted.

2. Tree rings above had a large effect back until 1300.

3. The furthest-back end of the graph is dominated by Punta Laguna mollusks and cave precip records (which seemed a bit of a mixed bag).

4. Luterbacher had very little influence over the history of the graph, except for a contribution to the ever-important sharp spike in the very end years, which isn’t visible in the above graph due to scale.

I doubt any of these features will change substantially from my above graph to the final coefficients. But time will tell.

Thanks for the great comment.

## Chris H said

Jeff,

As you may have guessed from my previous post, I had not considered how Mann would calculate r for my thought experiment; I simply assumed that it was possible for two out-of-phase sine waves to be given equal weighting (i.e. equal r values, with the same sign). You are quite correct that one sine wave would have +r and the other -r, if they were pure sine waves.

However, as you ingeniously suggest, one can put a fake temperature rise at the end of both waves (over the calibration period) to force both to have the same r value. In that case I can see your algorithm/process will have something to “grab on to” in trying to back-calculate the weighting (r value), *** and so will do better than I had thought. ***

I still *suspect* that it may underestimate the weighting (r value) for such proxies, but since I do not know how your algorithm works, it does now seem possible that I could be wrong. Therefore I look forward to your follow-up articles with interest 🙂

## Jeff Id said

I don’t think you are wrong at all. I found your comment to be so interesting I spent about 3 hours yesterday working through different things to understand the effect. I only stopped after my wife yelled at me to stop working on my blog all day.

I really don’t know how accurate my reconstruction is. Watching it converge a hundred different times, I got the feeling that some of the data was missing or sorted differently than the reported r values suggest, leading me to try using 100% of the data in the reconstruction (not finished with that yet).

Certain fairly substantial features in the graph were impossible to reproduce. From my seemingly endless experience working this type of calculation, that was a bit surprising; it suggested to me that some of the data provided might not be the same shape as the data actually used.

If the data is missing or different, my software definitely will flatten the peaks, so again you are correct. Imagine the two sine waves with fake temperature: the true average would be a flat line with a rise at the end. Now suppose Mann provided the final result of the true average yet accidentally left one of the sine waves out of the archive. My algorithm would fight the errors in the handle portion while trying to amplify the end point. The net result would be an energy-minimum value with a muted end point. This looks rather like the result I got, where every peak is somewhat muted.
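Here is that muting effect in the toy sine example: if one of the two cancelling sines is left out of the provided data, even the best least-squares weight cannot reach the full endpoint (again made-up data, not my actual code):

```python
import numpy as np

t = np.linspace(0, 20 * np.pi, 2000)
rise = np.where(t > 18 * np.pi, t - 18 * np.pi, 0.0)  # fake temperature ramp

target = rise               # the true average of the two cancelling sines
p1 = np.sin(t) + rise       # the only proxy actually provided

# best single least-squares weight for the provided proxy
w = (p1 @ target) / (p1 @ p1)
print(0.0 < w < 1.0)  # → True: w * rise never reaches the full endpoint
```

The handle noise drags the weight below one, so the fitted endpoint is muted exactly the way my reconstruction’s peaks are.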

The only reason I think the percent contribution graph might still be very close is that the muting effect would average out over a bunch of noise.

I will be very interested to see the real coefficients produced when the M08 software is replicated.

It was really quite an entertaining comment because it made me think. Very nice.