I really don’t know how accurate my reconstruction is. Watching it converge a hundred different times, I got the feeling that some of the data was missing or sorted differently than the reported r values suggest, which led me to try using 100% of the data in the reconstruction (I haven’t finished that yet).

Certain fairly substantial features in the graph were impossible to reproduce. Given my long experience with this type of calculation, that was a bit surprising. It suggested to me that some of the data might not be the same shape as the actual series.

If the data is missing or different, my software will definitely flatten the peaks, so again you are correct. Imagine the two sine waves with fake temperature: the true average would be a flat line with a rise at the end. Now suppose Mann were to provide the final result of the true average yet accidentally leave one of the sine waves out of his report. My algorithm would fight the errors in the handle portion while trying to amplify the end point. The net result would be a minimum-energy value with a muted end point. That looks a lot like the result I got, where every peak is somewhat muted.
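The thought experiment above can be sketched numerically. This is a minimal illustration, not anyone’s actual reconstruction code: two sine-wave “proxies” 180 degrees out of phase, each with the same fake temperature ramp pasted on over the calibration period, so the true average is a flat handle with a rise at the end.

```python
import numpy as np

# Hypothetical setup: two proxies that are pure sine waves, 180 degrees out
# of phase, each with the same fake "temperature rise" pasted onto the end.
t = np.linspace(0, 10 * np.pi, 1000)
rise = np.where(t > 9 * np.pi, t - 9 * np.pi, 0.0)  # ramp over the last stretch

proxy_a = np.sin(t) + rise
proxy_b = -np.sin(t) + rise   # out of phase, same calibration-period tail

avg = 0.5 * (proxy_a + proxy_b)
# The sines cancel exactly, leaving a flat "handle" plus the upslope:
# avg equals `rise` everywhere (to floating-point precision).
```

Dropping one of the two waves from the average, as in the scenario above, would leave a residual sine in the handle for the algorithm to fight against.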

The only reason I think the percent contribution graph might still be very close is because the muting effect would average out over a bunch of noise.

I will be very interested to see the real coefficients produced when the M08 software is replicated.

It was really quite an entertaining comment because it made me think, very nice.

]]>As you may have guessed from my previous post, I had not considered how Mann would calculate r for my thought experiment – I simply assumed that it was possible for two out-of-phase sine waves to be given equal weighting (i.e. equal r values, with the same sign). You are quite correct that one sine wave would have +r and the other -r, if they were pure sine waves.

However, as you ingeniously suggest, one can put a fake temperature rise at the end of both waves (over the calibration period) to force both to have the same r value. In that case I can see your algorithm/process will have something to “grab on to” in trying to back-calculate the weighting (r value), *** and so will do better than I had thought. ***

I still *suspect* that it may underestimate the weighting (r value) for such proxies, but since I do not know how your algorithm works, it does now seem possible that I could be wrong. Therefore I look forward to your follow-up articles with interest 🙂

]]>Your comment is an interesting one. There are several features I found unique. It’s a good thought experiment.

First, you are correct that the weightings are not perfect. It is the reason I put all the remarks in my article about it.

In the scenario of two perfectly cancelling sine waves with a high degree of weighting, Mann’s method would assign a high positive r to one and a high negative r to the other. Most negative r values were thrown out at the beginning, but for certain proxy types a very few negative r’s were allowed. If the negative sine wave were accepted, the EIV process would then flip the negative curve before averaging and we would get a 2X sine wave with an upslope at the end.

I compensated for this by looking up the r values and flipping any with a negative r before back calculation.
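As a sketch of that sign-flip step (this is not Mann’s actual code, just my reading of the description above): any proxy whose calibration r is negative gets inverted before averaging, so two cancelling sine waves become a doubled sine instead of a flat line.

```python
import numpy as np

def flip_and_average(proxies, r_values):
    """Invert proxies with negative calibration r, then average.

    proxies: 2-D array (n_proxies, n_times); r_values: 1-D array.
    """
    signs = np.where(np.asarray(r_values) < 0, -1.0, 1.0)
    flipped = np.asarray(proxies) * signs[:, None]
    return flipped.mean(axis=0)

t = np.linspace(0, 4 * np.pi, 200)
proxies = np.vstack([np.sin(t), -np.sin(t)])   # perfectly cancelling pair
avg = flip_and_average(proxies, r_values=[0.5, -0.5])
# After flipping, both rows are sin(t), so the mean is sin(t) rather than
# the flat zero line a naive average of the pair would give.
```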

Still, consider another example: your sine waves with a fake temperature rise pasted on the end by some unnecessarily complex, arbitrary method, say RegEM or something.

In that case the average would be a flat line with a strong upslope. My algorithm would back-calculate an upslope at the end of each proxy, registering equal positive and negative values behind the slope and an error of zero across the majority of the sine wave. The curve would then fit the upslope perfectly with no resistance from the handle (no error in the handle), again resolving correct weightings.

The problem, which you are correct about, occurs when the result data doesn’t match the input data exactly. Errors in the addition which cannot be corrected for are pushed out of the system as it searches for a minimum total energy (error) level. Minimum energy works great for optics; for this, though, it leaves something to be desired.
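The minimum-total-error idea can be sketched as ordinary least squares, which I am assuming here as a stand-in for the actual back-calculation (I have not seen its internals): pick the weights that minimize the summed squared error between the weighted proxy sum and the target curve.

```python
import numpy as np

# Toy back-calculation: recover proxy weightings by minimizing total
# squared error against the target curve. Names are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 300)
proxies = np.vstack([t, np.sin(6 * t), rng.normal(0, 1, t.size)])
true_w = np.array([2.0, 0.5, 0.0])
target = true_w @ proxies            # target built from known weights

# Solve min_w || proxies.T @ w - target ||^2
w_hat, *_ = np.linalg.lstsq(proxies.T, target, rcond=None)
# With data that matches exactly, the recovered weights are correct; if
# the target and inputs disagree, the residual error gets pushed around
# the system, muting features, as described above.
```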

I do want to say that I won’t be surprised if my above graph is closer than people might expect. It revealed several real features which I don’t believe will change.

1. All of the data types had a strong influence in the 1850-2000 calibration range. Not surprising (now that I see it) since that is how the data was sorted.

2. Tree rings above had a large effect back until 1300.

3. The furthest-back end of the graph is dominated by Punta Laguna mollusks and cave precipitation records (which seemed a bit of a mixed bag).

4. Luterbacher had very little influence on the history of the graph except for a contribution to the ever-important sharp spike in the very end years, which isn’t visible at this scale in the above graph.

I doubt any of these features will change substantially from my above graph to the final coefficients. But time will tell.

Thanks for the great comment.

]]>I don’t intend to argue it, since I think it’s quite difficult to prove one way or another – SteveM’s eventual reconstruction will be the final arbiter. But since I try to doubt everything, my thinking was this:

If you had two proxies that were sine waves (i.e. no correlation to Mann’s graph), but one was 180 degrees out of phase with the other, then their summed values (if given equal weighting) would be zero at every point.

In that case Mann could give both of those proxies very high weighting (say 20% each for sake of argument), without affecting the shape of his graph. But I believe your algorithm/process would tend to give both proxies a weighting of zero.

The consequence of this would be that you think Mann has given much higher weighting to a few (other) proxies than is really the case. Of course, in reality no proxies are sine waves, but if you combine enough bad proxies (white/red noise generators), then they will surely cancel out in the same way that two out-of-phase sine waves would – and with exactly the same consequences for your algorithm/process.
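The identifiability problem being described can be demonstrated directly. Assuming a least-squares style back-calculation (my assumption, not a description of the actual algorithm): a pair of proxies that cancel exactly contributes nothing to the fit, so their true weights cannot be recovered.

```python
import numpy as np

# Two proxies that cancel exactly: any equal weighting on the pair leaves
# the weighted sum unchanged, so the weights are not identifiable.
t = np.linspace(0, 4 * np.pi, 400)
p1, p2 = np.sin(t), -np.sin(t)
A = np.vstack([p1, p2]).T

# Suppose the "true" weighting were 20% each: the weighted sum is still zero.
target = 0.2 * p1 + 0.2 * p2          # identically zero

w_hat, *_ = np.linalg.lstsq(A, target, rcond=None)
# lstsq returns the minimum-norm solution: both weights come back as zero,
# even though 0.2/0.2 (or any equal pair) fits the target just as well.
```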

BTW, I assume that getting the weightings wrong like this would skew your upcoming attempt to correct the temperature scale. Don’t let that deter you though, since at least this should give an upper (lower?) bound on the true temperature reconstruction.

]]>Bruce

]]>You are right that my weightings aren’t perfect. This is because I don’t believe I have the perfect proxy data or the perfect NH data. The algorithm works to balance positive and negative errors, and since I did it on a digital derivative, which is highly noisy (if you have ever tried one), any minor filtering differences can cause the back calculation to have errors.
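To see why a digital derivative is so noisy, here is a small illustration (my own toy example, not from any reconstruction code): differencing a series amplifies high-frequency noise relative to the smooth signal underneath.

```python
import numpy as np

# Differencing amplifies noise: the derivative of iid noise with standard
# deviation sigma scales like sigma / dt, which dwarfs the smooth signal.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 1000)
dt = t[1] - t[0]
signal = np.sin(t)
noisy = signal + rng.normal(0, 0.05, t.size)

d_clean = np.diff(signal) / dt        # approximates cos(t), amplitude ~1
d_noisy = np.diff(noisy) / dt         # same, plus large noise term

# Noise std in the derivative is roughly sqrt(2) * 0.05 / dt, i.e. about
# two orders of magnitude larger than the 0.05 noise in the series itself.
noise_std = np.std(d_noisy - d_clean)
```

Small filtering differences between the input and the target change this noise term, which is exactly where the back-calculation picks up errors.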

However, in interpreting my result we need to remember that the proxy data is very noisy. The fact that I am reproducing many of the features of the original curve means that the algorithm is selecting the correct proxies ahead of the others.

Another point is that each colored section of the above graph is an average from a group of proxies. Proxies with similar appearance are more likely to fight with each other. You might expect two Schweingruber bristlecone series to have a similar appearance. I haven’t seen similar-looking ones myself (they all look like noise with temp data pasted on the end), but it sounds reasonable. If one were overweighted, it might have been given greater priority than another, but all the Schweingrubers are grouped in the tree-ring-width category above. Therefore, through averaging, I would say that the above graph is likely more accurate than my reconstruction.

If the reconstruction makes the shape from this amount of noise, it can’t be too far out.

]]>Still, the maths for your next upcoming post (correcting the temp scale based upon weighting) should be REALLY USEFUL when Steve McIntyre manages to recreate Mann’s actual weightings. I am thinking of this as a trial run 🙂

]]>I’m afraid one might obtain solutions that perform similarly well but give a different picture of proxy type loading.

It would be interesting to watch a simulated annealing optimization working here. Its struggles often provide valuable hints about the robustness and straightforwardness of possible solutions.
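A toy version of that suggestion might look like the following, where the "energy" is the total squared error between a weighted proxy sum and the target, and annealing explores the weight space. All names and the setup are illustrative assumptions, not drawn from any actual reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
proxies = np.vstack([t, np.sin(8 * t)])
target = 1.5 * proxies[0] + 0.3 * proxies[1]   # built from known weights

def energy(w):
    """Total squared error of the weighted proxy sum against the target."""
    return np.sum((w @ proxies - target) ** 2)

w = np.zeros(2)
temp = 1.0
for step in range(20000):
    cand = w + rng.normal(0, 0.05, size=2)     # random perturbation
    dE = energy(cand) - energy(w)
    # Accept downhill moves always; uphill moves with Boltzmann probability.
    if dE < 0 or rng.random() < np.exp(-dE / temp):
        w = cand
    temp *= 0.9995                             # geometric cooling schedule
```

Watching how often the walker gets stuck, and whether repeated runs land on the same weights, is the kind of hint about solution uniqueness the comment above is pointing at.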

]]>