the Air Vent

Because the world needs another opinion

Mann 08 Temperature Reconstruction Using Less Than 11% of the Data

Posted by Jeff Id on September 27, 2008

I’m sorry for the delay in posting. In my previous articles, I found not only that selecting proxies by correlation in noisy data creates a distortion in temperature scale, but that the distortion can be calculated!

If you’re like I was and need to catch up on what is going on, here are some links.

Ten Things Everyone Should Know About the Global Warming Hockey Stick

The Flaw in the Math Behind Every Hockey Stick

Temperature Scale Distortion in Hockey Sticks

It became a bit of an obsession for me to prove out this mathematical distortion of temperature scale, but I couldn’t do it to any graphs without knowing the weights applied to the different proxies in a reconstruction. Climate Audit will eventually be able to reconstruct Mann08 using the information and software generously provided by Mann. I am not being facetious about the generosity; I say that because Climate Audit was started and fueled by the lack of transparency in this science. In the past almost no information was given out, and the improvement is certainly due to the efforts of Steve McI and the group at Climate Audit. Whether you agree with them or not, you have to give them credit for improving transparency in this field. Still, in Mann 08 many things have been left out and mixed up. This makes reconstructing his work, and verifying some of the statistical decisions he made, extremely difficult.

For my work, I couldn’t wait for Climate Audit to replicate the results and provide weightings so I took a back door. My company is an optics company which uses certain proprietary algorithms to calculate lens shapes. My partner and I developed these algorithms over years for solving complex simultaneous equations. The theory applies quite well to back calculation of climate reconstruction curves.

M08 started with 1209 proxy curves which were provided. One is shown below.

It doesn’t look much like temperature but this is an example of the data used to make a hockey stick.

There were 1209 of these graphs used to assemble Mann’s latest hockey-stick temperature graph. He works through a correlation process which scales and magnifies each curve to fit measured temperature as well as possible. After correlation, more than 60% of the data are thrown out; the preferred data are then averaged and weighted again according to location on Earth, local temperature correlation, and whatever else is deemed reasonable, to create the final hockey stick.

Reproducing this effort is a daunting task even when the software is provided. (The software doesn’t run out of the box. It’s not even close.)

Still, we know (or think we do) which proxies were used. We also know that each series was magnified linearly. For the non-scientific, this means multiplied by a magnification value, with an offset added.

So for every series there is an equation which magnifies and offsets it according to the familiar equation for a line, y = mx + b: y is the magnified graph, x is the original, m is the multiplier, and b is the offset for the graph. There is an additional multiplier applied according to the area of the Earth the proxy represents, but it is also linear, which means it simply alters the final value of m. If we had the coefficients m and b for every curve, we could scale the proxies, add them up, and get the hockey stick. It’s that simple; yet we don’t have the m and b coefficients for the proxies.
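The bookkeeping is simple enough to sketch in a few lines of Python (toy numbers of my own invention, not Mann’s actual proxies or coefficients):

```python
# Toy sketch: each proxy series x_i is scaled linearly (y_i = m_i * x_i + b_i)
# and the scaled series are summed year by year into the reconstruction.
proxies = [
    [0.2, 0.1, 0.4, 0.9],   # proxy 1, one value per year (hypothetical)
    [0.0, 0.3, 0.2, 0.8],   # proxy 2 (hypothetical)
]
m = [1.5, 0.5]              # multipliers -- the unknowns in Mann 08
b = [0.1, -0.1]             # offsets    -- also unknown

def combine(proxies, m, b):
    """Sum the linearly scaled proxies into one reconstruction."""
    n_years = len(proxies[0])
    recon = [0.0] * n_years
    for x, mi, bi in zip(proxies, m, b):
        for t in range(n_years):
            recon[t] += mi * x[t] + bi
    return recon

recon = combine(proxies, m, b)  # one reconstructed value per year
```

Knowing the m and b for every proxy, the hockey stick is just this weighted sum; the whole back-calculation problem is recovering those coefficients.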

I want coefficients!

I can’t reproduce Mann’s software. I can read it and understand it, but reproducing it seems tedious in the extreme. I need the coefficients to run my proxy-temperature correction calculations and remove the distortion in the proxy temperature curves. Instead, I was able to work the math backwards.

Here’s the math:

484 series added together make a hockey stick after magnification by y = mx + b. In the Northern Hemisphere there were only 401 series used. (I’m reasonably sure about the number but I don’t feel like checking it right now because I AM TIRED!)

401 series * 2 values (m and b) = 802 unknowns.

The final output for the Northern Hemisphere extends from 200 AD to 1995 AD, so we have 1796 known values (one per year).

So with 802 unknowns and 1796 known values, we should be able to lay out a matrix and solve this. But it isn’t that easy. Our m values cannot change sign, and nearly 100% of them are positive, because it is assumed (incorrectly) that tree-ring widths will not shrink when temperature rises; they only grow. A negative m would flip the graph upside down, which is against the assumptions, so we cannot have any negative coefficients.
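To illustrate the kind of technique this constraint demands, here is a minimal sign-constrained least-squares fit by projected gradient descent (my own toy illustration, not Mann’s method and not my production optics code): negative coefficients are simply clipped back to zero after every update.

```python
# Sign-constrained least squares sketch: fit target ~= sum_i m_i * proxy_i
# with every multiplier m_i forced to stay >= 0.
def nnls_gd(proxies, target, steps=2000, lr=0.01):
    """Projected gradient descent: take an ordinary least-squares step,
    then clip any coefficient that went negative back to zero."""
    n = len(proxies)
    years = len(target)
    m = [0.0] * n
    for _ in range(steps):
        # residual r[t] = current prediction minus target
        r = [sum(m[i] * proxies[i][t] for i in range(n)) - target[t]
             for t in range(years)]
        for i in range(n):
            grad = sum(r[t] * proxies[i][t] for t in range(years))
            m[i] = max(0.0, m[i] - lr * grad)  # project onto m_i >= 0
    return m

proxies = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]  # hypothetical tiny proxies
target = [2.0, 4.0, 6.0]                      # exactly 2 x proxy 1
m = nnls_gd(proxies, target)                  # proxy 2's weight -> ~0
```

In real codes this problem is usually handed to a dedicated non-negative least-squares routine (e.g. the Lawson–Hanson algorithm behind `scipy.optimize.nnls`); the clipping loop above just shows the idea.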

We face the same constraint in the optical calculations at my company, so we need different math techniques to solve the equations.

The Steps

Skip ahead if you don’t like math.

Linear matrices need to have the same order within the matrix: no squared values next to constant values. I am pretty proud of the first step; occasionally even a blind squirrel finds a nut.

1 – Take the derivative of all proxies and of the target hockey-stick graph from the Northern Hemisphere.

2 – Use a Taylor-series iterative process to adjust the m coefficients, minimizing the total error for each proxy weighting according to proxy derivative magnitude and direction.

3 – Don’t allow multipliers to change sign: algorithmically limit coefficients to their sign.

4 – Integrate the series result and calculate the error from the original hockey stick.

5 – Use a second Taylor-series iterative process to adjust the b coefficients, minimizing the proxy offset errors.
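The five steps can be mocked up end to end on toy data (again with numbers of my own invention, and a plain iterative least-squares loop standing in for the Taylor-series process): match slopes first to find the m values, then fix the offsets.

```python
# Toy end-to-end sketch of the five steps (not the actual solver).
def diff(series):
    """Step 1: first difference as a discrete derivative."""
    return [series[t + 1] - series[t] for t in range(len(series) - 1)]

def fit_m(dproxies, dtarget, steps=2000, lr=0.01):
    """Steps 2-3: iteratively adjust m to match the target's derivative,
    clipping any coefficient that tries to change sign back to zero."""
    m = [0.0] * len(dproxies)
    for _ in range(steps):
        r = [sum(mi * dp[t] for mi, dp in zip(m, dproxies)) - dtarget[t]
             for t in range(len(dtarget))]
        for i, dp in enumerate(dproxies):
            grad = sum(r[t] * dp[t] for t in range(len(dtarget)))
            m[i] = max(0.0, m[i] - lr * grad)
    return m

def fit_b(proxies, m, target):
    """Steps 4-5: back on the integrated (undifferenced) series, a single
    overall offset -- the sum of the individual b's -- is all this toy
    case needs, so we just match the means."""
    scaled = [sum(mi * p[t] for mi, p in zip(m, proxies))
              for t in range(len(target))]
    shift = sum(target) / len(target) - sum(scaled) / len(scaled)
    return [s + shift for s in scaled]

proxies = [[0.0, 1.0, 2.0, 3.0], [3.0, 1.0, 2.0, 0.0]]  # hypothetical
target = [5.0, 7.0, 9.0, 11.0]        # slope matches 2 x proxy 1
m = fit_m([diff(p) for p in proxies], diff(target))
recon = fit_b(proxies, m, target)     # recovers the target series
```

Working on derivatives keeps every term in the system first-order, and the non-correlating proxy’s multiplier gets driven to zero on its own.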

The Result

After a few billion calculations the series converged on a minimum-error solution for the given proxies and target output graph. The reconstructed graph is not perfect, due to filtering and possibly a non-original proxy series, but the result is reasonably close. Definitely close enough to work on the rest of the temperature-correction mathematics.

There were some amazing discoveries.

From my extensive experience with this method, one drawback of the math I used is that it tends to reject non-correlating proxies completely, running their m coefficients right to zero in favor of high-correlation graphs. The result was revealing in this case.

The graph at the bottom, shown in pink, matches the blue graph closely, yet of 1209 series only 131 were required to make the hockey stick. Only 11% were used in the end; the rest of the series had a zero amplitude (m).

Let’s look at the latest hockey stick temperature graph.

Now remember, the red line is not part of the temperature reconstruction; it is temperature data measured from ground stations around the world. It looks pretty bad for Earth, but the real questions are what the true temperature was in history, and whether we can believe this graph.

So below we have my reconstructed temperature reconstruction (couldn’t resist). My modified software based on many years of work between myself and my partner was able to find the coefficients to produce the pink graph below.

The blue line is supposed to be the same as the EIV land-plus-ocean from the graph above. I will check tomorrow to make sure, but the data for the blue curve was provided by Mann on his own website. Either way, you notice a couple of things: while the pink line is similar to the blue, there are some errors. These are due to the mathematics used and the fact that various filterings were performed by Mann on the proxies.

Still, the overall magnitude is quite similar, and it uses 131 proxies, or less than 11%.

Tomorrow I will show an interesting plot telling us what percentage each proxy contributed to the graph in each year; after that I will attempt to correct the temperature for the statistical distortions created by Mann’s sorting process.

To make a graph with only 11% of the series which correlates so well to the final NH result in Mann’s paper was quite amazing.

I wonder what percentage of which series make this graph?


7 Responses to “Mann 08 Temperature Reconstruction Using Less Than 11% of the Data”

  1. Demesure said

    Wow, your original approaches to problem solving are absolutely amazing, Jeff.
    Keep on with the nice work.

  2. Dave Dardinger said

    Couple of points.

    1. It seems filtering tends to lower peaks and valleys. This is one desired result for the team since it makes present temperatures seem more unusual.

    2. If a scientist, like Mann, wanted to/needed to hide his research tools, it’d make a lot more sense to release the proxy weights than the code. Then there wouldn’t be much of a problem with a claim of proprietorship of the method used. Of course the problem would be that it’d have been obvious, even without Steve M, which proxies were important. And then things like the problem with the bristlecone pines would have been obvious, even to peer reviewers.

  3. John F. Pittman said

    Jeff, perhaps you have stated this. I noticed, and if I remember my stats, since you defined your curves in your posts The Flaw in the Math Behind Every Hockey Stick and Temperature Scale Distortion in Hockey Sticks, the area that increases asymptotically from about 1400 to the dip in the stick should be the same area as the resulting magnification of the blade. If that is true, then your proposal of skewness of the MWP should be a ratio of the R=0.xx selection. Thus if true, you can obtain a really good approximation of the deflection of the “shaft” from the area of the blade. With normalizing your graphs to the delta T of the Mann graph, computing a table of different R’s including the one claimed, you could get a good approximation for the deflection, the shrinkage of the MWP, and correct the skew in the shaft. These would be good approximations based on the assumption of red-noise IIRC.

  4. Jeff Id said

    John,

    I haven’t said it yet, but I’m so interested in finishing what you are describing, it is keeping me from sleeping well. If what you say is correct it could be applied to every selective data temperature graph and would require the correction of all of the hockey sticks I am aware of. I am still pretty new to paleoclimatology.

    I need to play around with the next step some before I can pin down the math in my mind completely but it seems fairly simple right now. I found in my test data that the magnification of the temperature curve was equal to the average delta slope applied to the pca during my least squares calibration.

    Dave,

    I wish Mann would provide the coefficients. Perhaps I will ask, but like you said, I don’t think he really wants those to be known.

    I was too tired to finish the big news from the graph above which is which series are used to make it. Another reason I can’t sleep.

  5. John F. Pittman said

    The other thing you might want to do is to move the MWP on your first series (flatline, no modern trend) closer and further away and see if the compression of the MWP changes, or the blade as well. There are some interesting facets to this with respect to the timing of the MWP in the northern versus southern hemisphere if the compression depends on not only the R you choose, but the number of years between the MWP and the blade. This would be interesting if true, since it would likely show that some of the discrepancy of the MWP times are due to an incorrect methodology applied to different (fundamental aspect) proxies, rather than a distinct difference due to warming or lack thereof.

  6. John F. Pittman said

    What may also be a trivial but necessary computation is to take the flatline series but include a distinct LIA with a known area and amplitude, redo and see if and how much compression occurs. Looking at the “hockey stick” criticisms, the underestimation of the delta T of the low temperatures compared to historical accounts is a point of contention. If this procedure compresses the delta T of the LIA, then your potential paper would resolve the contentions of the historical MWP and the LIA versus the reconstructions.

  7. John F. Pittman said

    Re #6 I would compare the red noise to an equal density filter along the time axis. If we choose one point at the very end, and choose it as the basis, as one went backwards in time it would form a perfect > shape. The methodology gets rid of the equal density. I think you will find that with your analysis if you include a second warm period closer to time=0, that you define the same as the one for the MWP, you will see that it also is compressed. It will be less compressed than the period closest to the time end point. Thus you can show that the difference in the Roman warm period and the MWP of Mann08 could be an artifact of his methodology. As you pointed out, it will decrease as a ratio of the slope of the filtering with respect to the distance from the end point.
