Mann09 Analog vs Digital
Posted by Jeff Id on November 28, 2009
In Mann08, a correlation screening process was used to eliminate offending series for a composite-plus-scale reconstruction. It gave the scientists an easy method for choosing which data make the best hockey stick. It was very simple to demonstrate the completely bogus and biased selection of information, which was in my opinion done with the intent of creating a false signal. This blog has been quite vocal about the intent issue in Mann08, and now we are faced with the same kind of result in Mann09. Yet they didn’t use screening this time.
The last time around it was easy to demonstrate the chucking of data that didn’t fit the pre-determined conclusion. The practice was so crystal clear that most of the public could figure it out. Unfortunately, the nearly useless media completely refused to pick up on it, and advocate scientists at Real Climate continue to defend the practice. Now, with the disclosure of these emails, we’ve seen how Mann and Jones operate, and perhaps this will get some attention. In the meantime, the team has moved on to a new style of reconstruction. It’s a bit more clever and far more difficult to explain.
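To make the digital screening problem concrete, here is a minimal sketch — my own synthetic example, not Mann’s data or code. Screen pure red noise (no temperature signal whatsoever) for correlation with a warming trend, average the survivors, and out pops a hockey stick that the full average doesn’t have:

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, cal = 500, 1000, 100

# AR(1) red-noise "proxies" containing no temperature signal at all
shocks = rng.standard_normal((n_series, n_years))
proxies = np.zeros_like(shocks)
proxies[:, 0] = shocks[:, 0]
for t in range(1, n_years):
    proxies[:, t] = 0.7 * proxies[:, t - 1] + shocks[:, t]

# stand-in "instrumental" record: a simple warming trend over the
# calibration window (the last `cal` years)
instr = np.linspace(0.0, 1.0, cal)

# digital screening: keep only proxies that correlate with the trend
r = np.array([np.corrcoef(p[-cal:], instr)[0, 1] for p in proxies])
kept = proxies[r > 0.3]

def blade(series):
    # blade height: late-calibration mean minus early-calibration mean
    return series[-30:].mean() - series[-cal:-cal + 30].mean()

print(f"kept {len(kept)} of {n_series} series")
print(f"blade, all proxies:      {blade(proxies.mean(axis=0)):+.2f}")
print(f"blade, screened proxies: {blade(kept.mean(axis=0)):+.2f}")
```

The screened composite rises sharply through the calibration window while the all-proxy composite stays near zero, even though every series is noise by construction. The AR(1) coefficient, thresholds, and window lengths are arbitrary choices for illustration.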
Here are the northern hemisphere reconstructions as presented in Mann09:
Fig. 1. Decadal surface temperature reconstructions. Surface temperature reconstructions have been averaged over (A) the entire Northern Hemisphere (NH), (B) the North Atlantic AMO region [sea surface temperature (SST) averaged over the North Atlantic ocean as defined by (30)], (C) the North Pacific PDO (Pacific Decadal Oscillation) region [SST averaged over the central North Pacific region 22.5°N–57.5°N, 152.5°E–132.5°W as defined by (31)], and (D) the Niño3 region (2.5°S–2.5°N, 92.5°W–147.5°W). Shading indicates 95% confidence intervals, based on uncertainty estimates discussed in the text. The intervals best defining the MCA and LIA based on the NH hemispheric mean series are shown by red and blue boxes, respectively. For comparison, results are also shown for parallel (“screened”) reconstructions that are based on a subset of the proxy data that pass screening for a local temperature signal [see (13) for details]. The Northern Hemisphere mean Errors in Variables (EIV) reconstruction (13) is also shown for comparison.
This is what Mann09 has to say about ‘screening’.
Separate experiments were performed using a “screened” subset of the full proxy data set in which proxy records were screened for a local temperature signal based on their correlations with co-located instrumental data. These and other details, including sources, of the proxy data are provided in ref. S1 [note: a recent correction was made to the details of the screening as described in ref. S1. Due to an “off-by-one” error in the degrees of freedom employed in the original screening that has been brought to our attention, the critical p values used for screening decadally-resolved proxy data are actually in the range p=0.11-0.12 rather than the nominal p=0.10 critical value cited. This brings the critical p value closer to the effective p value used for annually-resolved proxies (nominal value of p=0.10, but effective value actually closer to p=0.13 owing to the existence of significant serial correlation in many of the annual proxy data)]. It is worth noting that the precise thresholds used in the screening are subjective and therefore somewhat immaterial—our use of statistical validation exercises provides the best test of the reliability of any data screening exercises.
In this study, the use of the full “all proxy” data set is emphasized, as this yields considerably longer-term evidence of reconstruction skill. “Screened proxy” results are only provided for comparison. All data used in this study are available in “SOM Data.”
It seems that perhaps Michael Mann paid a little attention to how easily the bogus methods of the 08 paper were criticized. So how did they get the same results as those that used screening? (Click on Figure 1, pane 1, to see the difference.)
The reason for the same results is actually simple, but it will be more difficult to show. Before we get too far, notice that the instrumental portion on the far right of Figure 1 displays no information from the proxies. It’s HadCRU data only. This makes it difficult to visually grasp the quality of fit of the trees to the line. I checked the online results to see if the proxy info for the instrumental period was available, and it is not presented there either. Therefore the entire blade in this case is the instrument series, and it’s up to the reader to have faith that the correlation numbers prove trees and various stuff are temperature. It’s also worth mentioning that the Luterbacher series, which was actually instrumental data, was left out of this paper, although it remains in the 1209allnames.xls data file from the SI (all correlation screening numbers are the same as the original). Luterbacher was another McIntyre criticism which apparently was accepted by the team without acknowledgment.
The original Mann et al (S1) proxy dataset also included 71 European composite surface temperature reconstructions back to AD 1500 based on a composite of proxy, historical, and early instrumental data (S5). These data were not used in the present study, so that gridbox level assessments of skill would be entirely independent of information from the …
I’m getting off track a bit, though. From the Steig et al. paper we have more than a passing familiarity with RegEM, which was used again in this paper. In the original 08 paper, RegEM was used to paste information (blades) onto each series and infill missing data up until 2006. This is again the case here. We have all the same goofy proxies, which look as ridiculous as this:
The Briffa hide-the-decline data are all present as well. So in this paper, these proxies, which have been infilled with a hockey stick blade, are run through RegEM, which amounts to a multivariate regression against gridded temperature to create the new hockey sticks above. Multivariate regression is a fancy term for an attempt to weight all of the series to create the best possible match to the temperature data (an upslope).
The math can (simply) be described like this:

Output = c1*proxy1 + c2*proxy2 + c3*proxy3 + … + c1138*proxy1138
The regression determines the best values of the c’s. Each c can be positive or negative and is determined by the best possible fit to temperature. This is the important bit: in the original Mann08, as well as in the ‘screened’ version here, correlation was used to eliminate data. The elimination is equivalent to setting c = 0 for the screened-out proxy. In multivariate regression, the correlation with the gridded data determines the weighting of each series so as to best match the shape of the gridded data. In other words, data with a poor match will receive very low weighting, or even negative weighting as in the case of Tiljander.
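The deweighting is easy to see in a toy least-squares fit — my own synthetic sketch, not the paper’s RegEM code. A proxy that matches the temperature shape gets a large weight, a noise proxy gets a weight near zero (the continuous analog of screening it out), and an upside-down proxy gets a negative weight, exactly the Tiljander situation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
temp = np.linspace(0.0, 1.0, n)                   # calibration "temperature"

hockey  = temp + 0.05 * rng.standard_normal(n)    # proxy matching the blade
noise   = rng.standard_normal(n)                  # proxy with no relationship
flipped = -temp + 0.05 * rng.standard_normal(n)   # proxy oriented upside-down

# multivariate regression: Output = c1*proxy1 + c2*proxy2 + c3*proxy3
X = np.column_stack([hockey, noise, flipped])
c, *_ = np.linalg.lstsq(X, temp, rcond=None)

# screening is just the special case c = 0 for the discarded proxy;
# the regression does the same thing continuously
print("weights:", c)
```

Running this, the first weight comes out strongly positive, the second near zero, and the third negative: the fit has silently discarded the noise proxy and flipped the inverted one, with no screening step anywhere in sight.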
The math is the same thing, except that it’s analog screening rather than digital. In the 08 case, data is eliminated a priori through screening (digital data scrapping); in this case it’s deweighted with a multiplier based on its shape (analog data scrapping).
It’s more difficult to demonstrate the preferential selection in this analog case, but the nice thing about this reconstruction is that there are no missing values in the series during the instrumental calibration period. So each series’ weighting will be just like the equation above: a single constant times each proxy, all added together. We should be able to replicate the process, extract the weights from the B weighting matrix, and create some plots which show that the process is another version of preferential selection of hockey sticks.
My sincerest thanks to Michael Mann and team for again providing a unique crossword puzzle which will provide much entertainment for the coming weeks.
I would like to know who reviewed this. There are some reviews in the emails which, if they can be attached to this Mann paper, mean we’ve found some new team members. The reason I question it is simply a matter of taking the time to check again.