Posted by Jeff Id on September 23, 2010
People send me stuff. This link is to a critique of the McShane and Wyner paper by Schmidt, Mann and Rutherford. Now I'm not a big fan of MW10; it is just another method for crushing historic variance in favor of the present, a hockeystickinator. There was an additional critique by Tingley. Unfortunately, I'm required to work for a living and won't be able to do complete justice to these comments right now, and both my laptop and my internet died last night. Both, seriously both; unprecedented for sure. I'll do my best to post some things, but it is going to cut into my time online dramatically. Anyway, the team likes to pretend that all the demonstrations that Mann's hockeysticks are junk math simply don't exist.
The first point I have to highlight is from the SMR comment:
We deal first with the issue of data quality. In the frozen 1000 AD network of 95 proxy records used by MW, 36 tree-ring records were not used by M08 due to their failure to meet objective standards of reliability. These records did not meet the minimal replication requirement of at least 8 independent contributing tree cores (as described in the Supplemental Information of M08). That requirement yields a smaller dataset of 59 proxy records back to AD 1000 as clearly indicated in M08. MW's inclusion of the additional poor quality proxies has a material effect on the reconstructions, inflating the level of peak apparent Medieval warmth, particularly in their featured "OLS PC10" [K=10 PCs of the proxy data used as predictors of instrumental mean NH land …]
I have no clue why more cores need to be used to 'accept' data. It all has a very weak signal anyway (if any signal at all), so why not use more of it? So Mann uses an arbitrary 8-core minimum for his selection; it's still arbitrary and has no statistical meaning. I think it just makes them mad that the medieval period isn't as flattened as it is under the demonstrably very lossy Mann08 methods of picking preferred data.
Later they write:
The MW "OLS PC10" reconstruction has greater peak apparent Medieval warmth in comparison with M08 or any of a dozen similar hemispheric temperature reconstructions (Jansen et al., 2007). That additional warmth, as shown above, largely disappears with the use of the more appropriate dataset. Using their reconstruction, MW nonetheless still found recent warmth to be unusual in a long-term context: they estimate an 80% probability that the decade 1997-2006 is warmer than any other for at least the past 1000 years.
Of course it shows a hockey stick; that is what these methods create from even random data. Do these people really believe what they write? That is the point, guys.
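To see what I mean, here is a minimal sketch of the effect, with every number hypothetical (500 AR(1) red-noise series, a 100-year "calibration" window, a 0.3 correlation screen): screen pure noise against a rising instrumental target, composite the survivors, and a blade appears.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 years of "proxies", last 100 years are the
# "instrumental" calibration period with a rising temperature target.
n_years, n_proxies, cal = 1000, 500, 100
target = np.linspace(0.0, 1.0, cal)

# AR(1) red-noise "proxies" containing no temperature signal whatsoever.
phi = 0.9
noise = rng.standard_normal((n_proxies, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]

# Screen: keep only series that correlate with the target in calibration.
r = np.array([np.corrcoef(p[-cal:], target)[0, 1] for p in proxies])
selected = proxies[r > 0.3]

# Composite of the screened series: uptrend in the calibration window,
# averaged-toward-flat handle before it, from nothing but noise.
recon = selected.mean(axis=0)
print(len(selected), recon[-cal:].mean() - recon[:-cal].mean())
```

The composite bends upward in the calibration window because the selection guarantees it; the handle flattens because independent noise averages toward zero. That is the hockey stick machine in a dozen lines.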
However, K=10 principal components is almost certainly too large, and the resulting reconstruction likely suffers from statistical over-fitting. Objective selection criteria applied to the M08 AD 1000 proxy network (see Supplementary Figure S4), as well as independent "pseudoproxy" analyses discussed below, favor retaining only K=4 ("OLS PC4" in the terminology of MW). Using this reconstruction, we observe a very close match (e.g. Figure 1a) with the relevant M08 reconstruction and we calculate considerably higher probabilities (up to 99%) that recent decadal warmth is unprecedented for at least the past millennium (Figure 1c).
Yet Gavin is a mathematician and can't seem to grasp the painfully simple concept that you can't pick and choose which data you want. That is all that every one of these methods does; even RegEM is just a linear reweighting toward 'preferred' results.
Paleoclimate reconstructions are such a scam, and I really don’t like being lied to.
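On the K=10 versus K=4 point in the quote above, one thing worth keeping in mind: with nested OLS models, in-sample fit can only improve as you add principal components, even when the predictors are pure noise, so in-sample fit alone can never arbitrate K. A toy sketch (sizes hypothetical, loosely echoing 95 proxies and a 100-year calibration window):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration: 100 "instrumental" years, 95 proxies of pure noise.
n_cal, n_proxies = 100, 95
temp = np.cumsum(rng.standard_normal(n_cal)) * 0.1    # stand-in temperature
proxies = rng.standard_normal((n_cal, n_proxies))     # no signal at all

# Principal components of the (centered) proxy matrix via SVD.
U, s, Vt = np.linalg.svd(proxies - proxies.mean(0), full_matrices=False)
pcs = U * s

def r2(k):
    """In-sample R^2 of an OLS fit of temp on the first k PCs."""
    X = np.column_stack([np.ones(n_cal), pcs[:, :k]])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta
    return 1 - resid.var() / temp.var()

for k in (1, 4, 10, 30):
    print(k, round(r2(k), 3))
```

R² climbs monotonically with K on predictors that contain zero temperature information; that is exactly the over-fitting trap, and it is why any choice of K has to be defended out of sample, not by calibration fit.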
Furthermore, methods using simple Ordinary Least Squares ("OLS") regressions of principal components of the proxy network and instrumental data suffer from known biases, including the underestimation of variance (see e.g. Hegerl et al., 2006). The spectrally "red" nature of the noise present in proxy records poses a particular challenge (e.g. Jones et al., 2009). A standard benchmark in the field is the use of synthetic proxy data known as "pseudoproxies" derived from long-term climate model simulations where the true climate history is known, and the skill of the particular method can be evaluated (see e.g. Mann et al., 2007; Jones et al. and numerous references therein).
Forget Hegerl et al.; how about Mann08 CPS variance loss, by Id 2008, 2009 (the hockey stick posts above)? Despite publications by von Storch, Zorita, Christiansen and others, they continue to pretend this problem doesn't exist in their work.
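The variance underestimation they concede in passing is not a small effect. Here is a minimal sketch (all numbers hypothetical) of the textbook attenuation result: regress temperature on a noisy proxy by OLS, and the reconstruction's variance is cut to r² of the truth, by construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical proxy: temperature plus independent noise, low SNR on purpose.
n = 2000
temp = rng.standard_normal(n)
proxy = temp + 2.0 * rng.standard_normal(n)

# OLS calibration: regress temperature on the proxy, then "reconstruct".
slope, intercept = np.polyfit(proxy, temp, 1)
recon = slope * proxy + intercept

# The variance ratio of reconstruction to truth equals r^2 exactly for OLS.
r2 = np.corrcoef(proxy, temp)[0, 1] ** 2
print(recon.var() / temp.var(), r2)
```

With a proxy whose variance is mostly noise, r² is small and the reconstructed history is squashed toward the calibration mean; the past looks flatter than it was, purely as an artifact of the regression.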
From the Tingley paper:
The abstract of the article by Blakeley B. McShane and Abraham J. Wyner (hereafter, MW2010) asserts that "the proxies do not predict temperature significantly better than random series generated independently of temperature," a claim that has already been reproduced in the popular press [The Wall Street Journal, 2010]. If this assertion is correct, then MW2010 have undermined all efforts to reconstruct past climate, which are based on the fundamental assumption that natural proxies are predictive of past climate. Such a bold claim warrants more investigation than is provided in MW2010.
I don't have any more time, but this is exactly what MW2010 showed; again, I like my own work better. This next link is from a conclusive yet horribly unpopular post which showed that there isn't much temperature signal at all in the Mann08 proxies.
SNR Estimates of M08 Temperature Proxy Data – I found a generous 7% contribution of temperature to the proxy data, which causes a HUGE sixty percent variance loss in the historic signal using Mann08 methods.
Willis Eschenbach also did a very cool calculation in which he found a 20% common signal in the proxy data. If both methods are correctly done, that would mean the trees are far more sensitive to moisture, or something else other than temperature!
Anyway, links to the various papers are below, and my thanks to the anonymous reader who called my attention to this. I’ll try to spend some more time on this in the near future.
Spurious predictions with random time series: The Lasso in the context of paleoclimatic reconstructions. A Discussion of “A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures over the Last 1000 Years Reliable?” by Blakeley B. McShane and Abraham J. Wyner. (Tingley)