the Air Vent

Because the world needs another opinion


Nearly Two Teams of Hockey Sticks used in Massive Wilson Super Reconstruction

Posted by Jeff Id on January 16, 2016

So a Willis Eschenbach article at WUWT caught my attention this afternoon and cost me several hours. The paper it examines is basically an average of 54 different tree ring reconstructions from around the world. The sheer volume of data that went into each individual hockey stick, and was then processed into the final hockeystick, is huge. Willis demonstrated that the indescribable method used to combine the data turned out to be equivalent to a simple average. The result: Hockeystick!

[Figure: 53-proxies-wilson-2016. Graph per Willis Eschenbach, from the WUWT article linked above.]

Last millennium northern hemisphere summer temperatures from tree rings:
Rob Wilson, Kevin Anchukaitis, Keith R. Briffa, Ulf Büntgen, Edward Cook, Rosanne D'Arrigo, Nicole Davi, Jan Esper, Dave Frank, Björn Gunnarson, Gabi Hegerl, Samuli Helama, Stefan Klesse, Paul J. Krusic, Hans W. Linderholm, Vladimir Myglan, Timothy J. Osborn, Milos Rydval, Lea Schneider, Andrew Schurer, Greg Wiles, Peng Zhang, Eduardo Zorita

The data and articles are fully available online here.

So, knowing just enough about dendrochronology to produce work equal to what gets published, I must be an expert dendroclimatologist! The recipe: collect tree ring data (ring width, density/MXD, blue intensity, etc.). Detrend by some arbitrary form of curve fit. Average or regress and compare to temperature. If the comparison is not statistically significant, the bag of accepted statistical shenanigans is wide and nearly unbounded. You can correlate raw data with temperature and discard whatever isn't strongly correlated. You can keep all the data and use any number of multivariate regressions which functionally eliminate the bad, non-hockeystick data and amplify the "good". You can use a huge variety of standardization curves and sorting criteria to create a hockey stick upslope at the recent end of the curve. You can select regions with trees of known warming signal and ignore adjacent trees to create the blade. You can even cut off data which doesn't work out for you and paste temperature data right on the end. In short: guaranteed success every single time!
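To make the first trick concrete, here is a minimal sketch of my own construction, not code from any of the papers: screen pure-noise "proxies" by their correlation with a rising calibration-era temperature, then average the survivors. Every name and parameter in it is illustrative.

```python
import numpy as np

# Sketch (assumed toy setup, not any paper's method): correlation screening of
# pure-noise series against a rising calibration temperature manufactures a blade.
rng = np.random.default_rng(0)

n_proxies, n_years, n_cal = 1000, 1000, 100        # hypothetical sizes
# Red-noise proxies: random walks containing no temperature information at all.
proxies = np.cumsum(rng.normal(size=(n_proxies, n_years)), axis=1)
temp_cal = np.linspace(0.0, 1.0, n_cal) + rng.normal(0.0, 0.2, n_cal)  # rising "temperature"

# Screening step: keep only proxies that correlate with calibration-era temperature.
r = np.array([np.corrcoef(p[-n_cal:], temp_cal)[0, 1] for p in proxies])
selected = proxies[r > 0.3]                         # arbitrary screening threshold

# Standardize the survivors and average them; the composite turns up in the
# calibration era even though every input series is noise.
z = (selected - selected.mean(axis=1, keepdims=True)) / selected.std(axis=1, keepdims=True)
composite = z.mean(axis=0)
print(f"{len(selected)} of {n_proxies} noise series pass screening")
print("composite mean before calibration era:", round(composite[:-n_cal].mean(), 3))
print("composite mean in calibration era:    ", round(composite[-n_cal:].mean(), 3))
```

The shaft averages toward zero because the pre-calibration noise is incoherent, while the screened calibration era is coherent by construction: a blade from nothing.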

So of course, having 23 dendroclimatologists take 54 separate tree ring reconstructions and put them all together with a nonsensical, unjustifiable method that breaks down to a simple average is just par for the course and no surprise to anyone. In their minds, and the minds of various other dimwits, it is absolute proof of the robustness of their field.

D’Arrigo published a ridiculous comment which makes my point perfectly:

Several recent opponents of anthropogenically-forced global warming are familiar with statistics but have not personally developed tree-ring or other proxy data or reconstructions themselves. They claim that there are methodological artifacts that could bias, in particular, the Mann et al. (1999) "hockey stick" reconstruction, and by inference, other reconstructions as well. Attempts to refute this claim have been published by several authors (e.g. Mann et al. 2005, Rutherford et al. 2005, Wahl and Ammann in press). However, the methods utilized by the various other studies are often quite different and most are derived in a more straightforward manner than the much cited "hockey stick" method (Mann et al. 1999). For example, the D'Arrigo et al. (2006) reconstruction was developed using simple averaging of tree-ring records (after accounting for differences in mean and variance over time), followed by linear regression. Care was taken to evaluate the robust nature of the reconstructions developed in this case, rigorously testing for model validity and potential bias. Thus, for the D'Arrigo et al. (2006) study and likely others, there exists no "methodological artifact" which might have biased results in favor of a conclusion of unusual recent large-scale warming. Therefore, we find the concern that there is "some kind of methodological artifact that somehow reverberates throughout nearly all of the reconstructions and that has gone unappreciated by people in the field" to be unfounded.

There has also been accusation of bias in site selection or so-called "cherry picking", in which it has been argued that dendrochronologists only include those sites that show global warming for use in the tree-ring reconstructions. Instead, we maintain that we purposely select those trees and sites which portray low-frequency information. Coherent trends between some tree-ring records are indicative of a common response to large-scale temperature changes. We also pre-screened the tree-ring records used in our reconstruction against individual station records and gridded climate data, to evaluate their more localized response to temperature (D'Arrigo et al. 2006). Only certain types of sites (e.g. due to their ecological characteristics) can provide large-scale temperature information. This is by its very nature a subjective, non-quantifiable process and we make no apologies for selecting these kinds of trees and sites to reconstruct temperature variability. Such a signal can often be readily observed by examining core samples in the field (e.g. increased growth in the 20th century, decreased growth during cold periods of the so-called Little Ice Age, etc), or in tree-ring chronologies even prior to any calibration or modeling with instrumental temperatures.

Right in the middle of the thing, our resident genius admits to throwing out data which goes against the theory that the trees are measuring temperature. Those trees that DO correlate to temperature apparently have some magic, unknowable property which binds them inextricably to temperature for all time. Somehow this magic also doesn't allow them to be identified any way other than by looking at the data after the fact. Really old trees that predate the temperature record are left unsorted. We readers of such dreck typically have no idea how many trees must be examined before a magic 'thermometertree' is selected, because the expert scientists don't bother to tell us. Now the sets are so predetermined that experts don't even look at non-sanctioned data, so we have a functional presort as a de facto standard. D'Arrigo may make no apologies for the statistical scatology being peddled, but that doesn't mean it is defensible or even remotely scientific. In fact, were the 'scientists' to do the job correctly, the rejection vs. acceptance rate of trees during sorting IS quantifiable and can be used to statistically test whether the trees carry a valid signal, but a far less biased and more scientific person than Rosanne D'Arrigo is clearly required.
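That last point can actually be demonstrated. Here is a hedged sketch, not anything from D'Arrigo's papers: if the number of cores sampled and the number that pass screening were reported, the accept/reject rate itself becomes a testable quantity against a red-noise null. All numbers and names below are hypothetical.

```python
import numpy as np
from scipy import stats

def null_pass_rate(temp, threshold=0.3, n_sims=5000, ar1=0.7, seed=1):
    """Monte Carlo estimate of how often AR(1) red noise passes the screen."""
    rng = np.random.default_rng(seed)
    n = len(temp)
    passes = 0
    for _ in range(n_sims):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):                        # simple AR(1) red noise
            x[t] = ar1 * x[t - 1] + e[t]
        passes += abs(np.corrcoef(x, temp)[0, 1]) > threshold
    return passes / n_sims

temp = np.linspace(0.0, 1.0, 100)                    # hypothetical calibration temperature
p_null = null_pass_rate(temp)                        # chance a pure-noise core passes

n_sampled, n_passed = 120, 45                        # hypothetical field numbers
# One-sided binomial test: do more cores pass than red noise alone would allow?
result = stats.binomtest(n_passed, n_sampled, p_null, alternative="greater")
print(f"null pass rate ~{p_null:.2f}, p-value = {result.pvalue:.4f}")
```

If the pass rate is no better than what red noise produces, the "thermometer" trees are indistinguishable from lucky noise; reporting the sampled-versus-accepted counts would settle it.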

We have repeatedly covered the essentially unlimited variety of variance-amplification math available to dendroclimatology. For that reason, the argument that hockeysticks must be valid because numerous different methods reach the same conclusion is complete nonsense. So, for the superstick of Wilson 2016, I wanted to know which methods were used to create the curves that were eventually averaged together into our brand new hockeystick.
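To show what I mean by variance amplification, here is another illustration of my own construction, not any specific paper's code: simply weighting series by their calibration-era correlation with temperature inflates the recent end of the composite relative to a plain average of the exact same data. All sizes and parameters below are assumed for the toy example.

```python
import numpy as np

# Toy setup (assumed): very noisy proxies carrying a weak common warming signal.
rng = np.random.default_rng(2)
n_proxies, n_years, n_cal = 200, 500, 100
signal = np.zeros(n_years)
signal[-n_cal:] = np.linspace(0.0, 0.5, n_cal)                  # weak warming signal
proxies = signal + rng.normal(0.0, 1.0, (n_proxies, n_years))   # noise dominates
temp_cal = np.linspace(0.0, 1.0, n_cal)

# Weight each proxy by its calibration-era correlation with temperature.
r = np.array([np.corrcoef(p[-n_cal:], temp_cal)[0, 1] for p in proxies])
w = np.clip(r, 0.0, None)                                       # negative weights dropped
plain = proxies.mean(axis=0)
weighted = (w[:, None] * proxies).sum(axis=0) / w.sum()

blade, shaft = slice(-n_cal, None), slice(0, -n_cal)
print("plain    blade minus shaft:", round(plain[blade].mean() - plain[shaft].mean(), 3))
print("weighted blade minus shaft:", round(weighted[blade].mean() - weighted[shaft].mean(), 3))
```

The screening example earlier is the keep-or-discard version of this effect; the weighted version reaches the same inflated blade by a different route, which is exactly why agreement between such methods proves nothing.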

To that end, I took my time and read the methods used in ALL of the papers behind the 54 series in Wilson 2016. It took me all afternoon. I put the method used by the authors of each series in the table below so that readers can see the distribution of nonsense making up this new and improved hockey stick. For each method, I judged whether it was remotely reasonable. As you may know, there are plenty of hidden details in the dendroclimatology world that can only be uncovered by replication, and luck. If a method had even a tiny chance of being the simple averaging suggested by D'Arrigo in the quote above, I put a resounding YES in the column headed "Statistically defensible". If I couldn't tell, due to paper access or difficulty in understanding what was going on, I put a Maybe. Of the 54 series used in this hockey stick, that left an insane 43 big, fat NO's.

No way they could be defensible.

No way they would pass muster in a rational field.

No possible way that the series at the end of the paper has any use whatsoever. Yet our nearly two dozen 'experts' were perfectly happy to average them together just to show the world the robustness and amazingness of their cutting-edge field.

I had 4 Yeses, 7 Maybes and 43 No's. If I were wrong on the No's half the time, which I am not, that would still leave over 21 bad series in use. But the real answer is 43, meaning roughly 80% of the data is complete and utter garbage, with hockey stick blades created by mathematical artifact rather than actual data.

 

Article referenced | Method used | Statistically defensible?
D'Arrigo et al. (2004) | linear regression; produced hockey stick blade | No
Wiles et al. (2014) | regression analysis | No
Davi et al. (2003) | principal components (paywalled) | No
Anchukaitis et al. (2013) | inverse linear regression | No
Youngblut and Luckman (2008) | paywalled | Maybe
Szeicz and MacDonald (1995) | linear regression (paywalled) | No
Wilson et al. (2014) | regression analysis (paywalled) | No
Luckman and Wilson (2005) | RCS and curve fit with average; no real HS | Yes
Biondi et al. (1999) | curve fit to series and average; no HS | Yes
Anchukaitis et al. (2013) | inverse linear regression | No
Anchukaitis et al. (2013) | inverse linear regression | No
Schneider et al. (2015) | ad hoc regression: weighted composites based on moving correlations with local temperature; extremely poor | No
Gennaretti et al. (2014) | linear scaling to local temperature | No
Payette (2007) | paywalled | Maybe
D'Arrigo et al. (2003, RW) and (2013) | ARSTAN RCS and average | Yes
Rydval et al. (in preparation) | not published | Maybe
Dorado-Linan et al. (2012) | regression and variance matching | No
Buentgen et al. (2006) | mean and SD scaling during temperature calibration period prior to reconstruction | Maybe
Schneider et al. (2015) | ad hoc regression: weighted composites based on moving correlations with local temperature; extremely poor | No
Zhang et al. (2015) | RCS and something else; exact method unclear | Maybe
Linderholm et al. (2014) | linear regression | No
Esper et al. (2014) | RCS and average; no HS apparent | No
McCarroll et al. (2013) | regression and variance matching (paywalled) | No
Büntgen et al. (2013) | mean and SD scaling during temperature calibration period prior to reconstruction | No
Klesse et al. (2014) | paywalled | Maybe
Helama et al. (2014) | a unique intermixing of temperature information onto proxy data; ugly | No
McCarroll et al. (2013) | regression and variance matching (paywalled) | No
Schneider et al. (2015) | ad hoc regression: weighted composites based on moving correlations with local temperature; extremely poor | No
Briffa et al. (2013) | RCS and average; Yamal HS, others not | Yes
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Wilson et al. (2007) | only uses proxies which correlate to temperature, others removed from usage; big joke | No
Schneider et al. (2015) | ad hoc regression: weighted composites based on moving correlations with local temperature; extremely poor | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Schneider et al. (2015) | ad hoc regression: weighted composites based on moving correlations with local temperature; extremely poor | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Davi et al. (2015) | principal components (paywalled) | No
Jacoby et al. (2000) | RCS averaging, PCA; first eigenvector only shows decline in warming years, no HS | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
Cook et al. (2012) | statistical screening based on correlation to temperature | No
D'Arrigo et al. (2014) | selects region of positive correlation to temp; principal components regression of six favorite series with temp | No
Hughes et al. (1999) | paywalled | Maybe
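For the record, here is a trivial tally of the verdict column above. The counts are my own transcription of the table, not anything from the paper.

```python
from collections import Counter

# Tally of the hand-transcribed verdicts from the table above.
verdicts = ["Yes"] * 4 + ["Maybe"] * 7 + ["No"] * 43
counts, total = Counter(verdicts), len(verdicts)
for v in ("Yes", "Maybe", "No"):
    print(f"{v:5s}: {counts[v]:2d}  ({100 * counts[v] / total:.0f}%)")
print("total:", total)
```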

So instead of a validation of the robustness of the data, or of the robustness of the field, what we have is a paper demonstrating the robust willingness of climate scientists to sell trickery as science, both for money and for the cause. These authors should be ashamed, but even when caught truncating series they simply push on, producing ever more garbage for the small-brained sheep in the media, politics and the public to use as propaganda for the government agenda.

 
