How Come So Many Independent Papers Claim Hockey Sticks

Ok, I have wanted to do this post since I figured out what is actually going on with the hockey stick graphs. The argument these days is that many papers have repeatedly generated hockey sticks from temperature proxies. First let me explain what a temperature proxy is and why these papers claim we are warmer now than at any time in the past 2000 years. If you already know this stuff, skip ahead.

A temperature proxy is a measurement of something that is believed to be related to temperature, though no one is sure. Things like tree ring widths and isotope ratios in everything from stalactites to mussel shells and ice cores are used. Scientists compare the graphs of these quantities to temperature graphs and run statistics to see whether they actually track temperature. I personally believe they carry little or no temperature signal whatsoever, but pro-global-warming scientists use hard-to-follow statistical techniques to sort the data, throw away the series which don’t correspond to temperature, and average what remains. Much of the data is discarded in this process; the latest hockey stick threw out 60% of its data.

With that in mind, I took the time to set up an experiment. First, I made what is called a red noise generator. Red noise is a technical term for random data that is autocorrelated, so it wanders and shows trends over short periods even though it is meaningless.
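For anyone who wants to play along at home, here is a minimal sketch of such a generator. I’m assuming an AR(1) process (each value is a fraction of the previous value plus fresh white noise), which is a standard way to make red noise; the persistence value of 0.9 is my illustration, not a figure the experiment depends on:

```python
import numpy as np

def red_noise(n_points, rho=0.9, seed=None):
    """One AR(1) red-noise series: x[t] = rho * x[t-1] + white noise.
    rho controls how strongly the series wanders (0 = pure white noise)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_points)
    x = np.empty(n_points)
    x[0] = white[0]
    for t in range(1, n_points):
        x[t] = rho * x[t - 1] + white[t]
    return x

# 20 series of 2,000 "years" each, like the plotted example
series = np.array([red_noise(2000, seed=i) for i in range(20)])
```

Each series drifts up and down over decades purely by chance, which is exactly the look of the highlighted curves in the plot.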

Below are 20 series of red noise generated by my computer. I highlighted a few lines so you can follow them.

This data is entirely random. If you average enough random data you should get a flat line. So using the power of today’s PC, I made 10,000 curves like the ones above and averaged them below.

It is a pretty flat line as you can see.
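The averaging step can be sketched as follows (again assuming the AR(1) form of red noise with an illustrative persistence of 0.9; the point is only that the mean of many independent series collapses toward zero):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_series, n_years = 0.9, 10_000, 2000

# AR(1) red noise built in place: curves[t] = rho * curves[t-1] + white noise
curves = rng.standard_normal((n_series, n_years))
for t in range(1, n_years):
    curves[:, t] += rho * curves[:, t - 1]

mean_curve = curves.mean(axis=0)
# Individual series wander by several units; the 10,000-series average
# stays within a few hundredths of zero everywhere.
```

Individual series swing widely, but the averaged curve is essentially a flat line, just as the plot shows.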

Now, red noise is a good analog for the natural noise in tree rings and other temperature proxies. For instance, a tree grows next to a measured tree and blocks light on one side. At first there is little problem, but as the neighbor grows, its shadow falls across the first tree, creating some ‘stress’ and slowing its growth. Another example might be drought conditions lasting 20 years. These are just examples of effects which create shifts in tree ring widths over time; since we are looking for temperature signals, these effects are red noise. The tree most likely also changes its growth rate with temperature, which is why the study of dendroclimatology became popular.

The problem is: how much does temperature affect tree growth? Does a small temperature rise make the tree grow faster while a large rise makes it grow slower? It seems reasonable, but very little experimental work has been done on it. The only real methods I have read about involve comparing the graphed shape of tree ring widths to the shape of measured temperature. To say this comparison is full of problems is an overwhelming understatement.

Let’s play with some red noise. I took the very same series as above and added a fake “temperature signal” to them. This is the signal we will attempt to extract from the random data. The signal is shown below: temperatures are flat except between 1200 and 1300 AD.

I added it to all 10,000 series above.

The random + temperature signals look like this.

If you look closely you can see an upward shift in the series from 1200 to 1300 AD compared with the first graph.
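In code, injecting the fake signal and pulling it back out by averaging looks roughly like this. The one-unit bump amplitude and the 1..2000 AD span are my assumptions for illustration; the noise is the same assumed AR(1) model as before:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_series, n_years = 0.9, 10_000, 2000

# AR(1) red noise built in place
x = rng.standard_normal((n_series, n_years))
for t in range(1, n_years):
    x[:, t] += rho * x[:, t - 1]

years = np.arange(1, n_years + 1)                        # years 1..2000 AD
bump = ((years >= 1200) & (years <= 1300)).astype(float) # assumed 1-unit signal

x += bump                        # broadcast: the same bump goes into every series
recovered = x.mean(axis=0)       # averaging pulls the buried signal back out
```

The average of all 10,000 noisy series reproduces the bump at nearly full amplitude while the rest stays flat.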

I averaged all 10,000 series in the next graph including the temp data. Computers are pretty awesome, imagine doing this 30 years ago!

As most of us would expect, the recovered signal has the same magnitude as the ‘signal’ data, while everything else is nearly flat. I then took a random 2,000 of the series above to show that any random sampling will produce a similar shape. It does.

Sure enough, it has the same shape and amplitude as the curve above, with additional noise from the smaller dataset (the more curves you average, the smoother the result).
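The subsampling step is one line once the series exist (same assumed setup as before: AR(1) noise plus a unit bump over 1200–1300 AD):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_series, n_years = 0.9, 10_000, 2000

# AR(1) red noise plus the assumed 1200-1300 AD bump
x = rng.standard_normal((n_series, n_years))
for t in range(1, n_years):
    x[:, t] += rho * x[:, t - 1]
years = np.arange(1, n_years + 1)
x += ((years >= 1200) & (years <= 1300)).astype(float)

full_mean = x.mean(axis=0)
pick = rng.choice(n_series, size=2000, replace=False)  # a random 2,000 of the series
sub_mean = x[pick].mean(axis=0)
# Same shape and amplitude as the full average, just noisier.
```

The 2,000-series average shows the same bump as the 10,000-series average, with visibly more wiggle away from it.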

What paleoclimatology does, though, is look for temperature trends in the data. Clearly there is no trend in the last 100 years of the data above, yet what happens if we go looking for one? This is where paleoclimatology goes wrong. THIS IS THE ONLY SCIENCE FIELD THAT I KNOW OF WHICH DOES THIS.

So let’s sort the data above, the same random, meaningless data, keeping the top 5% of maximum upward slopes over the last 100 years. For those who know the math: I fit a linear least squares line to the last 100 years of each series and kept the 500 largest slopes. And for those who live in statistics and understand the limitations of this process, I am telling you that this is not significantly different from the EIV comparison or the non-centered (I like that) PCA which Mann used in the 98 paper. Sure, the result is not exactly the same, but looking for a trend in a near-random set has huge pitfalls.

Anyway the graph is here.
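For anyone who wants to reproduce the sort, here is a sketch. The least squares fit and the top-5% cut are as described above; the AR(1) noise model and its persistence of 0.9 remain my assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_series, n_years = 0.9, 10_000, 2000

# AR(1) red noise built in place -- no signal anywhere
x = rng.standard_normal((n_series, n_years))
for t in range(1, n_years):
    x[:, t] += rho * x[:, t - 1]

# Linear least-squares slope over the final 100 "years" of every series
t100 = np.arange(100)
slopes = np.polyfit(t100, x[:, -100:].T, 1)[0]

# Keep the 500 steepest upward slopes (the top 5%) and average them
top = np.argsort(slopes)[-500:]
stick = x[top].mean(axis=0)
# Flat for ~1,900 years, then a sharp sweep upward: a hockey stick
# mined out of pure noise.
```

`np.polyfit` fits all 10,000 series in one call when the series are passed as columns; sorting on the resulting slopes is the entire “calibration.”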

Now for the rest of us who don’t want to learn every detail of climatology and how every friggin paper makes a hockey stick, this should be a meaningful graph. I just took random, meaningless data and found a huge, very steep upslope in “temperature” in the most recent times! This is exactly what Steve McIntyre showed the world in his papers, and what he refers to as mining for hockey sticks.

He presented the best evidence that could be compiled to the paleoclimatologists who rely on this method, showing that it was faulty, and not surprisingly they didn’t change their ways. If you look past the red and grey lines in the graph below, which are the only measured temperature curves plotted on the latest hockey stick, you can see that the other data doesn’t spike in recent times. It looks a lot like my graph above with a bit more red noise.

You have to look beneath the temp data (the red and grey lines), but the resulting trend dips lower than the red series because of the comparison. This is similar to the above graph dipping below zero around 1900.

I’m not sure if I can finish the math this week because I am working on other things, but I will show later how p-value correlations create the same effect. Correlation is the rationale paleoclimatology uses as its excuse to discard otherwise valid data. From my experience processing data I can say for certain: p-value correlation can find a hockey stick in this random data too.
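To preview the idea: screen each noise series against a rising “instrumental” record and keep only the significant correlators. This sketch assumes the screen is a Pearson correlation over the last 150 years with a two-sided p < 0.05 cut (the r threshold below is that cutoff for n = 150); the real papers’ screens differ in detail, and the noise is the same assumed AR(1) model:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n_series, n_years = 0.9, 10_000, 2000

# AR(1) red noise built in place -- still no signal anywhere
x = rng.standard_normal((n_series, n_years))
for t in range(1, n_years):
    x[:, t] += rho * x[:, t - 1]

instrumental = np.linspace(0.0, 1.0, 150)   # a steadily rising "measured" record

# Pearson r of each series' last 150 years against the instrumental record
y = instrumental - instrumental.mean()
w = x[:, -150:] - x[:, -150:].mean(axis=1, keepdims=True)
r = (w @ y) / (np.sqrt((w ** 2).sum(axis=1)) * np.sqrt((y ** 2).sum()))

R_CRIT = 0.1603   # |r| giving two-sided p = 0.05 at n = 150 (t ~ 1.98, df = 148)
kept = x[r > R_CRIT]              # "passes screening": positive r with p < 0.05
screened = kept.mean(axis=0)
# The screened average turns upward over the final 150 years, out of pure noise.
```

Because red noise wanders, far more than 5% of the series clear the nominal 5% significance bar, and the ones that do are exactly the ones that drift upward at the end.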

This isn’t the only reason that “independent papers” get hockey sticks. It also happens because they tend to use exactly the same data sets. I had to add that because it is a big point by Steve McIntyre at Climate Audit!

You then get a bunch of peers reviewing papers which sort data with processes similar to the ones they used in their own last paper.

—————————————–

I need to thank everyone for your support in reading this article. This presents the foundation for my second post on the subject which delves deeper into the math.

The Flaw in the Math Behind Every Hockey Stick

It’s slightly more complicated, but it reveals a more serious problem in the creation of hockey sticks: the demagnification of historic data in favor of recent trends. The effect reaches across every temperature reconstruction which employs data sorting and calibration to locate the hidden signal.

26 thoughts on “How Come So Many Independent Papers Claim Hockey Sticks”

  1. Jeff,
    Congrats, I got it on my first reading. It couldn’t be clearer (even if the step with 10,000 samples is IMHO unnecessary; it’s better to work directly with a realistic sample number, ~1,000, similar to Mann’s “1209 proxies”).

    There must be an error: the HS-harvested curve based on 500 samples can’t be smoother than the 2,000-sample curve, right?

    I must say, to this layman this is a very revealing demonstration of how it is possible to mine data to get a specific result. Am I right in inferring that the dip before the rise at the end, shown in the red noise example, is a typical indicator that this process has been used on an otherwise uniform random set?

    Quite so. Cherrypick-and-average yields much the same result as Mann’s PC1. David Stockwell also posted on this a couple of years ago. This is what’s going on in the “other” studies.

    Now to close the loop, I think one has to show that you can apple-pick different proxies and get a MWP – you can. I’ve shown over and over that changing a couple of proxies here and there yields an apple-picked MWP. Ross has been grinding me for a long time to write this up, but one would really like people in the field to grasp the point themselves.

  4. That’s a great post! It finally explains to me why temp proxies could be so bad, yet still show a Hockey Stick. (Or rather the beginnings of a Hockey Stick – one still needs to add the measured temperature rises to the end of the processed proxies.)

    I also found it very interesting that your processing produced a shallow dip before the sharp rise, and that this shallow dip is also shown in the “real” temperature reconstructions. This seems to be the “smoking gun” that shows such reconstructions are flawed.

    This is an excellent post. Please consider writing it up as an academic article. Even if not published, an article in PDF form with appropriate references will have more “authority” than a blog post.

    I agree with Suba that you can produce an academic article from this idea, which could (hopefully) be published in a scientific journal. In fact I had thought of doing this analysis the other day, and of perhaps writing an article along these lines. So I am very glad to see here that you did it and that the idea works. Congratulations!!!

  7. Jeff

    I’m editor of the Australian Inst. Geoscientists and I would be able to publish it if no other journal is interested. I have previously published Beck’s work and David Stockwell’s.

  8. Jeff,
    I appreciate what you have done very much. I have not studied much statistics and do not have subscriptions to the climate literature, so I am grateful that you have reduced the discussion to simple terms that I can understand.

    You have demonstrated that it is possible to take data that has no signal; and select a small proportion of it; and get a signal to emerge. The proportion you have selected is the top 5%.
    You say that the scientists who developed the hockey stick threw out 60% of their proxies, meaning that they kept 40%. This is a lot larger than the 5% that you chose from your random noise.

    Suppose you keep the top 40% of your random noise data. What kind of signal would you get? I suspect it would be a lot smaller. Could you show us what this would look like?

    If the temperature signal was small, that would indicate that the probability is quite low that the authors of the newest hockey stick paper got a significant temperature signal out of what was really random noise, i.e. totally invalid proxies, purely as a result of their selection procedure. You could randomly select something like one thousand proxies out of the 10,000 that you have generated, keep the 40% that show a temperature signal for the past 100 years, and see what you get for the average. Then look at the distribution of such averages to see what the probability is of getting such a signal by choosing the 40% of proxies with the largest slope.

    I believe that each proxy was actually calibrated to a local temperature anomaly rather than a global temperature anomaly. From what I have read of the discussion of proxies, I believe that they have some biological, chemical or physical theory that would indicate that a temperature signal should exist in each proxy as well, and are not choosing random phenomena to correlate with temperature. This would further reduce the suspicion that the scientists were fooled by random noise because of the way they culled their proxies.

    I am interested to know what kind of procedure you would use in research to develop temperature proxy data. If you are not permitted to eliminate data that doesn’t correlate to local temperature records, how would you determine what proxies to use in a study of paleoclimatology?

    Incidentally, the latest hockey stick paper includes an author who is a respected statistician at least according to the famous Wegman report.

  9. Eric,

    Another good question, I’m seeing a pattern here.

    First, I need to mention something you won’t even believe, so you should verify it for yourself: I was exactly where you are 6 months ago. I didn’t know. Being an engineer, I figured they could calibrate the trees based on biological phenomena.

    You say.
    “I believe that they have some biological, chemical or physical theory that would indicate that a temperature signal should exist in each proxy as well, and are not choosing random phenomena to correlate with temperature. This would further reduce the suspicion that the scientists were fooled by random noise because of the way they culled their proxies.”

    They do have a theory. Trees react to temp, water, sun, soil, bugs, etc. What Mann’s group didn’t do was select trees for temperature sensitivity (IMO a very goofy insight of dendroclimatology, that you could somehow pick temp-sensitive trees by looking at conditions). Instead, Mann just took a pile of tree data, sorted it for the series with an upslope, and called it temp.

    You want to know what’s worse? They took tree data which contained downslopes, chopped off the ends of the data, pasted on upslopes, and then did correlation to see if it was temp.

    Swear to god!

    This is by far the most disgusting and false paper I have ever read. It wouldn’t get through the door in optics.

    Here is a post which shows you can extract any temp trend from Mann’s data. If you look, I got nearly 40% correlation to a downslope.

    https://noconsensus.wordpress.com/2008/10/11/will-the-real-hockey-stick-please-stand-up/

    I did use the real unedited and actually measured data for the post at this link, not the fake, pasted on, made up, rubbish data.

  10. Jeff,
    Thanks for your reply. Your explanation of your analysis of the real data had too much jargon for me to understand exactly what you did and what your point was, even though I had read it previously.

    The news article that I read on Mann et al.’s latest paper said that the same answer was obtained with or without the tree ring data, arguing that it was robust.

    I find it hard to believe that they pasted upslopes on proxies in the region where they calibrated. I gathered from discussions I have read that some proxies used did not have data in the period where instrumental temperature measurements existed, and that such proxies had to be calibrated against calibrated proxies.

    The question still remains whether, using 40% of your synthetic data, you would have a high probability of finding a strong upward temperature signal from random data. I expect this would depend on the frequency spectrum and generation algorithm of the noise that you used.

    Eric said “”I find it hard to believe that they pasted upslopes on proxies in the region where they calibrated. I gathered from discussion that I have read, that some proxies used, did not have data in the age where instrumental temperature measurements existed, and that such proxies had to be calibrated with calibrated proxies.””

    The difference is between what was said to have been done and what was actually done. For some of the proxies this would be somewhat acceptable. However, what has been claimed by Jeff and others is that data was replaced with generated data. Also note that if you overlap in the manner you describe, why do it at all? You need only calibrate the two proxies where they match, whether or not that is in the temperature calibration period. Ideally you get no more information from splicing, but you do introduce the problem of how to weight the constructed and unconstructed data. Ian Jolliffe’s comment is appropriate, paraphrasing: if it is not necessary and will introduce unknown qualities, why do it at all?

  12. John,
    I don’t have the statistical background to evaluate some of the fancier statistical arguments that are being made here. I would have to go back to school and study statistics and climatology to do that.
    From what I have read and understood, the proxies in the Mann and other hockey stick papers are validated and calibrated in different parts of the instrumental temperature record and then used to estimate temperatures where the instrumental record doesn’t exist.

    Wahl and Ammann have studied M&M’s criticisms and claim that they are invalid and that the results of the hockey stick paper are robust with respect to changes in methodology.

    Click to access Wahl_ClimChange2007.pdf

    But for the moment, let us not be diverted from the subject of this thread.

    The issue in this thread is Jeff’s contention that all the hockey stick papers are flawed because some of the data that could have been used was rejected, ostensibly in the calibration process. This argument is based on the idea that random noise data could have been selected to correlate with the temperature record and then used to reconstruct the climate of the past.

    This is based on Jeff’s experiment with data produced by his noise generator, in which he selects the 5% with the largest rising trends. Since he gives a figure of 40% of the data having been selected, it is fair to ask what he gets if he uses the best 40% of his data.

    My understanding of what was done by the paleo-climatologists is that they choose a portion of the last 150 years to calibrate the proxy, and a different portion of it to validate the proxy and determine if it is usable. Only then do they accept it as a temperature proxy. Let’s see what happens if that test is applied to Jeff’s noise data. I don’t know enough to suggest a test for validation. Perhaps Jeff, with his greater knowledge of statistics, could suggest something.

    Actually, we can look at the graph Jeff produced and figure out what the result of the test would be. The 5% he selected, with rising temperatures, gave a lower temperature average in the time immediately preceding the calibration period. This indicates they would have failed a proxy validation test for a preceding period if the temperature had actually been rising continuously, which shows that very few of the 5% of proxies he selected would have been validated.

    Given this analysis, I am prepared to reject the explanation Jeff has given for why so many independent papers claim hockey sticks.

  13. Jeff Id says,

    “This isn’t the only reason that “independent papers” get hockey sticks. It also happens because they tend to use exactly the same data sets. I had to add that because it is a big point by Steve McIntyre at Climate Audit!”

    So is this a problem?
    There is a limited amount of proxy data available. If people didn’t use that data to try to figure out what the temperature was before there was an instrumental record, what would they use?

    The point is that they used different analysis methods and got similar results from the data.
    Would that happen with red noise?

  14. Layman Lurker,
    The article by the anonymous Bishop Hill seems to be a one-sided “he said, she said” account. I can look into that some other time. I am not familiar enough with the statistical validation parameters discussed by Bishop Hill, whoever he is, to know whether MM or WA are right about the validity of the statistical methods validating the hockey stick.

    Bishop Hill’s story doesn’t speak directly to the validity of Jeff’s contention that the Hockey Stick papers all look the same because they are choosing red noise which accidentally looks like a temperature increase. Jeff hasn’t really replicated their validation procedure on his red noise data, and only a very small fraction of the red noise proxies, 5%, much less than the 40% actually used, was chosen. Two different time periods were used in the real process, one for correlation and one for validation. Jeff did not do this.

    It takes a while to work through and understand all of these arguments if one isn’t familiar with sophisticated statistical methods. It is too easy to accept one side or another in order to avoid doing the work.

    Eric, I pointed out the BH article because you referred to Wahl and Ammann. BH’s account of the circumstances leading up to publication of W&A in Climatic Change is based on Steve McIntyre’s blog entries on Climate Audit. BH paraphrases Steve M’s critique of W&A.

    I’m afraid I can’t comment on the need for validation of Jeff’s proxies in his red noise analysis.

  16. Eric,

    “I find it hard to believe that they pasted upslopes on proxies in the region where they calibrated.”

    I need to get my favorite quote from Mann 08 for you. I promise, I won’t deceive here. There’s no point; being wrong isn’t any fun anyway, something I have done 60,000 times in one week at Watts Up. If I find evidence that global warming from CO2 is real, that’s what this blog will say. I am getting more skeptical, though.

    I am in China now, the internet is slow and I work long hours. I will try to get the proof later tonight or tomorrow. (This data altering is one I can prove because Mann08 was quite open about it!)

    Eric, you say “”My understanding of what was done by the paleo-climatologists, is that they choose a portion of the last 150 years, to calibrate the proxy, and a different portion of it, to validate the proxy to determine if it is usable. Only then do they accept it as a temperature proxy.”” Your statement is what is claimed; it is not necessarily what was done. An example below concerns the divergence problem that has occurred from about 1960 or 1980, depending on the proxies used.

    “”In a recent research paper (Loehle, 2008), I show that if this linear model is mis-specified (i.e., a linear growth response is assumed but in reality the growth response is non-linear), even a model that appears to work well during the “training” (or “calibration”) period—the time during which both temperature and tree rings are available—may fail miserably during the reconstruction period—the time in the past when only tree rings are available, that is, prior to direct temperature measurements.”” http://www.climateaudit.org/?p=4475

    “”I observed that there had been a serious alteration of the Briffa et al 2001 reconstruction in which diverging post-1960 values were simply chopped off. One of the IPCC 4AR reviewers called for the deleted post-1960 values to be shown both for Briffa et al 2001 and Rutherford et al 2001. Here’s how the authors responded. http://www.climateaudit.org/?p=1737

    The review comment said only:

    Rejected – though note ‘divergence’ issue will be discussed, still considered inappropriate to show recent section of Briffa et al. series””

    The IPCC knew and allowed, by way of the chapter authors, truncated data that invalidates your claim. It is important to note that Loehle, 2008 is a must-read to understand the problems inherent not only in using tree ring proxies, but other proxies as well. The quadratic response is known for BOD, COD, fertilizers, temperature, water, etc., depending on species and ecosystems. I am familiar with it in the design of wastewater systems and waste-load allocation. Failure to account for it means a failed design. In this case, we are looking at a failed paper(s).

    Note that the authors have to explain this; they did not. This is part of science. Jeff has shown the problem with selecting data, Briffa has shown that the sensible is not necessarily followed, and Loehle has shown, and further it is a documented phenomenon for living systems, that the linear assumption is most probably incorrect. Once again, in science, the authors who propose to use the linear assumption, or temperature correlation, must demonstrate that it is correct. The failure to do so, as with Briffa, invalidates the claims; i.e. the papers have flaws and need corrections, as pointed out in the CA quotes. However, as pointed out at CA, this was not done.

    I would suggest reading about Briffa and the divergence problem. But would start with the Loehle paper in order to get an understanding of the problem(s).

    Jeff shows that choosing correlations without a priori reasoning can lead to incorrect conclusions. Since Jeff’s work is more general than the methodology you have stated, his would be considered valid for yours unless it could be shown otherwise. Loehle 2008 concludes that the linear assumption used in the methodology you stated is most probably incorrect. In that case, the error bars for such papers as Mann, Briffa, etc. need to be about 1.41 to 2 times larger than the 95% CI to account for the difference between an assumed linear and a quadratic relationship.

  18. John,
    Wow, you do a much better job than me. That was fantastically clear.

    Those of us who have looked at Mann08 need to be careful not to push too hard on reasonable others. Eric has a skeptical mind, toward my posts as well as others’ (not that I expect to change it; people need to learn for themselves). How much more can anyone ask? Had I read your comment 6 months ago, I would have thought, ok, this guy doesn’t like Mann. Today, I know the difference.

    Eric, please pay attention to the links and quotes John provided. This is actually 100% real; it pissed me off when I first understood it at Climate Audit, and it is a good part of the reason I even have a blog.

    John, the detail of what you said was absolutely perfect as far as I know, but to people just encountering it, it is overwhelmingly devastating to the basic premise of the latest hockey stick.

    For everyone,

    Actually at the time I wrote this post, I didn’t have enough experience with other papers on reconstructions. I agree still with what I wrote but John and others know this blog has been a learning experience for me. After all I am president of an energy saving green company. There’s nothing wrong with saving energy if it saves money.

    The point Eric makes is that different hockey sticks get the same results using different methods. To a person not involved in the math, it sounds like we’re nitpicking the math in a single example rather than demonstrating the huge, obvious flaws we see.

    Pointing out that each of a half dozen methods has the same problems makes us look as though we won’t accept anything that shows serious warming. The reality, for me at least, is that reasonable reconstructions do exist, and they use difficult-to-understand techniques like “averaging” and “means”. Ok, enough sarc.

    Eric,

    I just don’t have the time today, but I will make it shortly to answer your questions.

    You make the point: “The question still remains, whether using 40% of your synthetic data, you would have a high probability of finding a strong upward temperature signal from random data.”

    My answer is that it doesn’t and it won’t, unless I sneak a bunch of autocorrelation into the data without telling you (something the AGW crowd has been caught at). How’s that for a Tamino denier? You only get reality here.

    Still, that is not the point of this post, as John stated above. The point here was to show that by using M08 methods you can get whatever signal you want by throwing away what you don’t like (a very big science no-no), even from data with no signal whatsoever.

    What is absolutely devastating about the link I gave you above and here
    https://noconsensus.wordpress.com/2008/10/11/will-the-real-hockey-stick-please-stand-up/

    is that the 6th graph from the top used 39% of the non-fake data from M08. This gives it an equal probability of being real, something I highly doubt.

    For your entertainment, this post plots the actual proxies in M08 with the real data and the fake data separated out.
    https://noconsensus.wordpress.com/2008/09/18/the-all-important-blade-of-the-stick-uses-less-than-5-of-the-data/

  19. John,
    I don’t think Jeff has really demonstrated a problem with the selection of the data. As I mentioned, the small percentage of data he selected and his method of selection do not approximate what was done.

    The discovery of a potential class of error in one paper, by Loehle, does not invalidate the conclusion drawn from the hockey stick graphs produced by a half dozen different papers using different methodologies. Loehle and McIntyre have made errors in their criticisms of the hockey stick. By the same sort of logic, everything they say could be judged invalid.

    I don’t have the expertise to evaluate the competing claims made by both sides of this hockey stick controversy. Here is an account of the other side:

    http://www.realclimate.org/index.php/archives/2004/12/myths-vs-fact-regarding-the-hockey-stick/

    “…The claims of McIntyre and McKitrick, which hold that the “Hockey-Stick” shape of the MBH98 reconstruction is an artifact of the use of series with infilled data and the convention by which certain networks of proxy data were represented in a Principal Components Analysis (“PCA”), are readily seen to be false , as detailed in a response by Mann and colleagues to their rejected Nature criticism demonstrating that (1) the Mann et al (1998) reconstruction is robust with respect to the elimination of any data that were infilled in the original analysis, (2) the main features of the Mann et al (1998) reconstruction are entirely insensitive to whether or not proxy data networks are represented by PCA, (3) the putative ‘correction’ by McIntyre and McKitrick, which argues for anomalous 15th century warmth (in contradiction to all other known reconstructions), is an artifact of the censoring by the authors of key proxy data in the original Mann et al (1998) dataset, and finally, (4) Unlike the original Mann et al (1998) reconstruction, the so-called ‘correction’ by McIntyre and McKitrick fails statistical verification exercises, rendering it statistically meaningless and unworthy of discussion in the legitimate scientific literature.

    The claims of McIntyre and McKitrick have now been further discredited in the peer-reviewed scientific literature, in a paper to appear in the American Meteorological Society journal, “Journal of Climate” by Rutherford and colleagues (2004) [and by yet another paper by an independent set of authors that is currently “under review” and thus cannot yet be cited–more on this soon!]. Rutherford et al (2004) demonstrate nearly identical results to those of MBH98, using the same proxy dataset as Mann et al (1998) but addressing the issues of infilled/missing data raised by Mcintyre and McKitrick, and using an alternative climate field reconstruction (CFR) methodology that does not represent any proxy data networks by PCA at all. ..”

    In addition, Tamino has an excellent and understandable tutorial on PCA, which explains how it works and why McIntyre’s criticism of its use in the Hockey Stick paper was wrong.

    http://tamino.wordpress.com/2008/02/16/pca-part-1/

    I can understand Jeff’s argument, but I don’t believe that he has made the case as he claims, for reasons I have outlined.

    Eric, you say “”In addition, Tamino has an excellent and understandable tutorial on PCA, which explains how it works and why McIntyre’s criticism of its use in the Hockey Stick paper was wrong.

    http://tamino.wordpress.com/2008/02/16/pca-part-1/ “” You did not include the other PCA parts, nor the comments in which the author of the method Mann (and Tamino) claimed to be using weighed in: 1) Mann and Tamino were misrepresenting his work, and 2) as far as he knew (and he is the expert being quoted), no one knew what the methodology actually used meant. Further, he questioned its use: centered PCA has known attributes, so why use a procedure with unknown attributes when one with known attributes is available? In other words, this use of PCA is not what you have alluded to in your quote from RC: “”Unlike the original Mann et al (1998) reconstruction, the so-called ‘correction’ by McIntyre and McKitrick fails statistical verification exercises, rendering it statistically meaningless and unworthy of discussion in the legitimate scientific literature.”” One of the world’s recognized experts weighed in on this and found the Mann/Tamino position lacking, not M&M’s criticisms.

    You further state that “”The claims of McIntyre and McKitrick have now been further discredited in the peer-reviewed scientific literature, in a paper to appear in the American Meteorological Society journal, “Journal of Climate” by Rutherford and colleagues (2004) [and by yet another paper by an independent set of authors that is currently “under review” and thus cannot yet be cited–more on this soon!].”” However, this contradicts the findings of the Wegman and North NAS studies, which concluded that they (authors and proxies) are not independent. North did not claim that the lack of independence necessarily meant the work was bad, but rather that it was a factor to be considered. North did, however, say that the NAS report was in agreement with the Wegman report. There are several nuances that need to be studied to understand the different positions, including the use of PC1 (PC4) and certain tree-ring proxies. You can find these papers at CA if you need a place to start.

    One example to consider is your quote “”(1) the Mann et al (1998) reconstruction is robust with respect to the elimination of any data that were infilled in the original analysis.”” Note that this does not say strip-bark data, or eliminate their use, even though Wegman and North determined they should not be used. Nor does talking about infilled data mean that PC1 (PC4), whose determining attribute was imparted by strip-bark chronologies (though it contains information beyond tree rings alone), was excluded.

    You state “”I don’t have the expertise to evaluate the competing claims made by both sides of this hockey stick controversy.”” I agree that few besides experts in the related field(s) would have the necessary standing to evaluate them. However, within the limits of understanding, it can easily be surmised (and different people could reach different conclusions without being wrong or in denial) that there are real problems with all the temperature reconstructions, calling into question all concrete conclusions drawn from them. Yet the IPCC is drawing concrete conclusions. Even worse, as my point above noted, CA pointed out that if this removal of data during the calibration or the calibration check were not disclosed, jail terms and felony convictions could follow. I work in a field where this is true: refusing to divulge adverse data has sent persons to jail and brought large fines. It is not a minor issue. Nor can nuanced claims invalidate the concerns that Jeff’s excellent analysis showed. The M&M works are even better, in my opinion. They were reviewed by some of the most knowledgeable people in the US, and therefore the world, and found substantive. Note that this means MBH98 was found lacking; that is, as stated, they found in favor of M&M’s criticisms.

    Since those findings, the discussion has evolved into nuanced positions.

  21. This is very fascinating, and quite a devastating critique of the hockey stick graphs, if this is truly how they are being derived.

    However, just one small suggestion if I may… It might be worthwhile to replace all occurrences in this article of the word “independant” with the correct spelling, “independent”. Sorry if that seems pedantic but it is in the title of this article. And it is misspelled consistently throughout, suggesting it is not just a typo. Sorry, but I think these types of things are important, especially if you hope to convince fence-sitters who might be inclined to be skeptical of skeptics to begin with.
