Ostriches

People send me stuff. This link is to a critique of the McShane and Wyner paper by Schmidt, Mann and Rutherford. Now I'm not a big fan of MW10; it is just another method for crushing historic variance in favor of present time – a hockeystickinator. There was an additional critique by Tingley. Unfortunately, I'm required to work for a living and won't be able to do complete justice to these comments right now, and my laptop and internet both died last night. Both – seriously, both; unprecedented for sure. I'll do my best to post some things, but it is going to cut into my time online dramatically. Anyway, the team likes to pretend that all the demonstrations that Mann's hockey sticks are junk math simply don't exist.

The first point I have to highlight is from the SMR comment:

We deal first with the issue of data quality. In the frozen 1000 AD network of 95 proxy records used by MW, 36 tree-ring records were not used by M08 due to their failure to meet objective standards of reliability. These records did not meet the minimal replication requirement of at least 8 independent contributing tree cores (as described in the Supplemental Information of M08). That requirement yields a smaller dataset of 59 proxy records back to AD 1000, as clearly indicated in M08. MW's inclusion of the additional poor quality proxies has a material effect on the reconstructions, inflating the level of peak apparent Medieval warmth, particularly in their featured "OLS PC10" [K=10 PCs of the proxy data used as predictors of instrumental mean NH land temperature] reconstruction.

I have no clue why more cores need to be used to 'accept' data. The data all carries a very weak signal anyway (if any at all), so why not use more of it? So Mann uses an 8-core minimum for his selection – it's still arbitrary and has no statistical meaning. I think it just makes them mad that the Medieval period isn't as flattened as it is under the demonstrably very lossy Mann08 methods of picking preferred data.

Later they write:

The MW "OLS PC10" reconstruction has greater peak apparent Medieval warmth in comparison with M08 or any of a dozen similar hemispheric temperature reconstructions (Jansen et al., 2007). That additional warmth, as shown above, largely disappears with the use of the more appropriate dataset. Using their reconstruction, MW nonetheless still found recent warmth to be unusual in a long-term context: they estimate an 80% probability that the decade 1997-2006 is warmer than any other for at least the past 1000 years.

Of course it shows a hockey stick – that is what these methods create from even random data. Do these people really believe what they write? That is the point, guys.
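This is cheap to demonstrate. A minimal sketch (my own toy setup, not Mann's actual code): generate red-noise series containing no temperature signal at all, keep only the ones that happen to correlate with a rising "instrumental" trend over the last century, and average the survivors. The selection step manufactures a blade while the averaging flattens the shaft:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series = 1000, 500

# Red-noise (AR(1)) pseudo-proxies containing no temperature signal at all
noise = rng.standard_normal((n_series, n_years))
proxies = np.empty_like(noise)
proxies[:, 0] = noise[:, 0]
for t in range(1, n_years):
    proxies[:, t] = 0.9 * proxies[:, t - 1] + noise[:, t]

# A rising "instrumental" temperature over the final 100 years
temp = np.linspace(0.0, 1.0, 100)
calib = np.arange(n_years - 100, n_years)

# Screening step: keep only series that correlate with the calibration trend
r = np.array([np.corrcoef(p[calib], temp)[0, 1] for p in proxies])
kept = proxies[r > 0.3]

# The "reconstruction": an average of the survivors
recon = kept.mean(axis=0)
print(f"kept {len(kept)} of {n_series} pure-noise series")
```

Run it and the composite rises through the calibration window while the pre-calibration "history" is a flat, low-variance shaft – from data that by construction contains no climate signal whatsoever.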

However K=10 principal components is almost certainly too large, and the resulting reconstruction likely suffers from statistical over-fitting. Objective selection criteria applied to the M08 AD 1000 proxy network (see Supplementary Figure S4), as well as independent "pseudoproxy" analyses discussed below, favor retaining only K=4 ("OLS PC4" in the terminology of MW). Using this reconstruction, we observe a very close match (e.g. Figure 1a) with the relevant M08 reconstruction and we calculate considerably higher probabilities, up to 99%, that recent decadal warmth is unprecedented for at least the past millennium (Figure 1c).

Yet Gavin is a mathematician and can't seem to grasp the ignorantly simple concept that you can't pick and choose which data you want. That is all that every one of these methods does; even RegEM is just a linear reweighting for 'preferred' results.

Paleoclimate reconstructions are such a scam, and I really don’t like being lied to.

Furthermore, methods using simple Ordinary Least Squares ("OLS") regressions of principal components of the proxy network and instrumental data suffer from known biases, including the underestimation of variance (see e.g. Hegerl et al., 2006). The spectrally "red" nature of the noise present in proxy records poses a particular challenge (e.g. Jones et al., 2009). A standard benchmark in the field is the use of synthetic proxy data known as "pseudoproxies" derived from long-term climate model simulations where the true climate history is known, and the skill of the particular method can be evaluated (see e.g. Mann et al., 2007; Jones et al. and numerous references therein).

Forget Hegerl et al.; how about the Mann08 CPS variance loss shown by Id 2008, 2009 (hockey stick posts above),

Mann 07 debunking by Id 2010: Mann 07 — Proxy Models Part 1, Mann 07 Pseudoproxies Part 2, Mann07 Part 3 — Unprecedented, Mann 07 Part 4 – Actual Proxy Autocorrelation

Despite publications by von Storch, Zorita, Christiansen and others, they continue to pretend this problem doesn't exist in their work.

From the Tingley paper:

The abstract of the article by Blakeley B. McShane and Abraham J. Wyner (hereafter, MW2010) asserts that "the proxies do not predict temperature significantly better than random series generated independently of temperature," a claim that has already been reproduced in the popular press [The Wall Street Journal, 2010]. If this assertion is correct, then MW2010 have undermined all efforts to reconstruct past climate, which are based on the fundamental assumption that natural proxies are predictive of past climate. Such a bold claim warrants more investigation than is provided in MW2010.

I don't have any more time, but this is exactly what MW2010 showed – though again, I like my own work better. This next link is to a conclusive yet horribly unpopular post which showed that there isn't much temperature signal at all in the Mann08 proxies.

SNR Estimates of M08 Temperature Proxy Data – I found a generous 7% contribution of temperature to the proxy data, which causes a HUGE sixty percent variance loss in the historic signal using Mann08 methods.

Willis Eschenbach also did a very cool calculation where he found a 20% common signal in the proxy data. If both methods are correctly done, that would mean the trees are far more sensitive to moisture or something else other than temperature!
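To see why a small signal fraction turns into a large variance loss, here is a toy version of a scale-and-average composite. This is a simplified CPS-like sketch of my own construction, not Mann08's actual algorithm, and the 7% signal fraction, 59-proxy count, and the shape of the "true" history are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 59
years = np.arange(n)

# "True" history: a 1 C medieval bump plus a modern 1 C rise
true_t = np.exp(-((years - 200) / 80.0) ** 2)
true_t[-100:] += np.linspace(0.0, 1.0, 100)

# Proxies carrying only ~7% of their variance from temperature
sig = 0.07
t_std = (true_t - true_t.mean()) / true_t.std()
proxies = (np.sqrt(sig) * t_std
           + np.sqrt(1.0 - sig) * rng.standard_normal((m, n)))

# CPS-like step: scale each proxy to instrumental variance, then average
calib = slice(n - 100, n)
inst_std = true_t[calib].std()
scaled = np.array([p * inst_std / p[calib].std() for p in proxies])
recon = scaled.mean(axis=0)

true_peak = (true_t - true_t.mean())[150:250].max()
recon_peak = (recon - recon.mean())[150:250].max()
print(f"medieval peak: true {true_peak:.2f} C, reconstructed {recon_peak:.2f} C")
```

Because each proxy's calibration-period variance is dominated by noise, the rescaling shrinks the shared signal along with the noise, and the averaging then cancels only the noise – so the medieval bump comes out at a fraction of its true amplitude.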

Anyway, links to the various papers are below, and my thanks to the anonymous reader who called my attention to this.  I’ll try to spend some more time on this in the near future.

A COMMENT ON “A STATISTICAL ANALYSIS OF MULTIPLE TEMPERATURE PROXIES: ARE RECONSTRUCTIONS OF SURFACE TEMPERATURES OVER THE LAST 1000 YEARS RELIABLE?” BY MCSHANE AND WYNER (Schmidt, Mann, Rutherford)

http://pubs.giss.nasa.gov/docs/notyet/inpress_Schmidt_etal_2.pdf

Spurious predictions with random time series: The Lasso in the context of paleoclimatic reconstructions. A Discussion of “A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures over the Last 1000 Years Reliable?” by Blakeley B. McShane and Abraham J. Wyner. (Tingley)

http://www.people.fas.harvard.edu/~tingley/Blakeley_Discussion_Tingley_LongVersion.pdf

60 thoughts on “Ostriches”

  1. I love this critique from the Tingley long discussion:

    The LASSO, in contrast, picks out only those few instruments that are most correlated with the overall composition during a calibration interval, and these few instruments can be poorly correlated with the performance as a whole outside of this interval. Within the paleoclimate context, where the expectation is that each proxy is weakly correlated to the northern hemisphere mean (for two reasons: proxies generally have a weak correlation with local climate, which in turn is weakly correlated with a hemispheric average) the LASSO as used by MW2010 is simply not an appropriate tool. It throws away far too much information.

    They are complaining that Lasso only picks the best correlated series!!! That is exactly what Mann08 CPS directly does, so the problem must be that the 'number' of retained series is set too low.

    One would expect them to recognize that, if throwing out too much creates problems, then throwing out any data causes some problem. Not so in climate science.

  2. @1
    Now let me get this straight: the Schmidt/Mann/Rutherford comment criticizes MW10 for not throwing enough proxies out, as per your first quote, and the Tingley comment criticizes them for throwing too many out?

  3. The funny thing for me is that the real substance of MW10 is not really addressed: the whole exercise is so uncertain as to be almost meaningless. The quibbling about the number of PCs selected is just silly; my eyeball of the Picard-like plot says about 7-8 PCs is about right, but so what? The uncertainty in the reconstruction is very large no matter how many PCs you choose.

    The suggestion that pseudo-proxy data generated by climate models, with unspecified noise characteristics, is superior as a "test" of the reconstruction methods (instead of pseudo-proxy data specified by defined noise models) is wrong on so many levels that it can only be laughed at; circular reasoning at its very worst, funded by the taxpayers.

    When I read "Problems in climate research such as statistical climate reconstruction require sophisticated statistical approaches and a thorough understanding of the data used," I thought I would throw up.

  4. Shouldn’t it be noticed in passing that the SMR10 reply to MW10, supercilious as it is, gets published without delay whereas CAGW skeptics have gotten the run-around?

  5. OT I know but for anyone interested in the new release of the NCDC GHCN v3 beta dataset pop over to Digging in the Clay and have a look at the following thread

    http://diggingintheclay.wordpress.com/2010/09/23/ghcn-v3-beta-part-1-a-first-look-at-station-inventory-data/

    Verity and I will shortly be publishing Part 2 and Part 3 in a series of threads on the subject of how the GHCN V3 dataset differs from the previous GHCN v2 dataset.

    If you are interested in an ‘advanced’ look at the V3 dataset (in a much more user friendly normalised database format than the usual text files), why not pop over to Climate Applications and have a look at the TEKTemp implementation of the NCDC GHCN v3 beta dataset by clicking on the following link.

    http://www.climateapplications.com/TEKTempNCDC.asp

  6. What I find funny is that all defenses of M08 require a reference to the M08 Supplemental Information.

    Kind of like having to see the out-takes of a movie to appreciate it.

  7. Speaking of data quality, I searched Schmidt, Mann and Rutherford 2010 for “Tiljander”. They say on ms page 2,

    The further elimination of 4 potentially contaminated “Tiljander” proxies [as tested in M08; M08 also tested the impact of removing tree-ring data, including controversial long “Bristlecone pine” tree-ring records. Recent work, c.f. Salzer et al 2009, however demonstrates those data to contain a reliable long-term temperature signal], which yields a set of 55 proxies, further reduces the level of peak Medieval warmth (Figure 1a, c.f. Fig 14 in MW; See also Supplementary Figures S1-S2 (Schmidt, Mann and Rutherford, 2010a; 2010b)).

    That’s some sentence.

    Unfortunately its tortured grammar obscures some errors.

    As Dr. Schmidt and Dr. Mann know, the Tiljander proxies are not potentially contaminated. They are contaminated. In a discussion about data contamination, this is a difference that makes a difference.

    As Dr. Schmidt and Dr. Mann know, Mann08 did not include meaningful tests of the removal of the Tiljander proxies. Those, such as they are, were done over a year later, weren’t peer-reviewed, and are not part of Mann08.

    As Dr. Schmidt knows and Dr. Mann ought to know, there are not four Tiljander proxies, contaminated or not. There are three. "Thickness" was used in Mann08 by mistake; it is simply the sum of "lightsum" and "darksum," and contains no independent information.

    The placement of “those” in “Recent work, c.f. Salzer et al 2009, however demonstrates those data to contain a reliable long-term temperature signal…” is notable. It invites the unwary reader to assume that Salzer 09 or a similar paper addresses the Tiljander proxies’ validity, as well as the validity of bristlecone proxies. It (they) do not.

    Used car science.

  8. Appears to be an application of the “repetition of the Big Lie” technique; keep making authoritative-sounding pronouncements, and they eventually will be believed, however ridiculous.

  9. Think clearly for a moment about the Lasso method. It selects for the best proxies of temperature in the instrumental period. It may come up with a lot fewer proxies, but then what does that say about the robustness of the proxies in total? Let us say I am doing a sensitivity test and I select the best (few) proxies and determine what they show. Let us say they show something different than the whole of the proxies or even some conjured-up Mannian selection process. That little exercise certainly calls into question the robustness of the proxies in all or part. Why would the Lasso-selected proxies show something entirely different than the entire proxy catalogue or those selected by Mannian arbitrary techniques?

    I am tiring of the Mann et al. counter-critiques of: if you do not do it exactly as we prescribe, your criticism is invalid. I am convinced that these people do not have any idea what a true sensitivity test is and how it should be applied, other than one they devise to support their pre-conceived notions.

    What it appears that the Mannians and supporters are saying is that we can get very different results when we use a well-selected few proxies (Lasso) versus using all of them versus using the arbitrarily selected Mannian ones. Don't they realize that what they are saying, in effect, is that there is a subset of proxies that can be selected that will support our preconceived notions? Just as if I had sufficient randomly generated proxies I could do the same.

  10. Mann and his friends will fight every inch of the way to defend his hockey stick. He has a messianic complex — he’s convinced he is saving the world by doing battle with the evil, fossil fuel-funded deniers. He defended upside-down Tiljander. He’ll defend anything, no matter how flawed.


  11. Jeff, isn’t it time you wrote up your critique and published it? Blogs are fine, but your excellent, rigorous, astute, and critical work won’t get the attention it deserves unless you publish in a professional journal.

  12. OK, I have a basic question, not discussed in the Mann et al. papers or in MW ’10.

    Generally, I’ve seen optimal estimation problems cast in the form of a set of equations and assumptions involving underlying variables (to be estimated) and measured variables. The problem of historical reconstruction from the proxies, I gather, is something along the lines of:

    T(n) is a stochastic series of unknown spectral content, where n is the year.
    There are N measured sets of values (proxies), indexed as i = 1 to N. Proxy y_i(n) = alpha_i * T(n) + beta_i + gamma_i * noise_i(n),

    where alpha_i and beta_i are constants which vary proxy to proxy but not over time. For some proxies, Mann applies an a priori constraint on the sign of the scale factor alpha_i as being either positive or negative, but generally no assumptions are made concerning the range of alpha_i. noise_i is assumed white, and independent across proxies. The magnitude of the white noise for proxy i, gamma_i, is not constrained.
    The problem is to use the measured proxy values to produce the best estimate of T(n), in a least-squares sense.

    Does this accord with your understanding? And why are these assumptions not stated? Are they not helpful in defining the statistical problem to be solved? I must admit to being surprised that the MW’10 paper didn’t make the underlying assumptions more explicit.
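    For what it's worth, the model as stated is easy to simulate. A minimal sketch under those stated assumptions (white, independent noise; time-constant coefficients; all parameter ranges illustrative), including an oracle weighted least-squares inversion that assumes the coefficients are known – something no real reconstruction enjoys:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_prox = 1000, 20

# The unknown temperature series T(n): an AR(1) stand-in
t = np.zeros(n_years)
for n in range(1, n_years):
    t[n] = 0.95 * t[n - 1] + 0.3 * rng.standard_normal()

# y_i(n) = alpha_i * T(n) + beta_i + gamma_i * noise_i(n)
alpha = rng.uniform(-1.0, 1.0, n_prox)     # sign and scale unconstrained
beta = rng.normal(0.0, 0.5, n_prox)
gamma = rng.uniform(0.5, 2.0, n_prox)      # per-proxy white-noise level
y = (alpha[:, None] * t + beta[:, None]
     + gamma[:, None] * rng.standard_normal((n_prox, n_years)))

# Oracle least-squares estimate of T(n), year by year: weight each
# proxy by alpha_i / gamma_i^2 (precision weighting)
w = alpha / gamma**2
t_hat = (w[:, None] * (y - beta[:, None])).sum(axis=0) / (alpha * w).sum()

print("correlation with truth:", np.corrcoef(t_hat, t)[0, 1])
```

    Even with the coefficients handed to the estimator, the recovered series is noisy; the real difficulty in the thread above is that alpha_i, beta_i, and gamma_i must themselves be estimated from the same short calibration window.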

  13. #18 The assumption is that you have alpha * T, but they know darned well that they cannot openly assume that temp is the primary signal in proxies. Beta is not calculated directly but rather is normalized to zero across all proxies, with offsets calculated after the solution is averaged. Gamma times noise is a common interpretation; however, there are plenty of biologists who assume that moisture and other things have a stronger signal than temp.

    The assumptions are spelled out sometimes; Christiansen's recent criticisms of paleo reconstructions did a good enough job. But in reality, when someone tells you that they have 1000 series of noisy data, none of which are a priori known to be temperature, and they simply correlate them to temp and literally chuck the rest, discussing the nuances of the chosen model becomes a little difficult.

    Sorry if my conclusions are too quick-sounding, but I've been down this road for two years and it flatly stinks to high heaven of thumbs on the scale.

  14. #19
    Jeff, I agree with you that the temperature is certainly not the only factor in many proxies. (Quite possibly most — the only proxy type of which I have read extensively is the tree ring.) The effects of other factors seem to be swept into the “white noise”, uncorrelated to temperature and between proxies.

    I suspect we share a distrust of treating clearly important factors such as rainfall as “noise”. But I was trying to separate the validity of the modeled equations — that is, how well they represent reality — from the technical solution of the estimation problem. I’d like to think that, given those equations/assumptions, and a criterion for the estimation error (least squares?), that there is an optimal estimator. How much that optimal estimate is worth, given the known issues with proxies, is a different question.

    Can you provide a link or more detailed citation to Christiansen? I don’t think I’ve read this paper. Thanks in advance.

  15. Rainfall and temperature almost certainly have some relationship, though it might not be either linear or even always the same sign over various ranges. It would have to be studied independently to even begin to get a grasp of its importance.

  16. Harold,

    https://noconsensus.wordpress.com/2010/07/27/bo-christiansen-variance-loss/

    I’m far more convinced of my own results on the signal in the proxies with this post than anyone else seemed to be but I’m also a lot closer to the data.

    https://noconsensus.wordpress.com/2010/08/19/snr-estimates-of-temperature-proxy-data/

    This, combined with Willis Eschenbach's version linked above, several comments from Dr. Loehle, and unfortunate common sense, suggests that a few tenths of a degree of temperature change are not the primary driver of tree growth.

    I’m unsure of the time you have spent on the subject. It’s very clear that you have the ability to do whatever you want with climate science papers, but when you write ‘rainfall as noise’ it is too realistic to be attached to the dark arts of paleoclimate. It sounds more like myself when I first asked – how are trees calibrated to temp – are they placed in greenhouses to determine the response?

    Turns out that despite asking for literally trillions of dollars, nobody wants to spend the money on a greenhouse study.

    Nobody seems to care if there is rainfall correlation to temp. That just inflates the result.

    And yes, it is that ugly.

  17. #22 Jeff —
    Thanks for the reference and the comments — quick and humorous. Always good to get a smile out of this subject.

    Perhaps I’ve been too generous in my assessment. As to my time devoted to this topic, not a lot. I defer to your judgment.

    The “dark arts of paleoclimate” indeed.

  18. #23, I’ve become jaded. Really trust no one but yourself. I haven’t met many with the ability to understand and nothing to lose who don’t figure it out. Most climatologists won’t touch it with a ten foot pole.

  19. Jeff, don’t let them grind you down. It’s part of the Method. Success comes from endurance. FWIW, I was preaching qualitatively for some years that the global temperature errors were far larger than officially pictured. If you have a more mathematical discipline like surveying, a big part of the course is about the propagation of errors, closures, backsighting, redundancy, replication, ill-posed equations and so on. Younger professionals remember the mathematics and can be quantitative, but we retired people sometimes have to be content with following the competing arguments and using experience to decide which is plausible. For example, it does not take detailed math to detect cherry picking, but it does to publish and expose the magnitude of its effect in a nominated case.

    Like you, I’ve become tired of that familiar curve of global temperatures. As a young spectroscopist years ago, I got sick of looking at similar spectra over and over and started to look more at the fine detail. That’s where the adventure is. That’s why I’ve been trying to explain anomalous years or months more than anomalous centuries. It gives additional insight to mechanisms, which is what these reconstructions so lack.

    I’d hate you to feel unappreciated or underestimated. You are not.

  20. Re: HaroldW (Sep 23 23:18),

    This is basically the inverse regression problem. I have been looking at exactly this specific situation (along with some variations of it) over the past several years and I think it has a lot to offer for paleo reconstructions.

    One interesting approach is to estimate both the regression coefficients AND the T(n) series simultaneously, without using the observed temperatures themselves. This is no longer a linear problem (since some of the parameters are multiplied together); however, in practice, the least squares equations can be solved using iterative methods. The solution for T(n) is unique up to translation and scale, and when there are no missing values for any of the proxies, it can be shown that the result is the first principal component of the proxy matrix. When there are missing values, the estimates can still be calculated without ANY infilling, which seems to be so popular in the team's camp. The final result can be scaled to the temperature series without severe danger of overfitting.

    Another approach is to include the temperatures in the mix as well when deriving the estimates of T(n). What makes my method different from the standard inverse regression is that the estimates of the alpha and beta coefficients are determined by all proxy values including those outside of the “calibration” period. This reduces the effect of cherry picking those series which fortuitously match the rise in modern temperatures.

    I am currently examining the SMR response using some of these techniques to evaluate their reconstructions and may post in the near future if I can pull something together.
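    The complete-data claim here is easy to check numerically. A hedged toy version (my own sketch, not RomanM's actual code): alternate between regressing each proxy on the current signal estimate and re-estimating the signal from the fitted coefficients. On a centered proxy matrix this iteration is essentially power iteration, so it converges to the first principal component, up to sign and scale, provided the leading eigenvalue is simple (per the correction below):

```python
import numpy as np

rng = np.random.default_rng(3)
n_prox, n_years = 30, 400

# Proxies sharing one common signal, plus independent noise
true_s = np.cumsum(rng.standard_normal(n_years)) * 0.1
a_true = rng.uniform(0.5, 1.5, n_prox)
y = a_true[:, None] * true_s[None, :] + rng.standard_normal((n_prox, n_years))

# Alternating least squares for y_i(n) ~ a_i * s(n) + b_i
yc = y - y.mean(axis=1, keepdims=True)   # the b_i are the row means
s = rng.standard_normal(n_years)
for _ in range(200):
    a = yc @ s / (s @ s)                 # regress each proxy on s
    s = a @ yc / (a @ a)                 # re-estimate s given the a_i

s /= np.linalg.norm(s)                   # unique only up to scale/sign

# Compare with the first principal component of the proxy matrix
u, sv, vt = np.linalg.svd(yc, full_matrices=False)
pc1 = vt[0]
print("match with PC1:", abs(s @ pc1))
```

    Note that no target temperature series enters the iteration at all; the calibration-period cherry-picking problem only appears at the final scaling step.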

  21. Slight correction to one of my statements above in #27:

    The solution for T(n) is unique up to translation and scale and when there are no missing values for any of the proxies, it can be shown that the result is the first principal component of the proxy matrix.

    This should include the caveat that the largest eigenvalue is strictly greater than the second largest; otherwise, the solution may not be unique.

  22. With the MW paper, I finally started reading up on statistics to try and understand the arguments. I came across these papers, written from a business perspective, which I found to be eye-openers. As I recall, Mann was making his selection out of a total of 1209 proxies covering roughly 150 years from 1850 to 2000. As the following references point out, the number of ways one can select 20 proxies out of 1209 is

    (1209!)/{(20!)(1189!)} = 1.56 * 10^43

    Of course, no computer can handle all that, so arguing against the Lasso method as opposed to some other method of restricting the number of proxies selected for inclusion in the model is silly.

    By selecting the 20 proxies with the greatest correlation with temperatures, one is selecting just one of those 1.56 * 10^43 possible subsets, and you shouldn't be surprised if you get correlations that could happen by chance only once in 10^43 random trials. Here's the first link I came across, which got me reading up on Bonferroni and related tests:

    Click to access jf97.pdf

    As for the second link, click on the psu.edu [PDF] link for the DP Foster, "Honest Confidence Intervals" article.

    http://scholar.google.com/scholar?hl=en&q=honest+confidence+intervals+&btnG=Search&as_sdt=2000&as_ylo=&as_vis=0
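    Both points are cheap to verify. A quick sketch (the 1209-proxy pool, 20-proxy selection, and roughly 150-year record are taken from the comment above; the second half illustrates the selection effect that multiple-comparison corrections like Bonferroni guard against, using pure noise):

```python
import math

import numpy as np

# Number of ways to choose 20 proxies from a pool of 1209
n_subsets = math.comb(1209, 20)
print(f"{n_subsets:.3g} possible 20-proxy subsets")   # about 1.56e+43

# Selection effect: best nominal correlation among 1209 pure-noise
# "proxies" against a 150-year noise "temperature" record
rng = np.random.default_rng(5)
temp = rng.standard_normal(150)
null_proxies = rng.standard_normal((1209, 150))
r = np.array([np.corrcoef(p, temp)[0, 1] for p in null_proxies])
best = np.abs(r).max()
print(f"best |r| among pure-noise series: {best:.2f}")
```

    The best of the 1209 null series typically shows a correlation that a naive single-comparison test would call wildly significant – which is exactly why post-selection p-values and confidence intervals must be corrected for the search.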

  23. Jeff – I was amazed at the amount of pain that a spiny, paper-matchhead-sized object could cause. I live in (probably false) hope that the 1/2 inch across one in the other kidney will just dissolve away. May your birthing pains end quickly!

  24. First, Jeff, I think you have shown that the Mannian treatment of reconstructions can underestimate the amplitude of past temperature changes, and that if the correct pseudoproxies (as opposed to the assumptions made by Mann et al.) are used, that is indeed the case. You are perhaps disappointed that you have not received sufficient feedback confirming or criticizing your analysis, and at the same time do not feel you have the expertise/time to publish on it.

    When you see others looking at the Mannian product from other points of view and with what they see as other poorly applied methodologies by Mann, you might be feeling those efforts to be less direct (and damning) than your more basic analysis.

    I too sometimes think that the criticism of Mann can be made less effective by the simple fact that there are a number of methodologies (including cherry picking) that would appear to be incorrectly applied. What I do gain from these many avenues of analysis is a better understanding of all parts of the methods that are involved in or can be involved in reconstructions.

    Not that I am in a good position to judge, but I think I learned from the MW 2010 paper and your analysis. What RomanM proposes sounds like something that will be fun to learn about also and particularly for someone who has just recently got into learning more about PCA.

    By the way, a kidney stone could surely put you in a bad mood, but that too shall pass.

  25. https://noconsensus.wordpress.com/2010/09/23/ostriches/#comment-36997

    Alan D McIntire, you point to something that is easily lost on otherwise intelligent people, and that is the amount of data snooping that is possible with all these combinations. Of course, the data snoopers will always insist that the criteria they selected make sense (after the fact, most likely) and tend to present them as derived a priori. We can assume with much better confidence that MW selected the Lasso method a priori than the criteria used by Mann et al. One never knows (since if there are many selection methods available, the Lasso could conceivably have been data snooped), but I'll take my chances with a method that has some statistical grounding as a selection technique (assuming that the proxies have some capability to measure temperatures) over the more arbitrary criteria used by Mann et al.

    Another issue would need to be discussed if one does a pre-selection process: one would have to attempt to assign reasons why some proxies are better than others, and why, if the process is not merely picking out non-informative proxies at random, we should see large differences when using various parts of all the proxies available.

    I think the authors of MW 2010 used the Lasso method not because they thought (some of) the proxies were reacting to temperature in a predictable manner, but rather because it was a way around the issue of data snooping a selection process/criteria.

  26. Kenneth,
    I certainly would publish something on this if I thought they were honest brokers of science; right now it is just wasting my time. I still may, but I have started a new project on Antarctic temp using Roman's methods, which I hope to publish. I made these posts as simple as possible only to demonstrate to laypeople how absurd the methods are. The more complex posts are often the least popular.

  27. In the end, you have to pick a lot of cherries to make a cherry pie as big as AGW.

    But I hadn’t realized cherry pie potentially gives one kidney stones.
    Indigestion, yes I’ve had that many a time, belly laughs, yes I’ve had them too,
    but not kidney stones….yet………Time will tell.

    There really isn't a science of paleoclimatology at all, is there –
    well, not one with any reliable temperature or CO2 proxies as such.
    Come to think of it, does climate science have any, even just one, reliable global proxy, full stop?
    (answers on the back of a postage stamp to….
    address withheld.)

  28. In the hospital with kidney stones today. It’s as fun as I remember

    Ha, I’ve spent the last week with the same issue. Not bad enough for the hospital but hurts like heck

  29. The NOAA website includes the ice core data from Richard Alley (2000). I can’t understand why we keep going over MBH proxies that use discredited tree ring data when there are much better proxies for high northern latitudes that go back 50,000 years.

    Alley's data clearly shows the Minoan Warming Period, the Roman Warming Period, the Medieval Warming Period and the Little Ice Age, exactly in synch with the historical record. Here is a fairly witty commentary, but if you want the raw data, just go to NOAA:

    http://www.foresight.org/nanodot/?p=3553

  30. Yeah, I used to be quite happy with ice core reconstructions, and I have looked quite a lot at the Greenland stuff..
    http://s53.photobucket.com/albums/g43/DerekJohn_photos/Greenland%20revisited%20DJA%202010/?start=0
    and,
    http://s53.photobucket.com/albums/g43/DerekJohn_photos/Hockey%20stick%20-%20yellow/Hockey%20Stick%20White%20Part%201/?start=180

    but, in the end, ice cores are about as much use as tree rings……
    http://homepage.ntlworld.com/jdrake/Questioning_Climate/_sgg/m5m1_1.htm

    excerpt, Dr. Jonathan Drake writes,
    ” Update:
    It has come to my attention that a number of AGW supporters have attempted to attack my paper. This is brilliant news because it demonstrates its significance and that it is perceived as a threat to their agenda. The best part is that none of them have contacted me in order to ascertain the physical mechanism behind the correlation and thus they do not know the reason for the subsequent correction. However, some have gone as far as making up a nonsense theory and use it as the basis for their attack.

    Check the paper for yourself and ask yourself why these individuals have behaved in this way. I know the answer, do you? ”

    It is also worth looking around Questioning Climate regarding Arctic sea ice levels and their measurements….

    At present Dr. Drake is looking extensively into the GHCN temp. gridding, as covered on this blog over recent months,
    with some very interesting and new results,
    but as yet no one is showing any interest in his work/s…….

  31. #43 Derek

    Thanks for the link to Dr. Drake. I see that he has tAV on his blogroll. I wonder if this is the same "Jonathan" that was involved in the intense tAV discussion on negative thermometer weighting in Steig's Antarctic temp reconstruction.

  32. #44

    It strikes me now that it was Jonathan Baxter (not Drake) involved in that discussion, along with Jeff, Ryan O, Carrick, and of course, TCO.

  33. Derek,

    Thanks for the Drake paper. I don’t understand the IGD thing but if it is correct it reduces the correlation between CO2 concentration and temperature.

    I don’t see this as very important as it still looks as if temperature leads changes in CO2 by hundreds of years, making it more likely that temperature drives CO2 concentration rather than the reverse (as claimed by Alarmists).

    No matter, as you point out, correlation does not imply causation.

  34. Steven Mosher:

    In the hospital with kidney stones today. It’s as fun as I remember

    Ouch! Sorry to hear that… I got the pleasure of an overnight visit once. The pain was so intense morphine wouldn’t touch it.

    So I feel for ya brother.

  35. As to correlation not implying causation, think again. See Pielke, Jr.’s latest piece, on a report by Oppenheimer in which the latter says: “I know correlation doesn’t imply causation but, in this case, it does.”

  36. Re: gallopingcamel (Sep 26 00:28),

    The Drake paper’s fatal flaw is that it doesn’t take into account that the rate of accumulation of ice is a function of temperature. Ice accumulates much more slowly when it’s cold, as would be expected by the lower water vapor saturation pressure. That means the age difference between the trapped gas and the ice (IGD) is much larger at low temperature. This is the expected behavior because it takes much longer to build up enough snow to close off the bubbles. Here’s a graph of accumulation rate (unadjusted for compression) and IGD for the Vostok data. The accumulation rate at the Vostok site is quite low compared to sites closer to the Antarctic coast and the IGD is correspondingly lower for those cores. This is really basic and the failure to understand this does not make Drake look very good.
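    DeWitt’s mechanism can be put in rough numbers. A minimal sketch, assuming accumulation scales with the saturation vapor pressure over ice — the close-off depth and reference accumulation below are illustrative round figures I have assumed, not actual Vostok measurements:

```python
import math

def sat_vapor_pressure_ice(t_celsius):
    """Approximate saturation vapor pressure over ice in Pa
    (a Magnus-type fit; adequate for illustration)."""
    return 611.15 * math.exp(22.46 * t_celsius / (272.62 + t_celsius))

def ice_gas_age_difference(t_celsius, closeoff_m_ice=70.0,
                           accum_ref=0.022, t_ref=-55.0):
    """Crude IGD estimate in years: accumulation (m ice/yr) is assumed
    proportional to saturation vapor pressure, and IGD is the close-off
    depth (ice equivalent) divided by the accumulation rate.
    closeoff_m_ice and accum_ref are made-up round numbers, not site data."""
    accum = accum_ref * sat_vapor_pressure_ice(t_celsius) / sat_vapor_pressure_ice(t_ref)
    return closeoff_m_ice / accum

warm = ice_gas_age_difference(-55.0)   # interglacial-like temperature
cold = ice_gas_age_difference(-63.0)   # glacial-like temperature
print(warm, cold)  # colder -> slower accumulation -> much larger IGD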

    That’s not a ‘fatal flaw’. He doesn’t comment on the reason for the IGD; he just compensates for it (I think – the paper is not at all clearly written and has no proper references).

    I have always thought it a bit suspicious that in the ice core data, CO2 and CH4 and ‘temperature’ (deuterium) all correlate so well, and suspected that there is some other effect that causes these 3 data sets to match up. So he may be onto something, perhaps.

  38. Carrick @ #47.

    The pain is so intense that relieving it with enough narcotic risks overdose and death from respiratory failure when the pain is suddenly relieved with passage of the stone from the ureteral canal into the bladder.
    ================

  39. #51 Kim,

    As you probably know, the ureter is narrower at the bottom than at the top. If I happen to go to heaven when I pass, I’m going to have a talk with the creator about his design prowess. It’s clear he could have seen this coming, so there can be only one conclusion.

    Benevolent God, I think NOT! 😀

  40. Re: PaulM (Sep 27 08:26),

    But there’s no need to compensate. IGD variation has a known cause that is unrelated to the actual CO2 concentration at the time. He isn’t compensating, he’s removing the signal. If you combine a time series with a signal that’s proportional to temperature with another series that’s inversely proportional to temperature, you get a constant. That tells you precisely nothing, which is also the value of Drake’s paper.
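    The arithmetic of the objection above is easy to demonstrate with synthetic series — nothing here is real core data; the constants are made up for illustration. Multiplying a signal proportional to temperature by a “correction” inversely proportional to temperature leaves a constant, i.e. no signal at all:

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient, stdlib-only."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Synthetic series: a strictly positive "temperature", a CO2-like signal
# proportional to it, and an IGD-like series inversely proportional to it.
t = [i * 0.1 for i in range(200)]
temp = [3.0 + math.sin(x) for x in t]
co2 = [2.0 * T for T in temp]       # carries the temperature signal
igd = [6.0 / T for T in temp]       # inversely proportional "correction"

corrected = [c * g for c, g in zip(co2, igd)]

print(pearson(co2, temp))               # 1.0: the raw series tracks temperature
print(max(corrected) - min(corrected))  # ~0: the "corrected" series is flat
```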

  41. And for yet more proof of my point that in human affairs irony always increases, in the banner at the top of Jonathan Drake’s home page it says Questioning Climate with the sub heading “Correlation is Not Causation”. Now if that isn’t ironic, I don’t know what is.

    To go into more gory detail: if A is correlated with B and C is correlated with B, then A and C will, of course, be correlated. But that does not mean that A causes C or vice versa. Yet that is precisely the assumption Drake uses to construct his so-called correction.

  42. De Witt Payne (#49),

    While I did not understand the Drake explanation of IGD, it triggered my bulls**t detector so I tuned it out.

    Thank you for providing a rational justification to bolster my instinctive reaction.

  43. Monnin et al 2004, from the conclusion:

    Quote:
    “A new chronology for the Taylor Dome ice core established through CO2 synchronization reveals that the accumulation has changed substantially during the Holocene, with a long-term increase that shows little relation with the temperature history. Many timescales using ice flow models, especially those for Antarctic cores, are based partly on the assumption that the accumulation rate varies as the saturation vapor pressure over ice and is therefore a function of local temperature. This assumption is clearly not valid at Taylor Dome, and is likely to be substantially incorrect at other sites as well, notably in locations such as Law Dome and Siple Dome, which are at relatively low elevation and near coastal regions. At more-inland sites such as Dome C, independent validation of the ice core timescales suggests that the assumption is reasonable; however, it is unlikely to be strictly valid and caution is urged in applying it.”

    In essence, C ‘shows little relation’ to A.

    http://www.greenworldtrust.org.uk/Forum/phpBB2/viewtopic.php?p=2260#2260

    By the way, in some glaciers the correlation is essentially inverted.

  44. #52

    Jeff ponder this for awhile:

    Imagine you are born with your ureter just below your left kidney bent into an S shape and so narrow that it routinely got blocked, causing your kidney to swell to almost twice its normal size and become pear-shaped.

    Now ponder this:

    Imagine that they didn’t discover this until you reached the age of 17. By that time you had gotten used to periods of 104° fever and a pain threshold so high you could break a leg and think you only had a sprain. When you were younger and told the doctors you hurt badly, they diagnosed post-nasal drip, then a kidney infection, and finally decided it was all in your head. It wasn’t until one doctor found blood in the urine that they did a dye test with X-rays and discovered it.

    So I can sympathize with anyone who gets a kidney stone, because that is what I lived through for 17 years, until they went in, took that section out, and shaved down my kidney.

  45. Re: Jonathan Drake (Sep 28 03:01),

    At more-inland sites such as Dome C, independent validation of the ice core timescales suggests that the assumption is reasonable; however, it is unlikely to be strictly valid and caution is urged in applying it.”

    Vostok is at the center of the East Antarctic ice sheet, well away from the coast. There is no plausible mechanism to link IGD with CO2 concentration. There is a mechanism to link IGD to accumulation rate. The Vostok core clearly shows that the accumulation rate is indeed strongly correlated with temperature. QED.

  46. 59 DeWitt

    Can you discount the possibility that very arid places like Vostok and the South Pole, remote from oceans, have accumulation derived from wind-blown detritus whose isotopic composition in deep core has no useful signal? There are plenty of wind-blown solids close to the ground, not much snow, and the possibility that many successive evaporation/sublimation events have assisted isotope fractionation in unknown ways between the oceans and the drill sites.

    I’m not wedded to this proposition, just askin’. There’s more than one possible explanation for the IGD; Jonathan has given but one. Ferdinand has written a lot, but it’s hardly Occam’s Razor material; it’s convoluted.

    How, as a chemist, do you feel about linking processes in a piece of ice that isotopes indicate was in a process of change over a period of up to 6,000 years?

Leave a comment