Mann07 Part 3 — Unprecedented

Well lessee, what kind of trouble can I get in today… Hmm, well Bishop Hill has a post that proves Phil “Climategate” Jones had veto authority over which papers were reviewed by Muir Russell’s crack team… Now that is hard to resist for bloggers, but I’m sick of climate scientists cheating and misrepresenting the truth; they have learned nothing, absolutely not one damned thing, from Climategate. Think about it: Mann08 deletes inconvenient data for a preferred result, and climate science sees no problem. Muir Russell deletes the critics’ arguments on Climategate, and again they find no problem.

I’m sick to death of politicians – you sort out who is who. Keep it to yourself though, because that’s not what this thread is about. This is a math thread… Yeah, I know, I’ve got a bunch of different readers these days who arrived here and suffered with us through Climategate and don’t generally come here for the math. I’m sorry, though; I like it. Math is the key to science.

Long-time readers have seen many demonstrations here of historic signal reduction or ‘variance loss’ from Mannian methods. This post attempts to match some basic statistical properties of pseudo temperature proxies to those of the Mann08 proxies.

You may recall from my last Mann07 post this histogram of the autocorrelation coefficients from Mann08.

I took this histogram from about 1100 of the 1209 proxies – a few didn’t converge – and copied these parameters to 10,000 series of random data such that the distribution of the rhos was identical. That means I now have 10,000 proxies with a known signal and characteristics matched to Mann08’s (and 09’s) more recent and less famous hockey stick papers. In Mann07 he assumes 0.32 is the ‘right’ value for rho for ALL of the proxies – the red line above. Why use only one value? I say use them all.
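The matching step can be sketched in a few lines. This is my own Python illustration, not the R code the post links; the rho values here are drawn uniformly as a stand-in for the empirical Mann08 histogram, which isn't reproduced in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the empirical rho histogram fitted to ~1100 Mann08
# proxies; the post samples from the real histogram rather than
# using a single value like Mann07's 0.32.
n_series, n_years = 10_000, 600
rhos = rng.uniform(0.0, 0.9, size=n_series)

# Generate all AR(1) ("red noise") series at once, one rho per row.
eps = rng.standard_normal((n_series, n_years))
proxies = np.empty((n_series, n_years))
proxies[:, 0] = eps[:, 0]
for t in range(1, n_years):
    proxies[:, t] = rhos * proxies[:, t - 1] + eps[:, t]
```

Each row is pure noise for now; averaged over all 10,000 rows the composite converges to a flat line at zero, which sets up the next step.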

I ran the CPS algorithm from before, linked in the hockey stick posts above. If you don’t know what I’m talking about, click on the hockey stick posts and read the first, second and third links. Sorry if it hurts a bit, but we humans are forced to suffer through learning – you think this is easy?

What I’m doing is making fake tree ring proxies, borehole proxies, mollusk shell proxies and that sort of thing by matching their statistical redness. These are random series which, over 10,000 samples, will average to a flat line at zero. Then an artificial temperature signal is added that looks like this:

The sine wave portion has an amplitude of +/- 1 and the temperature signal is a ramp from zero to one.   An example proxy containing the signal and matching a Mann08 autocorrelation is below.  This is one of ten thousand pseudo-random curves that will now average to the temperature signal rather than a flat line!
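For concreteness, here is one way to build such a target and a single pseudoproxy from it. The window lengths and sine period are my assumptions, since the post doesn't state them; only the ±1 sine amplitude and the 0-to-1 ramp come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Historic portion: sine of amplitude +/- 1 (an assumed period of
# 250 steps). Calibration portion: ramp from 0 to 1.
n_hist, n_cal = 500, 100
sine = np.sin(2.0 * np.pi * np.arange(n_hist) / 250.0)
ramp = np.linspace(0.0, 1.0, n_cal)
signal = np.concatenate([sine, ramp])

# One pseudoproxy: the signal buried in unit-variance AR(1) noise
# (a single rho of 0.32 shown here for brevity; the post draws rho
# from the Mann08 histogram instead).
rho = 0.32
noise = np.empty(signal.size)
noise[0] = rng.standard_normal()
for t in range(1, signal.size):
    noise[t] = rho * noise[t - 1] + rng.standard_normal()
proxy = signal + noise
```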

You can kind of make out the signal beneath the noise. If you average all the proxies, you get a plot of the original signal with no shape changes. If you use Mannian CPS (composite plus scale, the hockey stick method), then what happens?

First, though, a little more pain on the quality of the proxies. The standard deviation of my signal above is 0.29 and the noise SD is 1, so it is not that different from the 0.25 used as the worst (highest noise) case in Mann07. This match happened by accident; it’s close enough, and I was too lazy to make it match better. However, when I set my CPS acceptance threshold to accept correlations of 0.1 or higher, over 80% of the proxies passed correlation, as opposed to Mann08’s 40%.

In CPS, the proxies which pass are averaged to create the ‘result’. Of course, if you delete only a small fraction of the data in favor of the preferred shape, you get only a small distortion.
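My reading of the screen-and-average step, written out as a minimal sketch. The function name and the simplified mean/SD rescaling are my own choices; this is not Mann's actual code.

```python
import numpy as np

def cps(proxies, target, cal, r_min=0.1):
    """Minimal composite-plus-scale sketch: keep proxies whose
    calibration-period correlation with the target clears r_min,
    average the survivors, then rescale the composite to the
    calibration target's mean and standard deviation."""
    cal_t = target[cal]
    keep = [p for p in proxies
            if np.corrcoef(p[cal], cal_t)[0, 1] >= r_min]
    comp = np.mean(keep, axis=0)
    a = cal_t.std() / comp[cal].std()       # scale
    b = cal_t.mean() - a * comp[cal].mean() # offset
    return a * comp + b, len(keep)
```

Running this against the pseudoproxies gives both the passing fraction and the attenuation of the historic sine wave; throwing away the non-correlating series is exactly the step that distorts the pre-calibration amplitude.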

The red line is the signal we’re looking for, and found. The sine wave in this case is reduced by only about 10% in amplitude – you can see it doesn’t quite reach +/- 1. This is actually a greater reduction than for the low-autocorrelation proxies in M07, but I sure didn’t expect to see much from deleting only 20% of the data.

Since Mann08 retained 484/1209 proxies, or only 40% of the data, I messed around a bit with higher correlation thresholds to match the data-scrapping rate of the actual 08 hockey stick.
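Rather than trial and error, the screening threshold that yields a given retention rate is just a quantile of the calibration correlations. A small helper of my own (hypothetical, not from the post's code):

```python
import numpy as np

def threshold_for_retention(corrs, frac=0.40):
    """Return the screening correlation r_min that keeps roughly
    `frac` of the proxies: the (1 - frac) quantile of the
    calibration-period correlations."""
    return float(np.quantile(corrs, 1.0 - frac))
```

With Mann08's 484/1209 ≈ 40% retention in mind, feeding in the simulated correlations returns the r_min that reproduces that scrapping rate directly.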

Well, that decreased the sine wave to about 80%, but I don’t think we’re done yet. These proxies have an added noise level comparable to the worst of M07, which stated this:

Our experiments allowed for various relative noise amplitudes. As in previous work [MR02 and M05] we defined SNR values by the ratio of the amplitudes (in °C) of the grid box temperature “signal” and the added “noise.” Experiments were performed for five different values of SNR: 0.25, 0.4, 0.5, 1.0 and ∞ (i.e., no added noise). (Note that these SNR values represent broadband (i.e., spectrally averaged) properties. When the spectrum of the underlying climate field is “red” (as with surface temperatures), however, SNR will in general increase with decreasing frequency.)

The spectrum of my fake underlying climate field is a linear ramp function and decidedly red, so our signal-to-noise ratio is a bit too high to match up with the M08 results. Consider this quote, though, from Mann07:

We adopted as our “standard” case SNR = 0.4 (86% noise, r = 0.37), which represents a signal-to-noise ratio that is either roughly equal to or lower than that estimated for actual proxy networks (e.g., the MXD or MBH98 proxy networks; see Auxiliary Material, section 5), making it an appropriately conservative standard for evaluating real-world proxy reconstructions.

The first time I read that, I recognized the climate handwave. They don’t know the signal level in a noisy curve; it’s difficult to estimate and easy to exaggerate. Stating that something is ‘conservative’ proves nothing, and stats are a lot of fun after all. Well, if I use a ramp function as the underlying signal (as we did! – the red sloped line in the graphs above), our pass rate of over 80 percent with these ‘worst case’ proxies is far, far better than the 40% which passed (correlated well enough to be kept) in Mann08.

Confusing if you haven’t read M08 or the links in the hockey stick posts above, so let me make it simple.

If we want reality, we need more noise!!! – or less signal – than the STATED conservative, very-high-noise case of Mann07. That is, of course… IF we are to match the M08 proxies. I’ve written this entire post without checking the final result. Lessee what we get: I want a correlation threshold of 0.1 with 40% of the proxies retained. I’ll adjust the signal amplitude until we get where we have to be.
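That amplitude-tuning step can be automated with a bisection over the signal amplitude. A sketch under my own assumptions (a single rho of 0.32 and a calibration-period-only ramp, for brevity; the post uses the full rho histogram):

```python
import numpy as np

rng = np.random.default_rng(3)

def pass_rate(amp, n_proxies=2000, n_cal=100, rho=0.32, r_min=0.1):
    """Monte Carlo estimate of the fraction of AR(1) pseudoproxies
    whose calibration correlation with an amplitude-`amp` ramp
    clears r_min."""
    ramp = amp * np.linspace(0.0, 1.0, n_cal)
    eps = rng.standard_normal((n_proxies, n_cal))
    noise = np.empty_like(eps)
    noise[:, 0] = eps[:, 0]
    for t in range(1, n_cal):
        noise[:, t] = rho * noise[:, t - 1] + eps[:, t]
    p = ramp + noise
    # vectorized Pearson correlation of every proxy row with the ramp
    pc = p - p.mean(axis=1, keepdims=True)
    rc = ramp - ramp.mean()
    r = (pc @ rc) / (np.linalg.norm(pc, axis=1) * np.linalg.norm(rc))
    return float((r >= r_min).mean())

# Bisect the amplitude until roughly 40% of proxies survive screening,
# i.e. "adjust the signal amplitude until we get where we have to be".
lo, hi = 0.0, 2.0
for _ in range(15):
    mid = 0.5 * (lo + hi)
    if pass_rate(mid) > 0.40:
        hi = mid
    else:
        lo = mid
amp_40 = 0.5 * (lo + hi)
```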

Wow, the peak-to-peak sine wave dropped to 0.7 out of 2. That’s 35% of the actual amplitude. While I do believe this demonstration is more accurate than the PEER REVIEWED MANN07 LITRACHUR (booming voice), it is not as good as it can be yet. The ramp pseudo-temp signal is too clean. I want to use the Mann07 pseudoproxy signal added to Mann08 noise and see what we can find out.

In the meantime, this is another demonstration of how we can employ paleoclimatology genius to get guaranteed unprecedentedness!!

unprecednosity?

unprecedentiality?

unpre.??

Ummmm… A hockey stick.

57 thoughts on “Mann07 Part 3 — Unprecedented”

  1. If we use the modeled temp curves as the signal for M07, I expect a higher yet still non-realistic historic signal. It’s just a guess though.

  2. If you re-normalize your results back to the zero line, Jeff, the final ramp keeps its total intensity of 1.0, but the positive magnitude decreases along with the magnitude of the sine wave.

    There’s a peculiar distortion going on there, as the number of retained proxies decreases, with the ‘pre-twentieth century’ dip, just before the ramp, becoming ever deeper. Any idea what’s doing that? And does that have any bearing on the shape of 20th century rise in the final proxy time series in M08?

  3. #2 I think it very much does. You’re seeing the effect of the math which I’ve plotted vs frequency, redness and noise level several times here. Not only is it odd, but I believe it is quantifiable.

    See the isotemp lines of this post:

    https://noconsensus.wordpress.com/2010/07/19/9657/

    The tendency is to distort the zero toward the centerline of the ramp signal. The interesting thing is the percentage change toward zero is equal to the percentage change in historic amplitude. I’ve thought about this some, because I believe the signal distortions are quantifiable, they are therefore correctable.

    Think about that.

    This effect is also true of least squares and regression methods.

  4. What happens in CPS is that the noise which is of higher slope is preferentially selected. When you search for a straight slope, you get an overshoot on the bottom past the zero line from finding the right noise. In this version, the calibration range – red line, is offset and scaled from zero to one so the bottom of the dip is on zero. When I read CPS and RegEM, I try to ignore the vertical scale and see the shape only.

  5. Jeff ID, I like your experiment using a low-budget pseudo-proxy like a sine wave – and it even removes the drift we would expect, evidently, from a climate model. Does the result put you in the Zorita and von Storch camp?

    The great find you made here was the histogram of AR1 values and applying them separately.

  6. Also, do you see the slight ramping from 1400 to 1900? This is a key feature of Mannian reconstructions which is claimed to be temperature.

  7. #5 I’m a fan of V and Z’s 04 paper, even though they made an error in replication – not in correctness.

    I think this is an extension of the work on the topic but it’s still just beginning.

  8. #5 There is nothing to say we cannot stick a sine or square wave in the historic portion of M07 proxies. It will make no difference to the amplification. That should be fun.

  9. “but I’m sick of climate scientists cheating and misrepresenting the truth,”
    Well, I suppose then I shouldn’t raise it, but there’s misrepresentation here. Bishop Hill’s post didn’t “prove that PJ had veto authority” – totally untrue. All that happened (according to BH) was that Oxburgh asked PJ whether he thought the Royal Society selection of papers was fair, and PJ said yes.

    It seems to me that that question absolutely should have been asked. PJ is an expert, and could offer useful information. Had he argued that the list should have been modified, and they had done so in a way that indicated acknowledging veto power, there’s room for criticism. But he didn’t, and there’s absolutely no indication that Oxburgh would have responded thus.

  10. I wondered if someone would say something about that. So if it’s misrepresentation tell me.

    What would happen if Phil Jones said no?

    Why ask then, if he cannot say no?

    Do you still see it as misrepresentation?

    You may be right that I stated it too strongly but my god, how nuts is it to ask the accused if he’s happy with the evidence? I suppose that comment belongs on a different thread though.

    BTW, I added you to the blogroll about an hour ago.

  11. Re: Jeff Id (Jul 19 21:18),
    Jeff, thanks for the blogroll mention.

    Yes, I do see it as overstating. This list isn’t evidence re the “accused” – it’s a list of published papers. As Lord O said, they could look at anything they liked – the RS made a summary list to help them. I really can’t imagine that PJ could or would say – you mustn’t look at that one. He might well say that they had left something out. But he didn’t.

    But we don’t know, so nothing is “proved”.

    BTW, I do find this focussing on trivialities with heavy suspicion to be pointless. Making a list of relevant papers is really no big deal. It isn’t a list of “admissible evidence”. It’s just helpful. If you think there’s a real case against Oxburgh, it would be better to focus on that.

  12. How come they didn’t ask me?

    “But we don’t know, so nothing is “proved”.

    yes, something is most certainly proved. It is proved that the committee worked with the accused to ensure that the issues reviewed were satisfactory, while asking not one single critic if they also agreed.

    Do not say I am misrepresenting if you hope to sell that. I should have left it out of this post anyway but am too tired of government funded sophistry.

  13. Jeff,

    I suspect Steve’s gonna snip my comment on Josh’s ‘toons responding to the same silly comment you did (about ‘going back to science’). As if. Here it is —

    Laughing at buffoons is the best way to showcase the buffoonery. “Go back to science”? Are you kidding?! Since when have Jones, Mann, Briffa, Rahmstorf and the rest of the keystone kops ever done any science? You know — calibrating instruments, replicating studies, providing honest responses to criticisms, etc. To call what these jokers do ‘science’ is to insult genuine scientists throughout the course of history. When they adopt the scientific method, we can ‘go back to science’ — for the first time.

  14. While I think he’ll snip, I really believe that belittling the rampant incompetence and the anti-science behavior is the best way to get the science back on track. Imagine a climate science where the scientists installed their instruments properly and checked them regularly. Or satisfied the basic principles of the scientific method. Or met the basic standards for forecasting before using their models.

    If the so-called climate scientists actually tried to meet the basic requirements of science for once, the changes would be so startling a lot of people would be shocked speechless.

  15. #11, I really do see the papers as the accusations. If you look at the combination of Briffa 98 and the emails, it’s ugly. If you look at the Chinese temp paper by Jones, ouch.

    These guys are red handed caught exaggerating science for the purpose of global warming alarmism. It’s not like it’s a close call which requires an interpreter. The media doesn’t cover it, multiple reviews whitewash it — openly, the science problems were already out in the open before the ‘frank’ emails proved the intent.

    Yet what do we get?

    It’s not a close call.

  16. Nick @ 9:15

    If you think the Royal Society made the selection of papers, can you tell us who at the Royal Society made that selection?
    =================

  17. Jeff, tAV is such comfort food for thought. Thank you for the timely demonstration.

    Mr. REVKIN: Well, let me tell you what it is, for those listeners who don’t know. In the late ’90s, Michael Mann at Penn – who now is at Penn State and some other researchers, pulled together some of these threads from tree rings and other things and came up with an estimate, over the last thousand years, of temperature, and found that the last 50 years was really outstanding. It stood out as the – a period of warming unparalleled in a thousand years.

    Would someone pass this analysis/demonstration off to Andy to un-confuse him?
    http://www.npr.org/templates/story/story.php?storyId=128568245

  18. We already know UEA selected the papers for the RS. We just don’t know which particular person at UEA did so. Want to bet some quatloos PJ had a hand in that too?

    re post, sorry for ignoring it, but Nick is being a bit precious about representations when the people who should have been shining a light on all this are just as guilty of hiding the odd trick or 2. Disgraceful for an inquiry to allow themselves to be “guided” in this way.

    re mann 07 and 08 – is there a paper able to be put together from all your posts on them?

  19. Over at Niche Modeling there’s some anonymous commenter, Guest, that insists that using red noise is somehow cherry-picking and only white noise will do. Red noise, according to him, contains information but white noise doesn’t. Maybe it’s someone from the Team.

  20. #23 DeWitt Payne

    Heh, I don’t think it is someone from the Team, but he is being stubborn. He has boxed himself into a bit of a corner by insisting that red noise has meaningful information. Maybe he didn’t realize at first that the “information” in red noise is not relevant wrt selecting predictors of historical climate.

    In discussion with me I believe he is arguing that if we calibrate *trendless* red noise pseudoproxies, the selected proxies must somehow have LF information which would match in phase and frequency to the calibration period and extend meaningfully into the reconstruction.

    No, I’m not making this up.

  21. Re: kim (Jul 19 22:13),
    Kim, I understand Brian Hoskins, who is an eminent atmospheric scientist, took responsibility for the list. He might have got a grad student to put it together; I don’t see that it matters. It was, as they said, intended to be a representative list of CRU’s work.

  22. Heh, whoever made that list has some questions to answer. And I’m amused that you don’t seem to think whoever made the list matters. The inadequacy of the list to investigate the matter is epic. It’s highly pertinent who made the list.
    =======================

  23. Can we get back to Mann.

    The simple fact is this. If Lord O is investigating the science of CRU,
    Phil Jones has no business saying what a “fair” sample is. The term “fair”
    or “representative” has NO MEANING in this context.

    A fair sample of Tiger Woods’ text msgs is meaningless to an investigation of his infidelity.
    If AT&T were to pick 100 at random that showed nothing, Tiger would say it was a fair sample.
    If the 100 happened to include some naughtiness, he would say the sample was unfair.

    Fair makes NO SENSE. Representative makes NO SENSE. You want a targeted selection, an unfair sample.
    One that actually focuses on the issues.

    So back to CPS.

  24. Good – this looks like progress. We have one example of how a known dataset will produce a reconstruction. I think it would be instructive to investigate the boundaries.
    1) White noise. What is the effect? But we do have the real proxies to show the frequency distribution of the noise.
    2) Shape of the temperature signal. Can you use a peer-reviewed calibration-period signal for the ramp?
    3) Parametric variation of the temperature signal. While I am comfortable that this is already expressed in your red noise, I feel it may be instructive to achieve a proportion of the proxy signal degradation by moving the sine wave by +/- 50 years and +/- 20% in amplitude, and similarly for the modern ramp. This gives a more direct representation of how each proxy will respond differently to a ‘global’ signal (this being Mann’s thesis, not necessarily a known fact).
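Point 3 above is straightforward to prototype: perturb each proxy's copy of the 'global' signal before adding noise. A sketch using the commenter's suggested ranges; the uniform distributions and the 250-step sine period are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def jittered_signal(t, period=250.0, shift_range=50.0, amp_range=0.2):
    """One proxy's version of the shared sine signal: a uniform
    phase shift of +/- shift_range years and a uniform amplitude
    perturbation of +/- amp_range (i.e. +/- 20%)."""
    shift = rng.uniform(-shift_range, shift_range)
    amp = 1.0 + rng.uniform(-amp_range, amp_range)
    return amp * np.sin(2.0 * np.pi * (t + shift) / period)

# 100 proxies, each responding slightly differently to the signal.
t = np.arange(500)
sigs = np.array([jittered_signal(t) for _ in range(100)])
```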

  25. Heinlein once said via his character Lazarus Long, “If it can’t be expressed in figures, it is not science; it is opinion.”

    He also said, “Anyone who cannot cope with mathematics is not fully human. At best he is a tolerable sub-human who has learned to wear shoes, bathe, and not make messes in the house.”

  26. Jeff,

    This series of three posts, and the post over at Niche Modeling, really do represent progress. It is clear that there is potential for loss of amplitude of the true signal outside of the calibration period due to selection of reddened proxy series that correlate with the calibration period. How much attenuation occurs appears to depend just on the redness. If there is a rational way to determine the degree of redness (and it sure seems like there should be), then it ought to be possible to define the likely range of attenuation in Mann’s reconstruction.

    The nice part is that it begins to make some sense overall: even Mann’s analysis shows some indication of the LIA and the MWP, though smaller in magnitude than from some other estimates (growing seasons, crop regions, historical records, etc.). Accounting for signal attenuation outside the calibration period in Mann’s treatment may help resolve that apparent conflict.

  27. Jeff ID, I do hope that the fine analysis you have made here is not preempted by all the sniping about the CRU investigations.

    It is good to have a Nick Stokes as a sounding board to consensus thinking, but I judge that we need only present the more opaque parts of these investigations and then let the thinking people make their own conclusions. It is rather obvious to me that these proceedings are used to limit the damage done by climategate, but for the more skeptical amongst us I think they give views into the status of the general thinking and attitudes of the prevailing intelligentsia.
    And now that I have provided the final word on the investigations, can we please get back to and expand on your ongoing analysis of variance reductions in temperature and/or climate reconstructions. Don’t do smilies.

  28. Steve Fitzpatrick:

    I do not think that the attenuation of signal is only a function of red noise. See for instance this link:

    Click to access cp-6-273-2010.pdf

    Also note what the reply of von Storch and Zorita shows in the link that I provided above.

    I think we need to be careful (and that includes me in my above statements) about the reconstruction methodology used when talking about the amount of signal attenuation and what might affect it.

  29. #33,

    Agreed. Seems like parts 1-3 could be written up, with say a suitable co-author, and submitted. I’d approach Mann, Von Storch, Zorita, Tamino, Mc, Ammann. hehe

  30. Do you mean JEG who is now in Los Angeles? I remember having a lovely conversation with him back in the day. Our conversation lagged and I told him ‘I haven’t the certainty you desire and you haven’t the uncertainty I desire’.

    Among other things I told him was that the ocean was cooling, which he disputed.
    ===================

  31. #32 Kennith,

    It is an interesting paper. They do point out that increasing red noise will increase the longer term variance, and so overall uncertainty.

    The magnitude of the attenuation they describe is high with noisy data, but I am a little puzzled by one thing: their plots showing how attenuation changes with S/N ratio appear to be based on equal noise (uncertainty) in both x and Y values; in paleo reconstructions, the noise level in the age value is likely much less than the noise level in the measured variable. I expect that the size of the expected attenuation in paleo reconstructions for a given level of noise in the measured variable is less than their plots show.

  32. It is an interesting paper. They do point out that increasing red noise will increase the longer term variance, and so overall uncertainty.

    In fact when we first commented on and reviewed this paper here, I was very skeptical of the so-called correction for attenuation working when red noise is involved.

    their plots showing how attenuation changes with S/N ratio appear to be based on equal noise (uncertainty) in both x and Y values

    I believe the authors show that the most bias is with OLS and something less with TLS (where both the errors in dependent and independent variables are considered) depending on the assumption of errors.

    Another point is the ACOLS correction produces larger CIs. I made this point in the original discussion that a correction that leads to larger uncertainties is not the panacea that on first look it might appear.

    I also noted that in the paper proper the authors do not stress the limitations of the ACOLS uncertainty nor the problems with correcting with red noise while in the SI these points are made much clearer.

    On misspelling my name or shortening it, I have a standard reply that I fear most do not understand (the origin of): You can call me Kenneth and you can call me Ken or you can call me Kinneth, but do not call me Kenny.

  33. Layman Lurker, that’s it and thanks for the video. I had not seen it for a while.

    And you doesn’t have to call me Fritsch.

  34. The JEG paper follows the Mann 2007 methodology to a T and appears to make stronger claims than Mann does for retaining reconstruction signal amplitudes. I think Mann, as an advocate, has to maintain some credibility for past work in a way that is not as pressing for JEG et al. JEG makes harder claims, in my view, for TTLS over ridge regression. Now let’s see how they handle autocorrelation.

  35. #44 I get the advantage of writing each line of code of course. It also helps to view the dozens of plots I cannot post – due to laziness and time – but Mann doesn’t have a leg to stand on, except a sophist’s leg. If you wouldn’t mind sending the JEG paper, it would be awesome.

  36. Jeff ID, my comments in Post #44 were based on the abstract of the conference/paper referenced by Mosher. I searched on RegEM to get to the abstract. I have not been able to come up with the paper itself. JEG is Julien Emile-Geay and he had a web site, so I might search for it next. If you search that conference’s presented papers using the key words RegEM, TTLS, etc., you will find a number of papers making claims for the new improved reconstruction methodologies that preserve variances.

  37. Jeff, the talk, not a paper, is titled as below, and the conference statement indicates that the text of the talks will be published online. A talk, even if only 15 to 20 minutes, could be informative if there is an extended Q&A and the right questions are asked. Otherwise some of the referenced papers should be searched, I would suppose.

    Certainly though from the number of talks at this conference with a like subject it is rather obvious that climate science has “moved on” to variance preserving methods (their claim not mine) with RegEM and TTLS.

    Julien Emile-Geay, Tapio Schneider, Diana Sima and Kim
    Cobb – Variance-preserving, data-adaptive regularization
    schemes in RegEM. Application to ENSO reconstructions

    We plan to make presentations available online as pdfs after the meeting. If you
    would rather not have your presentation available online, we will give you an
    opportunity to object later, but it would be helpful to know on upload (for example,
    you could name your presentation as Jones_noupload.ppt if your name is
    Jones in that case.)

  38. Notice that Koutsoyiannis’ conference presentation is here (not a conference site) and its placement has evidently corresponded with the conference date. I have not as yet located the JEG paper or others addressing variance retention in reconstructions. Has anyone else been searching?

    Koutsoyiannis, D., Memory in climate and things not to be forgotten (Invited talk), 11th International Meeting on Statistical Climatology, Edinburgh, International Meetings on Statistical Climatology, University of Edinburgh, 2010.
    http://www.itia.ntua.gr/en/docinfo/991/

    I guess for consensus papers there is a less pressing need to publicize them since all will agree that they are valid.

  39. Well it turns out that Mann07, both the a and b versions, is riddled with basic flaws, according to Dr. Jason Smerdon in his new paper in the Journal of Climate:

    Erroneous Model Field Representations In Multiple Pseudoproxy Studies: Corrections and Implications

    Click to access 2010b_jclim_smerdonetal.pdf

    One basic mistake that Mann made is that he switches the Western Hemisphere with the Eastern Hemisphere:

    We have discovered that the geographic orientation of the CCSM field used by Mann et al. (2005, hereinafter M05), Mann et al. (2007a, hereinafter M07), and Mann et al. (2007b) was incorrect.

    Another is that he smoothes one hemisphere and not the other.

    We also have discovered that the ECHO-g field used in M07 was corrupted by a hemispheric-scale smoothing in the Western Hemisphere

    Dr. Eduardo Zorita explains it in an article at Die Klimazwiebel:
    http://klimazwiebel.blogspot.com/2010/07/mistake-with-consequences.html

    So Jeff, you might have to start all over with Mann07, since he added a 180° twist to his Upside Down moniker.

  40. Important comments from the Zorita link are:

    One:

    There are several pseudo-proxy studies around that have tested in this way different climate reconstructions methods. The conclusions of these studies sometimes diverge. One of the current controversies involves the Regularized Expectation Maximization method. Some groups (Christiansen et al, Smerdon et al ) have found that this method leads to a similar underestimations of past variations as found in early reconstruction methods, whereas other group including Mann and collaborators, here and here, find that the RegEM method performs well.

    and Two:

    The errors are not difficult to understand and do not involve the implementation of the RegEM method itself…

    and finally Three:

    The most recent climate reconstructions by Mann et al published in Science in 2009 were conducted with the RegEM method, supported by the good skill displayed by the RegEM in those previous stress-tests. I am unsure as to what extent those reconstructions may be now compromised. Interesting food for thought for the authors of the next IPCC Report.
    Here is my take on these comments:

    In One we obtain a couple of papers that are critical of the RegEM claim of variance preservation. They will be a must-read for me.

    In Two it would appear that the shape of the pseudo-proxy should have no effect on the performance of the reconstruction methodologies. After all, Jeff ID was satisfied using a sine wave, and so I say we don’t need no stinkin’ climate model pseudo-proxies.

    In Three it would appear that Zorita has no judgments/opinions/conjectures on the latest Mann claims for RegEM and variance preservation, so let’s look at Christiansen and Smerdon.

  41. I just finished my first read of the paper linked by Zorita (in the link from Boballab) and linked in my post below.

    Now far be it from me to assume a position to judge, but I found this paper gave a very complete history of reconstructions and the methodologies used, and a comprehensive analysis of the methods and their performances. The paper found that all methodologies used in reconstructions failed to retain the pseudo-proxy variance, and particularly so for RegEM with TTLS, which was claimed in the Mann paper to perform so well for variance retention.

    I would strongly recommend this paper being the subject of a separate thread here.

    A Surrogate Ensemble Study of Climate Reconstruction Methods:
    Stochasticity and Robustness
    BO CHRISTIANSEN, T. SCHMITH, AND P. THEJLL

    Click to access reconstr_reprint.pdf

  42. The authors in Christiansen et al. go to great lengths to credit the work of previous scientists with temperature reconstructions, and in particular to set Mann et al. (1998) – the Hockey Stick – as a breakthrough paper for its time. I tend to agree with the authors’ view on the HS and attribute its iconic status as the reason it has been so difficult for the defenders to back down – even a little.

    The authors, in my view of the paper, also go out of their way not to offend any of those involved with past temperature reconstructions, but in the end they state in their paper, almost apologetically:

    The underestimation of the amplitude of the low frequency variability demonstrated for all of the seven methods discourage the use of reconstructions to estimate the rareness of the recent warming. That this underestimation is found for all the reconstruction methods is rather depressing and strongly suggests that this point should be investigated further before any real improvements in the reconstruction methods can be made. Until then, smaller improvements may be possible by obtaining more and better proxies. To end on a positive note we emphasize that the reconstruction methods’ abilities to reconstruct the shape of the centennial variations are promising for studies on the relative impacts of different climate forcings.

    Note that centennial reproduction is on shape and not amplitude. I find this comment relevant to some claims by dendroclimatologists who make the case for tree ring widths/densities by showing that the blips in the reconstruction pattern match known volcanic events, but unfortunately the reconstructions do not reproduce the amplitude with any consistency between proxies, or as would be expected for the relative effects of the various major volcanic eruptions over time.

  43. Jeff,

    Have you ever looked into Hurst-Kolmogorov statistics and how that might relate to hockey sticks? Koutsoyiannis’ work says that temperature time series have an H-K statistic or coefficient close to 1. AR(1) noise, OTOH, is still close to 0.5, I think. I also think that means that the noise power spectrum is not linear with frequency, but increases more rapidly as frequency decreases. Unfortunately, R doesn’t seem to have anywhere near as many apps to do H-K as it does to do ARIMA.
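For anyone who wants to poke at this, a basic rescaled-range (R/S) estimate of the Hurst exponent is easy to code by hand. This is a rough sketch, not one of the bias-corrected estimators from the H-K literature; white noise should come out near 0.5, while Hurst-Kolmogorov behaviour approaches 1.

```python
import numpy as np

def hurst_rs(x, windows=(16, 32, 64, 128, 256, 512)):
    """Classic rescaled-range Hurst estimate: the slope of
    log(mean R/S) against log(window length)."""
    log_w, log_rs = [], []
    for w in windows:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # scale
            if s > 0:
                rs.append(r / s)
        log_w.append(np.log(w))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_w, log_rs, 1)
    return float(slope)
```

Note the well-known small-sample bias: plain R/S overestimates H for short series, which is why corrected estimators (e.g. Anis-Lloyd, and Koutsoyiannis's own) exist.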

  44. I’ve analyzed the 1209 in-filled series from Mann using an arfima(1,0) model (armaFit from R package fArma). An arfima(p,q) model is similar to an ARIMA(p,d,q) model but with d fractional and limited to a range from 0 to 0.5. The results are all over the map. There are 507 series (group I) with the AR coefficient less than 0.1 and d greater than 0.05, 375 series (group II) with d less than 0.05 and AR greater than 0.1, 304 series (group III) with AR greater than 0.1 and d greater than 0.05, and 23 series (group IV) with AR less than 0.1 and d less than 0.05. Of the 484 series that passed the temperature screening from 1850 to 1995, 255 are from group I, 107 each from groups II and III, and 15 from group IV.

    Now I need to generate synthetic series and see if the different groups behave differently.
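The four-way split described above reduces to two thresholds. A sketch of the bookkeeping in Python (the (ar, d) pairs themselves come from the R arfima fits, so here they are just assumed inputs, and the boundary handling is my own choice):

```python
def classify(ar, d, ar_thr=0.1, d_thr=0.05):
    """Bucket a series by its fitted AR(1) coefficient and
    fractional differencing parameter d, using the thresholds
    from the comment above."""
    if ar < ar_thr and d > d_thr:
        return "I"    # long-memory dominated
    if ar > ar_thr and d < d_thr:
        return "II"   # short-memory (AR) dominated
    if ar > ar_thr and d > d_thr:
        return "III"  # both
    return "IV"       # neither

# Tallying the groups over the fitted pairs would then be e.g.:
# from collections import Counter
# counts = Counter(classify(ar, d) for ar, d in fits)
```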
