the Air Vent

Because the world needs another opinion

Michael Mann – Steppin’ in it.

Posted by Jeff Id on June 24, 2009

Mondo called my attention to a Mike Mann RC quote in an Air Vent thread. Thanks again to my alert readers, without whom I would never find this stuff.

The timing of this reply by RC is interesting in that it came on the 22nd, right after my post on the hockey stick creation. It’s not directed toward the Air Vent in particular but apparently toward a site I don’t know of that is discussing similar issues. Either way, he addresses some of the points exposed here in the hockey stick posts.

In his reply he admits foreknowledge of the fact that any signal can be pulled from proxy data.

Someone named Mark called Dr. Mann’s attention to the issue in comment 114 of the Copenhagen thread LINK HERE.

Mark Says:

22 June 2009 at 5:29 PM I have an interlocutor that says that you can get the Mann Hockey stick from random data. It took several goes but this is what he eventually said:

“Red noise is a random walk. You add a random number to the previous value and so on.

You make a number of these series and then you use them to replace the proxy data in a temperature reconstruction.

If you produce several of these random walks you will see that some of them have a similarity to the instrumental record. Some reconstructions give these series a high weighting, overriding most of the other series, hence the concerns over a few proxies dictating the result.”

So you have to use random data and keep using random data until you get Mann’s Hockey Stick.

That hardly seems to be random data to me…

[Response: Actually, this line of attack is even more disingenuous and/or ill-informed than that. Obviously, if one generates enough red noise surrogate time series (especially when the “redness” is inappropriately inflated, as is often done by the charlatans who make this argument), one can eventually match any target arbitrarily closely. What this specious line of attack neglects (intentionally, one can safely conclude) is that a screening regression requires independent cross-validation to guard against the selection of false predictors. If a close statistical relationship when training a statistical model arises by chance, as would be the case in such a scenario, then the resulting statistical model will fail when used to make out-of-sample predictions over an independent test period not used in the training of the model. That’s precisely what any serious researchers in this field test for when evaluating the skillfulness of a statistical reconstruction based on any sort of screening regression approach. This isn’t advanced stuff. Its covered in most undergraduate intro stat courses. So the sorts of characters who make the argument you cite either have no understanding of elementary statistics or, all too commonly, do but recognize that their intended audience does not, and will fall prey to their nefarious brand of charlatanism. -mike]

Thought some of you would like a laugh!

[Response: Disgust is a more appropriate emotion, recognizing that the errors in reasoning aren’t so innocent, and that there is a willing attempt to deceive involved. -mike]

What would we do without mike – little m? A little disaggregation is in order, as TCO would say.

Actually, this line of attack is even more disingenuous and/or ill-informed than that.

This is of course hard to address, but remember it isn’t directed at the Air Vent. This line of ‘attack’ is not an attack, though; it’s math. It’s also published, but I lost the paper (not the SteveM RossM version) when my computer glitched. If anyone has a reference I would be appreciative. It’s a simple paper by three scientists, German I believe.
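The math itself is easy to reproduce. A minimal sketch in Python/NumPy (the ramp target, screening threshold, and series counts here are arbitrary illustrations, not Mann’s actual data or method): generate random walks, keep the ones that happen to correlate with a modern warming ramp, and average them CPS-style. The composite tracks the ramp closely even though every input is pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hist, n_cal, n_series = 500, 100, 2000
n = n_hist + n_cal
cal = slice(n_hist, n)                     # last 100 "years" = calibration period

# Hypothetical "instrumental" target: a warming ramp in the calibration period
target = np.linspace(0.0, 1.0, n_cal)

# Red noise as random walks (cumulative sums of white noise)
walks = rng.standard_normal((n_series, n)).cumsum(axis=1)

# Screen: keep the series that happen to correlate with the ramp in calibration
r = np.array([np.corrcoef(w[cal], target)[0, 1] for w in walks])
keep = walks[r > 0.5]

# CPS-style composite: center and scale each survivor on its calibration
# statistics, then average
scaled = np.array([(w - w[cal].mean()) / w[cal].std() for w in keep])
recon = scaled.mean(axis=0)

print(f"kept {len(keep)} of {n_series} pure-noise series")
print(f"calibration-period match r = {np.corrcoef(recon[cal], target)[0, 1]:.2f}")
```

The screened noise adds up coherently in the calibration window and averages toward nothing elsewhere, which is the hockey-stick-from-nothing effect discussed in the posts.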

However, the second part of that sentence is the admission of foreknowledge. This is exactly what I accused him of in my previous post (part 1 link below): knowingly matching any signal you want.

if one generates enough red noise surrogate time series (especially when the “redness” is inappropriately inflated, as is often done by the charlatans who make this argument), one can eventually match any target arbitrarily closely.

How does he explain this admission of guilt, you ask? Like this:

What this specious line of attack neglects (intentionally, one can safely conclude) is that a screening regression requires independent cross-validation to guard against the selection of false predictors. If a close statistical relationship when training a statistical model arises by chance, as would be the case in such a scenario, then the resulting statistical model will fail when used to make out-of-sample predictions over an independent test period not used in the training of the model.
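Mechanically, the safeguard he describes is simple: screen candidate predictors on a calibration period, then test the survivors on a withheld verification period. A sketch with white-noise “proxies” (the periods and thresholds are arbitrary illustrations) shows it working as advertised for noise with no persistence:

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_series = 150, 1000
cal, ver = slice(75, 150), slice(0, 75)   # train on the late half, test early
target = np.linspace(0.0, 1.0, n)         # a simple trend target

# White-noise "proxies" with no real relationship to the target
proxies = rng.standard_normal((n_series, n))

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Step 1: screening - select predictors by calibration-period correlation
passed_cal = [p for p in proxies if r(p[cal], target[cal]) > 0.2]

# Step 2: cross-validation - test the selected predictors out of sample
passed_ver = [p for p in passed_cal if r(p[ver], target[ver]) > 0.2]

print(f"passed screening: {len(passed_cal)}, survived verification: {len(passed_ver)}")
```

With white noise, chance calibration fits mostly fail verification. Swap the white noise for strongly autocorrelated series, though, and far more chance fits survive the verification step; that is where the autocorrelation of the proxies matters.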

This is Mann at his best. Claiming that random data will fail cross-validation is only reasonable if the statistics are done properly. He performed this verification in M08 by calibrating the endpoints to temperature data and checking the middle, but the statistical pass has to take the autocorrelation of the proxy data into account, which was done very poorly in M08. The reason I don’t answer more clearly is that even if Mann were perfectly correct, this arm waving does not address the conclusions in my posts.
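To make the autocorrelation point concrete: correlation significance between two persistent series should be judged against an effective sample size, not the nominal one. A sketch using one commonly cited AR(1)-based adjustment (the series here are synthetic red noise, not proxy data):

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(phi, n, rng):
    """AR(1) 'red noise' with lag-1 coefficient phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def lag1(x):
    """Sample lag-1 autocorrelation."""
    d = x - x.mean()
    return np.dot(d[:-1], d[1:]) / np.dot(d, d)

n = 100
a, b = ar1(0.9, n, rng), ar1(0.9, n, rng)

# Effective sample size shrinks when both series are positively autocorrelated,
# so a nominally "significant" correlation may be nothing of the sort
r1a, r1b = lag1(a), lag1(b)
n_eff = n * (1 - r1a * r1b) / (1 + r1a * r1b)

print(f"nominal N = {n}, effective N = {n_eff:.1f}")
```

Verification statistics computed against the nominal N will look far more impressive than they should for red series, which is the sense in which they are “the easiest to tweak.”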

My first post showed that any signal you want can be extracted from even his own proxy data, a point Mann admits above; his admission acknowledges that he’s aware of that fact, as I pointed out in Hockey Stick CPS Revisited – Part 1. My second post (last night) used proxy-matched red noise data, so any claims of using too much red or pink or whatever noise are specious in this case. While the autocorrelation (redness) changes the quality of the signal extracted from no-signal data, all autocorrelated non-signal noise produces a hockey stick from nothing.

My second post, Historic Hockey Stick – Pt 2 Shock and Recovery, discusses the signal amplitude distortion from CPS. CPS on sloped data reduces the historic amplitude significantly. This is caused by rescaling noisy data in the calibration range. Think about what that says: any noisy set of signals which is scaled to match a slope in the calibration range will necessarily recover a stronger signal in the calibration range than in the historic data.
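The amplitude distortion is easy to demonstrate with a known signal. A sketch (all parameters are arbitrary illustrations): build a “true” climate with a large historic swing, add noise to make proxies, then screen, rescale, and average CPS-style. The calibration-period ramp is matched, but the recovered historic swing comes out much smaller than the true one.

```python
import numpy as np

rng = np.random.default_rng(3)

n_hist, n_cal, n_prox = 500, 100, 2000
n = n_hist + n_cal
cal = slice(n_hist, n)

# Hypothetical "true" climate: a large historic swing, then a calibration ramp
signal = np.concatenate([0.5 * np.sin(2 * np.pi * np.arange(n_hist) / 250),
                         np.linspace(0.0, 1.0, n_cal)])

# Noisy proxies of the true signal
proxies = signal + 2.0 * rng.standard_normal((n_prox, n))

# Screen by calibration correlation, then normalize each survivor on its
# calibration statistics and average (a CPS-style composite)
r = np.array([np.corrcoef(p[cal], signal[cal])[0, 1] for p in proxies])
keep = proxies[r > 0.25]
scaled = np.array([(p - p[cal].mean()) / p[cal].std() for p in keep])
composite = scaled.mean(axis=0)

# Rescale the composite to match the target's calibration-period variance
recon = composite * signal[cal].std() / composite[cal].std()

print(f"true historic std {signal[:n_hist].std():.2f}, "
      f"reconstructed {recon[:n_hist].std():.2f}")
```

The screened noise inflates the calibration-period variance of the composite, so the final rescaling shrinks the shaft relative to the blade: the historic amplitude is suppressed even though a real signal is present.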

The reason this is important is that regression techniques such as EIV, RegEM, TLS, TTLS, or any others you can think of are therefore guaranteed to have the same effect: amplification of the calibration-range signal relative to the historic data. The result is an UNPRECEDENTED whatever-slope-you’re-looking-for, created by the noise on the signal adding up positively in the calibration range and averaging to zero in history.

Verification in the calibration range does not eliminate the effect and, in fact, has nothing to do with solving the problems ‘selective regression calibration’ causes.

I’ll close with this quote.

So the sorts of characters who make the argument you cite either have no understanding of elementary statistics or, all too commonly, do but recognize that their intended audience does not, and will fall prey to their nefarious brand of charlatanism. -mike

Apparently, people like me are tricksters and people like you are too stupid to see it. I would like to see his reply to the statistical criticisms I’ve put forth, but he cannot. There are honest scientists in paleoclimatology who may use these methods believing in their accuracy. I don’t believe Mike is one of them. Every hockey stick chronology which regresses individual proxies to calibration data is faulty for this reason.



46 Responses to “Michael Mann – Steppin’ in it.”

  1. woodNfish said

    Yes Mann is a fraud, we know that. The question is, does Congress know it as they go to vote on an economy-killing emissions bill? On a completely cynical note, they probably don’t care – it’s the mutual respect of one group of paid professional liars to another.

  2. Stan said

    Mann certainly loves his name-calling, doesn’t he? He’s a veritable slander machine.

  3. > The reason I don’t answer more clearly is that

    I think you should. Ideally you would “make out-of-sample predictions over an independent test period not used in the training of the model” and show that the statistical model will not fail even if the original data were random.

    I know it is next-to-impossible to reason with the RC people, but if your opponent throws dust in the air, a good hoovering is sometimes in order.

  4. Jeff Id said

    #3 This is what M08 attempted to do. The problem is that autocorrelation was apparently not accounted for reasonably, and verification stats are the easiest to tweak.

    This is a side issue though because it doesn’t address the main problem in the reconstruction: even when the signal is known and perfect, distortions are created and validations will still work. See my part II post, which now has hundreds of readers with very few comments.

  5. Jim said

    Mann says “is that a screening regression requires independent cross-validation to guard against the selection of false predictors”

    Jeff, does he not mean that once the proxies are selected via the instrumental temperature match, that then the selected proxies must be compared to a second standard of some sort. If so, what second standard did he use to further validate the proxies? Or does he mean the proxies must be compared to each other by some means so as to choose the ones that match each other?

  6. Jeff Id said

    #5 The best way to answer your question is in the paper.

    http://www.pnas.org/content/early/2008/09/02/0805721105.full.pdf

    It’s a little complex but it makes some sense, it just isn’t related to the problems here.

  7. chris y said

    Jeff Id, I just realized that there is a similar effect that occurs in Electrical Engineering. I often am looking at Gaussian noise on a digital oscilloscope. If I set averaging to at least a few traces, and set the trigger within the peak-to-peak signal range, then a hockey stick trace and/or a fictitious pulse appears every time. If the trigger is set to a positive signal slope, then the hockey stick points up. If the trigger is set to a negative signal slope, then the hockey stick points down. The ‘unprecedentedness’ of the hockey stick is continuously adjustable by varying the trigger level.

    In reality, as Gene G, a commenter over at Dot Earth once said, it’s a climate Rorschach ink-blot test. I prefer to describe it as policy-based evidence making.

    Keep up the great postings! I especially enjoyed the Antarctic ‘pick-a-trend’ results.

  8. MikeN said

    Chris, do you get the same buildup to the hockey stick bump?

  9. MikeN said

    So what would be a good way to reconstruct temperature from these proxies?

    What about if he only correlated to late 20th century warming, and then tested the reconstruction to see if it matches 1850-1950?

  10. Stan said

    I prefer to describe it as policy-based evidence making.

    “We must get rid of the MWP.”

    Within a year, along came Mann. And the MWP disappeared. Poof, it was “gone with the wand” — dispatched by a wave of the Mannian magic wand.

  11. Jim said

    So the methods applied after reconstruction were intended to gauge the skillfulness of the reconstruction, not improve the reconstruction as it was being built?

  12. Jim said

    MikeN – I, too, have been wondering if there is any good way to extract a good temperature reconstruction. Using spatial information to supplement the temperature proxy data looks more promising, but the devil would still be in the details. There is also the on-going question concerning the proxies which is do they in fact contain temperature data or something else entirely. It may be that we will never know what the past temperature variations were.

  13. Jeff Id said

    #9 it can’t be done by correlation, and the calibration step doesn’t work. That’s the worst part about it. Calibration of noisy data by regression distorts the result. See the last graph in my previous post. – shock and recovery part II

  14. MikeN said

    Can’t the calibration be done differently?

    Also, if you only do late 20th century temperature match, you would presumably still get a hockey stick, only flatter. I assume this would then fail a correlation match for the 1800-1950 period and the whole reconstruction would be a failure.

  15. John D said

    #9 – So what would be a good way to reconstruct temperature from these proxies?

    Mike,

    Empirical data trumps all. Experimental work with the proposed proxy is critical. It’s what laboratories were invented for after all. If a geologist can simulate the conditions that form a diamond, isolating and quantifying proxy responses that correlate should be readily possible.

    In addition, never use “proxy data” that one KNOWS is not causally linked to the phenomenon of interest – e.g. don’t use bristle cone pine data if you know there is no actual causal correlation with temperature. Using, say, tree ring data that are sensitive to CO2 fertilization, because you assume temperature is sensitive to CO2, can be procedurally bad science.

    If the goal of the experiment (or computer model) is to show the sensitivity of climate to CO2, the tree ring data may be a proxy for CO2, but using it for temperature is a logical fallacy called affirming the consequent.

    I would speculate that Mann never read any Logic or philosophy of science, but actually it is an easy error to commit when designing an experiment, if you let your eye wander, or let your assumptions blind you. Mann plainly has been taken prisoner by his convictions, since he never seems to address the actual criticisms that Jeff and Steve McIntyre and others have advanced. The criticisms are substantive and very serious. Unfortunately, the issue was politicized very early on, and once that happens in any field, science and logic go out the window in favour of rhetoric and politics, which are easier to implement.

  16. Billy Ruff'n said

    A layman’s question: Is “training” a model anything like training a dog?

    “SIT UP” — good data, here’s a treat!

    “LIE DOWN” — good data. Another treat.

    “UP” “UP” “UP” — Atta, boy! Here’s a bone for you.

  17. harold said

    Hi Jeff, really like your blog. A bit OT, but homing in on the “disingenuous” that Mann uses in his RC reply, he used that expression in a 2005 email exchange with Marcel Crok (a Dutch science journalist).
    Answer:…”This claim by MM [McIntyre and McKitrick] is just another in a series of disingenuous (off the record: plainly dishonest) allegations by them about our work.”
    The Question was:”How do you explain the existence of the directory BACKTO_1400-CENSORED on Mann’s ftp-server? MM show that it contains the results of the calculation of the NOAMER PC’s without using the bristlecone pine series, giving a higher NH temperature in the 15th century.
    Mann’s answers are here:

    http://www.natutech.nl/00/nt/nl/49_65/nieuws/2299/Het_antwoord_van_Mann.html

    Crok’s original questions are found here:

    http://www.natutech.nl/00/nt/nl/49_65/nieuws/2298/Onze_vragen_aan_Mann.html

    My apologies if you already knew about this Michael Mann rant.

  18. DeWitt Payne said

    Any retrospective study is going to have problems. Somewhat better would be to resample the same proxies you originally selected twenty years later. Splitting the calibration period isn’t the same thing. It’s still data mining. If there actually is a temperature signal, then most of the selected proxies should still show similar correlation to temperature. This concept seems to be anathema to dendroclimatologists and the few times it has been done (Ababneh, e.g.) correlation failed.

  19. TCO said

    DP is right about the difference of true out of sample testing. I do think that Zorita has a valuable intuitive point about the higher reliability of something that has a lot of wiggle-matching as opposed to something that has one (or two) trends matched. The general Mann approach of creating huge inputs, filtering half on the first pass and half on the second seems like it could lead to data mining. But at some point, I think you have to look at a lot of wiggle-matching and say it’s a good predictor (for one thing consider the tree ring matching software). I’m not really expressing this properly…

    I don’t think we have enough sampling to make generalizations like the “Ababneh” remark. Heck…on that one, Steve was busy throwing multiple excuses against the wall (sheep, dry lakebeds, precip, CO2) to explain the bcp 20th century rise. It’s a bit rich to now say, “oh, well, they really didn’t go up, that works for me”. It shows skeptics not being objective. Shows them favoring individual studies or concepts that help them and not in reverse. IOW, not having the same hurdle of proof and skepticism.

  20. Kenneth Fritsch said

    Jeff ID at post #6, the SI linked below describes the methods used by Mann. I am in the process of rereading Mann et al. (2008) and SI. I find the contents of the main paper a litany of conditioned claims and without good explanations of methods or results.

    My opinion, on my initial read of this paper some time ago, was that if you carefully read the claims for SH and global temperature anomalies in this paper, they backpedal from previous claims. I see a reconstruction that does not show unprecedented global warming in recent decades (where the reconstruction part is carried into recent times) with an instrumental record tacked (and not spliced) to the end of it, thus making the graphs misleading to the casual reader.

    Jeff, what I remember reading was that Mann et al uses a screening of proxies using p, or the probability that the correlation with instrumental data is statistically significant, and not r. The p used is not the normal 0.05 but 0.10, and that probability goes to something like 0.13 when autocorrelation is considered. For your purposes in your HS One post it would not matter, and as it turns out using r equal to or greater than 0.1 retains about an equal number of proxies with your synthetic calibration data as Mann retained with his selection criteria.

    Mann will apparently not address his critiques directly and not at the detailed level that he is being criticized. By doing this he always retains deniability in future discussions. When he says finding “reconstruction” patterns in simulations depends on the level of red noise used and the number (percent) of samples used in simulations, he is correct and no one is going to disagree with him on this issue. That does not address the criticism aimed at his methods and in fact it avoids them. Not addressing his replies to specific individuals makes his replies ineffective to useless. All this makes me think that his replies are more for galleries of like-minded readers at RC than in shedding any (new) light on his methods.

    http://www.pnas.org/content/105/36/13252.full.pdf+html

  21. timetochooseagain said

    Is mike a conspiracy theorist? Sounds like one to me! Hehe…

  22. Kenneth Fritsch said

    I don’t think we have enough sampling to make generalizations like the “Ababneh” remark. Heck…on that one, Steve was busy throwing multiple excuses against the wall (sheep, dry lakebeds, precip, CO2) to explain the bcp 20th century rise. It’s a bit rich to now say, “oh, well, they really didn’t go up, that works for me”. It shows skeptics not being objective. Shows them favoring individual studies or concepts that help them and not in reverse. IOW, not having the same hurdle of proof and skepticism.

    TCO, what is your point here? As I recall, the Ababneh thesis was out-of-sample data that showed a different trend than previous data that cut off at an earlier point. Has something changed in the interpretation of that data that you did not reveal in this post?

  23. Jim said

    One (other) thing that bothers me about the proxies is that they don’t yield an actual temperature. There is a lot of devil in those details also and obviously the actual temperature matters a good deal.

  24. TCO said

    22. I AGREE with your description of that example.

    A. My first point is that it’s a single example. We don’t have enough to back up DP’s GENERAL comment. Heck, one of the complaints from our own side has been the failure to “bring the proxies up to date”. Now that we get a single example (and note it supports “our” side), that’s not sufficient to close the book.

    B. The other point follows from the first and is that this “bringing up to date” not only clashed with the previous sample, but also clashed with the RATIONALES for the uptick which Steve was floating. So even if Ababneh’s right and the earlier study wrong (and not sure that is the case, could be the other way), we still have a difference with the multiple RATIONALES Steve was floating. And of course the whole behaviour of doing this sort of thing. Loving Ababneh cause it goes our way seems to match the previous multiple excuse method of explaining the uptick. It’s basically any port in a storm. I mean HECK! Think about if Ababneh had NOT disagreed with Graybill. Had validated it. Would Steve have abandoned his various excuses then?

  25. Kenneth Fritsch said

    The call from Steve M was very correctly that the tree ring measurements (and other proxy data that seemed to end around 1980 or so) needed to be brought up to date so that a decent out-of-sample test/look could be had. Ababneh’s was all that we got. The call then is for more out-of-sample measurements, and as I recall Steve M showed how rather simple that would be and within a day’s drive of a Starbucks – as an answer to Mann’s rather lame reasoning of cost, time and remoteness.

    TCO, unless you can provide some out-of-sample data that has been revealed of late, why in hell would you attempt to make a big deal out of people pointing to Ababneh’s work and saying we want more and not say anything about the apparent lack of effort of climate scientists to provide data for out-of-sample testing. I have not a clue as to what you are attributing to Steve M as once again your accusations lack sufficient detail to allow someone to challenge them. Put all your cards on the table or shut up until you can.

  26. TCO said

    25.

    A. Because it’s a single example. I AGREE with your point that we need more updating. I just don’t agree that a single example (going your way) is sufficient to make the generalization that DP did.

    B. I think that study did more than just updating. I think it also showed differences in the period of overlap. This is a different issue.

    C. I don’t think Steve’s Starbucks trip proved anything. He broke the car, had a hard time finding and getting everywhere, posted initial stuff very fast and then STILL hasn’t put the full experimental details out, nor has he written a paper.

    D. Note, I actually still AGREE that updating (and replicating, they are different) are relatively easy. I just don’t base it on a blog post with coffee pictures.

    E. We have to be careful. Jeff criticizes me for mentioning Steve and you are debating me on that topic.

  27. It would be interesting to pass the methodology used in recreating these “Hockey Stick” records past a medical statistician and ask how this would fare in a clinical trial.
    The idea of discarding data that fails to agree with your predetermined result would get very short shrift indeed.

  28. Jim said

    #25 I guess bringing proxies up to date is a good idea. But then you have to consider the effects of technology on the more current data. For example, the lake sediment data was biased to the upside due to agriculture. Tree rings, likewise, might be distorted by the extra CO2, one of the dimensions of interest WRT temperature.

  29. TCO said

    28. wow. those confounding factors have never occurred to anyone!? Smacks head. Not sure it’s worth having data then!

  30. Layman Lurker said

    #24

    Furthermore Kenneth, there were many reasons to question the bcp’s as temp proxies before the Ababneh thesis. Graybill himself concluded that whole bark trees were better correlated to local temperature than strip bark trees. Steve was not floating rationales on a whim, but pointing out the reasons to doubt which existed in the literature, backed up by NAS.

  31. Kenneth Fritsch said

    From Mann et al (2008) SI we have the following admission:

    The year 1995, at or shortly after which many proxy data terminate, was used as the upper limit for calibration. Due to the evidence for loss of temperature sensitivity after about 1960 (Briffa et al, 2001), MXD data were eliminated for the post-1960 interval. The RegEM algorithm of Schneider (2001) was used to estimate missing values for proxy series terminating prior to the 1995 calibration interval endpoint, based on their mutual covariance with the other available proxy data over the full 1850-1995 calibration interval.

    Divergence is a nice term for failure of the tree ring/density proxies out-of-sample, or even in-sample, for that matter. What do the authors do about that apparent problem? They reference a peer-reviewed paper that does not show reasonable evidence for what might cause the later data failing out-of-sample testing and instead pull all the “objectionable” data out and replace it by doing the RegEM. From these developments, out-of-sample testing can readily be seen to be an important issue.

    Carefully observing the graphs (and ignoring the instrumental record tacked onto the end of the reconstructed part) shows further divergence of the proxies other than tree ring/density ones. So we have evidence of further out-of-sample failures. Mann (2008) says this about that:

    In this case, the observed warming rises above the error bounds of the estimates during the 1980s decade, consistent with the known ‘‘divergence problem’’ (e.g., ref. 37), wherein the temperature sensitivity of some temperature-sensitive tree-ring data appears to have declined in the most recent decades. Interestingly, although the elimination of all tree-ring data from the proxy dataset yields a substantially smaller divergence bias, it does not eliminate the problem altogether (Fig. 2B). This latter finding suggests that the divergence problem is not limited purely to tree-ring data, but instead may extend to other proxy records.

    Now I suppose we could all do a TCO and say that objections to the Mann handling shown above are from a bunch of skeptical AGW zealots, and worse yet presented on blogs, and since Mann has peer-reviewed papers and references to peer-reviewed papers on his side we should be asking questions like: well, perhaps there are good reasons for out-of-sample failures??

  32. Kenneth Fritsch said

    #25 I guess bringing proxies up to date is a good idea. But then you have to consider the effects of technology on the more current data. For example, the lake sediment data was biased to the upside due to agriculture. Tree rings, likewise, might be distorted by the extra CO2, one of the dimensions of interest WRT temperature

    Why sure, Jim, and all this change suddenly occurred when the in-sample period ended and the out-of-sample period started. The point of out-of-sample testing is to determine whether conditions other than temperature can throw the temperature proxy calibration entirely out of whack. If it does in current times, why could it not do the same in past times? Looking for causes of changes in out-of-sample tests is a legitimate pursuit, but it cannot be accomplished by arm waving or conjecturing. And, of course, when the causes are found the clock has to start again on another out-of-sample test. This cyclical process is what those with investing strategies do when faced with out-of-sample failures, and since it can go on indefinitely it is not really a confidence builder for the thinking person.

  33. Jim said

    #25 – Maybe the sampling of trees would be a good project to do in the same way surface station surveys are being done – with volunteers?

  34. Layman Lurker said

    #29 KF

    A question comes to mind. Is it possible that whatever caused the divergence problem was also affecting the proxies during the calibration period, and thereby being falsely attributed to temperature?

  35. Layman Lurker said

    Sorry Kenneth. Your post got bumped from #29 to #32 as Jeff fishes TCO out of the trash bin.

  36. Jim said

    29 – Just having data isn’t enough. You have to have valid data. The current proxy data may be the best we have, but that does not mean it is adequate. Sometimes your best effort just isn’t good enough, especially if it is being used to make nation-changing, poverty-inducing political decisions. For consequences of this magnitude, the data has to be just about perfect and indisputable. No data we have is there yet and it may never get there.

  37. Carrick said

    Michael Mann:

    So the sorts of characters who make the argument you cite either have no understanding of elementary statistics or, all too commonly, do but recognize that their intended audience does not, and will fall prey to their nefarious brand of charlatanism. -mike

    Sigh.

    How typical of Mike. Here is a person with barely any formal statistical training going around lecturing everybody who does have it on how the statistical method works.

    All the while substituting his unique brand of ad hominems in the place of any reasoned argument.

    But of course this “logic” was just painful:

    “If you produce several of these random walks you will see that some of them have a similarity to the instrumental record. Some reconstructions give these series a high weighting, overriding most of the other series, hence the concerns over a few proxies dictating the result.”

    So you have to use random data and keep using random data until you get Mann’s Hockey Stick.

    Do any of these bozos know how to do a Monte Carlo?

    He should try this and see if what he expects to happen, happens.
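A quick Monte Carlo along the lines Carrick suggests (a sketch with invented parameters: the target shape, the 0.7 screening threshold, and the window split are arbitrary choices, not anyone's published settings). Generate many random walks, screen them by correlation with a target over a training window, and then check whether the survivors keep any skill in a withheld window:

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_time, split = 2000, 40, 25

# Pseudo-instrumental target: a warming trend plus weather noise
# (all parameters invented purely for illustration).
target = np.linspace(0.0, 1.0, n_time) + 0.1 * rng.standard_normal(n_time)

# Red-noise surrogates: random walks (cumulative sums of white noise).
walks = np.cumsum(rng.standard_normal((n_series, n_time)), axis=1)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Step 1: screen the surrogates on the training window only.
train_r = np.array([corr(w[:split], target[:split]) for w in walks])
survivors = walks[train_r > 0.7]

# Step 2: cross-validate the survivors on the withheld window.
verify_r = np.array([corr(w[split:], target[split:]) for w in survivors])

print(f"{len(survivors)} of {n_series} walks pass screening")
print(f"mean training r of survivors:  {train_r[train_r > 0.7].mean():.2f}")
print(f"mean verification r:           {verify_r.mean():.2f}")
```

Because the walk increments after the split are independent of the screened ones, the survivors' verification correlations scatter around zero, which is exactly the out-of-sample collapse cross-validation is meant to expose.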

  38. Kenneth Fritsch said

    Carrick, what Mann said about simulations has nothing to do with what Jeff ID is doing, or with anyone currently critiquing his methods – that I am aware of. He makes a statement about reconstruction/proxy simulations using levels of red noise and the number of simulations needed to produce one with a shape like that of his reconstructions. But no one is currently doing that; Jeff ID uses 10,000 proxy simulations in a reconstruction to show something entirely different, and uses the red noise level of a series of tree ring density proxies from Mann (2008). And further, no one would disagree with Mann’s statement.

    At one time Burger and Cubasch did a critique of Mann’s method using simulations and were rejoined by Mann for using a red noise level that he deemed too high. When Mann fails to reveal the details of what he is replying to, and does not name whom he is replying to, it becomes a guessing game as to what he means, not unlike some of his papers. His tactics are foolproof for preserving future deniability but not very informative vis-à-vis criticisms of his work.
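For reference, the disputed “red noise level” in such exercises is typically the persistence parameter of an AR(1) process. A generic sketch (not the settings of Burger and Cubasch, or of Mann) of how that parameter controls the character of the noise:

```python
import numpy as np

def ar1_noise(n, rho, rng):
    """AR(1) red noise: x[t] = rho * x[t-1] + white noise.
    Higher rho means 'redder' noise with more low-frequency wander."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

rng = np.random.default_rng(1)
for rho in (0.2, 0.9):
    x = ar1_noise(500, rho, rng)
    # The sample lag-1 autocorrelation roughly recovers rho.
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(f"rho = {rho}: sample lag-1 autocorrelation = {lag1:.2f}")
```

The closer rho is to 1, the more the series behaves like a random walk and the more easily a surrogate wanders into a hockey-stick shape – which is precisely why the chosen level is worth arguing about.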

  39. mondo said

    The point that intrigues me about all this is what Steve McIntyre calls ‘the silence of the lambs’. That is, the credible, professional climatologists and dendrochronologists who must be aware of what is going on, but who won’t speak up.

    Clearly there are reasons – grant access is likely to dry up if they question the IPCC, for example – which I agree is pretty compelling. Maybe that is why most of the professed sceptics are retired folk.

    But there does come a point where failure to speak up is regarded as acquiescence/acceptance (the legal concept of ‘estoppel’). TCO speaks glowingly of Zorita, but he has been notably quiet on his views on the Mann corpus. As has Rob Wilson, so far as I am aware.

  40. Jeff

    It has been suggested to me that the RC comment you are talking about here refers to this thread on the BBC’s website.

    http://www.bbc.co.uk/blogs/thereporters/richardblack/2009/06/climate_meltdown_yet_fusion_la.html

    It’s a very long thread, with a lot of trolling from Mark/Yeah_Whatever, but if you grit your teeth you’ll see one of the commenters discussing hockey sticks from red noise. It’s not desperately interesting to tell the truth, but I just thought you might like to know.

  41. Jeff Id said

    Thanks Bishop. I wondered where it came from. You’d think those guys would just link to my post.

  42. Wansbeck said

    Apologies for being partly responsible for this post.

    Returning from the pub in a well lubricated condition, I read Richard Black’s blog on the new Met Office model and responded to a post by a prolific troll going by the name of yeah_whatever. I have only recently discovered, via a post on CA, that the troll had reported me to the headmaster at RC where he goes by the name of Mark.

    If I had known, I would have let you know sooner.

    I’m away back down the pub now, I promise I’ll try not to feed any trolls on my return.

  43. Fluffy Clouds (Tim L) said

    Jeff, are you hallucinating?
    RC LINK TO HERE?????????????
    OR even the liberal BBC?????
    Jeff Id said
    June 25, 2009 at 4:02 pm

    Thanks Bishop. I wondered where it came from. You’d think those guys would just link to my post.

    ONE from their side MIGHT find out the TRUTH!!!!
    heaven forbid!!!!!!!!!

  44. So the sorts of characters who make the argument you cite either have no understanding of elementary statistics or, all too commonly, do but recognize that their intended audience does not, and will fall prey to their nefarious brand of charlatanism. -mike

    Kenneth Fritsch June 24, 2009 at 8:29 pm: … Not addressing his replies to specific individuals makes his replies ineffective to useless. All this makes me think that his replies are more for galleries of like-minded readers at RC than in shedding any (new) light on his methods.

    Flanagan at WUWT prefaced a rather obviously inaccurate statement involving thermodynamics with a statement to the effect that he knew his thermodynamics. Again and again I’ve seen fraudsters especially, but even more harmless people, deny that of which they are most guilty – and accuse others as the guilty ones.

  45. Wansbeck said

    I’m the guy who was quoted on RC although we can all be sure that Mann’s amusing outburst was directed at others.
    Whilst I claim no great specialist knowledge of statistics, to put things into perspective my post was in reply to this gem by Mark, the guy who quoted me to Mann, posting as yeah_whatever:

    “I just ran gnuplot on a series of random values between 0 and 1.
    No hockey stick resulted.
    So what do I do with my random data?”

    Perhaps I should have told him where to put his random data but being a helpful sort I attempted to explain noise types in easy steps.
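The noise-type distinction at issue can be shown in a few lines (a generic sketch, not Wansbeck’s actual explanation): independent draws between 0 and 1 are white noise and stay pinned near their mean, while cumulatively summing zero-mean steps gives the red-noise random walk that can wander far enough to mimic a trend.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# White noise: independent uniform draws, as in the gnuplot experiment
# quoted above.
white = rng.uniform(0.0, 1.0, n)

# Red noise (random walk): running sum of zero-mean steps.
red = np.cumsum(rng.uniform(0.0, 1.0, n) - 0.5)

# The white series is confined to [0, 1]; the walk's excursions grow
# roughly with the square root of the series length.
print("white noise range:", white.max() - white.min())
print("random walk range:", red.max() - red.min())
```

Plotting raw uniform draws will never produce a hockey stick, which is why the experiment quoted above was beside the point.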

    To second, and third, Fluffy Clouds, you do not want this guy posting here. He has now placed several hundred posts on the Richard Black blog of a similar standard to the above though usually more abusive.

    On a positive note, I had missed your recent posts on Mann’s methods but was pointed here by Bishop Hill, who had also been involved with Mark, who appeared to wilfully confuse raw and adjusted CRU data. I have now downloaded your script and am extremely impressed.
    We need a mobile phone version so kids can play with it when being forced to watch Al Gore movies.

  46. Jeff Id said

    #45 Thanks, check out part #2. It shows the problems with CPS and noisy proxy calibration in a form I’ve never seen discussed outside of the Air Vent. It discredits a large number of the hockey stick graphs I’ve seen but is a bit technical.

    http://noconsensus.wordpress.com/2009/06/23/histori-hockey-stick-pt-2/
