the Air Vent

Because the world needs another opinion

Peanuts

Posted by Jeff Id on March 26, 2011

Recently there have been some posts on the internet which have had my attention.  First is a series of posts by Steve McIntyre at Climate Audit that have led to another vastly expanded version of hide the decline.   It turns out that Briffa’s data wasn’t only truncated in recent years but also in historic years.  F-ing unbelievable fraud, in my opinion, which is the only word for it.  You’ll note we don’t use that word here, but if both shoes fit….

It also looks like the team was far more active in ‘fixing’ the data than they could ever have admitted.  As many here have noticed, the over-the-top guys on the enviroblogs, i.e. the dhog and MapleLeaf types, have often made the climategate argument that it made sense to replace the bad data with temp because it didn’t match temp.  Of course the argument was pure sophistry, since the ‘skeptics/realists/thinking person’s’ point was that the ‘decline’ meant the treemometer data isn’t a good temperature measure.  But I wonder how these sorts of sophistknowledgists apply the argument to the clipping of historic data?  What is the argument that historic data should also be clipped?  The reality is obvious of course, but now the peanut gallery is also faced with the question: what is the ‘scientific’ reason for chucking the ‘historic’ data you don’t like?

Here’s the absolutely damning image Steve produced.

Pink is the clipped 'hide the decline' data itself, except that the early years, as Steve McIntyre has just determined, have been hidden as well. These years show a sharp rise in 'temperature' well before the industrial age, which would indicate natural variability is far greater than the consensus message. In reality, it simply means tree ring density data IS NOT FRIGGIN TEMPERATURE, and therefore any hockey sticks which use it are simply meaningless.


So with that said, a reader called my attention to a recent post at Real Climate which deserves a reply.   Of course I can’t post there (even on a climate paper with my own name on it), and therefore I am required to place my reply here.   IMO, even without regular posts, tAV is a better blog anyway.  At least we’re honest here.

The RC post was brought about by yet another leftist post from Nature, the pamphlet which apparently deserves much critique for its continued non-recognition of the actual problems in the Climate Science™ sales pitch.  If you read the Nature link, I promise that you will loose IQ points, and I would recommend against it, but to each his own.

Below are the opening paragraphs:

As Nature went to press, a committee of the US Congress was poised to pass legislation that would overturn a scientific finding on the dangers of global warming. The Republican-sponsored bill is intended to prevent the US Environmental Protection Agency (EPA) from regulating greenhouse-gas emissions, which the agency declared a threat to public welfare in 2009. That assessment serves as the EPA’s legal basis for regulation, so repealing the ‘endangerment finding’ would eliminate its authority over greenhouse gases.

That this finding is scientifically sound had no bearing on the decision to push the legislation, and Republicans on the House of Representatives’ energy and commerce committee have made clear their disdain for climate science. At a subcommittee hearing on 14 March, anger and distrust were directed at scientists and respected scientific societies. Misinformation was presented as fact, truth was twisted and nobody showed any inclination to listen to scientists, let alone learn from them. It has been an embarrassing display, not just for the Republican Party but also for Congress and the US citizens it represents.

Now anyone who has followed the issue knows that the EPA legislation allowing control of CO2 gases is a kluge designed to get around Democrats actually voting for regulations in Congress.  The elected Democrats can’t vote directly for the economically destructive extremist policies they want (because they would lose their jobs), so they have backdoored the EPA into the process.  Any complaint about removal of the hideous mess from EPA control is simply political in nature, and the climate scientists of RC and the Nature digest apparently think they want the draconian mess which the EPA will bring on economies.  Only straight talk here though: calling repeal of the ‘endangerment finding’ anti-science is a fool’s game, played by pseudo-scientist politicians.  What we the subjected public will receive (and have received) from the EPA in return for our hard-earned and fast-wasted money is a vastly stupid political solution.  And of course there are very good reasons, fully outlined in climate blogs, as to why Climate Science™ cannot be trusted.

So just what does the Real Climate ‘group’ have to say about it?  Here is a quote from the apolitical group hosted by Fenton Communications:

In so doing, it cited as an example the charade of a hearing conducted recently, including the Republicans’ disrespectful and ignorant attitude toward the science and scientists. Among many low points, this may have reached its nadir when a House member from Nebraska asked, smirkingly and out of the blue, whether nitrogen should be banned–presumably to make the point that atmospheric gases are all either harmless or outright beneficial, and hence, should not be regulated.

I do agree that it was likely a ‘charade of a hearing’, which is more honesty than you will hear from the blatantly dishonest scientists who have covered up and even lied about the climategate fraud.  It is a charade because it needs to be.  Everyone should agree that bad legislation needs to be excised from government law. We the people need to be allowed to be heard on the issues, not subjected to legislation by fiat through an unelected branch.  Of course, Climate Science™ is fully ready to dictate what ‘We the People’ need best, as Real Climate points out shortly after the above quote.

There have been even more strongly worded editorials in the scientific literature recently as well. Trevors and Saier (2011)*, in a journal with a strong tradition of stating exactly where it stands with respect to public policy decisions and their effect on the environment, pull no punches in a recent editorial, describing the numerous societal problems caused when those with the limited perspective and biases born of a narrow economic outlook on the world, get control. These include the losses of critical thinking skills, social/community ethics, and the subsequent wise decision making and planning skills that lead a society to long-term health and stability.

The hubris of these assholes is amazing.  They are simply boundlessly full of themselves.  So from that statement we learn that admitted conservatives like myself, who run businesses, hire people, pay salaries and build product (the most efficient in the world), somehow are the ones with a ‘narrow limited economic outlook’, while the government-funded, never-worried-about-a-paycheck, six-figure, fly-everywhere climatologists have the broader view. It doesn’t matter that I talked to at least a dozen different CEOs in manufacturing just yesterday (nearly all conservatives); what could We the People possibly know?

To use their words from their article, it ‘boggles’ the mind that not only are the climatologists experts in climate, they are also experts in proper political policy and even business economics.  If we handed over our company to the Real Climate elitist ‘group’, I have no question that they would bankrupt it within three months, because they are clueless about business; at least about any business not propped up by government law.  The problem, though, is that they are clueless and loud, a bad combo in any environment: college classroom, college bar, a war zone or climate blog.

I will take only a few sentences to again warn the extremist climatologists before closing this post.  First, businesses globally are strained to the limits.  Globalization happened too quickly and has forced the shifting of money to areas of the world with political goals that are very bad for humanity in general.  We know which politicians are responsible, but we also know ‘true’ conservatives want fewer of those politicians.  CO2 regulations, limits on drilling for the US but not for others, and taxation of US coal burning are far more destructive to the future of humanity, for non-environmental reasons, than anything you CS people have ever written coherently.  When power is transferred away from the ex-free people (as I believe will continue to happen), the consequences to society’s environment will be far more severe than any of the IPCC’s worst disaster scenarios. Don’t listen to me though; I’m not an ‘actual’ climatologist, just an ignorant ‘limited mind’ with little understanding of economics, environment, politics, Matlab, Antarctic temperature and whatever else you would like to throw on the pile.

—-

So why did I combine this into one post?  Because the Climate Audit post is further, shockingly incontrovertible evidence that some primary climate scientists are up to their elbows in data manipulation.  It is amazing when you consider that the main players are involved.  You cannot delete or hide data with no explanation, or with minimal explanation ten years separated from the main IPCC publication, and then draw powerful conclusions from the remaining data.  I think a high school student should give it a shot in lab and let me know how it went (at your own risk, of course).  This issue is directly related to the claims by Real Climate and the Nature pamphlet. When the elitist/extremist apolitical ‘scientists’ claim that ‘admitted’ conservatives are anti-science, without any recognition of the serious issues outlined, you have to wonder who is the politician and who is the scientist.


125 Responses to “Peanuts”

  1. Bruce said

    I suggested to Steve (and my post was moderated into oblivion) that the data is just upside down.

    The right side matches instrument data.

    The left side matches up with the MWP – before the Team sent it to oblivion.

  2. Brian H said

    About those loose IQ points — where would they go once released? I wouldn’t want to lose track of them! ;)

    RC is charging aggressively even deeper into the swamp of mindless Hokey Team apologetics.

    Here, by the way, is the only valid assessment of the double-truncated record:
    “For a while, the ring growth pattern of a tree in the Urals wiggle-matched the temperature changes in somewhat distant locales.”

    Stop the presses!

  3. Jeff,
    I’m puzzled by the excitement that people can muster about the simple act of choosing the starting point for a plot. So what’s special about 1400? They probably have data before that too. Are they supposed to go right back to every shred of data? Or just start the plot when they think it is reliable?

    Your own paper shows Antarctic trends from 1957 onwards. There is data before that. Did you “delete” it?

  4. Jeff Id said

    Nick,

    Do you mean it or are you just messing around?

    As far as our own work, the title should clear it up:
    “Improved methods for PCA-based reconstructions: case study using the Steig et al. (2009) Antarctic temperature reconstruction”

  5. Thank you, Jeff, for continuing to spotlight deception in science.

    There is a great sense of unease in the world. I suspect that it comes from a growing suspicion that world politicians have used government science as a tool of propaganda. I agree with P. G. Sharrow’s comment that our society is like a bee hive:

    When the bees are happy the hive smells sweet and when they get mad the hive smells sour.

    Oliver K. Manuel

  6. RomanM said

    Re: Nick Stokes (Mar 26 18:52),

    Nick, you genuinely disappoint me. I would have thought that you might have a shred of personal or scientific integrity, but my trust seems to have been misplaced. The word shill comes to mind.

    Yes, statisticians evaluate data, and on some occasions they might decide not to use a portion of it if there are valid reasons based purely on the origins of the data itself. However, this is NOT a case of not having used the data; it is a case where the reconstruction was calculated using it and a portion of it was not included because the authors did not like the result. No explanations or indications of having done that were given. Don’t you think that researchers with a modicum of integrity might have explained what they did? But I guess that, in your view, this is not a requirement for real climate science.

    Or just start the plot when they think it is reliable?

    What? And exactly on what basis would you make the decision that it somehow becomes reliable? It is close to where you want it to be??? At least with hiding the decline in the modern era, there might have been a spurious basis for the divergence (which I may add was never explained in any scientific manner), but in this case, I can’t think of a single valid reason for supposing that the data magically came into tune with the universe and explained the meaning of life while providing a complete picture of the contemporary climate.

    Your own paper shows Antarctic trends from 1957 onwards. There is data before that. Did you “delete” it?

    No, Nick, they were commenting on a paper which used that time frame for a study. Get real for a change.

    Don’t give up your day job and go into a field like medical statistics, where scientific honesty is actually a prerequisite.

  7. Roman, #6
    Not shilling – just trying to introduce some balance into a wacky discussion. You don’t actually know that a reconstruction was calculated using this data. All you have is a website listing it (in connection with another paper).

    But I think it’s likely that they did calculate it, and saw that the variability pre-1550 was much greater than after. Maybe they related that to a reduced number of stations – maybe there is something else wrong with it. We just don’t know.

    Incidentally, the drama of Steve’s plot is associated with the big dip around 1400. Is that what they are “hiding”? But why? Wouldn’t that really “get rid of the MWP”?

    But in fact, that comes significantly from the smoothing that Steve has used. We were warned about Mannian smoothing and taking a smooth up to the endpoint. So what does this graph do? Smooths with a symmetric filter right to the endpoint, with padding at each end. And that padding is the mean of that adjacent 20 years. Which at 1400 just happens to be the lowest part of the history (about -1.05C). There’s a similar effect at the other end.

    If they did do a reconstruction based on that data, they probably didn’t have the artefacts in that magenta curve.
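Nick’s padding argument in #7 is concrete enough to sketch. Below is a toy illustration, under assumptions of my own (a 21-point moving average and 20-year endpoint-mean padding; none of this is Steve’s or CRU’s actual code), of how padding a symmetric smooth with the mean of the adjacent 20 years drags the end of the smooth toward a low first two decades:

```python
import numpy as np

def smooth_padded(x, window=21, pad=20):
    # Symmetric moving average run right to the endpoints, with each
    # end padded by the mean of its adjacent `pad` years; this is the
    # scheme described in comment #7. Names and parameters are illustrative.
    left = np.full(window, x[:pad].mean())
    right = np.full(window, x[-pad:].mean())
    xp = np.concatenate([left, x, right])
    kernel = np.ones(window) / window
    sm = np.convolve(xp, kernel, mode="same")
    return sm[window:-window]

rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.3, 561)   # toy "chronology", 1400-1960
series[:20] -= 1.0                   # a cold first two decades, as at 1400

smoothed = smooth_padded(series)
# The padded smooth at the first year is pulled toward the (low)
# 20-year endpoint mean rather than the long-term mean of the series.
print(round(smoothed[0], 2), round(series.mean(), 2))
```

Whether this explains the magnitude of the dip in the magenta curve is a separate question; the sketch only shows that mean-of-adjacent-20-years padding can exaggerate an endpoint excursion.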

  8. Sam said

    Hey Jeff, there is a lot more to that editorial Trevors and Saier. I looked at it in depth here:

    http://climatequotes.com/2011/03/25/realclimate-supports-complete-drivel-trevors-and-saier-2011/

    It’s an anti-capitalist rant. I’m dead serious, go read it.

  9. RomanM said

    Re: Nick Stokes (Mar 26 20:37),

    The series in the Excel file was a chronology, not raw data. What they did was to trim the chronology without justification and present the trimmed result in a graph with NO indication that a substantial portion of the calculated result had been summarily deleted.

    Steve’s smoothing has no bearing on the subject. Try plotting the unsmoothed chronology yourself and then tell me that what was plotted was an artefact of the endpoint smoothing process. Nonsense.

    So what is the ostensible reason that the chronology failed to track the “true temperatures” of the other series which told the “correct” story, but then mysteriously had a change of heart and (sort of) joined the loose consensus of the other series until it again decided to go its own way into “hidden” oblivion? Possibly teleconnection?

    Tell me with a straight face that what they did was ethical. This was done for propaganda purposes, not as genuine science.

  10. Roman #6
    I should add that the data listed back to 1400 very likely was assembled for the Briffa Jones et al 1998 Nature paper “Influence of volcanic eruptions on Northern Hemisphere summer temperature over the past 600 years”. The purpose of that was not to estimate temperature trends, but to find correlations with eruptions, so the statistical requirements are different. There they note that “All chronologies cover at least the period 1891–1973 but many are much longer (for example, there are 287 back to 1800, 159 to 1700, 75 to 1600 and 8 back to 1400).”

    “8 back to 1400”? Sounds like a pretty good reason to stop before you get there (for temperature estimation).

  11. Layman Lurker said

    But I think it’s likely that they did calculate it, and saw that the variability pre-1550 was much greater than after. Maybe they related that to a reduced number of stations – maybe there is something else wrong with it. We just don’t know.

    You make this sound so casual, Nick. So you reckon it’s OK to run the data, see how it looks, then chop it if it is not coherent? No discussion? No caveats? If poor replication is the excuse, then why was it used for Briffa01?

    Sorry Nick. I think you are a good guy. I appreciate your participation in discussion and value your contributions. But the ethics of data exclusion goes right to the heart of what science is all about.

  12. LL,
    See my #9, which probably came up while you were writing. I think now, in view of the sparsity of data back to 1400, that they probably didn’t even do that calc. That data was assembled for a different purpose.

    However, yes, I do think it’s reasonable to run the data and “see how it looks”. People do this all the time. And if it looks screwy, you find out why. And if you find a reason, you don’t present it. Ideally you’d explain all that. But we’re talking here about a 1-page paper in Science which has a number of different studies to discuss.

  13. Jeff Id said

    Sam,

    Their true colors are showing.

  14. LL #12,
    It seems they did do a temperature calc, in that in Fig 1 of their Nature paper they have marked a right hand axis in temperature, going back to 1400, and with bidecadal smoothing. It doesn’t look anything like Steve’s, though.

  15. BillyBob said

    As I’ve said before.

    The Team says trees are bad thermometers post-1960.
    The Team clearly has secretly said trees are bad thermometers before 1550.

    Logic insists that trees are really, really bad thermometers in between.

    But I really, really want to know what happened to trees in 1961.

    Did they start drinking? Staying out late at night?
    Maybe they smoked a little ganja now and then.
    They started hanging out with the wrong crowd.
    Was it rock and roll? Did rock and roll make them irresponsible?
    Was it the cavern club? Did they start riding motorcycles and wearing leather jackets?

    What was it about 1961 that MAGICALLY made them bad thermometers?

  16. Anonymous said

    Nick Stokes said
    March 26, 2011 at 8:37 pm

    Not shilling – just trying to introduce some balance into a wacky discussion.

    No you are not, you are just a pathetic shill. You know exactly how bad it is to truncate any of these reconstructions to exclude inconvenient information. If the public is being asked to judge the merits of the data, the public deserves to know what quality problems it contains.

    Your credibility drops with every post.

    Mark

  17. Mark T said

    Layman Lurker said
    March 26, 2011 at 9:51 pm

    Sorry Nick. I think you are a good guy.

    Why? He’s no better than any of the other ideological idiots defending to the death some of the most egregious behavior ever seen in the world of science. These people deserve nothing but scorn simply because they repeatedly refuse to condemn what are at best horrendous ethical breaches, at worst, outright fraud.

    Mark

  18. Layman Lurker said

    Nick:

    LL #12,
    It seems they did do a temperature calc, in that in Fig 1 of their Nature paper they have marked a right hand axis in temperature, going back to 1400, and with bidecadal smoothing. It doesn’t look anything like Steve’s, though.

    Nick, Steve noted here, that while the reference was to the Nature graph…..

    It stated that the Briffa version came from Briffa et al 1998 (Nature) and Briffa et al 1998 (Pr Roy Soc London), “processed to retain low-frequency signals”.

    …..it didn’t match. But he discovered the match in a different archive:

    The Briffa and Osborn 1999 version of the Briffa MXD reconstruction doesn’t match the version of Briffa et al 1998 or the subsequent version of Briffa et al 2001, both of which were archived. Oddly enough, it does match (after truncation) a version archived at NCDC in December 1998 in connection with Jones et al 1998 (though not used in that article), where it occurs in the second sheet of an Excel file here. To my knowledge, this particular version of the Briffa reconstruction was not otherwise published.

    Except, of course, for the truncated parts.

  19. TimTheToolMan said

    “People do this all the time. And if it looks screwy, you find out why. And if you find a reason, you don’t present it.”

    The litmus test would be to see whether any of those 8 trees going back to 1402 were used in the calculations that were kept. If the data from all 8 trees was dropped entirely then their result *might* be justifiable but if it was used in a truncated form then the science is hopelessly flawed and questions of integrity and honesty become quite valid.

  20. #18 TT
    No, there’s no indication that there is something wrong with those 8 trees. It’s just a difference of purpose. The Nature paper (last 600 years) was looking for volcano signals in the dendro record. For that purpose it doesn’t matter so much whether the sample could be said to represent the NH. If you find the signal showing up in just a few chronologies, that may be interesting. But if you want to claim that the record is representative of NH temps (as in the Science paper), then 8 chronologies, no matter how good, won’t do.

  21. Bryan said

    In Nick Stokes lab the broken 24 hour clock can be used once a day for accurate measurement of time.

  22. TimTheToolMan said

    “But if you want to claim that the record is representative of NH temps (as in the Science paper), then 8 chronologies, no matter how good, won’t do.”

    And yet, according to Steve, fewer chronologies have been used in other Briffa papers and passed peer review.

    From CA “The Briffa et al 2001 site count was 19 sites in 1550, 8 in 1500 and only 2 in 1402, but there were enough for Briffa to report a reconstruction. (Readers should bear in mind that the Jones reconstruction, for example, was based on only 3 proxies in the 11th century, one of which was a Briffa tree ring site with only 3-4 cores, well under standard requirements.) ”

    …and so on to your statement “there’s no indication that there is something wrong with those 8 trees.”

    As I said, it hinges on the use of those series, doesn’t it? On the face of it, it looks like two of the eight were chosen in B2001 because they best conformed to desired temperatures. But in actual fact, if my reading of the situation is correct, so far the investigation of BO1999 has merely found that the trees in the 1402–1550 range were considered at some point, because that’s what the code itself tells us. Then they were discarded, or possibly used but truncated; yet to be determined. Nothing more as yet.

  23. NikFromNYC said

    “I do think it’s reasonable to run the data and “see how it looks”. People do this all the time. And if it looks screwy, you find out why. And if you find a reason, you don’t present it. Ideally you’d explain all that. But we’re talking here about a 1-page paper in Science which has a number of different studies to discuss.”

    OMG Nick, you are stark raving mad insane. I’ve been hitting the blogs for at least three years now. Your comments here are simply WAY out there. I’ve never seen such a thing. It’s DIRECT ADMISSION OF BAD SCIENCE AS A POLICY. It’s not like some obfuscation like Gavin Schmidt might pull off with glossy PR style.

    You don’t present all your data?! You sure as hell damn well do. I think that you have no idea how badly you are coming off here. I can ignore the usual back and forth debate but my god man get a grip. You have become merely partisan.

    A glance at your blog impressed me, actually. Lots of creative ways to analyze Antarctica records. But it seems you risk becoming yet another loyal attack dog like Tamino, who merely tweaks math to support alarmism, rather than someone who, like any normal scientist, would be VERY HIGHLY UPSET, as in close to physically sick, if the TOP researchers in your favored field were brazenly chopping off data that destroys their claimed results!

    It caught my attention so much because you are not just a consensus-claiming politico but someone with the background to deal with actual details. The rite of passage that I went through getting a Ph.D. in chemistry and nanotech, doing a postdoc at Columbia/Harvard, was very singular, and that was to UTTERLY CHECK AND RECHECK AND QUADRUPLE CHECK MY RESULTS!!! The whole exercise, the entire discipline, the CENTRAL LESSON was to ASSUME I WAS WRONG. Entire extra summers were required to chase down ANY and ALL spurious results. This does not exist in contemporary climate science. You are a cartoonishly extreme stereotype of this problem. You don’t even GET it! It rubs my self-image as a trained scientist the wrong way. You are not a well-trained scientist. Period. You’re a math guy. No offense, that’s great and very valuable in this world, but you don’t have the right stuff to call yourself a scientist.

  24. TimTheToolMan said

    Oh and Nick, one more comment on “I do think it’s reasonable to run the data and “see how it looks”. People do this all the time.”

    The only valid way to discard data that doesn’t “conform” is to do it based on a non-temperature basis.

    I’m no dendrochronologist but I imagine an argument to discard might be along the lines of “analysis shows these tree rings are low in farnarkles, therefore they’re not expected to faithfully follow temperatures and so we discard them”

    You can’t ever look at tree ring results and decide they don’t match expected temperatures so discard them. That’s a big no-no.

  25. Nik,
    They’re not presenting their data here. They are presenting a reconstruction. It’s an analysis process. How good is it? Well, you look to see if the results make sense. If not, see if something has gone wrong in the process. If it has, you can’t present the analysis.

    Here it’s clear. The number of chronologies tapers right down between 1600 and 1400, from 75 to just 8. Now I can just imagine what you folks would be saying if they had drawn a claimed NH temp plot based on just 8. It would be worse than Yamal.

    And TT, indeed the number of sites in your quote is low. But that is not the number of trees or chronologies. You’ll note that Steve is complaining when just one site got down to 3-4 cores.

  26. Tim #23,
    As I said to Nik, they aren’t rejecting data here – they may be rejecting a reconstruction. Just saying there isn’t enough data to give a reliable result.

  27. #24 – Tim, I may have got that wrong – on re-reading the Nature 1998 paper, I think they are talking of site chronologies, so maybe the numbers in Steve’s quote do relate in that way. I’d like to see the original context though.

  28. jeff Id said

    Nick,

    They chopped the history they didn’t like, chopped the present they didn’t like and pasted a blade on it for presentation as an IPCC graph.

    I am thoroughly amused at the attempts to call a pig a goose. Just because it weighs 500 pounds and has a curly tail doesn’t mean that it didn’t once have feathers.

  29. John F. Pittman said

    Nick, I assume you know how dendros look “for volcano signals in the dendro record”? They do it as a temperature signal. See the related posts at RC. They claim it even works to prove GCMs. Or, as is sometimes said sarcastically, Google is your friend.

    Nick, sorry, but you are screwing the pooch on this one, as the new generation says.

    Further, it was Steve’s, mine, and others’ complaint about the paucity of chronologies with small sample sizes, especially when larger sets were available. What is damning, even by your criteria, is that they use small or large if it confirms their bias; they use deletion or not, again if it confirms their bias. The result is they confirm their bias. And note, without investigation this would not be known, because this was not presented.

    The telling quote is from the Briffa interview as Climategate was breaking. He said they were “working” on the problem. Yet the articles are written such that it is claimed they had solved the problem. If you cannot read and understand that, then you have severe confirmation bias, or reading comprehension problems. They were the ones who made the claims, not Steve, Jeff, myself, or others.

    You have to start with THEIR claims. That is where you start in a science discussion.

  30. John F. Pittman said

    Well Jeff, looks like we think the same.

    I hope it doesn’t cause you to faint to think that you think like a Liberal.

  31. andy said

    Sowell’s book “The Vision of the Anointed” describes the behaviour of the ‘team’ very well.

    Nick, stop making yourself look silly.

  32. Sowell’s book “The Vision of the Anointed” also describes well the behavior of the ‘team’ in charge of data [1,2] from the Galileo Probe of Jupiter in 1995 that

    a.) Cost the US taxpayers over $1,000,000,000 and

    b.) Confirmed reports [3-9] that our Sun is the remnant of a supernova that produced distinctly different elements in the inner and outer parts of the Solar System.

    1. “Abundances of Hydrogen and Helium Isotopes in Jupiter”, in The Origins of the Elements in the Solar System: Implications of Post 1957 Observations, O. K. Manuel, Editor, Kluwer Academic/Plenum Publishers, New York, NY, pp. 589-643 (2000).

    http://www.omatumr.com/abstracts2005/Nolte_and_Lietz.pdf

    2. “Galileo probe confirms ‘Strange’ xenon ion Jupiter”

    http://www.omatumr.com/Data/1998Data.htm

    Data from the Galileo probe confirmed reports [3-9] that our Sun is the remnant of a supernova that produced distinctly different elements in the inner and outer parts of the solar system.

    3. “Elemental and isotopic inhomogeneities in noble gases: The case for local synthesis of the chemical elements”, Trans. Missouri Acad. Sci. 9, 104 122 (1975).

    4. “Strange xenon, extinct superheavy elements and the solar neutrino puzzle”, Science 195, 208-209 (1977).

    5. “Isotopes of tellurium, xenon and krypton in the Allende meteorite retain record of nucleosynthesis”, Nature 277, 615-620 (1979).

    6. “Noble gas anomalies and synthesis of the chemical elements”, Meteoritics 15, 117-138 (1980).

    7. “Isotopically anomalous tellurium in Allende: Another relic of local element synthesis”, J. Inorg. Nucl. Chem. 43, 2207-2216 (1981).

    8. “Heterogeneity of isotopic and elemental compositions in meteorites: Evidence of local synthesis of the elements “, Geokhimiya (12) 1776-1801 (1981) [In Russian].

    9. “Solar abundance of the elements”, Meteoritics 18, 209-222 (1983).

  33. Layman Lurker said

    Jeff, I have a comment in moderation from last night responding to Nick #14. It has a link to CA; is that why?

    REPLY: Probably. Sorry for the delay. I don’t read the control panel very often any more.

  34. M. Simon said

    We know what the results should be. The science is settled. Since it is settled there is nothing wrong with making your results conform to the true science.

    Once you know EVERYTHING it all fits. End of argument. If it doesn’t fit it is bad science.

  35. j ferguson said

    Is the joint at which the deleted pre-1550 reconstruction meets the remaining reconstruction also the point at which they picked up a lot more cores? If the bases of the reconstructions to the left and right of the joint are the same, then Steve’s views seem well warranted; but what if there’s a change in underlying data at about 1550?

  36. #28 Jeff,
    Well, I’m still wondering what you would have said if they had published Steve’s graph with a huge dip (“getting rid of the MWP”) at 1400, based on 8 chronologies?

  37. Bruce said

    Nick, what happened to the trees in 1961?

    Drugs? Were the trees on drugs?

    Come on Nick, what happened? Make something up even … you usually do.

  38. Jeff Id said

    #36, I probably would say that tree rings are very obviously not temperature (which I already do). I’m sure you would agree that there would have been quite a different interpretation of the fidelity of the IPCC graph though had the data been presented honestly.

  39. Jeff Id said

    What is the effective number of series in the bristlecone-weighted Mannian PC reconstruction blade? We know the dimensionality reduction was substantial. My guess is Neff = 4.
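
    Jeff’s question has a standard quantitative form: the Kish effective sample size of a weighted average, Neff = (Σw)²/Σw². A minimal Python sketch of the calculation (the weights below are entirely hypothetical, purely for illustration, not Mann’s actual loadings):

```python
import numpy as np

def n_eff(weights):
    """Kish effective sample size of a weighted average:
    (sum w)^2 / sum(w^2). Equals len(w) for equal weights
    and falls toward 1 as a few series dominate."""
    w = np.abs(np.asarray(weights, dtype=float))
    return w.sum() ** 2 / (w ** 2).sum()

# Equal weighting of 70 series really uses all 70 of them...
print(n_eff(np.ones(70)))  # -> 70.0

# ...but a PC-style weighting that loads heavily on a handful of
# series (hypothetical numbers) effectively uses only a few.
w = np.concatenate([np.full(4, 1.0), np.full(66, 0.01)])
print(round(n_eff(w), 1))  # -> 5.4
```

    Whether Neff for the actual reconstruction is 4, as Jeff guesses, would of course require the actual weights.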

  40. #37 Bruce,
    Briffa said it all in that Phil Trans paper that the Science paper referred readers to: K. R. Briffa et al., Philos. Trans. R. Soc. Lond. B Biol. Sci. 353, 65 (1998). They had a section headed “A RECENT CHANGE IN TEMPERATURE SENSITIVITY” and it starts:
    “In s4, we referred to a notable correspondence between ‘hemispheric’ MXD series (averaged over all sites) and an equivalent `hemispheric’ instrumental temperature series. Despite their having 50% common variance measured over the last century, it is apparent that in recent decades the MXD series shows a decline, whereas we know that summer temperatures over the same area increased.”

    and goes on to say:

    “The implications of this phenomenon are important. Long-term alteration in the response of tree growth to climate forcing must, at least to some extent, negate the underlying assumption of uniformitarianism which underlies the use of twentieth century-derived tree growth-climate equations for retrodiction of earlier climates. At present, further work is required to explore the detailed nature of this changing growth-climate relationship (with regard to species, region, and time dependence). It is possible that it has already contributed to some degree of overestimation in published reconstructed temperature means – more likely only those that attempt to reconstruct long time-scale information.”

  41. BDAABAT said

    Nick Stokes:

    While I appreciate your willingness to explore other considerations of the analysis of the tree ring data, I’m having a hard time with your conclusions. Perhaps you could provide some idea for possible justifications for the actions that these paleo folks have taken? What in your mind would be considered acceptable reasons/practices for the manipulation and presentation of Briffa/Jones/et al data?

    Personally, I’m having a hard time coming up with a rational, scientifically justifiable reason for the manipulation and presentation of the data.

    In order for the removal of data to be defensible, one would expect the authors to have stated some specific reason(s) for the deletion. Ideally, one would expect such reasons to be stated in their methods (e.g., “We will be using only those data that include X number of samples from Y number of trees in order to ensure adequate representation of…”, “As a result of these selection criteria, we used data starting from 1550 to 1960”). Not only should these reasons be stated a priori, but the rationale for the choices should also be referenced. They don’t do this. They simply lop off the ends of the data and sort of wave away the reasons why it’s been lopped off. Your quote above on the lopping off of post-1960 data is a great example. To paraphrase, it says, “We don’t know why the data diverged, so we cut it out. As always, more research is needed.” Interestingly, that publication was from 1998. And they haven’t really been able to investigate the reasons for the divergence in the decade-plus since then. Curious.

    Worse, the post 1960 data is not only removed, but is replaced with temperature data. That’s a whole other issue that’s been beaten to death. Again, is this something you would consider acceptable scientific practice?

    More troubling, the leaders in the paleo community don’t seem to think that there’s anything that needs to be addressed.
    Comments from important individuals in the paleo community have demonstrated that not only is this sort of practice common, it’s completely acceptable. See D’Arrigo’s comments to the NAS panel (e.g., “You need to pick cherries to make cherry pie”) and Jacoby’s famously stated need for a “few good men”. Add to these observations the fact that important data isn’t archived or shared, and that updated data isn’t included in prominent analyses.

    There is no one in the paleo community that I’m aware of that has come out publicly and criticized this behavior. No one has stepped forward to say that what these folks are doing isn’t correct. Instead, it’s supported and actively condoned, and rewarded with additional grants and publications.

    Do these behaviors meet your definition of appropriate scientific behavior?

    That’s not my understanding of how science is done. All of these observations lead me to believe that the practice of paleoclimatology is NOT science. It is a practice that is intended to sound “sciencey”, but instead is one where individuals and groups in the field apply correlation analysis (with practitioners commonly using non-standard statistical methods to do so, while avoiding inclusion of statisticians in their research groups) to data using non-scientific principles and practices.

    Bottom line for me: this isn’t science.

    Bruce

  42. Bruce said

    “in recent decades the MXD series shows a decline”

    But why did they show a decline Nick?

    Drugs?

    A new girlfriend?

    And Nick, why did Briffa CHANGE THE SIGN?

    “has already contributed to some degree of overestimation ”

    Briffa got it upside down. If the instrument record they grafted on the end is right, he should have said: “has already contributed to some degree of underestimation”!!!!

    See, if the treemometer goes down when the instrument goes up, then a sane person would conclude the treemometer UNDERESTIMATES temperature.

    Why did Briffa change the sign?

    What happened in 1961?

    Come on Nick … what happened to the treemometers? Why did they UNDERESTIMATE the temperature? Why did Briffa say they OVERESTIMATED the temperature?

  43. RomanM said

    # 24 Nick (and later in #36):

    Here it’s clear. The numbers of chronologies tapers right down between 1600 and 1400. Down from 75 to just 8. Now I can just imagine what you folks would be saying if they had drawn a claimed NH temp plot based on just 8.

    Well, they did exactly that in the paper you referenced. You will also notice that Figure 2 shows that more than 20 chronologies started before 1500, whereas the reconstruction successfully emulated by Steve begins only at about 1550.

    You are correct that the reconstruction which Steve found in the Excel file is strongly related to the series in Figure 1, but the differences are huge, particularly in the 1400-1550 time period. The reference in the paper to the archived results is no longer correct, but I was able to track the correct location down.

    The above visual comparison of the Briffa version and the Jones version (which appears to have been inadvertently left in the archive for the Jones paper and is the one Steve shows on his site) is very interesting. I have rescaled the Jones version to have the same mean (zero) and standard deviation as the Briffa version for the calibration time period which was used in the paper (1881-1960). My guess is that the differences may be due to the use of the following “corrective” averaging procedure in the paper:

    Two hemispheric mean timeseries were constructed from these data, covering the period 1400–1994. NHD1 was formed as the average of 8 prior regional average series (the regions are shown in Fig. 2), while NHD2 is the direct average of all 383 chronologies. When averaging, the sample size (n) is time-dependent and increased variance in parts of the average series would normally arise because of a diminishing number of constituent chronologies. The effect was corrected for in all three averaging stages (that is, to regions, to NHD1 or to NHD2) by scaling the mean series by the square root of the effective number (n’) of independent samples available (refs. 2, 37) where n’ = n/[ 1 + (n-1) r-bar]: here r-bar is the mean interseries correlation between the n available samples, a measure of the common growth forcing ‘signal’.

    It appears that this could be an even better “trick” to hide the early “decline” than just lopping off the offending portion. ;)
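
    The variance correction quoted above is easy to sketch numerically. A minimal Python illustration of the n' formula (the r-bar value is hypothetical), showing why scaling the sparsely replicated early mean by sqrt(n') shrinks its excursions relative to the well-replicated period:

```python
import numpy as np

def effective_n(n, rbar):
    """Effective number of independent samples among n series
    whose mean interseries correlation is rbar."""
    return n / (1.0 + (n - 1) * rbar)

rbar = 0.1  # hypothetical mean interseries correlation
for n in (8, 20, 75, 383):
    ne = effective_n(n, rbar)
    # The mean of n equally correlated unit-variance series has
    # variance 1/n', so the paper's rescale multiplies by sqrt(n').
    print(f"n={n:3d}  n'={ne:5.2f}  sqrt(n')={np.sqrt(ne):.2f}")

# An identical real excursion in an 8-series era is therefore drawn at
# sqrt(n'_8 / n'_383) of the amplitude it would get in a 383-series era:
shrink = np.sqrt(effective_n(8, rbar) / effective_n(383, rbar))
print(f"relative amplitude of early-period excursions: {shrink:.2f}")
```

    Note that n' saturates at 1/r-bar for large n, so almost all of the shrinkage lands on the early, thinly replicated centuries.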

  44. Layman Lurker said

    #39

    And don’t forget the Yamal blade – 10 cores in 1990 – one of which was an 8 sigma outlier.

  45. RomanM said

    Rats!!! WordPress decided to strip the plot that I included in my comment 42.

    You can find it here.

  46. Bruce said

    Nick, both sections were removed because treemometers UNDERESTIMATED temperature.

    Then Briffa said treemometers may OVERESTIMATE temperature.

    The middle part of the record could possibly be 1, 2 or even 3C too low.

    Why Nick?

    Why did Briffa lie?

    Why did the trees UNDERESTIMATE temperature?

    Why were the UNDERESTIMATIONS HIDDEN from us?

    Why did Briffa say the OPPOSITE of the blindingly obvious (thanks to Steve’s detective work)?

    Treemometers UNDERESTIMATE temperature right?

  47. It’s true that in their intro (Nature) they say that the MXD series provides a good proxy for NH temperatures, and you could infer that it’s good for the whole 600 years. They were probably careless there. But that wasn’t what the paper was about. They were making use of the spikes in the series (as in Fig 1) to match with eruptions. And for that purpose, it’s not required that the series be a good representative of long-term changes in NH temperature.

    That’s relevant to the variance adjusting that you refer to. Yes, it would reduce variations in the early part of the signal, which would be misleading if you were focussing on long term trends. That reduction is the intended effect, and they spell it out. It has some justification, in that the MXD series is dimensionless and scaled to have unit variance, and there’s an argument for saying that if the variance is known to change, this could be reflected in the scaling. But again, what that paper is about is short term spikes and eruptions, and the variance rescaling isn’t a problem there, and may be helpful.

    It looks like they in fact acknowledged that variance adjusting wasn’t the right thing to do for the Science paper (which was about long term trends in NH temperature).

    I also found that data from the Nature paper and I checked that the first column (NHD1) is what was in Fig 1 – I did the 20 yr smooth. I’m not sure what the second Briffa plot is that you have – it does look like the Jones plot.

    As you say, there were 20 chronologies by 1500, and so maybe stopping at 1550 was conservative. But would going to 1500 have made much difference?

    And putting the other question, do you think they should have shown the curve back to 1400?

  48. Bruce said

    “They were probably careless there.”

    Not careless. Not the word I would use.

    If you delete 300 years of treemometer data that UNDERESTIMATES temperature, and then claim treemometers OVERESTIMATE temperatures in the past, then you are committing scientific fraud.

  49. RomanM said

    Nick:

    Yes, it would reduce variations in the early part of the signal, which would be misleading if you were focussing on long term trends. That reduction is the intended effect, and they spell it out. It has some justification, in that the MXD series is dimensionless and scaled to have unit variance, and there’s an argument for saying that if the variance is known to change, this could be reflected in the scaling.

    The idea of rescaling based on sample size is specious simply because it introduces a possibly substantial bias into the estimation procedure. The fact that a chronology is “dimensionless and scaled to have unit variance” has no bearing on the issue because the relative magnitudes of (and differences between) terms in the series have been altered. When the chronology is later converted to estimated temperatures, the bias remains. The proper approach is to expand the confidence region for the series to reflect the increased uncertainty due to the lower sample size.

    I also found that data from the Nature paper and I checked that the first column (NHD1) is what was in Fig 1 – I did the 20 yr smooth. I’m not sure what the second Briffa plot is that you have – it does look like the Jones plot.

    Sorry, the third plot was mislabelled. It is actually the difference between the first two graphs = Jones – Briffa. I hope that Jeff doesn’t mind, but I have corrected the third title in the linked graph to reflect that fact. The new link is http://statpad.files.wordpress.com/2011/03/hidedecline1.jpg . If Jeff wishes to edit my earlier comment, he can just insert a “1” just before the period in the original URL.

    And putting the other question, do you think they should have shown the curve back to 1400?

    If they wanted to go back that far, yes. However, the graph should have shown a confidence band to indicate the uncertainties at each point in time. IMHO, this should have been done anyway even if the intent was only for “making use of the spikes in the series (as in Fig 1) to match with eruptions”.
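
    Roman’s preferred alternative, widening the uncertainty band rather than rescaling the series, can be sketched with the same effective-sample-size formula quoted from the paper (the counts and r-bar below are hypothetical, for illustration only):

```python
import numpy as np

def effective_n(n, rbar):
    # n' = n / (1 + (n - 1) * rbar), per the paper's averaging note
    return n / (1.0 + (n - 1) * rbar)

# Hypothetical replication through time
counts = {1400: 8, 1500: 20, 1550: 75, 1900: 383}
rbar, sigma = 0.1, 1.0  # assumed correlation and chronology std dev

# Instead of shrinking the sparse early mean, plot it untouched and
# report an honest 95% band: half-width = 1.96 * sigma / sqrt(n')
for year, n in counts.items():
    hw = 1.96 * sigma / np.sqrt(effective_n(n, rbar))
    print(f"{year}: n={n:3d}  mean +/- {hw:.2f}")
```

    The band is widest exactly where the reconstruction is least trustworthy, which conveys the low early replication to the reader without biasing the series itself.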

  50. Roman,
    OK, I didn’t put that last question clearly enough. Do you think, given the fading number of chronologies, that the Science paper should have been taken back to 1400, as in Steve’s plot? They’ve “hidden” the oscillations, but should they have been presented as real?

  51. Sonicfrog said

    Jeff, once again I’m late to the party…. Or maybe early! In the last year, I wrote two blog posts highlighting the dangers of taking climate scientists seriously on anything concerning economics. Here are the links.

    Why Climate Scientists Should NEVER Be Trusted With The Fate Of The World. Pt 2

    and the prequel

    Why Climate Scientists Should NEVER Be Trusted With The Fate Of The World. Pt 1

    Enjoy.

  52. Layman Lurker said

    Nick and Roman, wasn’t the same data used for the full period in Briffa01?

    Also, if replication is the underlying logic that explains the exclusion, what about Yamal and the high leverage (8 sigma) YAD061 in 10 cores? You don’t get to have it both ways.

  53. Bruce said

    “They’ve “hidden” the oscillations, but should they have been presented as real?”

    Nick, there is zero evidence any treemometer data is real.

    But hiding data, and then claiming the data skews in the opposite direction than the hidden data is fraud.

  54. Carrick said

    Nick:

    They’ve “hidden” the oscillations, but should they have been presented as real?

    I think that’s framing the ethics questions here in the wrong manner.

    Hiding your own data, or the analysis of part of it, is indefensible and a serious ethics breach, regardless of whether the data represent “real” oscillations. In these circumstances it should be included in the figure and discussed in the text:

    Here’s how I have dealt with “bad” data in the past.

    The “bad” data are demarcated (circled in this case) and discussed. (Roman would probably say I should have included estimates of the uncertainty intervals, which would have handled the problem in a different way.)

    McIntyre putting them in a separate color is another practice that I have used in the past. Even if I have reasons to exclude them from the trend and uncertainty estimates, the data should be included to allow the reader to make his own determination and (as important) my reasons for excluding the data should be stated and justified. (Inconvenience from ill-behaving data is not a suitable justification.)

    What removing the data does in the case of this figure is project a false picture of reliability that is not actually present in their own data set. ETHICALLY that must be discussed, and failure to do so would be a terminable offense where I work. (I would certainly drop a grad student who did this, unless they gave a REALLY convincing story. Why are these full professors given a free pass here?)

    There are exceptions to the above comment on when it is acceptable to not include all of your data: Suppose you have a developing methodology and experimental apparatus. Of course, you don’t show all of your early data in the final paper (especially if unacceptable systematics exist in it which were fixed later), but you do discuss the progress of the experiment, including the systematics that led to the unacceptable data. (I can think of some very nice papers which do a remarkably good job, and some which do a rather poor job of this, from areas that I’ve work in myself in the past.)

    You wouldn’t present the data that had a known flaw on the same graph, but you would discuss them, explain why you thought the data were flawed and so forth.

    But that’s a very different scenario, isn’t it?

    Here we have a case where part of the data were left off of the graph, not because they represented the development of a methodology, but because they represented an incongruous embarrassment to the conclusions of the paper. (It isn’t helped any that the leaked emails show motivation in this case, no speculation is required as to why the data were left out, with no further discussion in the published work.)

    I’m going to repeat something that Roman said above because it nails the question and your indefensible defense of the indefensible:

    However, this is NOT the case of not having used the data, it is a case where the reconstruction was calculated using it and a portion of it not included because the authors did not like the result.

  55. John F. Pittman said

    Nick says:
    “It’s true that in their intro (Nature) they say that the MXD series provides a good proxy for NH temperatures, and you could infer that it’s good for the whole 600 years. They were probably careless there. But that wasn’t what the paper was about. They were making use of the spikes in the series (as in Fig 1) to match with eruptions. And for that purpose, it’s not required that the series be a good representative of long-term changes in NH temperature.”

    Nick, thanks for refuting GCMs. You see, volcanoes tend to do that hemispheric cooling, and Gavin and them relate their effect to temperature. Somehow these volcanoes were teleconnected so that they would only be representative on a local level, or were only near the treethermometers? Perhaps the critics of GCMs should point out your claims to Gavin.

    Yes, it is not required, but then you have just invalidated bunches of claims by Gavin and the IPCC. All I can say is thanks. Though I don’t believe you have accomplished this.

  56. RomanM said

    #49 Nick:

    Would I have taken it back to 1400 given eight chronologies? I don’t know without seeing the quality of the original raw data from which these were calculated. There is nothing “magic” one way or another about the number eight.

    However, would that be a sufficient argument justifying cutting it back to 1550? Especially after doing an amount of pruning already at the other end??? Not a hope. I pointed out to you that there was a well-distributed set consisting of more than 20 series already by the time one got to 1500 (an item of information which I noticed you chose not to point out in your recent comment defending the consensus on CA – focusing on “8” instead ;) ). The fact that the end result at that point was still below the post-1550 period chronology suggests to me that delaying the start to 1550 was likely not accidental.

  57. Carrick, #53
    The first thing to say is that the data had been previously published, so it isn’t “hidden”.

    But if you’re going to make up these rules for climate scientists, you’d better specify them. Where can you start? What’s special about 1400, if not 1550? They could probably find bits of cores etc going way back. Really?

    It’s just not the practice. Look at the global temperature plots that you see. Most start about 1900 or 1880 or so. Whenever they think the data is reliable enough for whatever they are trying to say. But GHCN data (which they usually come from) go back to 1709. Is everyone who doesn’t dot in that curve committing an ethical breach?

  58. Geoff Sherrington said

    Nick Stokes,

    One designs an experiment, including criteria for success or failure. One performs the experiment, presents the results, then discusses pass or fail by the original criteria.

    One does not delete inconvenient data to make the story look better or worse; nor does one change the criteria after the event.

    Imagine the chaos that would arise in a former field of mine, evaluation of ore deposits, if assay results were not reported in totality, but were presented selectively. Economic deposits could be made to look uneconomic and vice versa. Fortunately, in geology, most countries have laws or codes of conduct to handle such situations. People who offend can get put in a small room with hammers, making big rocks into small rocks. See for example

    http://www.nationalpost.com/related/topics/Transcendental+gold+miner+accused+falsifying+results/4011195/story.html

    Climatologists need equivalent codes of conduct. They are hardly needed in geology because the comprehension of interested colleagues is usually good enough to discover misconduct before it becomes a public issue; besides, premeditating offenders know that there is very little chance of cheating successfully, and very few cases where they could benefit from it without planning retirement in the jungles of South America.

    Nick, why not spend your remaining years productively, helping to produce a code of conduct for the use and abuse of climate data? You have shown yourself capable of good, lateral thinking. Now we need to guide you down a more useful path. From what I have read of climatology, most practitioners don’t even have the guts to correct a colleague who is obviously fiddling. Rather, they group together to produce papers with large numbers of authors, as if an appeal to authority will hide the misdemeanour.

    Here’s a start:
    Rule One. To have climatology accepted as a Science, one must use high scientific standards.

  59. Mark T said

    Nick is smart enough to know all this and, were the tables turned, he’d argue the opposite. Hypocrisy and ideology go hand in hand… cognitive dissonance.

    Mark

  60. Geoff,
    Once again, this is not an issue of deleting data. Or for that matter a designed experiment. It’s a matter of what can be deduced from collected data.

    You may see full assay results, although I’m sure your boreholes stop somewhere. Probably when they decided they weren’t getting any more information. If you really want to see a full set of data, ask them for complete seismic survey results. No cutting out stuff that seems like noise – you want to see the lot. Everything the instruments return. And see if you can find someone who publishes all that.

  61. Bruce said

    Nick, what is it about trees that makes them suddenly horrible proxies for 200 years, then suddenly great proxies for 410 years and then suddenly in 1961 they are horrible proxies again?

    And what makes you think the intervening years made them good treemometers?

    You keep ignoring the question in order to answer other questions that make your moral compass seem … icky.

  62. LL #51
    Do you mean this paper
    Low-frequency temperature variations from a northern tree ring density network
    KR Briffa, TJ Osborn, FH Schweingruber… – JOURNAL OF GEOPHYSICAL …, 2001
    ?

    I couldn’t see use of that data explicitly. They do show a series from Briffa’s paper in Quaternary Reviews 2000, which goes into a lot of detail about what makes up the series. I was struck by this explicit statement in Sec 2.2:
    “Note that, although the whole series is plotted here, the authors consider replication to be too poor before 1550 to be reliable.”
    They are talking about just one series (Tervagatory), and I think “authors” mean Jacoby et al, but still – you get the idea.

    On your other issue of having it both ways, that is exactly what I think is happening. Scientists get bashed if they do (Yamal) and bashed if they don’t (here). For my part, I just think there is a fuzzy area where you can reasonably decide either way. Neither choice is fraud, or an ethical violation. It’s just scientists using their judgment, as I would expect them to do.

  63. Mark F said

    It’s his bro that, but for a few votes, might have been PM of Canada. Cooks a mean popcan chicken, however.

  64. Robert said

    “Nick, what is it about trees that makes them suddenly horrible proxies for 200 years, then suddenly great proxies for 410 years and then suddenly in 1961 they are horrible proxies again?”

    Your assessment is incorrect and I will show you how.

    See that graph. Tell me, are thermometers terrible at recording temperature prior to 1880? No, there’s just less of them and a lower spatial distribution. Please stop saying that they were horrible for 200 years. That statement is wrong. Retract it.

  65. Layman Lurker said

    Nick:

    Scientists get bashed if they do (Yamal) and bashed if they don’t (here).

    So you are saying that the authors get to exclude inconvenient data in one case because of replication (even though the cutoff date for this argument does not line up by more than 50 years – I know – they are being “conservative”). Then in the opposite situation they get to include very convenient but poorly replicated data in another case, but they don’t need to explain either case because everything is “fuzzy”. And you think we are the ones having a “whacky” discussion.

    Pointing at Yamal is not skeptics trying to have the replication issue both ways. The potential replication issue with BO99 is a straw man. The real issue is excluding data without reporting and justifying what was done. However, if you or the authors suggest that replication is an issue in one case (BO99) and not another (Yamal) – now that is trying to have it both ways.

  66. PaulM said

    Nick’s comments are becoming increasingly absurd.
    Nobody is “going to make up these rules for climate scientists” – it is a very simple rule of reporting any science that you have to explain what you did, as you yourself know perfectly well.

    As Carrick says:
    reasons for excluding the data should be stated and justified.

  67. LL,
    Do you actually believe that those 15th C fluctuations in Steve’s graph are real?
    I don’t get the logic of the Yamal thing. If you believe that was wrong (I don’t), why would that justify doing the same thing here?

  68. RomanM said

    Re: Nick Stokes (Mar 27 23:03),

    I couldn’t see use of that data explicitly. They do show a series from Briffa’s paper in Quaternary Reviews 2000, which goes into a lot of detail about what makes up the series. I was struck by this explicit statement in Sec 2.2: “Note that, although the whole series is plotted here, the authors consider replication to be too poor before 1550 to be reliable.” They are talking about just one series (Tervagatory), and I think “authors” mean Jacoby et al, but still – you get the idea.

    Oh I love it! Bait and switch! The only relationship between that reference and the discussion of the NHD1 series is the number “1550”. Yes, Nick, we “get the idea”.

    Instead, why didn’t you quote the following excerpt from Section 3.1.1 (p. 91) of the QR 2000 paper:

    An average timeseries of all of the density chronologies, NHD1 (Fig. 5), has proved to be a useful record of yearly summer temperatures that can be considered representative of much of the higher northern landmasses, perhaps back as far as 1400 (Briffa et al., 1998a).

    There is even a plot on the same page that includes a “low-frequency density” (LFD) curve which wiggle-matches Steve’s “deleted” series pretty much exactly. The description on the graph states:

    Fig. 5. An indication of growing season temperature changes across the whole of the northern boreal forest. The histogram indicates yearly averages of maximum ring density at nearly 400 sites around the globe, with the upper curve highlighting multidecadal temperature changes. Extreme low density values frequently coincide with the occurrence of large explosive volcanic eruptions, i.e. large values of the Volcanic Explosivity Index (VEI) shown here as arrows (see Briffa et al., 1998a). The LFD curve indicates low-frequency density changes produced by processing the original data in a manner designed to preserve long-timescale temperature signals (Briffa et al., 1998c). Note the recent disparity in density and measured temperatures (T) (discussed in Briffa et al., 1998a, 1999b). Note that the right hand axis scale refers only to the high-frequency density data.

    The caveat at the end of the description seems to indicate that the temperature scale for the graph only applies to the distorted variance-adjusted density curve in the graphic. However, when the truncated LFD curve was used later, it seems to have been shifted down but not substantially rescaled and become “temperatures”.

  69. Adam Gallon said

    Mr Stokes.
    Why are you trying to defend the indefensible?
    I may only possess a meagre 3rd Class Honours Degree in Chemistry, but I’ve spent most of my working life selling pills, potions and contraptions to the medical profession.
    I’ve read, digested, regurgitated & shredded dozens of clinical papers, argued about them with clinicians and colleagues.
    The way a paper is produced is as follows. You have your hypothesis or product to test. You decide, with the aid of appropriate statistics, what your study should be doing, how many patients/treatments you’ll need to achieve statistical significance, what tests will be run and for how long. This is all laid out in your paper when submitted to a journal. Indeed, it’s all written into the submission to the Ethics Committee when a clinician wants to run a trial.
    You run the trial, you record the results, with extreme care, these results are analysed according to the predetermined criteria, as laid out in the study’s design.
    These results are presented in tabular & often graphical form in the paper. A discussion & conclusion section are added.
    What isn’t done – or if it is, it’s usually by a pharma company who’s paid for the study and the results aren’t what was hoped for – is to dredge through the data and try to fit it to your desired conclusions. Do that, and the clinician to whom you’re trying to sell your magic potion, will usually show you the door with alacrity.
    Let’s compare that to the process followed by the Treemometer (&, it seems, all proxy) advocates.
    Your hypothesis is that trees are exquisitely sensitive to temperatures. (Hmm, OK, possible) You decide that certain species of trees in certain environments are the most sensitive (Right, quite possible).
    Similarly with medicine? Let’s say our hypothesis is that taxanes are active against malignancies, especially breast cancer.
    Now our little Treemometerists seek out their logs and obtain cross-sections through them. They then discard those that don’t display the desired profile.
    Our Oncologists seek out patients with the appropriate tumour types & stages, give them their taxanes and, after the appropriate period of time, count up how many have suffered tumour progression/death. They then decide that some of the subjects they’ve enrolled in the trial haven’t responded as desired as they’re somehow not right, so they’re excluded from the analysis.
    Our Treemometerists present their paper to a journal, that submits it to peer revue, they also appear to be able to suggest who should be the peers.
    Ditto our Oncologists (I know not if physicians can suggest who is asked to revue their submission).
    The paper is published in a blaze of glory.
    Our Treemometerists are lauded and their paths are strewn with flowers.
    Little Adam takes the paper into his local, friendly, Oncologist to get those sales. The Oncologist takes one look at the study, asks where the hell are the error bars and why the hell I’d think he’d accept this exercise in post-hoc analysis and data dredging as valid, and throws me out.
    If there were too few cores to be statistically valid prior to AD1550, why were they even analysed?
    Why has there been no further examination of the post-1960 “divergence issue”? Why just some handwaving about human-generated reasons, with no explanation or hypothesis about what these are and no supporting evidence?
    They aren’t performing science, they’re performing advocacy – no different from the lot who came out with the “97% of scientists agree” claim.

  70. Jeremy said

    Nick Stokes said
    March 27, 2011 at 6:31 am

    They’re not presenting their data here. They are presenting a reconstruction. It’s an analysis process. How good is it? Well, you look to see if the results make sense. If not, see if something has gone wrong in the process. If it has, you can’t present the analysis.

    No, Nick. You’re missing the real point here, and this paragraph damns you rather badly.

    —> Why are they presenting a reconstruction in the first place? Why are they not presenting all attempted methods (iterative or straightforward) for creating that reconstruction?

    The fact that they present a finished reconstruction without a strong and lengthy discussion of methods attempted is the issue.

    Look at the results and see if they make sense? That’s not good science. In science you discuss methodology first. Let’s discuss how they found it valid to exclude those 8 series while publishing papers with fewer. Let’s discuss how they calibrated to temperature. Let’s discuss this unnatural convergence in the early 1900s that goes against any understanding of natural noise in data.

    Do you want to discuss these things Nick? Or do you want to tell us that we are somehow hysterical for vocally rejecting being handed a carefully massaged answer for which no discussion of methodology has occurred? You’re doing the equivalent of telling the teacher he’s crazy because he demands to see your work, something only a delinquent student would do.
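
    The calibrate-then-verify logic at issue in this thread can be sketched as follows – a toy example with invented numbers, using the RE (reduction of error) statistic common in the reconstruction literature, not any particular author’s procedure:

```python
# Sketch: fit proxy -> temperature on a calibration window, then score
# the reconstruction on held-out years. All numbers here are invented.
def ols(x, y):
    """Ordinary least squares intercept and slope for y = a + b*x."""
    n = len(x)
    xb = sum(x) / n
    yb = sum(y) / n
    num = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    den = sum((xi - xb) ** 2 for xi in x)
    b = num / den
    return yb - b * xb, b

# Calibration window: the proxy tracks temperature closely.
temp_cal  = [0.0, 1.0, 2.0, 3.0, 4.0]
proxy_cal = [0.1, 1.1, 2.1, 3.1, 4.1]
a, b = ols(proxy_cal, temp_cal)          # a ~ -0.1, b ~ 1.0

# Verification window: the proxy diverges from actual temperature.
temp_ver  = [5.0, 6.0, 7.0]
proxy_ver = [2.1, 1.1, 0.1]
recon = [a + b * p for p in proxy_ver]

# RE (reduction of error): 1 - SSE / SS about the calibration mean.
# RE < 0 means the reconstruction is worse than just guessing the
# calibration-period mean temperature -- a failed verification.
cal_mean = sum(temp_cal) / len(temp_cal)
sse = sum((r - t) ** 2 for r, t in zip(recon, temp_ver))
ss  = sum((t - cal_mean) ** 2 for t in temp_ver)
re = 1 - sse / ss   # negative here: divergence sinks the verification
```

    The point of such a statistic is that the verification result gets reported either way – you don’t get to drop the window where it fails.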

  71. Layman Lurker said

    Sorry if I wasn’t clear Nick. I believe that part of a chronology, which calls its validity into question, has been dropped without explanation – nothing to do with whether I think the fluctuations are real. On replication, what I am saying is that your argument is inconsistent: the BO99 replication standards that you imply obviously could not have been applied to Yamal.

  72. Kenneth Fritsch said

    I commented at CA about the Briffa-Osborn noodle in the spaghetti graph and got no response so I’ll try here.

    The “full” B-O reconstruction would appear to be an outlier to the other spaghetti noodles, so why was it not rationalized away as such, rather than retaining only the middle portion? (One would correctly still want to show it.) No attempt was made to rationalize. Why? Please listen to Nick Stokes, as he is giving you insights into the thinking that could be used in retaining and excluding sections of a reconstruction – not that it is the correct way to operate. If indeed the presenters of this reconstruction can exclude parts from the beginning and end without explanation, the logical question becomes: is this practice a common one, or is it merely an aberrant one and this a special case?

  73. stan said

    One thing is absolutely crystal clear. No one should ever consider making public policy on the basis of “science” of the quality that Nick is advocating. If you take a step back and look at all the BS that is represented by these graphs, it’s obvious that no rational business would ever spend a nickel based on them.

    I don’t care if all the trimming and hiding is acceptable in science today. The quality of this information and its “handling” is so bad that no one should draw any conclusions based thereon. If we were told that the graphs are “someone’s best guess, but who the hell really knows”, we wouldn’t be arguing here. The problem is that a lot of political people (some of whom were scientists) took this crap and told us it was gold.

    The crap ain’t gold. It stinks. So bad the stench will gag you. So Nick, argue all you want about whether the crap meets the malleable, low standards of modern science, but don’t even try to tell us that this stuff is good enough quality to justify changing the world.

  74. RuhRoh said

    Krugman weighs in;

    http://newsbusters.org/blogs/noel-sheppard/2011/03/28/krugman-no-scientific-impropriety-climategate-hide-decline-effective-

    RR

  75. LDLAS said

    “When reconstructing past climate from tree-rings (e.g. the amplitude of the Little Ice Age or Medieval Warm Period), it is important to appreciate that these reconstructions are conservative as they only contain a part of the true climate signal.”

    http://pielkeclimatesci.wordpress.com/2010/08/24/futher-information-on-tree-ring-proxy-data-a-research-paper-garcia-suarez-et-al-2009/

  76. David S said

    Sorry Nick, this is a long and convoluted series of excuses. I applaud your persistence and civil tone in the face of some punchy criticism, but although you have a lot of technical knowledge you do not seem to be able to come to terms with the idea that removing data from a publication for no other reason than that it is inconsistent with one’s hypothesis is utterly fraudulent. In the general mumbling about “divergence problems”, various nebulous potential explanations have been put forward – “not enough trees” (except when they agree with the theory, and then miraculously there are); “change in response as a result of anthropogenic activity” (no physical mechanism even suggested, let alone supported by evidence of any kind) – but none of them gets close to being a coherent reason why the 1550–1960 data should be credible while 1400–1550 and 1960 onwards are not. Ergo, this work is fraudulent, and anything that depends on it is impaired, potentially fatally.
    To me, this is about the only bit of the science that should be considered “settled”. It leads on to a project to understand what is left of the consensus when Mann’s, Briffa’s and Jones’s tricks, and their journal descendants, are removed. In particular, to what extent do the GCMs require a hockey stick with a flat handle, and what effect does a more credible assessment of the shape and uncertainty of the global temperature record have on their already questionable predictive ability?

  77. Carrick said

    Nick:

    They’re not presenting their data here. They are presenting a reconstruction. It’s an analysis process. How good is it? Well, you look to see if the results make sense. If not, see if something has gone wrong in the process. If it has, you can’t present the analysis.

    Actually, you present the analysis, because it went wrong, and you give your best explanation for why the error doesn’t compromise the rest of your analysis.

    You apparently don’t have much of an experimental background or you would know this.

  78. “Now our little Treemometerists, seek out their logs and obtain cross-sections through them. They then discard those that don’t display the desired profile.”

    Let me just take that as typical of misunderstandings here. They haven’t discarded data. If you read that Quaternary Reviews paper for example, you’ll see lots of the data laid out.

    What they did was compile an average of a large amount of existing, and mostly published data that was supposed to be representative of NH temperatures. And they stopped it at 1550 because, I presume, they believed that the amount of data before that did not provide a result representative of NH temperatures.

    Let me deal with the claim that they stopped because the results were “inconvenient”. That’s why I’ve been asking if people actually believe the results are true.

    What is the “inconvenience”? It shows a big dip going down to 1400. Bad for the MWP, maybe, but they haven’t been accused of promoting that. I think you folk overrate the dedication of scientists to maintaining some consensus view. A much stronger motive is to find something novel. And a 1C rise in the 15th C would certainly be novel. One could become famous.

    OK, maybe it’s inconvenient because it shows that the method is not as good as they claim. Well, that’s closer. It does show that the results are not reliable, but when the underlying dataset is, by their account, at that time dropping from 76 to 8, it’s not the method that is the first suspect, but the quantity of data supporting it. And they’re not hiding that.

    They didn’t publish that part of the curve because they did not believe the results were true or reliable. This is a very common situation. It’s not a dishonorable motivation.

  79. John M said

    What is the “inconvenience”? It shows a big dip going down to 1400. Bad for the MWP, maybe, but they haven’t been accused of promoting that.

    The MWP is generally considered to have ended around 1250. Why would that be “bad for the MWP”?

  80. Kenneth Fritsch said

    “They didn’t publish that part of the curve because they did not believe the results were true or reliable. This is a very common situation. It’s not a dishonorable motivation.”

    And that, my friend, is the crux of the matter. If scientist 1 does what you say, is there a scientist 2 who does the same, and perhaps scientists 3, 4 and more? It is not even a matter of whether a curve, or a part thereof, is realistic; it is rather that apparently the methods and data are not different enough to rationalize why the proxy “failed” in this case – and perhaps, with that type of thinking, in other cases. It is like the other end of the curve, where not many are going to think that the proxy responded to the actual temperature (instrumental in that case) but rather that the proxy is not reliable. Why is that part so difficult for some to apparently comprehend?

  81. Bruce said

    “It does show that the results are not reliable”

    Bingo.

    The results post-1960 show that treemometers were never proxies for thermometers, since there were actual real thermometers to compare the results to. They could try to con people into thinking treemometers were valid pre-1960 by hiding the pre-1550 decline.

    But now we know conclusively that treemometers are not, and never have been proxies for temperature.

    But, I’m still asking the question.

    What happened in 1960?

  82. Carrick said

    Nick:

    They didn’t publish that part of the curve because they did not believe the results were true or reliable. This is a very common situation. It’s not a dishonorable motivation.

    This isn’t about religion (what you believe in).

    Back to the comment I made. When the data get ratty you don’t throw them out, you retain them and explain the problem with the data, and then you allow the reader to decide whether he agrees with your arguments.

    What is very clear from all of this is the proxy tree ring data are essentially complete crap, because it appears you can’t write a paper analyzing the data without twisting and manipulating the graph to make the data look much better than they really are.

    There’s really nothing left to argue about regarding this point. There is a clearly identified “right way” to handle this, and it wasn’t done that way. Not just once, but at least three different times by multiple sets of authors on multiple peer reviewed publications.

    I meant what I said earlier, I wouldn’t tolerate this level of BS from a grad student, and I have far higher expectations of experienced researchers. It is a bit amazing to me that you don’t think this just reeks of an unholy mixture of dishonesty, incompetence and poor scholarship.

  83. #82 “without twisting and manipulating the graph”
    Carrick, that’s absurd. They haven’t twisted or manipulated a graph. They simply stopped somewhere. Everyone does. You don’t see time series plotted back to the Big Bang.

    In your plot that you held up as an example to miscreants, you flagged two points within the range that you considered as outliers. I note as an aside that you did a linear regression without them. You didn’t show the regression that resulted if they were included, which is actually the same sin that B is accused of here in not showing the smooth.
    But that aside, you showed a limited FM rate section, from 0.8 to 1.8 Hz. What happens outside that range? Why didn’t you show it? Would it have deviated from your neat line?

    BTW I have no criticism of that – it’s just the universal problem. There’s a region in which you have, or can get, data good enough to test a model, and a region in which you haven’t or can’t. And you present what you have.

  84. #71
    “has been dropped without explanation – nothing to do with whether I think the fluctuations are real”
    LL, I actually would have liked to see more explanation too, although I think the context (a one page Science paper in which it has a small role) makes that understandable. But poor explanation is a common failing in this imperfect world. What we have here is a barrage of claims about fraud, ethical violations etc. That is something different, and could only be justified if you think B et al are covering up some truth.

  85. Jeff Id said

    Nick,

    It is absolutely fraudulent to clip the data in recent times and replace it with temp giving ABSOLUTELY ZERO EXPLANATION. Now keep in mind, this is the IPCC report, not the report of ten years previous that showed more of the data and didn’t replace it with temp. Wow!!

    You keep asking us if 8 is enough to satisfy us (a title for your next blog post?). That is NOT the correct question.

    Do you believe that it is even somewhat ethical to delete data from a collection and replace it with different data without disclosure?

    That is a question you need to answer. There are others.

    Can you locate the scientists reasons for deletion of proxy data sections (historic and recent) in publication? In the IPCC report?

    If a different group of proxies has a preferable signal allowing truncation of a different one, isn’t it true that the truncatable (is that a word) proxy signal has no provable validity?

    If this proxy set is invalid, what does that say about other proxies of the same or similar type?

    It is a sophist’s argument you make.

  86. Jeremy said March 28, 2011 at 10:10 am

    Nick Stokes said
    March 27, 2011 at 6:31 am

    They’re not presenting their data here. They are presenting a reconstruction. It’s an analysis process. How good is it? Well, you look to see if the results make sense. If not, see if something has gone wrong in the process. If it has, you can’t present the analysis.

    No, Nick. You’re missing the real point here, and this paragraph damns you rather badly.

    Well, Jeremy, I’m damned here rather frequently – in fact, I believe my reputation has been in decline ever since I started posting. But let me do my own breakdown of that.

    Well, you look to see if the results make sense.
    Don’t you?
    If not, see if something has gone wrong in the process.
    I guess this is the most “damning” – yes you should be checking anyway. But if the results look screwy, you check again. Don’t you?
    If it has, you can’t present the analysis
    Well, can you? Actually Carrick says yes, but how often do you see that? There’s a good reason – the referees won’t let you. Journals have better things to do with their space than fill it up with analyses that failed for lack of data.

  87. Layman Lurker said

    Nick, it is an ethical violation. Ethics trumps journal space limitations. You submit your work ethically, or you don’t submit.

  88. Jeff #85
    “It is absolutely fraudulent to clip the data in recent times and replace it with temp giving ABSOLUTELY ZERO EXPLANATION. Now keep in mind, this is the IPCC report”
    I thought we were talking about a paper in Science? But it’s not true that there’s no explanation. If you just go to the source they quote, you find a whole section on it. And elsewhere there are whole papers. They didn’t “replace” the temps – they added an instrumental curve with appropriate indication.

    So, on your questions:
    Do you believe that it is even somewhat ethical to delete data from a collection and replace it with different data without disclosure?
    Well it would be, but as I say, they didn’t do that.

    Can you locate the scientists reasons for deletion of proxy data sections (historic and recent) in publication? In the IPCC report?
    Well, this comes back to this strained use of “deletion”. As I say, everyone starts somewhere. Back to S09 and O10 – S09 chose the period 1957-2006. Now there’s data, in GHCN and elsewhere, before 1957. But it improved in 1957 with the IGY, and that’s a sensible place to start (as I keep saying, you have to start somewhere). Now O10 had many criticisms of S09, but I don’t recall anything said about “deletion” of pre-1957 data. And O10 used the same period (as they should).

    If a different group of proxies has a preferable signal allowing truncation of a different one, isn’t it true that the truncatable (is that a word) proxy signal has no provable validity?
    Sorry, I don’t understand that one. Could you give an example?

    If this proxy set is invalid, what does that say about other proxies of the same or similar type?
    The issue here isn’t the validity of the proxy sets. It’s just whether there are enough of them to give a representation of NH temperature trends. That requires both geographical coverage and enough data so that trends stand out against the noise. I think the latter is what failed in the 1400-1550 period here.

  89. Carrick said

    Nick, they cut off part of the results of their analysis, at the point where it went wonky. This is certainly a manipulation of the data and the graph. As to going back to the big bang, they didn’t analyze back to the big bang, so they don’t need to discuss it.

    In your plot that you held up as an example to miscreants, you flagged two points within the range that you considered as outliers. I note as an aside that you did a linear regression without them. You didn’t show the regression that resulted if they were included, which is actually the same sin that B is accused of here in not showing the smooth.

    Er no. First, the line isn’t a regression (then again I didn’t claim it was).

    And to regression… The only uncertainty analysis included in the paper found the normalized correlation coefficient rho = 0.982 (N=40, p < 5 x 10^-10)… and this included the circled points (sorry, that’s not my previous description, memory fades with time). It wasn’t necessary (or appropriate) to redact the data points that didn’t fall on the f_HB = f_FM line.

    Were there any issues, I would have made estimations of the uncertainty in f_FM and performed the analysis using a weighted OLS fit, in which case the outliers would have been properly down-weighted due to their greater uncertainty.

    But that aside, you showed a limited FM rate section, from 0.8 to 1.8 Hz. What happens outside that range? Why didn’t you show it? Would it have deviated from your neat line?

    The causative agent here is heart beat rate. If you go much higher than 1.8 Hz the person drops dead from a heart attack, much less than 0.8 they go into a coma.
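
    The weighted OLS approach Carrick describes can be sketched like this – invented data, with the standard 1/sigma² weighting in closed form, not his actual analysis:

```python
# Sketch of weighted least squares: points with larger measurement
# uncertainty get weight 1/sigma^2, so an outlier with an honest
# (large) error bar is properly down-weighted rather than redacted.
def wls(x, y, sigma):
    """Weighted least squares for y = a + b*x, weights 1/sigma^2
    (standard closed-form solution)."""
    w = [1.0 / s ** 2 for s in sigma]
    Sw  = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    d = Sw * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / d
    b = (Sw * Sxy - Sx * Sy) / d
    return a, b

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.0, 6.1, 8.0, 30.0]       # last point is wildly off the line
sigma = [0.1, 0.1, 0.1, 0.1, 10.0]   # ...but carries a huge error bar
a, b = wls(x, y, sigma)
# The slope stays near 2: the outlier is retained and shown, but its
# large uncertainty keeps it from dragging the fit upward, as an
# unweighted fit would let it do.
```

    The contrast with the practice under discussion: the suspect point stays in the plot and in the analysis, and the weighting (with its justification) is what the reader gets to judge.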

  90. LL #87,
    If you’re claiming that poor explanation (which is a feature of innumerable papers) is an ethical violation, then you are making the term meaningless.

  91. Mark T said

    It is an intellectual fraud’s argument he makes. No better than the liars he defends. Imagine, Jeff, such a “defense” in an engineering design review. You wouldn’t be fired, contrary to popular opinion, but you would earn a new title… engineering assistant.

    Mark

  92. Carrick
    “If you go much higher than 1.8 Hz the person drops dead from a heart attack, much less than 0.8 they go into a coma.”
    Mine goes less than that quite often. I’m not comatose (just blogging).

  93. Layman Lurker said

    #90

    Nick, where was the “poor explanation” in BO99?

  94. John M #79
    The MWP is generally considered to have ended around 1250. Why would that be “bad for the MWP”?

    Well, it’s in stark contrast to the Loehle/McCulloch version, for example.

  95. Carrick said

    Here in the US it is now becoming a requirement for people who receive federal funding from many agencies to take an online scientific ethics course.

    The problem we are seeing here is the damage done to the credibility of the science when people start playing games with their data, as in the cases here. It doesn’t just affect the reputation of the scientists involved; it affects everybody in the field too.

  96. Carrick said

    Nick:

    Mine goes less than that quite often. I’m not comatose (just blogging).

    0.8 Hz is 48 beats per minute. The operative word here was “much”. ;-)

    If you have a resting heart beat below 40 beats per minute (what I would call “much” lower than 0.8 Hz), you’d have to do a lot of jogging along with that blogging.

  97. Carrick said

    Nick:

    Well, it’s in stark contrast to the Loehle/McCulloch version, for example.

    Who’s to say which is right?

  98. kuhnkat said

    Nick,

    “Mine goes less than that quite often. I’m not comatose (just blogging).”

    That is a matter of opinion.

  99. Bruce said

    I firmly believe all AGW proponents are dishonest shills.

    All data to the contrary is defective and will not be displayed.

  100. Carrick #97
    “Who’s to say which is right?”
    No problem – it’s uncontested. Briffa is not saying there was a dip there. Steve’s plot shows one, but so far I haven’t found anyone prepared to say they think it’s true. They just pile on Briffa for not being prepared to assert it.

  101. LL, #93
    Reread the correspondence. It goes back to #71 “has been dropped without explanation – nothing to do with whether I think the fluctuations are real”. If there are good reasons to believe the fluctuations are real, that’s one thing. But if that doesn’t matter, it comes back to the allegation of lack of explanation. Which could be bad writing, but not an ethical violation.

  102. Carrick said

    Nick, you can call it piling on if you wish, but the problem isn’t in the answer to the question of which is true, but in the avoidance of the question by Briffa, Jones and others in that community to start with.

    If it’s not true (as is likely), it certainly implies something derogatory about the quality of the data. You might argue the early data should be dropped due to reason ___________ (fill in blank) and the later due to __________ (fill in blank), but the data should be shown and the reasons given.

    And as I said, the “beliefs” of the scientists don’t come into play here. You don’t get the privilege of dictating the conclusions simply because you took the data. The data must be fairly and honestly presented, warts and all. Post hoc decisions to redact should almost never be done, and certainly never without warning the reader that this has been done.

  103. Bart said

    Politicians shouldn’t legislate scientific findings, period. Nothing leftist or rightist about that, and scientists have every right to protest against such travesties.

    (my previous comment seems to have disappeared?)

  104. Jeff Id said

    Bart,

    I didn’t delete anything and nothing is in the spam bucket. Something went wrong.

    The protestation of the EPA endangerment findings is political. The EPA is an unelected branch and has been given power to enact its own laws and taxes by a bad rule. This happens ever more often in the US. It shields the elected from responsibility for the consequences. The continued allowance of the endangerment finding is most certainly leftist politics.

  105. kim said

    Nick’s proxy fails in
    Imitation of the mind.
    What does Briffa think?
    =============

  106. Frank K. said

    I couldn’t help noticing the tripe you cited from RC…

    “In so doing, it cited as an example the charade of a hearing conducted recently, including the Republicans’ disrespectful and ignorant attitude toward the science and scientists.”

    They want RESPECT? REALLY?? HAHAHAHAHAHAHA…

    Maybe they can get ol’ Ben Santer to beat up the congressmen they don’t like…[heh]

  107. kim said

    Or Travesty Trenberth to Annul the Null.
    =================

  108. Layman Lurker said

    Nick, just because Briffa didn’t think the oscillations were real does not excuse this. He snipped and misled readers into thinking that the chronology started in 1550. He took the easy way out. You are saying that authors have the green light to present information selectively because of their “belief” of being right. You’ve got to be kidding me.

  109. Carrick said

    LL:

    Nick, just because Briffa didn’t think the oscillations were real does not excuse this. He snipped and misled readers into thinking that the chronology started in 1550. He took the easy way out. You are saying that authors have the green light to present information selectively because of their “belief” of being right. You’ve got to be kidding me.

    Exactly my point above… this is apparently about religiously head beliefs not science.

    In science, the authors don’t dictate the conclusions from the data, the data do (the authors conclusions can similarly be ignored when they are at odds with their own data).

  110. Carrick said

    Make that “religiously held beliefs”.

  111. #49 Romanm,
    Re the number of sites – after a nudge from Steve, I downloaded the archives from UEA. I was able to plot the number for each year, which I showed in a post at Moyhu. By my count they were down to 41 in 1550, and still dropping rapidly.

  112. Jeff Id said

    Nick,

    I won’t have time to play today, but temperature data was not simply overlaid in the IPCC report. A bodge was applied instead.

    http://climateaudit.org/2009/11/20/mike%E2%80%99s-nature-trick/

    Now the same people have come up with multiple ways to hide the problems with this data over the years, and I agree with you that the Briffa paper was appropriately critical. Subsequent publications are less critical of the problems including the IPCC where it was more than simply clipping unacceptable data.

    Briffa is an odd character. He’ll put in ‘caveats’ in a paper which completely wreck any chance of conclusion, then make the conclusion anyway. In this Briffa paper, 1998, that was not the case.

    I think you have sufficiently confused what is an obvious issue. You cannot clip (and not reveal) data you don’t agree with without extended explanation. This example is as ugly as anything I’ve seen from ‘science’. Anyway, no time today. Maybe tonight.

  113. Kenneth Fritsch said

    Leaving aside what is ethical in cases like these, does not the thinking and rationalizing behind these actions of omission lead directly to casting more doubt on what other scientists might have done, and on what a number of omissions allowed to accumulate would mean for proxy reliability in general? If most climate scientists had come down hard on “hide the decline” we might well judge that such an accumulation of omissions is rather unlikely, but since that has not been the case, would I be correct in suspecting there might be an accumulation?

    I have seen some otherwise very intelligent people get rather fooled by stock-picking schemes, by ignoring what selectivity can do to the statistics of the matter – like forgetting about all the schemes that failed and were omitted early, and not realizing that the ones that survived could well have done so by mere chance, given the right number of starts.
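
    The survivorship effect Kenneth describes is easy to demonstrate with a toy simulation (the scheme and round counts below are arbitrary):

```python
# Sketch of survivorship bias: start enough random coin-flip
# "strategies" and a few will post perfect records by chance alone.
import random

def perfect_records(n_schemes=1000, n_rounds=10, seed=42):
    """Count schemes that 'called the market' correctly every round,
    when every call is just a fair coin flip."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n_schemes):
        if all(rng.random() < 0.5 for _ in range(n_rounds)):
            survivors += 1
    return survivors

# Expected count is n_schemes * 0.5**n_rounds, about 1 here: a single
# scheme can look brilliant for ten straight rounds with no skill at
# all -- provided you never see the 999 that quietly failed.
print(perfect_records())
```

    Judging only the survivors, with the failures omitted, is the same selection error as judging only the proxy sections that were retained.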

  114. stan said

    Bart (103)

    “Politicians shouldn’t legislate scientific findings, period. Nothing leftist or rightist about that, and scientists have every right to protest against such travesties.”

    Absolutely right. I’m sure you are even more convinced that unelected bureaucrats in the EPA should not make scientific decrees for blatantly political purposes. Especially when these decrees violate their procedural requirements.

  115. CG said

    Bart,

    Do you really think the EPA has the authority to regulate carbon dioxide? It is NOT harmful to human health in any direct way, and is not something that the legislation that created the EPA could have imagined would exist. This would be like the FDA, whose original mandate on the food side was to ensure quality and lack of infectious diseases, deciding that it had the authority to tax and regulate saturated fat because, in the long run, some scientists think that it might be harmful to people.

  116. CG said

    I forgot the point of my post: republicans shouldn’t have to use the legislature to take away authority that was never given to begin with, but it’s the easiest and fastest way to set things right. If democrats want to give the EPA that authority through later legislation, we’d still oppose it, but at least they will have done it honestly/legally.

  117. Craig Loehle said

    Nick Stokes is ok with dropping data that doesn’t “look right”. So let’s say we survey smokers from different states, and in some states overweight people live longer than normal weight people, so we drop those states from our data because it doesn’t “look right”; or in some cases patients died while on our study medication, so we drop those patients since that can’t be right. In one case I know of, a bird species under study, assumed to live only in forest, was observed via radio transmitter to be foraging in open fields, but since that “can’t be right” the grad student “fixed” the numbers so the birds never left the forest. Is that ok? Wow.

  118. Craig Loehle said

    Ooops, change “smokers” to overweight. Not enough coffee this am.

  119. Craig, #117
    Nick Stokes is ok with dropping data that doesn’t “look right”.
    No. Firstly, as I keep wearily saying, no data is being “dropped” here. They have computed an average of existing published data and the question is whether that average is representative of NH temperature.

    But I’ve never said you should drop that just because it doesn’t look right. You should certainly scrutinise it. And also check to see if you have enough data throughout the range. My LS temperature index program, TempLS, routinely produces a graph of the number of stations reporting in each year, and I include that with every post – I think it is very important in interpreting the results. And if you see unexpected wobbles, you check again to see if you have enough sites. That’s just common sense.

    But I doubt that they even went through that. They have published the diminishing data in the early stages and would have been well aware of it.

    I have shown an example of what happens when you’re running out of data. The pattern can be quite obvious.
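
    The per-year tally Nick says he plots routinely can be sketched like this (the core spans below are invented, not the actual archive):

```python
# Sketch: given each series' (first_year, last_year), tally how many
# series contribute to the average in any given year, and flag the
# years where replication is thin.
from collections import Counter

def series_per_year(spans):
    """spans: list of (start_year, end_year), inclusive."""
    counts = Counter()
    for start, end in spans:
        for yr in range(start, end + 1):
            counts[yr] += 1
    return counts

spans = [(1400, 1994), (1520, 1994), (1530, 1994),
         (1545, 1994), (1560, 1994), (1565, 1994)]
counts = series_per_year(spans)

# Flag years where the average rests on too few series to be
# representative (the threshold of 3 here is arbitrary).
thin = sorted(yr for yr, n in counts.items() if n < 3)
print(thin[0], thin[-1])   # the early, sparsely replicated stretch
```

    Publishing this count alongside the reconstruction is exactly what lets a reader judge for themselves where the average stops meaning much – rather than having the thin stretch silently clipped.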

  120. John F. Pittman said

    Nick, nobody is buying it. They drop data in the present because they don’t like it. They drop data in the past because they don’t like it. They drop series with too few points and replace them with data that has even fewer, because they don’t like it.

    The point is that you do not drop data you don’t like if you are a scientist. You have to explain. Instead they are dropping data without justification – explaining that they are doing it, but not establishing why it can be dropped. At that point all their science becomes speculation. A result is an explanation of the phenomena. They have claimed results. It is up to them to justify their claims.

    What has been shown is that they cannot justify their claimed results. It was their duty. Not mine, nor Jeff’s, nor Steve’s nor Craig’s. Theirs and theirs alone. And Nick, you cannot ex post facto make their claims for them. It is simply speculation on your part.

  121. RuhRoh said

    Nick;
    Your ‘example’, again here in reference to the Briffa/McI 50-year smoothed plots, isn’t smoothed. Are you vying to be anointed with your own named technique of deception?

    The hairsplit parsing is not novel (i.e., the current US administration criticising oil companies for ‘vast leases not tapped’, in response to criticism of the non-permitting of new wells).

    But maybe you can earn a moniker for Stokes Smokes, or ?
    Valiant efforts duly noted.

    RR

  122. Bruce said

    Nick, what happened to the treemometers in 1961 — other than them going in an inconvenient direction?

    Please explain why they were valid proxies in 1861 or 1761 but not in 1961, other than that there were actual thermometers around to prove treemometers were utterly horrible proxies for temperature.

  123. Bart said

    For those arguing that the EPA shouldn’t legislate CO2 emissions because it’s not their mandate to do so:

    I’d say argue your case based on what their mandate is or isn’t (as you do here). If that conclusion is robust there shouldn’t be any need to bring in your views of the science.

    What I object to is politicians providing a view of the science that is at serious odds with the bulk of scientific evidence, and using that view to bolster their argument against the EPA endangerment finding. Politicians shouldn’t legislate science.

  124. John F. Pittman said

    The finding of harm does give the EPA jurisdiction to regulate CO2. However, they were supposed to follow their guidelines in doing so, and it is claimed that they did not. They could lose in court or in Congress. This is what Mass. v. EPA was about: could the EPA regulate GHGs? The answer was that they “may.” Don’t confuse the EPA deciding not to regulate, due to lack of evidence, with a claimed mandate to regulate from the US Supreme Court. The court did no such thing.

    Next, Bart, you are wrong about what the legislatures have the power, authority, and right to do. The regulations become binding unless they are set aside. The President can do this, and the Congress can do this. Because they do not have to pay attention to the science per se, they pass laws that deal with policy. If the politicians decide that the effects of AGW are too uncertain to support good legislation at this time, then they should block the EPA. That is their job, not scientists’, nor the EPA’s. If they find that the EPA’s reliance on the IPCC is unacceptable, then their complaint about the science, even if correct, is also valid. If they, like myself, find that the IPCC’s attribution of danger to CO2 depends on the accuracy of paleoclimate reconstructions, as the chapter on attribution in AR4 stated, then yes, they can and should attack the “science” as being unacceptable for policy.

    I wish people actually understood just what the chapter on attribution said and claimed. It sure appears that they don’t read it.

  125. stan said

    There is no evidence that CO2 is harmful. None. There is a theory that it will lead to some warming. There is no evidence that the warming would be harmful. There is a lot of evidence that warming is beneficial.

    All the warmists have is a theory employing GCMs that can’t be properly used for predictions.
