Mannian Science

First, I would like to apologize to Dr. Eric Steig for the following clip in reference to his paper. I am fully aware that this weighting analysis wasn’t done prior to publication and don’t believe he personally had intent to be involved with a problematic result. However, the responses from Drs. Schmidt and Mann have been disingenuous and deserve a reply equal to their own efforts.

I referenced the link to this in my last post. The link was located thanks to a reader in the Tiljander thread on CA. Any smartassed (yet fantastically correct) uses here are my own doing and have no relationship to CA’s tone or blog.

Dear Dr. Schmidt and Dr. Mann,

[Monkey video clip: click to play]

Finest and most sincere regards, your humble servant,

Jeff

————-

Gavin:

The analysis you saw is simply a fishing expedition, an analysis of what the calculation is doing (fair enough), combined with an insinuation that the answer is somehow abnormal or suspicious (not ok).

Had they been honest or kept quiet, I wouldn’t do this but the replies are insanity. Keep in mind that I didn’t ask them to reply at all.

I’ve been accused regularly on RC of not being willing to have a reasonable discussion. If we cannot honestly discuss the incorrectness of an upside down thermometer (or even a proxy, as in Mann 08) in climatology, what does that say for the science? It is their own choice to address reasonable criticisms at RC in this fashion that has cost Gavin his credibility as a non-advocate.

The problem for them is threefold: First, they actually work in the field. Second, their cohorts can now see the upside down thermometers. Third, a decent high school student can figure out that upside down thermometers ain’t right.

—–

I hope, naively, that climatology learns a lesson from this and begins to present its own linear weightings in its publications. Think of how many papers would have been corrected or never published. Had the authors seen this beforehand, only a dishonest scientist would have submitted it for publication, and I don’t believe Dr. Steig would have agreed to it.

I also want to mention that I suspect RyanO’s efforts may well have corrected this negative weighting problem through the inclusion of more PCs. He may also have created a far superior reconstruction with appropriately constrained spatial weighting, one that doesn’t rely on the very noisy AVHRR satellite information for trend, only for linear surface station weighting. It may therefore do exactly what Dr. Steig intended from the beginning with this clever and interesting method.

I can’t wait to get some time to find out.

59 thoughts on “Mannian Science”

  1. Jeff,
    Maybe the way to convince them is to do a very simple hypothetical example where the actual values are known and where the reconstruction is shown to get worse and worse the more you ignore the upside down thermometers?

  2. Or rather I should say a few simple example reconstructions where they are shown to get worse and worse when you have more upside down thermometers or the more contribution the upside down thermometers have to the reconstruction.
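    A toy version of that hypothetical is easy to run. Nothing below is Steig’s data or method; it’s just a made-up “true” temperature series, ten noisy stations, and a weighted average in which k of the weights are flipped negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, n_stations = 600, 10

# made-up "true" continental series: a small trend plus weather noise
t = np.arange(n_months)
true_temp = 0.002 * t + rng.normal(0, 0.5, n_months)

# each station sees the true series plus its own measurement noise
stations = true_temp[:, None] + rng.normal(0, 0.3, (n_months, n_stations))

def recon_rmse(k_flipped):
    """RMSE of a weighted-average reconstruction with k weights negative."""
    w = np.ones(n_stations)
    w[:k_flipped] = -1.0
    w /= np.abs(w).sum()          # keep total absolute weight at 1
    recon = stations @ w
    return np.sqrt(np.mean((recon - true_temp) ** 2))

errors = [recon_rmse(k) for k in range(5)]
print([round(e, 3) for e in errors])
assert all(a < b for a, b in zip(errors, errors[1:]))  # error grows per flip
```

    With zero flips the average beats any single station; each “upside-down thermometer” subtracts signal instead of adding it, so the error climbs with every flipped station.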

  3. The analogy that comes to mind is trying to get the average height of Africans by sampling mostly Pygmies. Not sure how a few inverted Pygmies would come in, though.

  4. Having more PCs is much more crucial than some sort of a priori disgust over negative weighting. I completely see that the best prediction might be some weighting scheme that has negatives. Heck, ever hedged?

  5. I mean, heck LOWER cold loop temp in the reactor can be a predictor of higher centerline fuel plate temps. For instance, if the reactor is at power, this would indicate more steam demand and thus power level and thus a higher temp required at the centerline to drive adequate flux, with constant average coolant temp. When the reactor is shut down, of course lower cold loop temp is indicative of lower centerline temp.

    Heck, how about North and South hemispheres. Cold in one, means hot in the other.

    Or, for that matter, think about some tendency toward collinearity: if X1 = X2, then 2X1 - X2 = X1.
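    The X1 = X2 point can be made concrete with invented numbers (nothing from the actual reconstruction). With perfectly collinear predictors, the weight pairs (1, 0) and (2, -1) give identical predictions, so a fitting procedure has no basis to prefer the “sensible” pair:

```python
import numpy as np

x1 = np.linspace(0.0, 10.0, 100)
x2 = x1.copy()                     # a perfect duplicate "station"

pred_sane = 1.0 * x1 + 0.0 * x2    # weights (1, 0)
pred_odd = 2.0 * x1 - 1.0 * x2     # weights (2, -1): one weight negative

# identical output, so least squares cannot distinguish the two
assert np.allclose(pred_sane, pred_odd)

# with near (not exact) duplicates, the fitted pair becomes unstable
rng = np.random.default_rng(1)
x2_noisy = x1 + rng.normal(0, 0.01, 100)
y = x1 + rng.normal(0, 0.1, 100)
coef, *_ = np.linalg.lstsq(np.column_stack([x1, x2_noisy]), y, rcond=None)
print(coef)  # large offsetting coefficients, sometimes one negative
```

    The fitted coefficients roughly sum to one, but how that sum splits between the two near-duplicates is driven almost entirely by noise.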

  6. I told y’all from the beginning (read my RC comments) that issues with the low PCs, or other similar aspects of the specific algorithm, were more of a concern than this general disgust at not doing area-weighted averages. You HAVE EXTRA INFO with the overlap of stations and satellites that can be used to get a better guess than by just looking at stations on their own in the old period.

  7. TCO, please, dude, do some math before you comment. Your assertions are tiresome. Negative thermometers ARE shit. 100% shit. I shouldn’t even need to say it to make it so. If the math results in negative thermometers, then something is wrong with the math. In this case, it is easy to see what is wrong with the math: too few PCs are used to describe the covariance.

    Please try to understand what a negative weight means. It’s not some magic sum-of-modes math; it has nothing to do with reactivity. When doing this kind of calibration/extrapolation, a negative weight means that during the calibration period, the regression shows that temperatures at the surface station and temperatures measured by the satellite are inversely related. This is a completely unphysical result. It means the math is wrong.

  8. I think TCO is confusing a proper use of negative weighting, as a predictive variable, with an improper use of negative weighting as part of a credibility assignment. Correct me if I’m wrong, Ryan, but essentially the weights boil down to an assignment of credibility, which should be on a scale of 0 to 1. It is nonsensical to apply a negative credibility weight to something. It may be reasonable to have credibility assignments sum to less than 1, with the remaining complement assigned to a zero value (or some other reasonable value), if your information is lacking and you do not feel confident assigning full credibility to your observations.

    As a predictive variable, negative assignment may be reasonable if it can be shown that there are oscillation patterns that can be reasonably expected, so that a value that occurs at X should have some negative weight to predict the value at X+1.

    The fact that you can come up with some examples in mathematics where negative weighting makes sense does not imply that negative weighting must be legitimate in all cases. That’s against the rules of logic.

  9. Diatribe Guy: I think you and I are actually on the same side. I’m just saying that gold stocks for instance are negative beta.

  10. #9, It’s a weighted-average type calculation, with weights corresponding to each station’s contribution to the total trend. The station weights aren’t constrained by the math to sit between 0 and 1, or even to sum to one. They do their best to match each other as well as the satellite data.

    Something I’ve been considering is that an upslope in the satellite data will bias the weighting, preferentially weighting stations with steep temperature slopes. That sounds pretty Mannian, doesn’t it?

    There are forced anti-correlation patterns created by PCA. It finds the maximum-variance PC1 and removes it; PC2 then becomes the maximum top-to-bottom oscillation of the continent, and PC3 an orthogonal side-to-side oscillation. This almost guarantees some strong negative correlation. But aside from that, a negative value doesn’t just flip the local oscillations, it flips the trend as well, turning thermometer measurements into nonsensical rubbish in an effort to falsely match this high-frequency information with the wrong data.

    An inverted station is no longer temperature, of course, and the copying of it to the wrong side of the continent is an artifact of the algorithm doing the best squiggle matching it can. Even if this made the squiggle match look a hundred times better, it would still be nonsense.

    ————————-
    TCO’s comments are naturally going into the Cialis bucket; WordPress seems to sort them better than I do. I’m deciding whether I should let them out. It’s the same old rubbish and I don’t want him to take over this blog again.
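    The trend-flipping described in #10 can be sketched in a few lines (invented numbers, not the actual station or satellite data). A single made-up warming station is least-squares matched to a target series that happens to slope the other way; the best-fit weight comes out negative, and the weighted “station” now cools:

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(480)

station = 0.003 * months + rng.normal(0, 0.4, 480)   # warming station
target = -0.003 * months + rng.normal(0, 0.4, 480)   # target slopes down

# one-variable least-squares weight for the station against the target
w = station @ target / (station @ station)

trend_in = np.polyfit(months, station, 1)[0]         # positive slope in
trend_out = np.polyfit(months, w * station, 1)[0]    # negative slope out

assert w < 0 and trend_in > 0 and trend_out < 0
print(round(w, 3), round(trend_out, 5))
```

    The squiggle match against the target improves, but the station’s warming trend has been inverted along with everything else it measured.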

  11. I can’t address the statistics used by the AGW scientists, but I can say that I am fed up with the collective, condescending attitude that seems to prevail among them.

    Grant funded scientists have an obligation, in general, to provide anything to the public that the public wants to see. For the most part, none of us care about the details of scientific research because, for the most part, it doesn’t affect us directly.

    AGW researchers are demanding that the American public significantly change the way we all live. They need to provide every bit of data, methods, computer programs, etc., for public inspection and they should, further, politely and completely address all criticism of their work (and check the condescending attitudes at the door).

    Thanks Jeff, Steve, the Ryan’s and others, for forcing them to address questions.

  12. Heh. I for one am glad you aren’t switching to PC monotone… sometimes ya just gotta laugh at this stuff.

  13. #15, I thought that was PC. Hmm, maybe I don’t have it in me 🙂
    ————-
    TCO is cussing away at the cialis bucket. We’re all stupid and he’s a genius.

    Being cut from a discussion is crap; I hate it more than anyone. But TCO, you have to cut the swearing and stop repeating the same thing over and over, and we would all appreciate it if you used your mind to figure out what is happening rather than your mouth (fingers).

    I’ve never been forced to cut anyone like this…… WordPress spam filter is doing its job.

  14. TCO is cussing away at the cialis bucket. We’re all stupid and he’s a genius.

    Other than being purposely disruptive or having an unhealthy need for attention, why would somebody post at a site that they hold in such low regard? One always hopes, I guess, that such a person could contribute something substantial to the discussion at some time, and in the meantime you put up with his antics or ignore his rantings. TCO no longer posts at CA and CA is better for it. Maybe Steve M would have some offline advice on handling the situation – if that is not too personal.

    It’s your blog, Jeff, but I am voting TCO off the island – a second time.

  15. RE: #16, “TCO is cussing away at the cialis bucket. We’re all stupid and he’s a genius………..
    I’ve never been forced to cut anyone like this…… WordPress spam filter is doing its job.”

    LOL. I was wondering why his posts have been so tame. Silly me, I thought he had cleaned up his act. Oh, well.

  16. #17, I hate cutting posts. I already asked SteveM his thoughts and he wrote a fairly detailed email about it. It’s part of what’s going on now. I don’t mind the criticism and actually wish I could attract a better quality of it. It really hit home after TCO stopped commenting that there was nobody really left anymore.

    He chased people away, a TCO only blog is the last thing I want to waste my free time on. People like Carrick, yourself, Nic, Jeff, Ryan, Geoff, Laymen, Steve, Page, and a bunch of others I can’t list, need a little space to discuss things rationally. If TCO wants to play nice, he can act like a reasonable human.

    If Gavin or Mann want to play nice, they can comment too. I’ll give them all the space they want.

  17. If Gavin or Mann want to play nice, they can comment too. I’ll give them all the space they want.

    Being part of the scientific community, they’ll expect you to submit real papers.

    Why does Watts report that RyanO is hesitant to submit his earth-shaking results to peer review?

  18. If Gavin or Mann want to play nice, they can comment too. I’ll give them all the space they want.

    To be clear, why should working scientists bother playing your game?

    And why should they play nice? Professional science conferences are (speaking from experience in the CS world) anything but nice.

  19. And reading the top post fully (which I hadn’t done after being brought here by a link)

    You’re an ass.

    We knew that, though.

    An ignorant ass.

    When a denialist like TCO gets disgusted by your unscientific crap, don’t any of you wonder?

  20. #20, Perhaps you would care to explain my ignorance of thermometers in more detail. Shall we discuss Brownian motion in liquids, crystal diffusion effects on calibration, or thermocouple types, or should we simply move on to why you shouldn’t read them upside down?

  21. Dhogaza :

    Professional science conferences are (speaking from experience in the CS world) anything but nice.

    What are you talking about, troll?

    You mean you finished high school?

    Conference time is my favorite time of year. It’s actually better than Christmas.

  22. Jeff,
    dhogaza can go into the troll bin too!
    That is from someone who hates being deleted for no good reason!
    I hope you get the request on the inverse stations and have time to run a study.
    Tx you

  23. Funny, though, these are my exact thoughts on Gavin and Mike!
    “You’re an ass.

    An ignorant ass.”

    We knew that, though.
    WE? Is he two, three, or more personalities?

  24. Jeff,

    I’m an infrequent visitor, but I admire the fact that dialogue, such as it is, can still occur.
    I’ve read both TCO’s and dhogaza’s posts on other sites – and ad hominem doesn’t begin to describe what they choose to post on sites that are not in keeping with their personal views.

    Keep this kind of effort up. When all that can be thrown at you is unsubstantive, you know that you’ve not wasted your time. No matter what “they” say, everything adds to our collective understanding, and eventually, truth will out.

    As far as Ryan’s work and the others who have contributed, I would suggest peer review – knowing that the process is massively flawed – as an important goal post, since those who believe in CAGW seem to think it is. In other words, score a touchdown despite their efforts to block you from doing so.

    Hats off to you!

  25. Dhogaza pops in, whines, swears, calls names, and draws a line in the sand: “thou shalt be no more denialist than TCO”.

    Sort of like “thou shalt go no further right than McCain”. Don’t you just love people who issue commandments like gods?

  26. The maxim here for the sort of behavior exhibited by dhogaza and TCO is: “anonymity breeds contempt”.

  27. @#23: Don’t expect too much from a sixth-grade science troll. I asked him on another forum to refute Dr. Pielke Père’s (a “gone emeritus”, as he called him) article, which of course he’s fully incapable of doing. For the warmistas, types like Dhogaza defending them is a sign of the times; people are getting more and more desperate.

  28. All you need to do to 100% falsify the paper now is to show that Steig et al actually used negative weights in the reconstruction.

    That would put a nail in the coffin of the paper, would make a nice short submission to Nature that could not be refuted, and would put an end to the usage of this RegEM mathematical procedure, which is not a robust process and requires a lot more theoretical support and boundary-setting from the mathematical community before it should be used again.

  29. TCO,

    Gold stocks do not have negative betas. Low betas, yes (currently at around 0.5 – 0.7), but not negative. It would be very hard to find a non-structured investment with a negative beta.

  30. Compy:

    Makes sense. I got some conflicting info when I googled this and I don’t currently have Barra access. Walmart and cobblers are doing well right now…

    At least you get the principle, even if we would have to go to some bond or short or such to find negative beta.

  31. #34 – Jeff, can you explain figure 4 a bit more. I get that the pre-1982 difference is very close to zero – why does it jump up post 1982? I understand that this is due to the satellite data in some way – but couldn’t somebody claim that this lack of a match post-1982 invalidates your weights?

  32. Probably past this point by now, but #11…

    “#9, It’s a weighted average type calculation with weights corresponding to each stations contribution to the total trend. The stations weights aren’t constrained by the math to sit between 0 and 1 or even to sum to one. They do their best to match each other as well as the satellite data.”

    Understood. The fact that they aren’t mathematically or algorithmically constrained doesn’t necessarily mean that’s the best approach, even if it provides the best answer. I think that’s essentially what you are getting at, is it not? That the best answer was given by weightings that, once analyzed, include values that make no sense (i.e. upside-down thermometers). Numerous times in my own work I have had to provide my “best” answer as one that did not maximize or minimize whatever criterion I was using, because the optimum answer produced irrational values. This simply happens because of noise in the data, and it is much more likely the fewer points you are using. All I was saying was that I think the weights above can be seen as a credibility assignment. And not summing to 1 isn’t problematic, because implicit in that approach is a weighting against a zero value that brings the sum to one.

    I think we’re saying the same thing, but I’m probably thinking about it from the approach of an actuary and you’re thinking about it from the standpoint of an engineer. Engineers are probably smarter…

    “There are forced anti-correlation patterns created by PCA. It looks for the maximum variance PC1 and removes it, PC2 then becomes the maximum top to bottom oscillation of the continent. PC3 becomes an orthogonal side to side oscillation. This almost guarantees some strong negative correlation but aside from that, the negative value doesn’t just flip the local oscillations but the trend as well and it turns thermometer measurements into nonsensical rubbish in an effort to falsely match this high frequency information with the wrong data.”

    Exactly. I don’t pretend to understand the mechanics of this process, but this is nothing new in solving multivariate problems. Either constraints are in order to prevent this, or an after-the-fact application of judgment is in order to remove those distortions. Or, as you guys have done, you enhance the credibility of the algorithm or data in order to eliminate those distortions naturally by squeezing out the noise.

    “An inverted station is no longer temperature of course and the copying of it to the wrong side of the continent is an artifact of the algorithm doing the best squiggle matching it can. Even if this made the squiggle match look a hundred times better, it would still be nonsense.”

    This is exactly what I was trying to get at. I think we’re on the same page.

    The purpose of the rest of my comment was to point out that TCO’s error is to take a case – predictive modeling – where negative weights may make sense and falsely assume that the same application can be made here. It’s apples and oranges, and despite his comment on #10, we are not on the same side. The beta on gold is negative because that is how it naturally correlates to market movements. A negative beta is not intuitively or inherently nonsensical. But beta measures a relationship of one price versus another price. This is not remotely comparable to this study. Instead, it would be as if you wanted to estimate the average price of gold by averaging across different brokerage houses. You decide to weight each house according to volume of sales. After all is said and done, you weight three brokerages’ volume with a negative number, because you just think it gives a better answer against some other metric that you are trying to be in line with. That approach would be nonsense regardless of whether or not gold has a negative beta against stocks.

  33. #35, After rereading I can see why that’s confusing. Figure 2 is the original recon by Steig: pre-1982 is thermometer data, post-1982 is the PCA of satellite data. There is no thermometer data post-1982. This was Kenneth F’s point when he first realized what that meant: they’re not the same thing. We’ve pasted air temps onto satellite skin (read dirt/ice) temperatures.

    So in the reconstruction we see a linear combination of surface station air temps pre-1982 and satellite temps which fluctuate similarly to the surface station air temps post 1982.

    What I’ve shown is that with these weights multiplied through the surface stations you get the same answer as Steig et al. Figure 3 is surface station air temps only: I used the weights I calculated, times the surface stations, to reveal the air temp data never shown (the post-1982 period) in Steig et al.

  34. #39: I think the “Jeff pie chart” shows the weighting of the stations in the overall trend. So it is not just a PC data-reduction thingie, but what comes out after stuff is fed through the algorithm. And that algorithm has a predictive function. Basically, for the various areas in Antarctica, you see which combination of stations gives you the best guess of what a satellite would have shown if they had been flying in the olden days. There is some PC stuff in the middle. But the key issue, as McI and Jeff correctly note, is that essentially your output is a compilation of stations, just using some math to show the weightings. This is a multiple-predictor type situation. Jonathan will back me up as well…

  35. #36, I believe you’ve got what Ryan and I are saying exactly. Also, SteveM and RossM’s reply on mann 08 to PNAS makes the same point.

  36. #38 – Jeff, Ok, so figure 4 should really stop at 1982 – because after that it’s apples and oranges.

    I think I am finally coming close to understanding the Steig methodology. Take geographically sparse post-1982 station data and compare it to satellite data that covers the entire continent for the same period of time. This produces weightings of the post-1982 station data that create a “satellite-like” temperature series, derived from the station data only, but covering the entire continent. We then apply those weightings to the pre-1982 data to project the “satellite-like” temperature series back into the pre-satellite era.

    Is that close?
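    That calibrate-then-backcast idea can be sketched with synthetic numbers (this is the general concept, not Steig’s actual RegEM code): fit station weights against the satellite series over the overlap period, then apply those same weights to the earlier station data:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pre, n_post, n_st = 300, 300, 5      # pre/post "1982" months, stations

# synthetic station anomalies spanning the whole record
stations = rng.normal(size=(n_pre + n_post, n_st))

# a continent-wide "satellite" series exists only in the later period;
# here it is secretly a fixed mix of the stations plus retrieval noise
true_w = np.array([0.4, 0.3, 0.2, 0.05, 0.05])
sat = stations[n_pre:] @ true_w + rng.normal(0, 0.1, n_post)

# calibration: least-squares station weights against the satellite series
w, *_ = np.linalg.lstsq(stations[n_pre:], sat, rcond=None)

# backcast: apply the calibrated weights to the pre-satellite station data
recon_pre = stations[:n_pre] @ w

assert np.allclose(w, true_w, atol=0.05)   # weights recovered from overlap
print(np.round(w, 2))
```

    In this clean toy the fitted weights recover the true mix; with short overlaps, noisy targets, and correlated stations, the same fit can hand a station a negative weight.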

  37. #41 Yes. It should stop at 1982.

    It is theoretically possible to use Steig’s methodology to extrapolate the satellite data backwards to 1957. Their methodology does not do this, however.

    Remember that the PCs are just a series of coefficients. Attached to the PCs is a map. The coefficient is multiplied by each point on the map to get temperatures. So the PCs are not temperatures – they’re an abstract quantity representing how temperatures get distributed across the map.

    The tool used to “extrapolate” – RegEM – works by minimizing the squared error for every quantity. That’s where the problem comes in. An error in a thermometer does not mean the same thing as an error in a PC. A thermometer error is simply a temperature at a specific point on the map at a specific time. An error in a PC propagates throughout the entire map – and is only a “partial” temperature. RegEM doesn’t know the difference.

    So what ends up happening is that the pre-1982 reconstruction is dominated by station data. There’s a lot more station data than PCs, so the errors in the stations drive the output of RegEM. It follows the stations, not the PCs.

    This means in the pre-1982 portion of the reconstruction, the result “bends” the PCs to fit the stations. This is perfectly okay – as long as you don’t tack the raw satellite PCs which have no station data onto the end of it. That causes a discontinuity at 1982, where the splice occurs. This is one – among many – problems with the Steig reconstruction.

    So Steig really has 2 choices: Do an extrapolation of the satellite data (and it’s not clear how it’s possible to weight the PCs to truly make this an extrapolation) or use the stations as anchor points and use the PCs to fill in the empty space between stations based on the station temperature. It is very clear mathematically how to do the latter. In fact, Steig already did the latter. All he would have had to do is throw away the satellite PCs and extend the ground station-based reconstruction all the way to 2006.

    Dhogaza: Why do you take Michael Mann’s word for what I say I am going to do? Do you have any idea how long it takes to go from what we have to a paper? You can bet your ass that this will become a paper. If you want to find out whether I am “hesitant” about peer review, I would recommend asking me rather than RC. Usually helps to go to the source.
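    The distinction drawn above between a station error and a PC error can be shown directly (array sizes and values are invented). Temperatures are rebuilt as PC coefficients times fixed spatial maps, so bumping one PC at one time step perturbs every grid cell at once:

```python
import numpy as np

rng = np.random.default_rng(3)
n_time, n_grid, n_pcs = 120, 50, 3

eofs = rng.normal(size=(n_pcs, n_grid))   # fixed spatial map per PC
pcs = rng.normal(size=(n_time, n_pcs))    # coefficient per PC per time step

temps = pcs @ eofs                        # (time, grid) temperatures

# a unit error in one PC at one time step...
bumped = pcs.copy()
bumped[0, 0] += 1.0
delta = bumped @ eofs - temps

# ...spreads across the whole map at that time step, while other time
# steps are untouched; a thermometer error would hit exactly one cell
assert np.count_nonzero(delta[0]) == n_grid
assert np.allclose(delta[1:], 0.0)
print(delta.shape)
```

    A least-squares fitter that treats PC errors and station errors as interchangeable residuals has no idea that one is a point error and the other is a map-wide error.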

  38. Ryan, it’s good that you will write a paper. It’s bad that Jeff Id talks about final straws (we had a similar final-straw talk months ago) when you haven’t even gotten things organized.

    ========REPLY
    TCO, You are making the same argument over and over, with statements like “read multiple regression,” “publish,” and “Jeff Id is a moron.”

    I don’t have time to read the book you wrote right now. I will try for later and consider approving it if there is a different point.

  39. #39 – I understand your point. It’s clearer to me now what your argument is. And I can even understand a bit more, as we go along, how an argument might be made for why it might be considered a reasonable approach.

    But I still think it’s a problematic application, because it doesn’t make any practical sense.

    The predictive modeling I’m talking about involves oscillation patterns. It says “X happened at time Y, and we can expect some negative factor of X to contribute to the result at time Z.” This can make perfect sense. In this case, however, it appears to be a simple case of a calculation that produces the optimum result but makes no particular physical sense. No reasonable person would consider any station’s temperature measure to contribute negatively to overall average temperatures. Thus, if this is the weighting that best back-fits the data, then it speaks to noise (problems) in the data, or a flawed methodology, or both. “Predictive” or not, there is still a final reasonableness check for sensibility that must be made.

  40. TCO: I don’t have a problem with Jeff talking about final straws because the math is right. Whether it gets into publishable form doesn’t have anything to do with the accuracy of the math. 😉

  41. #34 Jeff, I’m not sure using satellite skin temperature is a good idea, since NASA says it is only accurate to 3 degrees K and has trouble with cloud cover. See my attached Q&A with NASA.

    1) How are “surface temperatures” determined for forests and other areas where the land is covered?
    2) What is the accuracy of land surface measurements (+/- degrees C)?
    3) What is the effect of cloud cover on accuracy?
    Thank you, Gary

    Our Response:
    ———————————————————————-

    Thank you for your interest in the AIRS products.

    1) Surface Skin Temperature is the specific AIRS product. It is
    determined by the combined retrieval algorithm which determines the
    cloud-cleared radiance (brightness temperature) and the surface
    emissivity. Dividing the first by the second yields the physical skin
    temperature, which may be ground (if bare surface), ocean skin
    temperature (not to be confused with bulk temperature), or forest
    canopy skin temperature.

    2) Land surface temperature is problematical, since the emissivity of
    bare earth will vary greatly over the 50 km diameter spot in which our
    retrieval is made. Our estimated uncertainty at present is 2->3 K.

    3) We have found no correlation with fraction of cloud cover, beyond
    our retrieval yield dropping when it reaches about 80%. Low stratus
    clouds are problematical, as we cannot discriminate between a field
    covered 100% by low stratus and a clear field. The temperature of the
    cloud tops of low stratus is close to that which would be encountered
    on the surface.

  42. Ryan: It’s a bit of work to write a good paper. I’ve done several. I think a day to get a decent draft and a couple days to polish it perfect (and I take a “QDR” type attitude to submissions that few do…) But much more work would be organizing your thoughts and verifying your claims. In several cases, you and Jeff make broad theoretical statements (area weighting is better, no negative weightings, etc.) but you can’t cite literature to back it up.

    Yeah… it’s a bit of work. But if you want to be taken seriously, it’s necessary. And certainly having people bother reading and responding to some sort of evolving amateur mishmash is far more time-wasting than the time to write things up properly.

  43. RE47, Gary.

    I can add that others think near surface temperature by satellite measurement has accuracy issues. I had a similar set of caveats sent to me today from Dr. Roy Spencer regarding using AMSR channel 4 (surface) data.

    RE49, Yes, but you’d spend more than a day filtering the language, so what is the net gain?

  44. Ryan O,

    Dhogaza actually said that Watts thought you were hesitant to publish. This claim did not originate from Mann as you claim. Either way, third-party hearsay is irrelevant.

    I’d personally like to see this published. Have you thought any about the form you’d like it in? You could, for instance, just make a paper criticising the methods of Steig et al., but I think you’d gain more traction, scientifically, and therefore improve the chances of its publication in a high profile journal, if you presented your own analysis and tried to contextualise it relative to Steig’s work (showing its advantages) and to models (showing discrepancies or agreements). I am aware of some other studies looking at this area coming along the pipeline in the next few months, so you have some competition!

    To Gary,

    You’ve been talking to folks at NASA working on AIRS, which is on board Aqua. AIRS has a much larger footprint than AVHRR, so it isn’t very suitable for this kind of analysis. The errors you quote for skin temperature are specific to AIRS. AVHRR has its own unique error characterisation. Also, AIRS and AVHRR deal with clouds in different ways: AVHRR is sensitive to the presence or absence of clouds, while AIRS, rather weirdly, reports that it isn’t. Odd. To summarise, the performance and retrieval characteristics of AIRS are irrelevant to AVHRR.

  45. Jeff, Ryan O

    It has been quite interesting reading about the work you guys have been doing. It seems to me that your points are valid if not earth shattering.

    Everyone is setting up the peer-reviewed-paper straw man. Realistically, if neither one of you guys is a university type, or at least a PhD, I’m not sure what journal is going to take this seriously no matter how good your math is. In the end, from the view of a journal, you show less warming than Steig et al., and some things that are awkward about their methodology.

    If you can get a published mathematics/statistics guy/gal on board then it seems to me that you might have the possibility of publishing a methodology paper that uses Steig et al. as a counter example.

    Alternatively, write up something short and submit it to Nature as a response. It’s probably a long shot to get it published, but a lot less time than writing an original paper.

    None of this invalidates your work, but you have to choose your ground, and the blogosphere is probably the right place for this.

  46. Nicholas

    Why would you say the blogosphere is the appropriate place for this? The appropriate place for any science is the Journals.
    If you don’t publish, you perish.

    It may be useful for Ryan O and Jeff ID to approach a university and get them to comment on the work and suggest where to publish, but not publishing should not be an option.

    To be taken seriously science needs to be published, that’s where the debate takes place.

  47. Blog posts tend to be lazy and sloppy: in logic, organization, referencing the literature, graph labeling, soundness of arguments, consistent terminology, etc. Also they are ephemeral – where is Chefen’s work now? In addition, they are subject to change and editing without document control.
    =========

    REPLY: You try to do these calculations somewhere online and see if you are perfect. If you have the guts to be wrong once in a while, you can shrug it off and move on. If you’re trying to prove you’re the smartest and most accurate guy in the world, blogging isn’t the best way to do it.

    I grow weary of the constant criticism again, TCO. I don’t need to spend my time fishing insults, from a person who does not understand the content he criticizes, out of the viagra spam bucket. Add something constructive.

  48. The point is NOT to say “ha ha, you’re not perfect.” The point is that KNOWING how easy it is to be imperfect makes a system which drives better work product more valuable. IN ADDITION, when you confound having controversial, revisionist views with not even being able to state them properly, you do everyone a disservice. If anything, you should try to be even more perfect on the simple things, so you don’t get gigged for them, and also so that all (proponents, opponents, and those in the middle) can clearly engage on the issues of substance.

    I don’t care if you are tired or if the comments upset you. They are content-filled. Go snip some of the mindless ataboys, if you want to clean things up.


    TCO: I’m not actively snipping you, I’m reading your comments and approving them out of the cialis bucket. I can turn the bucket off but then you’d go back to swearing and putting in a hundred comments an hour. WordPress considers you spam, I think it must learn when I snip the cussing.

    I’m not interested in letting your half educated remarks take over my work here. You don’t understand what you are saying and those who get the math can see it. You aren’t bad but really you need to study more.

  49. There are some posts in the other thread that are old and need to be let in, about halfway through the thread. Just because I don’t know the details of some computer program or stats thingie doesn’t mean that I don’t know more about science publishing than you, or even that I don’t have more logical thinking processes.

  50. #56 If I chopped something worthwhile, I’m sorry …. ish.
    It’s become important that you don’t take over this blog. People like Jonathan Baxter AND myself need room to discuss these points with reason. You are welcome to comment, but you don’t get the math or the meaning.

    I let you comment freely in the past because you are clearly a smart guy, there is no question. As far as trolls go, you are the best. However (of course there is a however), you do not admit when you are wrong and you don’t listen to the counter-arguments. When the arguments show you the clear detail, you don’t have the background to understand it. In my opinion, background is not required — only study, and study REQUIRES PRACTICE. This is not a small point.

    TCO, you are unqualified in this arena, not due to a lack of innate intelligence but rather a lack of experience or willingness to spend the time. — You need your pushups, TCO, if you want to hang.

  51. Why do you take Michael Mann’s word for what I say I am going to do? Do you have any idea how long it takes to go from what we have to a paper?

    Yes, I do, and no, I didn’t “take Michael Mann’s word for it”, because I have no idea if he’s said that or not.

    However, Watts said that he’d encouraged you to submit for publication but that you were “reluctant” to do so, over on his blog.

    Perhaps you meant that you were reluctant to do so *prematurely*, and that he didn’t understand your meaning.
