The Reviews

Ryan O’Donnell

The most pervasive theme in the “Doing It Ourselves” thread was what the reviews said. Rather than respond individually, I thought it would be best [read: less work for me!] to do a post on it.
This post will focus on the comments that required changes to the manuscript. With one exception, I will not spend any time on the comments that we addressed without substantive changes. There are a number of reasons for this. The biggest is that several of those comments were motivated by misunderstandings and required only wording changes for clarity. Another is that comments we addressed without changes have no bearing on the science that will be published.
As I mentioned before, there is one exception that I would like to bring up, because it makes a salient point about how important wording can be in a paper. Quoting from S09:

In this Letter, we use statistical climate-field-reconstruction techniques to obtain a 50-year-long, spatially complete estimate of monthly Antarctic temperature anomalies. In essence, we use the spatial covariance structure of the surface temperature field to guide interpolation of the sparse but reliable 50-year-long records of 2-m temperature from occupied weather stations.

Now . . . what does this mean?
Depending on how much you know about the S09 method, this could be interpreted in a number of ways. If you know quite a bit, it could be read as combining infilled station data with the AVHRR spatial eigenvectors. This would be the mathematically correct interpretation (well, almost correct). If you know less, it might be read as using the AVHRR spatial structure to help predict the ground data. This would not be a mathematically correct interpretation. During the review process, one reviewer took the latter interpretation and generated this comment:

. . . S09’s methodology is less sensitive to errors in the ground stations than is RO10’s because the former uses information from the AVHRR data when infilling the ground stations, while RO10’s does not. This does not necessarily mean that S09’s methodology is superior: RO10 provides a sound argument as to why the use of the AVHRR data to help infill the ground-station data may possibly be problematic . . .

This is easily shown to be the improper interpretation. Our response:

The claim by the reviewer that S09’s methods are less sensitive to the quality of the ground data is not accurate, and the reasoning given is also inaccurate. In S09, the number of ground stations (42) overwhelms the contribution from the PCs (3). One can test this by splitting S09 into 2 steps: first infill the ground stations, and then add the PCs. If this is done, the following results are obtained:

                 West Antarctica   Peninsula       East Antarctica   Continent
Original S09     0.20 +/- 0.09     0.13 +/- 0.05   0.10 +/- 0.10     0.12 +/- 0.09
2-Step S09       0.19 +/- 0.09     0.13 +/- 0.05   0.10 +/- 0.10     0.12 +/- 0.09

As you can see, the results are virtually identical.
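(A quick aside on what the “mathematically correct” interpretation actually says: the AVHRR spatial eigenvectors are held fixed, and the reconstruction is simply those eigenvectors multiplied by the PCs that RegEM infills jointly with the 42 station records. In my notation – not S09’s – with u_k(x) the k-th AVHRR spatial eigenvector and p_k(t) the k-th infilled PC, that is roughly

    \hat{T}(x,t) = \sum_{k=1}^{3} u_k(x)\,\hat{p}_k(t)

which is consistent with the point above that the 42 station records, not the 3 PCs, dominate the infilling step.)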
The primary point from all of this is that clarity matters a great deal. Note that S09 do not claim in the paper that using the PCs to help predict missing ground station data makes the reconstruction less sensitive to errors in the ground station data. However, because the description of the method is not entirely clear, the statement that does appear in S09 can be misread as a claim of improved performance.
Clarity matters.

Okay, with that finished, let’s move on to the review comments that elicited changes. One group of comments, which I will simply list out, consisted of minor editorial changes. These included (with similar comments, and issues repeated more than once in the text, combined):
1. Inconsistent mathematical notation (several spots)
2. Inconsistent citations (e.g., North, 1982 instead of the correct North et al., 1982)
3. Remove a reference to RealClimate
4. Remove discussions concerning GCMs (none of us agreed with this, but as it was not relevant to the main point of the paper, we yielded on this one)
5. Make the abstract consistent with the main text (e.g., note that statistically significant warming is found for the West Antarctic regional average in the abstract)
6. Explain briefly how our criticisms apply to all of the S09 reconstructions (TIR, AWS, and standard PCA)
7. State whether confidence intervals took into account a degrees-of-freedom reduction due to serial correlation of the residuals (a brief sketch of this adjustment appears just after the list)
8. Change the title to more appropriately reflect the scope of the paper (the original title was “Deconstructing the Steig et al. [2009] Antarctic Temperature Reconstruction”)
9. Move as much of the relevant information from the Supporting Information to the main text, and keep the Supporting Information to a manageable length (originally, the Supporting Information was double the length of the paper)
10. Specify whether the residual trend between the raw AVHRR data and ground data was statistically significant
11. Keep the same style (i.e., active or passive voice) throughout the paper, rather than switching styles between paragraphs
12. Clarify the description of the S09 method to prevent confusion
13. Clarify the section on using RegEM to attempt to calibrate unlike variables (several portions needed clarity)
14. Rewrite the section on the difference between the AVHRR eigenvector weighting and the weighting used in RegEM (all of the original 3 reviewers had similar comments concerning the difficulty they had understanding this section)
15. Clarify the explanation of how the S09 method geographically relocates the Peninsula trend
16. Clarify the equations used to generate summary statistics listed in the tables
17. Fix several incorrect table reference numbers (e.g., “Table 5” where “Table 6” was meant)
18. Add more substantial justification for the choice of regularization parameter in the RLS method
19. Add a table showing seasonal trend results
20. Changes to figures:
a. Move the replication of S09 figure to the Supporting Information
b. Move the area definitions from the SI to the main text, and add station locations
c. Include a figure visually demonstrating the claim that “variance loss in our reconstructions is small”
d. Move the seasonal trend maps from the SI to the main text
e. Show boundaries of statistically significant trends on the trend maps
f. Include a figure showing statistically significant differences in trend between RO10 and S09 to visually demonstrate the claims of differences in the text
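Regarding item 7: the usual adjustment reduces the effective number of independent samples according to the lag-1 autocorrelation of the regression residuals. Here is a minimal sketch of one common version of that adjustment, in Python – my own illustration for readers unfamiliar with the idea, not the code used for the paper, and the exact correction used there may differ:

    import numpy as np
    from scipy import stats

    def trend_with_ar1_ci(y, dt=1.0, alpha=0.05):
        """Least-squares trend of y, with a confidence interval whose degrees
        of freedom are reduced for lag-1 autocorrelation of the residuals."""
        n = len(y)
        t = np.arange(n) * dt
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)

        # Lag-1 autocorrelation of the residuals (negative values ignored here)
        r1 = max(np.corrcoef(resid[:-1], resid[1:])[0, 1], 0.0)

        # Effective sample size: n_eff = n * (1 - r1) / (1 + r1)
        n_eff = n * (1.0 - r1) / (1.0 + r1)

        # Standard error of the slope computed with n_eff instead of n
        se = np.sqrt(np.sum(resid**2) / (n_eff - 2)) / np.sqrt(np.sum((t - t.mean())**2))
        half_width = stats.t.ppf(1.0 - alpha / 2.0, n_eff - 2) * se
        return slope, half_width

    # Toy usage: 50 years of monthly anomalies, trend reported per decade
    rng = np.random.default_rng(0)
    y = 0.001 * np.arange(600) + rng.normal(0, 0.5, 600)
    slope, hw = trend_with_ar1_ci(y, dt=1.0 / 12.0)    # dt in years
    print(f"trend = {10 * slope:.2f} +/- {10 * hw:.2f} per decade")

With strongly autocorrelated residuals the +/- interval widens considerably, which is presumably why the reviewers wanted the treatment stated explicitly.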
Quite a laundry list of changes . . . and remember, I combined a lot of them. These were the kinds of things that are indeed important, but that “blog reviews” generally do not catch. Peer review, on the other hand, did a good job of requiring that they be fixed.
So those were the minor changes. Now for the major ones!
MAJOR CHANGE #1
The most substantial change (which was due to the reviewer who generated the 88 pages of back-and-forth commentary) – and probably the most important (must give credit where credit is due!) – was for us to remove our primary reconstructions from the main text.
The initial version of the paper used reconstructions in which we infilled the missing ground station data using RegEM TTLS. We chose this route so that our reconstructions would be as close as possible to S09’s method. However, the results of TTLS infilling are strongly dependent on the truncation parameter used. To support our choice of truncation parameter, we spent a good deal of space in the paper explaining a rather comprehensive cross-validation method and several alternative methods that all arrived at the same result. This reduced the space available for actually discussing the results and added some confusion by referencing a number of different procedures.
During the review process, we provided data from RegEM Ridge showing that ridge regression gave the same general results, but with much improved verification statistics and superior reproduction of the low-frequency (LF) information in some key areas (such as the Peninsula and West Antarctica). The reviewer suggested that, since we believed the ridge regression results to be superior, these should be the ones shown in the main text.
This resulted in a major rewrite of the paper, and a complete re-do of all of the calculations, figures, and tables (I spent about 3 weeks on it). The entire “Results” section had to be thrown out and rewritten from scratch, the code required some major rewrites, much of the commentary about truncation parameters no longer applied, and the cross-validation method had changed.
I won’t go into detail here about the differences between RegEM TTLS and Ridge – you’ll have to wait for the published paper for that!
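That said, the generic difference between the two regularization strategies is easy to illustrate. Below is a toy comparison in Python – my own example, not the paper’s code, data, or algorithm, and with truncated-SVD regression standing in for the TTLS step. A hard truncation keeps k directions and discards the rest, so the answer changes in jumps as k changes; ridge shrinks every direction smoothly through a continuous penalty:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy ill-conditioned regression problem (a stand-in for the infilling
    # step; nothing here is the actual reconstruction)
    n, p = 100, 20
    X = rng.normal(size=(n, p)) @ np.diag(np.logspace(0, -3, p))
    beta_true = rng.normal(size=p)
    y = X @ beta_true + rng.normal(scale=0.1, size=n)

    def truncated_svd_fit(X, y, k):
        """Keep only the k leading singular values (hard cutoff, loosely
        analogous to choosing a TTLS truncation parameter)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:k].T @ (np.diag(1.0 / s[:k]) @ (U[:, :k].T @ y))

    def ridge_fit(X, y, lam):
        """Ridge regression: every direction is shrunk smoothly by s/(s^2 + lam)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

    for k in (3, 5, 10, 15):
        err = np.linalg.norm(truncated_svd_fit(X, y, k) - beta_true)
        print(f"truncation k = {k:2d}:     coefficient error {err:.3f}")
    for lam in (1e-4, 1e-3, 1e-2, 1e-1):
        err = np.linalg.norm(ridge_fit(X, y, lam) - beta_true)
        print(f"ridge lambda = {lam:.0e}: coefficient error {err:.3f}")

The point is not that one approach is always better – only that a truncated fit depends discretely on the choice of k, which is part of why defending a particular truncation parameter took so much space in the original draft.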
MAJOR CHANGE #2
The second substantial change was something we instituted on our own based on cross-validation concerns generated by the same reviewer. While all of us felt that these concerns were poorly justified, demonstrating this required a whole new set of cross-validation statistics. Since the new calculations provided stronger evidence that our concerns with the S09 method were valid, we removed the old cross-validation methods from the text and code and substituted the new.
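For readers unfamiliar with the jargon: “cross-validation statistics” here means withholding data (stations or time periods), reconstructing the withheld portion from what remains, and scoring the result. Two standard scores in this literature are the reduction of error (RE) and the coefficient of efficiency (CE). A minimal Python sketch of the generic definitions – the textbook formulas, offered for orientation rather than as a description of the exact set of statistics in the paper:

    import numpy as np

    def re_ce(obs_verif, recon_verif, calib_mean, verif_mean):
        """Standard verification skill scores for a withheld period.

        RE benchmarks the reconstruction against the calibration-period mean;
        CE benchmarks it against the verification-period mean (a tougher test).
        A score of 1 is perfect; 0 means no better than the benchmark mean."""
        sse = np.sum((obs_verif - recon_verif) ** 2)
        re = 1.0 - sse / np.sum((obs_verif - calib_mean) ** 2)
        ce = 1.0 - sse / np.sum((obs_verif - verif_mean) ** 2)
        return re, ce

    # Toy usage: first half of a record is calibration, second half verification
    rng = np.random.default_rng(2)
    truth = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.3, 200)
    recon = truth + rng.normal(0, 0.2, 200)      # an imperfect "reconstruction"
    calib, verif = truth[:100], truth[100:]
    re, ce = re_ce(verif, recon[100:], calib.mean(), verif.mean())
    print(f"RE = {re:.2f}, CE = {ce:.2f}")

Split the record the other way (late calibration, early verification) and average the scores, and you have the basic flavor of a split-period verification test.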
MAJOR CHANGE #3
Another rather important change – which was requested by 2 of the 3 initial reviewers – was to detail the effects of each of the proposed modifications. For this, I will quote excerpts from our review responses.
Effect of including additional satellite eigenvectors alone


In the full period, this variant captures some (but not all) of the features in the RO10 reconstructions. The features captured are the reduced warming in the Ross region as compared to S09 and better localization of the Peninsula trends. Absent is the prominent Ross, South Pole, and Weddell area cooling. Additionally, the continent-wide trend is much closer to S09 (0.010) than RO10 (0.05).

More significant differences are apparent in the subperiods. The 1957 – 1981 plot looks far closer to the equivalent S09 subperiod than the RO10 reconstructions. The Ross cooling is reduced, the pole is warming instead of cooling, and the strong warming in Victoria/Wilkes Land is absent. Given that the latter two features are in well-observed regions of the continent and match ground records, their absence is significant.

The 1982 – 2006 plot is also substantially different, as it is merely the truncated, but otherwise unaltered, AVHRR data. This is a crucial observation. If the regression coefficients were directly compatible with the AVHRR eigenvector weights, then using the modeled PCs could not greatly alter the patterns in this subperiod.

Effect of additional satellite eigenvectors + constraining the regression coefficients by the eigenvector weights (i.e., add a constraint that prohibits “negative thermometers”):


In the full period, [constraining by the AVHRR eigenvectors] provides patterns that are very similar to the RO10 reconstructions. Most of the essential features are captured, albeit with the Weddell and South Pole areas showing less cooling than RO10. While the spatial patterns are reasonably well represented, as noted in the response to the previous problem, this does not extend to the overall magnitude. This variant captures only 2/3 of the difference in the continental trends, leaving a substantial portion unaccounted for.
In the subperiods, the patterns remain significantly different from RO10. As Mod 3 only affects the 1982 – 2006 period, the 1957 – 1981 plot is unchanged and retains the same deficiencies noted in Variant 1. The 1982 – 2006 plot, on the other hand, looks substantially different from both Variant 1 and the RO10 reconstructions. It is clear that properly calibrating the PCs has a significant impact on the spatial distribution of trends. This confirms the statements in our text that the coefficients used to predict the PCs differ materially from the weights used to recover gridded estimates, and shows that the reviewer’s belief that use of the modeled PCs has little impact on the spatial patterns is not correct.
Furthermore, the 1982 – 2006 plot is missing all of the essential features of the RO10 reconstructions. It shows a visibly apparent loss of variance, displays a large cooling region in the Ross area, and is missing the Victoria/Wilkes Land and Weddell area cooling.

Effect of constraining the regression coefficients by the eigenvector weights, properly calibrating the PCs, and using only 5 eigenvectors:


With only 5 PCs but including [use of the properly calibrated PCs and physical weighting constraints], most of the essential spatial features of the RO10 reconstructions are present, both in the 1957 – 2006 period and in the subperiods. Though there is visually apparent variance loss between these reconstructions and RO10 – and the warming in Victoria Land near Cape Adare is significantly reduced in the 1957 – 1981 period – the overall pattern is close to the RO10 reconstructions.

It is clear that additional eigenvectors alone cannot account for the spatial differences between S09 and RO10. The same is true of the combination of additional eigenvectors and [use of the properly calibrated PCs]. Furthermore, the dependence on the number of retained eigenvectors is less than implied by the reviewer, as most of the essential features of RO10 are reproduced with as few as 5 retained eigenvectors.

Effect of five eigenvectors and properly calibrated PCs, but without the physical weighting constraints:

This variant demonstrates the significant impact of [the weighting constraints]. While most of the full period features are captured in this reconstruction, the subperiods are clearly different from both [the previous variant] and the RO10 reconstructions. In particular, without [weighting constraints], the 1957 – 1981 and 1982 – 2006 subperiods are virtually identical, with the exception that the latter displays muted trends.
To address the concern that the contribution of each modification is not documented, we have amended the text to include the table at the beginning of this discussion.
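An aside, since the phrase “negative thermometers” confuses people: an unconstrained regression can assign a negative weight to a predictor even when that predictor is positively correlated with the target, typically because subtracting it helps cancel a signal it shares with another predictor. The toy Python sketch below shows that generic effect and nothing more – it is not the constraint used in the paper (which works through the eigenvector weights) and not our code; nonnegative least squares is just the simplest stand-in for the idea:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n = 1000

    # Two independent regional "climate signals"
    A = rng.normal(size=n)      # signal present at the target location
    B = rng.normal(size=n)      # signal present at the stations but not the target

    target = A + 0.2 * rng.normal(size=n)   # series we want to reconstruct
    s1 = A + B                              # station 1: mixes both signals
    s2 = 0.3 * A + B                        # station 2: positively correlated with target

    print("corr(s1, target) =", round(np.corrcoef(s1, target)[0, 1], 2))
    print("corr(s2, target) =", round(np.corrcoef(s2, target)[0, 1], 2))

    X = np.column_stack([s1, s2])

    # Ordinary least squares: station 2 gets a large *negative* weight, because
    # subtracting it is the best way to cancel the shared B signal in station 1.
    w_ols, *_ = np.linalg.lstsq(X, target, rcond=None)

    # Nonnegative least squares: negative weights are simply not allowed.
    w_nnls, _ = nnls(X, target)

    print("OLS weights :", np.round(w_ols, 2))
    print("NNLS weights:", np.round(w_nnls, 2))

Whether such negative weights are legitimate has been argued at length on the blogs; the variant comparisons above are only about what the constraint does to the reconstructions.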

MAJOR CHANGE #4
The final major change – and one I was quite reluctant to make – was to remove the discussion of Chladni (i.e., standing wave) patterns in the eigenvectors. Since we could not spend much time developing this argument (most of the relevant material was only in the SI), the section did seem somewhat out of place, so we yielded on this issue and removed it. However, the discussion of Chladni patterns is of critical importance in general to valid methods of choosing truncation parameters.
We therefore intend to write a standalone paper dealing with this issue (work on that has not yet begun).
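For the curious: “Chladni patterns” refers to a well-known property of principal components – the leading eigenvectors of any smoothly decaying spatial covariance on a bounded domain come out as domain-wide standing waves (a uniform mode, then dipoles, then saddle-like shapes) whether or not anything physical is going on. A minimal Python illustration of the effect, generic and assuming nothing about our analysis:

    import numpy as np

    # A square grid of "gridcells" with no physics at all: just spatial
    # autocorrelation that decays smoothly with distance.
    m = 20
    xs, ys = np.meshgrid(np.arange(m), np.arange(m))
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = np.exp(-dist / 6.0)              # exponential spatial covariance

    # Eigenvectors (EOFs) of the covariance matrix, sorted by eigenvalue
    vals, vecs = np.linalg.eigh(cov)
    vals, vecs = vals[::-1], vecs[:, ::-1]

    # Print the sign pattern of the first few eigenvectors on a coarsened grid.
    # They appear as domain-wide standing waves: a monopole, then dipoles, then
    # saddle-like shapes -- the "Chladni" progression.
    for k in range(4):
        field = vecs[:, k].reshape(m, m)
        print(f"EOF {k + 1} ({100 * vals[k] / vals.sum():.0f}% of variance):")
        for row in field[::4, ::4]:
            print("  " + "".join("+" if v >= 0 else "-" for v in row))
        print()

The sign maps are a crude substitute for contour plots, but they are enough to see the monopole/dipole/saddle progression in plain text.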

THE END

(thankfully!)

Edit: Added from below

There are three ways in which the reviewers requested/proposed modifications:

1. Present a reasoned, substantiated argument that something we had said was incorrect, incomplete, or not sufficiently detailed.
2. Request that additional information be provided as it would be useful for readers.
3. Make unsubstantiated, hand-waving claims that something is wrong.

Most of the “laundry list” of minor items presented above fell into #1 or #2, and were what I would consider part of the “typical” review process. These primarily came from two of the initial reviewers and from the fourth reviewer, who was added later. The “MAJOR CHANGES” listed above all came as a result of comments that fell into category #3.

In the specific case of the effect of the modifications, one of the reviewers suggested that a table or discussion of the importance of each would be valuable to the reader. This, of course, would have been easy to comply with and would not have required an inordinate amount of discussion. Another reviewer – rather than asking what the effects of the individual modifications were – made several claims that were not substantiated in any way:

1. The difference in patterns of trends was due “almost entirely” to the use of more AVHRR eigenvectors
2. The difference in magnitude of trends was due to the use of the calibrated, modeled PCs

So rather than simply add a table and discussion concerning the effects of the modifications, I had to spend 5+ pages demonstrating that the hand-waving claims of the reviewer were incorrect. The reason I had to spend this time was that the reviewer used his claims to support yet another claim: that we had not shown that the Peninsula trends were geographically relocated by the S09 method, and that the reduction in the continental trend was due to an arbitrary choice concerning the PCs rather than to any mathematical requirement or objective criterion. This resulted in a great deal of extra work for us, an unnecessary delay, and an overly long response.

While the end result was that the paper was improved, a great deal of the work required to implement the improvement was, in my opinion, valueless.

The frustrating (and unnecessary) part of the review process was the sheer number of completely unsubstantiated claims that we ended up having to show were groundless. In my opinion, it is perfectly acceptable for a reviewer to request additional information or additional research to support the conclusions in a paper. What should not be acceptable is for the reviewer to force the authors to respond to arguments for which the reviewer presents no evidence that his claim is correct. The former merely requires the authors to perform value-added activities, while the latter requires the authors to perform a heap of extra, non value-added work to address unsubstantiated hypotheses. Just as authors are required to show objective evidence for their claims, so should reviewers, as the reviewers can affect whether a paper is published or rejected.

So while it is true that in the end the paper contains stronger evidence for our conclusions than it did in the beginning, I question whether the amount of effort required was justified. One can spend five years sanding and re-lacquering a table to make it “better” than a 3-day refinishing job, but when 99.9999999% of the people who see it can’t tell the difference and the ones who can don’t really care, was the extra 4.99 years of effort worth it?

Anyway, the biggest issue I had with the reviews was that one particular reviewer insisted on making claims – which we had to rebut – yet rarely provided evidence that these claims were true. In my mind, that is not how the process is supposed to work.

30 thoughts on “The Reviews”

  1. I don’t understand most of the math, so my comment is only worth so much. With that caveat, this post is a very nice exposition of this instance of peer review. In most respects, it worked in this case as it should. In saying this, I am disregarding the question of whether the amount of work that you put into review-spurred revisions was “worth it” — just noting that overall, you state that the paper was improved by the process in a number of ways. I’m also disregarding any possible issues about the 40-page reviewer.

    Thanks for sharing this.

  2. Two reactions.

    One, the comments and process do seem to have significantly improved the paper, made it more rigorous and more accessible.

    Two, is pre-publication review the place for doing that? I am not at all sure. The problem is that it all takes time, and in effect what is being held is a debate about the science, but it’s being held offline. So it can take a very long time, the result is tradeoffs and bargaining in order to get through it, but what is actually being done is what in previous eras would have been done post-publication and with a much larger audience.

    I can see that it has merits, not least it may protect the authors if the scrutiny is serious and constructive. But you can also see how it can turn into an obstacle race based on what is being said, phrased as some very subjective issues about ‘quality’.

  3. Ryan,

    Thanks for this interesting post.

    I was under the impression that the presence of standing wave patterns in eigenvectors was pretty well known (e.g., “Numerical Aspects of Deconvolution”, Per Christian Hansen, 2000); am I mistaken about this?

  4. If you are referring to applied mathematics / physics / signal processing, you are correct. If you are referring to climate science, apparently not. 😉

  5. One thing I neglected to point out in the post above. I have been doing lots of writing lately . . . and quantity always degrades quality! Haha. 😀

    There are three ways in which the reviewers requested/proposed modifications:

    1. Present a reasoned, substantiated argument that something we had said was incorrect, incomplete, or not sufficiently detailed.
    2. Request that additional information be provided as it would be useful for readers.
    3. Make unsubstantiated, hand-waving claims that something is wrong.

    Most of the “laundry list” of minor items presented above fell into #1 or #2, and were what I would consider part of the “typical” review process. These primarily came from two of the initial reviewers and from the fourth reviewer, who was added later. The “MAJOR CHANGES” listed above all came as a result of comments that fell into category #3.

    In the specific case of the effect of the modifications, one of the reviewers suggested that a table or discussion of the importance of each would be valuable to the reader. This, of course, would have been easy to comply with and would not have required an inordinate amount of discussion. Another reviewer – rather than asking what the effects of the individual modifications were – made several claims that were not substantiated in any way:

    1. The difference in patterns of trends was due “almost entirely” to the use of more AVHRR eigenvectors
    2. The difference in magnitude of trends was due to the use of the calibrated, modeled PCs

    So rather than simply add a table and discussion concerning the effects of the modifications, I had to spend 5+ pages demonstrating that the hand-waving claims of the reviewer were incorrect. The reason I had to spend this time was that the reviewer used his claims to support yet another claim: that we had not shown that the Peninsula trends were geographically relocated by the S09 method, and that the reduction in the continental trend was due to an arbitrary choice concerning the PCs rather than to any mathematical requirement or objective criterion. This resulted in a great deal of extra work for us, an unnecessary delay, and an overly long response.

    While the end result was that the paper was improved, a great deal of the work required to implement the improvement was, in my opinion, valueless.

    The frustrating (and unnecessary) part of the review process was the sheer number of completely unsubstantiated claims that we ended up having to show were groundless. In my opinion, it is perfectly acceptable for a reviewer to request additional information or additional research to support the conclusions in a paper. What should not be acceptable is for the reviewer to force the authors to respond to arguments for which the reviewer presents no evidence that his claim is correct. The former merely requires the authors to perform value-added activities, while the latter requires the authors to perform a heap of extra, non value-added work to address unsubstantiated hypotheses. Just as authors are required to show objective evidence for their claims, so should reviewers, as the reviewers can affect whether a paper is published or rejected.

    So while it is true that in the end the paper contains stronger evidence for our conclusions than it did in the beginning, I question whether the amount of effort required was justified. One can spend five years sanding and re-lacquering a table to make it “better” than a 3-day refinishing job, but when 99.9999999% of the people who see it can’t tell the difference and the ones who can don’t really care, was the extra 4.99 years of effort worth it?

    Anyway, the biggest issue I had with the reviews was that one particular reviewer insisted on making claims – which we had to rebut – yet rarely provided evidence that these claims were true. In my mind, that is not how the process is supposed to work.

  6. By the way, Jeff, if you think it would be better to add this as an “edit” to the original post, I would be okay with that. Up to you! 🙂

  7. PolyisTCOandbanned says:
    10 December 2010 at 2:06 PM
    1. Still waiting to read the paper. Looking at it, side by side, with Steig will be interesting. Comments below are “gut” reactions on the whole kerfuffle, from a blog-dispute-follower perspective, fwiw:

    2. From when I first understood what Steig had done, I thought it was a neat idea. Something definitely at least worth trying, and possibly something that would give us a better picture. The basic idea (I think) being to use the satellite period, when we have HUGE information about how different weather patterns appear on the continent, to allow us to infer past temps at “blank spots” in the continent, back in the day when there were just surface stations.

    3. It’s actually my LIKING of that cool concept that makes/made me worry about the approach of doing PCA and throwing away the stuff that does not go into the first three PCs. You lose the possibility of a lot of complicated patterns if you only have 3 PCs. Similar to mapping a complicated surface, to looking at a chemical compound, etc. 3 hybrid orbitals are better than 1, but may not be sufficient to understand some behaviors. For instance, let’s say there was some weather pattern where a spot on the continent would be warm when three other spots were warm, and two were cold (assume they are all far apart and somewhat independently variable). You could not follow that with the 3 PC approach. Of course, I don’t know that such complicated patterns exist. But if anything it would be nice if they did as it would give us more of a key to unlock a door…more of a “fingerprint”…to give us pattern detail of the past from this deft approach of Steig.

    4. I’m interested in the PCs and their geographic appearance. I’m not a math jock, but from the beginning the PCs have always sort of reminded me of orbitals for electrons (the pictures), or perhaps those drum patterns you get when you take Engineering Mathematics and (try to) learn about Bessel functions and the like. I’m actually interested to learn how much Eric or others have thought about this issue and if there is any point to pushing thinking about the analogies.

    5. I’ve always been a little worried about some of the implicit, partial, hidden (pick a caveat, this is my impression!) assumptions about PCA by “the team”. I guess mostly Mike, since he’s the one who is most mathematically sophisticated given the math-phys background. It seems like there is often an approach of assuming that PCs represent something physical (a factor), but of course sometimes they may and sometimes they may just be a mathematical abstraction. Also, that one can take the important first few, throw out the rest of them and actually sort of “clean” stuff up, almost like filtering noise out. It just makes me worry about the analogy to throwing away outliers versus doing a simple average or trend, with all the data. I think of PCA as being more suited to searching for factors, or perhaps data compression, or facial recognition, or operation of a Nate Lewis chemical nose. But not as a “filter”.

    6. From the beginning, I’ve been annoyed by the approach on The Air Vent. First the incredible huge amount of noise to signal. All the different blind alleys and trial approaches. The mixing in of all kinds of triumphalism along the way, before the argument had even been refined. Not only is it annoying to read because of all the riffraff and forum games, but it’s just LONG.

    6.5. A few examples. The kriging approach (did any of it make prime time?) Also the “negative thermometer” complaining. A quick think would show that mathematically an opposite signed predictor can happen in a data set. And even some examples of weather patterns, where one area being cold correlates to another being warm (hemispheres of the Earth, El Nino WESTPAC versus US SE, even the pictures of the jet stream reaching down the middle of the US and snaking around that we see on the evening news since at least the 70s; and note the last two could have somewhat of a time dynamic, that is more frequent than just the winter-summer simple example) make one think it might be possible in practice. And despite even several skeptics showing Jeff Id mathematically how he was wrong, he could not (or would not out of stubbornness) get it. That was a time when RyanO failed me too. Here, he never directly corrected himself with an “I was wrong, I take it back”; instead, after interminable internet-style repetitive debate and even writing to Jolliffe and getting a genuine PCA expert to grace their thread, Ryan came back with a “oh my view evolved during the thread, but I never really wanted to say so, you should figure it out”. A simple tiny point, but the real issue was the failure to understand a clarification and to admit it–not that it blew up their other concerns.

    7. I’m curious why the pre-print was not circulated as a white paper. Is this JOC policy? Reading the pre-print, vice the blogorhea would be a way more efficient way to evaluate the counter-team’s thoughts. And those guys have a ready “bandwidth” and site and all to post it on. For that matter, given that they post the ongoing analysis (in blog posts), why not post the more refined and better communication?

    I worry that this is a “tactical approach”. In other words, make a lot of blather-on forum posts, and then just submarine up papers without exposing them to criticism. For instance the recent MMH paper had some real (obvious) problems in defining standard deviation (even!) and in clearly discussing the hypothesis test. And in that case, it looks like a bunch of (unimportant to the key dispute) math was larded on to make it look fancy or new or distracting, rather than squarely focusing on the definitional and hypothesis test debate.

    8. There is also a LOT of whining about reviewer comments. Certainly, it might be possible for a reviewer to be unfair, but my general impression has been that authors whine more and are more often in the wrong in these kinds of disputes. I’ve always found being brutally clear (even to admitting dropping a sample on the floor) to assuage reviewers. It’s only when one tries to spin too hard, that they dig in. Also, have found that very clear writing and following the POSTED guidelines of a journal does a lot to get a paper through the wickets. Many authors could save themselves trouble if they just put a higher level of care (basically make the paper as perfect as you know how, imagine that the reviewers will fall dead and you need to get that paper perfect on your own) before submission, rather than dealing with all the back and forth letter writing. If you really do it right, you should be able to get a lot of papers accepted without revision.

    I have to take the reviewer whining with a grain of salt, on the most recent counter-team paper. Of course it COULD be justified. But how can we tell unless we see the initial paper, final paper, and perhaps the correspondence? Based on the blog posts I’ve seen, some previous McIntyre submissions, writing conference presentations the night before a meeting, etc., I can’t be confident that the initial draft was even smooth. Of course, NicL and Ryan may have done a better job and they seem to have had more of a hand in this one. But…I’m explaining my Bayesian betting basis…

    9. My general impressions of the authors: NicL have never seen anything bad out of the guy. I don’t know if he is a genious, but he seems more intellectually honest and definitely more moderate in tone. Ryan is pretty good also. I’ve had one or two times when he failed me, but he seems at least capable of perhaps doing work that would even more strongly prove a “team” hypothesis and then still publishing it. McIntyre is bright for sure (although a little brittle about it, the whole unexplained Oxbridge time and the “could have been doing econ with Samuelson” and the love of adulation from his forum hoi polloi) but you have to respect a guy in his 60s who actually codes and weeds through data and manipulates linear algebra and all (how many tenured profs do? It seems like many leave math for their younger days.) But he has been repeatedly dishonest in presentation. I don’t trust him to reveal an analysis that helps the other side…and he LOVES sealawyerly word games (I think he accusses others of pea and thimble so much, since that is his own wont) as opposed to real Feymnannian, pure scientist curiosity. And his presentation of ideas is so hard to follow, it’s incredible. Not only is it unsuitable for science, it would not get the job done in any corporation, in the military, etc. It’s just not clear discussion of analyses. Jeff Id is the least of them. Lacking the math chops of the rest and even more emotional and bombastic than McI (sort of halfway to Watts really). Probably be fun to drink brews with and slap high fives with if we were cheering for the same sports team, and his personality animates his blog and attracts viewers. But not a serious analyst. The “I’m an aerospace engineer” so will opine on contrails being missiles pretty much showed the guy’s blind spot, not just that he was wrong, but that he was overimpressed with himself. I think in some ways, a little knowledge is a dangerous thing and you have to at least progress to the point of knowing what you don’t know (ala Rumsfeld) to make real progress. I mean heck, I’ve SHOT Tomahawks from a sub off of SOCAL, but I don’t consider myself a photographic expert on contrails (although I do know that test shots get fighter pilot escorts ready to shoot down wayward missiles and are coordinated and planned to the second and not likely something that “happen on the wrong day”.)

    10. And of course, the treatment at CA/WTFIUWT/TAV seems to be continued blog gamesmanship and PR drum-beating. Is it a refutation or not? Where is a thoughtful post describing all the blind alleys and aspersions in the previous posts, and now clarifying which were right and which wrong? Where’s the paper? How can we believe the reviewer complaints? Etc.

    11. P.s. I don’t read TAV that much, but I did see that they wanted an attaboy from me. I am happy that they have produced a paper (so I can read that instead of blog posts). I would post there, but they submit my posts to more moderation than other commenters and I’m on strike because of it. But “good job, looking forward to reading it”.

    (cross posted at AMAC’s blog in case this does not make it through moderation)

    (broke down and posted here…Amac’s blog makes me chop up my remarks and then erases what I’ve written after the second post that gets through.)

  8. Reread my comments around the negative thermometer time and you will see SEVERAL times when I said the number of PCs and level of geographic detail was the most interesting and likely area. I’m fully capable of pushing on one point (and I was “right” and you “wrong” on negative thermometers) even while ceding other issues, and not just as a method of trying to make one side or the other win, but because this is a method of advancing understanding. This is just disaggregation. EOM and take care.

  9. Thanks for the nice psych review TCO, gotta love your ability to figure stuff out.

    Hey Ryan, I think you need to redo the paper. TCO, the genious (his spelling) is back to wanting inverted thermometers.

    “Effect of additional satellite eigenvectors + constraining the regression coefficients by the eigenvector weights (i.e., add a constraint that prohibits “negative thermometers”):”

    No problem I’m sure, you’ve redone it enough times anyway.

  10. I actually like TCO . . . it’s just I finally had to stop arguing the point. He’s tenacious, which is good, but unfortunately has difficulty disaggregating ( 😉 ) what he thinks is being said from what actually is being said.

  11. Reviews of the type “Make unsubstantiated, hand-waving claims that something is wrong” are often representative of what much of the (hostile) audience will think upon reading the paper. Sometimes it requires unnecessary work to deal with, but the hidden benefit can be increased clarity in the manuscript for the sake of the idiot readers who would otherwise jump to the wrong conclusions.

  12. Welcome back TCO, and thanks for putting all your points in one post.

    One question based on your point 6: Given that you don’t like the approach here, which blogs in your opinion do it right?

  13. @ #9

    Ryan O said
    December 10, 2010 at 12:20 pm

    …The frustrating (and unnecessary) part of the review process was the sheer number of completely unsubstantiated claims that we ended up having to show were groundless. In my opinion, it is perfectly acceptable for a reviewer to request additional information or additional research to support the conclusions in a paper. What should not be acceptable is for the reviewer to force the authors to respond to arguments for which the reviewer presents no evidence that his claim is correct. The former merely requires the authors to perform value-added activities, while the latter requires the authors to perform a heap of extra, non value-added work to address unsubstantiated hypotheses. Just as authors are required to show objective evidence for their claims, so should reviewers, as the reviewers can affect whether a paper is published or rejected.

    While I sympathize, and agree that review processes should not turn into stalling tactics as this one apparently did, I don’t see how one could make the argument that reviewers must be forced to back up all claims. It is the submitted paper that must present and back up its argument, not the reviewer. The burden of proof must always lie with the submitter(s), imo. To do otherwise is to introduce an even worse element of human nature, which is shifting the burden of proof to the “skeptic”. That is precisely what climate science purports to do to us, and it should be guarded against.

    THAT SAID, there should be a rational mechanism wherein a hostile reviewer is exposed and/or dealt with. The current system is obviously flawed in a way that protects cabals of gatekeepers, and doesn’t ferret out conflicts of interest. Climate science is definitely *NOT* the only area of science/engineering that this occurs in. I and people I know could tell you lots of stories of papers being delayed for strange reasons, then seeing other people’s papers with similar/better results come out at the same time or right after their own.

  14. Jeremy, there is a difference between a reviewer asserting that we did not properly substantiate something in our paper (which is valid) and a reviewer asserting something that neither appears in our paper nor in any other paper.

  15. Ryan, that’s a little different from what you said in what I quoted. Perhaps I just didn’t follow your story well enough to catch that. Apologies.

  16. Ryan O said

    I actually like TCO . . .

    must admit, me too, he is to the point.
    for some of the reasoning in his posts he makes sense to me (as giving a critical analysis), but he can at times reason too far.

  17. PolyisTCOandbanned,

    “I’m not a math jock, but from the beginning the PCs have always sort of reminded me of orbitals for electrons (the pictures), ”

    You do realise that electrons do not “orbit”, that it is an outdated conceptualization?

  18. TCO can be sloppy in his comments at times, but here I think “electron orbital” can be properly used, even though not in the ordinary sense. See this link. And I agree that some PC patterns are reminiscent of “orbitals”.

    http://en.wikipedia.org/wiki/Atomic_orbital

    It is always good to have TCO drop by (and quickly depart) and give us his psychological profiles of the main players on these blogs and do it sooo authoritatively – even if it might appear that it informs more of TCO’s state of mind.

  19. When reviewers make unsubstantiated claims, the editor should disallow the comment. In one paper, I was detecting periodic behaviors in the climate system. A reviewer simply said “one can’t do this type of analysis” – that’s it. No explanation as to why, or based on what reference. The editor rejected. Wow. One also finds angry comments on the topic of climate change (and certain other politically charged topics) and these angry comments should NOT be given weight by the editor.

  20. TCO (not TCO) said:

    Jeff Id is the least of them. Lacking the math chops of the rest and even more emotional and bombastic than McI (sort of halfway to Watts really). Probably be fun to drink brews with and slap high fives with if we were cheering for the same sports team, and his personality animates his blog and attracts viewers. But not a serious analyst. The “I’m an aerospace engineer” so will opine on contrails being missiles pretty much showed the guy’s blind spot, not just that he was wrong, but that he was overimpressed with himself.

    Then says:

    P.s. I don’t read TAV that much,

    Uhm… might want to do some soul searching on the “overimpressed with himself” concept.

  21. Sonicfrog,

    TCO is just mad because I put a trap up for his comments (a whole year ago) and, while everything passed moderation, it slowed him down so he couldn’t spam the threads with repeated nonsense comments. He’s never had anyone let him comment yet not in real time – it drove him nuts!!

    Anyway I think I keep up with the group just fine but if I have to be the ‘least’ of them, considering the company, it isn’t much of an insult 😉
