Honesty in Blogging

Tamino wrote a quick reply to Dr. Loehle’s vindication post at WUWT. The reply ended with this comment.

Gee. When compared honestly, Loehle’s so-called “vindication” becomes an indictment. What a surprise.

Below is the first graph from his site, which Dr. Loehle used to point out the amazing similarity of a new article by Ljungqvist to his own work, work for which he was excoriated by the flat-handle hockey team members.

Dr. Loehle’s plot is in blue; the new work is in black. Tamino then uses a different Y offset for the plots below and claims that this is what the graph should be:

The vertical alignment (offset) of these anomaly graphs is essentially arbitrary, but the two series should be aligned consistently with each other. Scientists try to match reconstructions to current temperatures in recent times, but the data are very noisy. The instrumental ‘red’ line Tamino overlaid actually gives readers all the information they need to see what Tamino did in his indictment post. The measured temperature ‘anomaly’, which not coincidentally has its own arbitrary offset, is all the information scientists have for aligning any reconstruction. When comparing two plots of allegedly the same temperature anomaly it is of course reasonable to offset each according to its series mean, but in the case of a temperature reconstruction it can be equally reasonable to match only the calibration period, the calibration period being the timeframe for which we have reasonable temperature measurements.
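
For concreteness, here is a minimal R sketch of the two defensible choices just described. The series is simulated stand-in data, not either reconstruction, and the 1880-1930 window is used only as an example calibration period; the two choices differ only by a constant vertical shift.

    # Minimal sketch: two defensible ways to zero an anomaly series.
    # 'recon' is simulated stand-in data, not an actual reconstruction.
    set.seed(42)
    recon <- data.frame(year = 1:2000, anom = cumsum(rnorm(2000, sd = 0.02)))

    # Option 1: zero on the full-length (2000-year) mean
    recon$anom_full <- recon$anom - mean(recon$anom)

    # Option 2: zero on a calibration-period mean (1880-1930 as an example)
    cal <- recon$year >= 1880 & recon$year <= 1930
    recon$anom_cal <- recon$anom - mean(recon$anom[cal])

    # The two versions differ only by a constant vertical offset
    unique(round(recon$anom_full - recon$anom_cal, 10))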

Tamino makes the claim that an ‘honest’ comparison makes the new work by Ljungqvist an indictment of Dr. Loehle rather than a confirmation. So let’s talk a bit about honesty.

Grant Foster, AKA Tamino, is a PhD with an extensive mathematical background. He has published in climate science and is fully capable of understanding a simple mathematical point, which this is. He is fully aware that anomaly offsets are determined by the base period over which the anomaly is taken. He is also fully aware that it would be reasonable to compare the series by offsetting each graph to zero according to the full-dataset mean or a calibration-period mean; either is completely fine. What is not fine is to simply take the graphs from two different methods and assume they have identical offsets. Grant is very, very aware of this. When Grant saw that the original alignment shown in the second plot above didn’t match his red line well, did he wonder about the offset? No, he declared victory and indirectly accused Dr. Loehle of dishonesty.

After writing the above, I received an email: Zeke did a post on this issue and recognized the mistake at the end of it. A nice post altogether, but it leaves questions about how the offsets were done for all of the curves.

Dr. Loehle answered at WUWT, where the original thread on this was started. The post is worth a read.

I see that Lubos has also made a similar point.

There is not much worse than a dishonest writer. Tamino demonstrated that characteristic badly and, given the stupidity of his claims, boldly.

75 thoughts on “Honesty in Blogging”

  1. It’s somewhat tough to create a common period to normalize to with the Loehle reconstruction, at least using the temperature record for calibration, since it only overlaps during the 1880-1930 period (vs. 1880-1970 or so for most other reconstructions).

    The point does remain that Ljungqvist appears to be much closer to Moberg than to Loehle.

  2. Zeke,

    Dr. Loehle’s reconstruction is based on non-dendro data, so it has very few series, but it unfortunately uses boreholes. That is the only problem I have with it.

    If all the reconstructions were centered around the 1800s it might give the best comparison. But I’m very, very critical of the methods and the data for almost all of these curves. Loehle’s method was ‘averaging’, which is pretty time-tested, but looking at these other curves, all I see is mathematical distortion of reduced-dimensionality red noise.

  3. Jeff, you did see where Ljungqvist posted on mathematical distortion, agreeing with you and others?

    And that MBH98 was not today’s standard.

    I wonder if the good Ljungqvist realizes that now there is more evidence that IPCC WGI on attribution is being falsified. 😉

  4. Why is it so hard to say “We don’t know”? With the gaping holes that have been poked in all of these reconstructions, that would be the only honest assessment. We have some ideas, clues here and there, a lot of conflicting and fuzzy data, or at least what we think might be data, but we really don’t know. I guess “we don’t know” doesn’t get you millions in grants or the justification for radically transforming the global economy.

  5. #3

    Jeff, you did see where Ljungqvist posted on mathematical distortion, agreeing with you and others?

    A subject worthy of a tAV post, I would have to say. 🙂 Given Ljungqvist’s willingness to participate, wouldn’t it be great if the author could join in on an objective discussion of such a post?

  6. It strikes me that regardless of the offset used, the new proxy reconstruction shows a large degree of (natural) temperature variation, and that even the (UHI contaminated) instrumental data is not unprecedented. If it is not, then we have no direct verification of AGW, only unsubstantiated modeling of long-term climate variation.

  7. No matter the adjustment, I agree with A Semiconductor guy that it is the variation within the reconstruction before the instrumental period that informs us most about the critical pre-instrumental to instrumental temperature variations, assuming that the reconstruction can capture the extent of that variation, an assumption that can very much be in doubt.

    The Team is very much aware of the pre-instrumental variation in reconstructions and of its importance, and this was the beauty of the original paper (for policy use) leading to the hockey stick, i.e. the handle on the stick was nearly straight and trending down before the instrumental period. Moberg, with all its greater pre-instrumental variation, was never popular with the Team; I recall seeing some hand-waved criticisms.

  8. It is a serious disadvantage to be absent-minded and packing for a trip at the same time, but I eventually remembered that my own reconstruction is set to a zero baseline for the entire 2000 yrs, so it is only possible to compare it to other series that are centered likewise. Gee Tamino, didn’t you read my paper before calling me dishonest?

  9. Well there you go then. The others are centered on their mathematically amplified blades of different lengths, and Dr. Loehle’s is centered on a two-thousand-year mean. Problem solved.

    How come so many of us can look at the blade end of this reconstruction, see the misalignment and know the problem immediately, yet a math genius like Tamino actually prefers it?

    hmm.

  10. Really, if there were no such things as thermometers, would one reconstruction be pegged to another based on an arbitrary, narrow slice of time? We would be thinking only in terms of trends, means, variability, etc. and how these features compare with other reconstructions for common (long) time periods. As time scales shorten, the error bars – for comparison with other proxies, reconstructions, and even thermometers – get wider. The notion of confidently aligning these reconstructions to instrumental readings within narrow slices of time has to be put into proper perspective.

  11. Grant Foster is the most dishonest of the propagandists, and keep in mind that the competition is stiff in that regard. You will learn that if you post at his site. He edits your comment to make you look dumb or to blunt the point you are making. Then when you protest in a follow-up, he doesn’t let it through.

  12. How come so many of us can look at the blade end of this reconstruction, see the misalignment and know the problem immediately, yet a math genius like Tamino actually prefers it?

    Maybe it’s a case of looking at the world through tinted lenses (http://www.fostergrant.com/)?

  13. #12

    To expand on my point in #12, I loaded the Loehle and Ljungqvist reconstructions into R. Then I offset the Ljungqvist series by its mean to re-center at zero, as Craig Loehle did. Since Dr. Ljungqvist’s reconstruction anomalies are expressed as 10-year averages, I had to average Dr. Loehle’s yearly data the same way (a rough R sketch of these steps is appended at the end of this comment). Now here are the differences between the series (Loehle – Ljungqvist):
    http://picturepush.com/public/4273295

    The standard deviation of the differences is 0.2063 C.

    And the differences during the period of instrumental overlap (1880 – 1930):
    http://picturepush.com/public/4273381

    Imagine if GISS and CRU had these kinds of differences in a 50-year time frame, then imposed an offset on one or the other series which was never to be questioned! No one should take a calibration offset between L07 and L10 too seriously until there is another 100 (give or take) years of overlapping instrumental data to smooth out the (both white and red) noise. There are step differences and even some slight trends in these differences on much longer time scales than the 1880 to 1930 period. This should not be a surprise to anyone (who has followed Jeff’s work) when we know that there are large amounts of persistent red noise in the proxies.

    This is not an indictment of either series but rather a recognition of the fact that these are noisy proxies, not thermometers, and therefore the chance of noise (even in ten-year averages) causing a spurious relative offset when calibrating is significant and can’t be ignored.
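
    Here is the rough R sketch promised above of the recentering, decadal averaging and differencing. The data below are simulated stand-ins with an assumed year/anomaly layout, not the actual L07 and L10 files:

    set.seed(1)
    # stand-ins: yearly 'loehle' anomalies and decadal 'ljung' anomalies
    loehle <- data.frame(year = 16:1980,
                         anom = as.numeric(arima.sim(list(ar = 0.9), 1965)) * 0.1)
    ljung  <- data.frame(year = seq(0, 1990, by = 10),
                         anom = as.numeric(arima.sim(list(ar = 0.9), 200)) * 0.1)

    # re-center the decadal series on its full-length mean, matching Loehle's zero baseline
    ljung$anom <- ljung$anom - mean(ljung$anom)

    # average the yearly data into the same 10-year blocks
    loehle$decade <- 10 * floor(loehle$year / 10)
    loehle10 <- aggregate(anom ~ decade, data = loehle, FUN = mean)

    # differences over the common decades and their standard deviation
    m <- merge(loehle10, ljung, by.x = "decade", by.y = "year",
               suffixes = c(".L07", ".L10"))
    d <- m$anom.L07 - m$anom.L10
    sd(d)
    plot(m$decade, d, type = "h", xlab = "Year", ylab = "L07 - L10 (C)")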

  14. This is another can of worms that “the team” should not even open. If one were to get into the “uncertainty” of the cumulative error bars of the intercept bias of each study, eventually the “uncertainty” WILL make their data just noise.

    As semiconductor guy pointed out, the only interest is in the variation; it is nothing close to a thermometer.

  15. #14 Tom C

    Yep, the irony of Grant calling his site “Open Mind” is thick. Why anyone finds an echo chamber of emotional ranting interesting is beyond comprehension. The close 2nd award goes to Lambert’s Deltoid, another tightly controlled ranting echo chamber. Tim and Grant are the CSP brothers and cannot handle dissent with their childish emotional dispositions. The lack of integrity of the regulars on those sites is as vast as the Universe.

  16. As I pointed out over at CA, perhaps Mann et al and Al Gorepbel should have been nominated for the (Ig)Nobel Award as given each year by the “Annals of Improbable Research” (www.improbable.com). It seems to me that the Hockey Stick and the defense of same would qualify. 🙂

  17. Jeff, regarding rescaling…I think you do want to rescale, because there is a difference in Loehle’s geographic representation relative to Ljungqvist’s. I did this for yuks, using the variance adjustment method, but you might also match scales using calibration period data, which is of course a cleaner approach to use.

    Not surprisingly, after making an additional adjustment my results look marginally better than yours, but the description remains qualitatively the same as what you found. The biggest difference is that Ljungqvist doesn’t see a Roman Warming Period, which is odd, though they both agree on the MWP duration, temperature and extent.

  18. Carrick,

    The plots are by Tamino. Craig had his centering done over the whole length of the plot, whereas the rest are by calibration period. Can you upload your plot to a picture server or send it by email so I can add it above?

  19. Hi Jeff,

    I’m a bit tired from travel, sorry about the misattribution… Here is my figure. I’ve adjusted Loehle to fit Ljungqvist.

    I’m not saying this is perfect, but if you want to correlate the time series, it’s a much better way of doing it than just naively comparing two series that use different geographic weightings…

  20. Carrick, Ljungqvist does see a Roman Warm period.

    I tried rescaling as well, after I posted #17. I just picked an arbitrary method of scaling up Ljungqvist to make the sd’s match. I rescaled first, as this changed the mean and therefore the offset by approximately 0.09 C. It seemed to take a bit of the trend out of the graph of differences, making it look a little more like a series of steps up and down every few hundred years or so. I didn’t crunch the change in the 1880 to 1930 endpoint, but it looked to just shift the difference by roughly the amount of the offset. The sd of the difference graph increased only marginally.

  21. LL, I don’t see it. If you look at Ljungqvist from 0 AD to 850 AD (blue line in my figure), you pretty much have a monotonic increase in temperature.

    That doesn’t look physical to me, let alone confirm a RWP.

  22. Actually Carrick, I think you have the reconstructions mixed up.

    indeed.

    what is the justification for multiplication with a factor?

  23. Thanks, LL. You’re absolutely right… I went back and looked at the script I used to combine them and I did reverse the files by accident. Teaches me to do something when I’m tired.

    here’s a revised version

    Jeff, I scaled Loehle by a factor of 1.39, which I computed by taking the ratio of their standard deviations after resampling Loehle to a 10-year period to match Ljungqvist. This is a simple and quick way of calibrating series that are thought to represent measures of the same underlying process (global mean temperature here) but are not phase matched frequency component by frequency component. (Think of it this way: the variance of a1 sin(w1 t + phi1) + a2 sin(w2 t + phi2) + … is proportional to a1^2 + a2^2 + …, independent of the values of phi1, phi2, etc., assuming you’ve averaged over a sufficient number of periods.)

    In order to match the series to instrumental data, I would first need to generate an extratropical time series to match Ljungqvist to, calibrate Ljungqvist to that series, then calibrate Loehle to Ljungqvist. (I’m still ruminating over the best way to do that.)
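
    For readers who want to try the sd-ratio calibration themselves, here is a small R sketch of the idea on simulated stand-in decadal series with a common time step; the scale factor plays the same role as the 1.39 above:

    # Sketch: match one series' variance to another via the ratio of their
    # standard deviations (stand-in data, not the real reconstructions).
    set.seed(2)
    a <- as.numeric(arima.sim(list(ar = 0.8), 200)) * 0.14  # higher-variance series
    b <- as.numeric(arima.sim(list(ar = 0.8), 200)) * 0.10  # lower-variance series

    scale.factor <- sd(a) / sd(b)                 # analogous to the factor of 1.39 above
    b.rescaled   <- (b - mean(b)) * scale.factor + mean(a)

    c(sd_a = sd(a), sd_b = sd(b), sd_b_rescaled = sd(b.rescaled), factor = scale.factor)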

  24. #33 I would recommend leaving Loehle’s alone and scaling the other. Loehle’s is done by an average, whereas the regression of the other plot causes historic variance loss. I assume you scaled Loehle’s by 1/1.39? Doing it by sd makes good sense.

    When people are going around the internet writing that the new graph matches other regression graphs better, they are missing the point that all of the non-Loehle graphs have variance loss. Only one addresses the problem.

  25. Jeff, sorry, I’m still catching up on that screwup of switching the columns when I combined them. Layman Lurker has the right description of how to recreate the graph.

    Which is, I multiplied Ljungqvist by 1.39. Here are the exact steps from my command line:

    % awk '{ if ($1 <= 1935) print $1 }' ljungqvist2010.cvt.txt | cspline +i LoehleMcC.temp.txt | paste - ljungqvist2010.cvt.txt | awk '{ if ($1 <= 1935) print $1, $2, $4 }' > /tmp/comb.txt
    % awk '{ print $1, $2+0.00836, 0.254/0.182 *($3+0.233865) }' /tmp/comb.txt > /tmp/comb2.txt

    The paste command is putting Loehle in front of Ljungqvist. In the second line I’m clearly multiplying Ljungqvist by 0.254/0.182 = 1.39.

    I obtained the factors to shift and multiply by using this command (showing the output):

    % meanColumns.awk /tmp/comb.txt
    1 974.5 558.585 31.217 193
    2 -0.00836411 0.254137 0.0914555 193
    3 -0.233865 0.182365 0.483596 193
    

    The output has the format "column_number mean standard_deviation root_mean_square number_points"

  26. The line with the paste command in it got mangled… hopefully it is obvious what I was doing. The last awk command looks like this:

    ... | awk '{ if ($1 <= 1935) print $1, $2, $4 }' > /tmp/comb.txt

    It’s just printing columns 1, 2 and 4 for years ≤ 1935.

  27. #32 Sod

    what is the justification for multiplication with a factor?

    I think Carrick’s insights make a lot of sense. My own interest in scale was tweaked by a comment from Dr. Ljungqvist at WUWT:

    Fredrik Charpentier Ljungqvist says:
    September 28, 2010 at 7:16 am

    A comment from the author:

    Some remarks have been made suggesting that the amplitude of past temperature variability are deflated. It is indeed true and discuss in length in the article. The common regression methods do deflate the amplitude of changes in the reconstructed temperatures. This reconstruction shares this problem with all others.

    I felt that if this statement was true then scale would be an issue in his reconstruction. My choice of factor (to match the variability with Loehle) was totally arbitrary and I make no presumptions about it being correct, although the effect that it has on the fit is intriguing. The offset by series mean is another matter; the argument that this is a meaningful way to compare the series is IMO very compelling.

  28. When people are going around the internet writing that the new graph matches other regression graphs better, they are missing the point that all of the non-Loehle graphs have variance loss. Only one addresses the problem.

    Jeff ID, you make a very important point here, and one that I am afraid often gets lost in the blogging on the many faults found with the reconstructions. That is why, when I pointed to the critical point of reconstructions as being the pre-instrumental variance versus the instrumental variance, regardless of the timing of the excursions in temperatures, I qualified that by stating that most reconstructions, if not all, lose variance amplitude in the pre-instrumental period.

    We seem to get lost in attempts to be technical in analyzing specific points and not making clear what we already know about the problems with the reconstruction methodologies. I think that is what frustrates you also.

  29. Jeff:

    When people are going around the internet writing that the new graph matches other regression graphs better, they are missing the point that all of the non-Loehle graphs have variance loss. Only one addresses the problem.

    There’s a way to look at this (power spectral densities). We’ll see how they compare tonight.
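
    For anyone who wants to try the same comparison themselves, a hedged R sketch using smoothed periodograms (simulated stand-in series, not the actual reconstructions) might look like this:

    # Sketch: compare variance retention via smoothed periodograms.
    set.seed(3)
    x <- as.numeric(arima.sim(list(ar = 0.8), 200))         # stand-in reconstruction
    y <- as.numeric(arima.sim(list(ar = 0.8), 200)) * 0.7   # stand-in "deflated" one

    sx <- spectrum(x, spans = c(5, 5), plot = FALSE)
    sy <- spectrum(y, spans = c(5, 5), plot = FALSE)

    plot(sx$freq, sx$spec, type = "l", log = "y",
         xlab = "frequency (cycles per time step)", ylab = "spectral density")
    lines(sy$freq, sy$spec, lty = 2)
    legend("topright", legend = c("series x", "series y"), lty = 1:2)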

  30. Kenneth Fritsch:

    We seem to get lost in attempts to be technical in analyzing specific points and not making clear what we already know about the problems with the reconstruction methodologies. I think that is what frustrates you also.

    I disagree. Addressing technical issues is what builds up the case for the overarching problem.

    As Layman Lurker points out via Ljungqvist’s comment, most reconstructions “deflate” the variance during the reconstruction period. What we find from a simple analysis here is that Loehle’s method has 40% more variance than Ljungqvist, and it’s claimed by Loehle (and I guess by Jeff) that Craig’s approach doesn’t suffer from this problem, so this is in “the right direction” to confirm that suspicion.

  31. Craig’s approach used pre-calibrated proxies, so the individual series could have some of this effect in them, but when they were combined no additional loss of variance occurred.

  32. I wrote kind of a summary post Friday morning at WUWT. In a nutshell, I emphasise that there is no demonstrably right way to analyze & combine proxy data. There are lots of unprovable assumptions which affect the results. I only believe even my own recon as a qualitative result, and I set out to do it purely to show what happens when you leave out all tree rings (which I don’t believe are valid more than a few hundred years in the past).

  33. Jeff, I have updated my plot to include all of the Ljungqvist data:
    http://picturepush.com/public/4278043

    I have also prepared a vector of instrumental data in 10-year average segments ending in 1990. I don’t have the current instrumental data, but the comparison with Ljungqvist would end with the 1980-1990 segment in any event.

    The reason I have not put the instrumental series into the updated graph is that I haven’t figured out how to code it yet. I used “ts.plot” for the other series, but my instrumental series vector starts in 1871. I will have to plod through the R manual for a while, but I’m thinking that I have to throw “NA’s” into the series going back so that its length matches up with L07 and L10, then mask the “NA’s” when plotting.
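
    One possible way around this in R (not necessarily how the posted figures were made) is to skip the manual NA padding entirely: give each series its own ‘ts’ time base with the same decadal step and let ts.plot line them up. Stand-in data below:

    set.seed(4)
    L07  <- ts(rnorm(198, sd = 0.2), start = 10,   deltat = 10)  # decadal, from year 10
    L10  <- ts(rnorm(200, sd = 0.2), start = 0,    deltat = 10)  # decadal, from year 0
    inst <- ts(rnorm(12,  sd = 0.2), start = 1880, deltat = 10)  # decadal instrumental

    # series with different start years but the same frequency plot together;
    # missing values would simply not be drawn
    ts.plot(L07, L10, inst, col = c("black", "blue", "red"),
            xlab = "Year", ylab = "Temperature anomaly (C)")
    legend("topleft", legend = c("L07", "L10", "instrumental"),
           col = c("black", "blue", "red"), lty = 1)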

  34. Here is a crazy idea for what I would do to get a proxy series, and some sense of the uncertainty of it. Anyone let me know if anything remotely like this has been done.

    First, gather up a HUGE amount of proxies, of just about every kind.

    Second, identify a period in which all the proxies have overlap.

    Next, baseline all the series to that particular span of time, and standardize them all based on that span.

    Then average the proxies over that shared span of time (if you MUST weight them, please just use geographic weights, not regression-based “teleconnection” weights…)

    Just to make sure that there is consistency for that period of time, do some sensitivity tests, seeing what happens if you remove proxies individually, and up to half of them in total. TRY EVERY COMBINATION (yes, I know this is becoming an NP problem, sorry) and make sure that the results still tend to be stable with certain numbers of proxies. This also helps get a sense of the uncertainty spread.

    Now try to do this for a longer time period, with fewer proxies, and compare the results over the period of overlap to make sure they are reasonable. Keep creating series and sets of series that cover more periods of time and checking them for reasonable consistency in the periods of overlap. Now, scale all the longer series to match as closely as possible the set of series that we put together at first, when all our proxies covered that period. Noting the uncertainty of these regressions would be a good idea, also. Finally, as a last sanity check, look at the original proxies as spaghetti, their spread, etc., to see if our average seems reasonable.

    And if you absolutely MUST see what would happen if you do so, calibrate it NOW to the GMST data, but remember to carry forth the uncertainty of that regression, too.

    Now with your massive set of sanity checks, calculate some uncertainties. 🙂

    Golly, this is what I’ve wanted to see done, but no wonder it hasn’t been; this sounds like a big job even to me!
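
    A compressed R sketch of the first few steps of this recipe, using simulated stand-in proxies and a leave-one-out loop in place of trying every combination:

    set.seed(5)
    n.yr  <- 2000
    n.prx <- 20
    # simulated stand-in proxies; most are missing their earliest portion
    proxies <- sapply(1:n.prx, function(i) {
      x <- as.numeric(arima.sim(list(ar = 0.95), n.yr)) * 0.1
      na.len <- if (i == 1) 0 else sample(0:1200, 1)   # proxy 1 covers the full span
      x[seq_len(na.len)] <- NA
      x
    })
    years <- 1:n.yr

    # 1. period over which all proxies overlap
    overlap <- complete.cases(proxies)

    # 2. baseline and standardize each proxy on that common window
    z <- apply(proxies, 2, function(x) (x - mean(x[overlap])) / sd(x[overlap]))

    # 3. simple unweighted average of the standardized proxies
    stack <- rowMeans(z, na.rm = TRUE)

    # 4. leave-one-out sensitivity: spread of the stack with each proxy dropped
    loo    <- sapply(1:n.prx, function(i) rowMeans(z[, -i], na.rm = TRUE))
    spread <- apply(loo, 1, function(r) diff(range(r)))

    plot(years, stack, type = "l", xlab = "year", ylab = "standardized composite")
    lines(years, stack + spread / 2, lty = 3)
    lines(years, stack - spread / 2, lty = 3)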

  35. Jeff, I understand the argument about using pre-calibrated series. It doesn’t guarantee that when you average points from disparate points that the sum of the atmospheric ocean fluctuations

    Craig, I can actually think of one improvement. You can make some progress by using the teleconnection function between a single point and the global mean value (developed using the instrumental data from 1950 to current, the period that I think is reliable enough for that application) to weight the values from different locations, rather than using an unweighted average as you did.

    At the moment, I would be predisposed to low-pass filter the data to remove frequency components with periods less than 30 years. I think it would be very difficult to get the global mean average of these shorter-period components correct when you’ve irregularly sampled the globe, without making a Herculean effort.
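
    A crude illustration of that kind of low-pass step in R, using a centered 30-point running mean on annual stand-in data (a running mean is only one of many possible filter choices):

    set.seed(6)
    x  <- as.numeric(arima.sim(list(ar = 0.7), 2000)) * 0.1   # stand-in annual anomalies
    w  <- rep(1 / 30, 30)                                     # 30-year boxcar weights
    xl <- stats::filter(x, w, sides = 2)                      # centered moving average

    plot(x, type = "l", col = "grey", xlab = "year", ylab = "anomaly")
    lines(as.numeric(xl), lwd = 2)   # smoothed series, sub-30-year wiggles suppressed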

  36. Jeff, I cut off part of my comment, sorry. Repeat:

    Jeff, I understand the argument about using pre-calibrated series. It doesn’t guarantee that when you average data from disparate points the sum of the atmospheric-ocean fluctuations will have the proper phase relationships so that they add “correctly.”

    In other words, you get a suppression of the high-frequency components of the variance of the signal even using Craig’s method.

  37. #45 Andrew

    Here is a crazy idea for what I would do to get a proxy series, and some sense of the uncertainty of it.

    You’re right that is a crazy idea. So crazy it…just…might…work! 🙂

    #48

    I mentioned before that the endpoints of L10 and instrumental should line up. This is not the case. L10 has one additional segment for 1990-99, so the endpoints are not quite in sync yet.

  38. OK. Here is version #2. This one uses the instrumental series used with Ljungqvist’s paper. With the instrumental data I had on file, I had done 10-year averages on smoothed yearly data. The endpoints for L10 and instrumental are now in sync up to 1999. Again, there is no rescaling for instrumental, but it does use the mean of the rescaled L10 as an offset.

    http://picturepush.com/public/4279226

  39. That looks great, LL. I wouldn’t take the upward hook of the green curve (instrumental series) for the early instrumental period very seriously. I don’t think it’s particularly reliable.

  40. Jeff ID:

    They are all forms of dimensionality reduction, if the balance between today and in history is good, that’s probably the best you can do.

    I’m pretty sure it can be done better, and I even have (poorly formed) ideas of how; it’s just that there’s only so much of my free time I’m willing to burn.

  41. LL,

    It is a nice match. Can you make one more with the best fit of both to the instrument plot, so you have one scaled with total variance and one scaled with coexisting-data variance? Then we do the same two plots without scaling, discuss Carrick’s points on global weighting, discuss yours, Ljungqvist’s and mine on variance loss, and we have a post. If we spend a reasonable amount of time on it, WUWT will probably carry it.

    Now that you are doing number work you might as well have people read it 😀

  42. Carrick,

    In my spare time, I’ve been considering your ideas on frequency-domain reconstructions. I don’t know if I commented on your post, but there are endpoint issues which are beyond my experience.

    On the frequency-loss issue, I’m not sure it is worth the incremental improvement over averaging. One of my weaknesses is that once I’ve seen that the quality of the data is poor, working too hard to make the poor data perfect seems a waste of time. Ryan and Nic just reworked the Antarctic reconstructions a hundred different ways, using multiple different regressions, far more thoroughly than Steig will ever imagine, all to very similar results. ’Tis a weakness of mine that I have difficulty making myself be so thorough when the data isn’t clean and the results don’t change much.

  43. Jeff ID, my interest is in understanding the technique, though I admit my interest is dampened by the messiness of the data.

    To start, I think a better approach would be a combined temperature and precipitation reconstruction, since it’s my suspicion there are interactions between the independent variables for most proxies.

    It’s definitely a hard problem, and sizable progress doesn’t seem to be in the wings.

  44. I disagree. Addressing technical issues is what builds up the case for the overarching problem.

    As Layman Lurker points out via Ljungqvist’s comment, most reconstructions “deflate” the variance during the reconstruction period. What we find from a simple analysis here is that Loehle’s method has 40% more variance than Ljungqvist, and it’s claimed by Loehle (and I guess by Jeff) that Craig’s approach doesn’t suffer from this problem, so this is in “the right direction” to confirm that suspicion.

    I have not seen any evidence that the Loehle approach would retain the variance amplitudes. He uses somebody else’s reconstructions, I assume. Or did he simply average together other people’s proxies?

    Anyway, what I see here and at the Blackboard is not a comprehensive technical analysis of the issues at hand in this thread, but rather a back and forth about what reconstructions show or do not show, without mentioning all the assumptions and limitations that entails.

    But perhaps I missed something here, so if you will write a short summary of what has been shown here or at the Blackboard with regard to a technical analysis of the matters at hand, I would be most grateful. I can see by eyeballing that various reconstructions have large variations in variances, but I am not sure that we can say anything about CIs for those variances.

  45. Kenneth:

    Or did he simply average together other people’s proxies

    He averaged together other people’s proxies (though some of the proxies are composites).

    I’m not exactly sure why Loehle’s approach would be immune from the variance deflation the others are known to suffer; perhaps Jeff can explain this.

  46. Here are my comparisons of Loehle against the others… This summary was also posted (piecemeal) on Lucia’s blog.

    Loehle-vs-Wild

    The rest, unfortunately, look like low-pass-filtered noise.

    Mann ’09

    Comparison of correlation coefficients:

    Ljundqvist 0.59
    Mann CPS: 0.58
    Mann EIV: 0.76
    Moberg: 0.57
    

    The p value for all of these is less than 0.001.

  47. Sigh… I mislabeled the first line of the correlation coefficients. It should say “Loehle”.

    I’m comparing all of the curves to Ljungqvist.

  48. Kenneth and Carrick,

    It is not immune from variance loss, you guys are completely right. If we did one more regression/CPS process, everyone-else style, we would have even more variance loss.

  49. #56

    Sorry for not responding yet, Jeff (forced to take a “life” break). It will be later this evening or tomorrow before I’m done working through it.

  50. Jeff ID:

    It is not immune from variance loss, you guys are completely right. If we did one more regression/CPS process, everyone-else style, we would have even more variance loss.

    One thing I’ve pointed out in the past is that Loehle succeeds in retaining the high-frequency information. The band-passed, red-noise-like character of the other series suggests that the various atmospheric-ocean oscillation components are being added incoherently. This could be an important loss of variance too (and it may even be another way of stating the same thing).

  51. Carrick, could you please provide the CIs for the correlations?

    What we know from your estimates currently is that the probability that there is some correlation between reconstructions is greater than 99%. Of course, when we square the correlation coefficient, we see that the various other reconstructions account for about 35% of the variation in the Ljungqvist reconstruction, except for Mann 09 EIV, which accounts for 58% of the Ljungqvist variation.

    Also, I am not at all certain that the overall correlation between reconstructions informs us of the critical aspects of reconstruction differences, such as pointing to differences that are significant between reconstructions over extended periods of time within the reconstruction period.

  52. Should not tree-ring reconstructions theoretically contain more high-frequency information than a reconstruction limited to non-tree-ring proxies?

  53. Kenneth Fritsch:

    Carrick, could you please provide the CIs for the correlations?

    Correcting for serial correlation, they all have p values less than 0.001. In general, the correlation gets worse the farther back in time you go. If I have a chance this evening, I’ll run some correlations-over-time estimates to study this.

    I think you are right: the dendro proxies could contain higher-frequency information, but ice core data can be pretty good too in that respect:

    See this.

  54. This is a repost from Lucia’s blog. Basically it shows the correlation over time of each proxy (500-year windows).

    As you can see, the correlation tends to get worse as you go backwards in time. This is consistent with the skill of the proxy deteriorating as you project further back into time, an effect I would expect to happen. (Nonetheless, the error bars, when they are shown, tend to be constant width over the entire period, which seems odd.)

    These are all against Ljungqvist.

    Proxy Correlations Versus Time
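
    For anyone wanting to reproduce the general approach, here is a small R sketch of a running correlation in 500-year windows (50 decadal points per window), on simulated stand-in series rather than the actual reconstructions:

    set.seed(7)
    n      <- 200                                   # 2000 years of decadal values
    yr     <- seq(5, by = 10, length.out = n)
    common <- as.numeric(arima.sim(list(ar = 0.8), n))
    a <- common + rnorm(n, sd = 0.8)                # stand-in for Ljungqvist
    b <- common + rnorm(n, sd = 0.8)                # stand-in for another reconstruction

    win <- 50                                       # 500 years / 10-year steps
    r   <- sapply(1:(n - win + 1), function(i)
      cor(a[i:(i + win - 1)], b[i:(i + win - 1)]))
    mid <- (yr[1:(n - win + 1)] + yr[win:n]) / 2    # centre of each window

    plot(mid, r, type = "l", xlab = "window centre (year)",
         ylab = "correlation (500-yr window)")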

  55. Jeff, here is the graph with Craig Loehle’s original adjustment – offsetting L10 so that the series mean equals 0, with no rescale. Instrumental data is now included in the comparison with the same offset, and is therefore anomalized WRT the mean of L10.
    http://picturepush.com/public/4284206

    Here is a close up of the instrumental period – same process as above. I see I didn’t keep the colors consistent. Oh well. Too tired to fix it.
    http://picturepush.com/public/4284210

    Here is a close-up of the instrumental period for both the rescaled-and-offset L10 and the correspondingly offset instrumental.
    http://picturepush.com/public/4284217

    And a close up with no adjustments.
    http://picturepush.com/public/4284222

  56. Carrick, I second Jeff ID’s comments that your graph is interesting (and informative). There are 500-year periods where the R^2 is sufficiently low that one reconstruction has little explanatory skill for another.

    I was hoping you could do CIs for the correlations in the manner of the link below.

    http://davidmlane.com/hyperstat/B8544.html
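
    In the spirit of that link, a small R sketch of a Fisher z confidence interval for a correlation follows; note that with serially correlated decadal series the nominal n should really be replaced by an effective sample size, which this simple version does not attempt:

    # Fisher z-transform CI for a correlation (ignores serial correlation).
    cor.ci <- function(r, n, level = 0.95) {
      z  <- atanh(r)                    # Fisher z'
      se <- 1 / sqrt(n - 3)
      q  <- qnorm(1 - (1 - level) / 2)
      tanh(c(lower = z - q * se, estimate = z, upper = z + q * se))
    }

    # e.g. a correlation of 0.59 from 193 decadal points (n is illustrative only)
    cor.ci(0.59, 193)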

  57. Jeff, I got curious about Moberg so I went here to look at the data. From the abstract:

    Here we reconstruct Northern Hemisphere temperatures for the past 2,000 years by combining low-resolution proxies with tree-ring data, using a wavelet transform technique to achieve timescale-dependent processing of the data. Our reconstruction shows larger multicentennial variability than most previous multi-proxy reconstructions, but agrees well with temperatures reconstructed from borehole measurements and with temperatures obtained with a general circulation model. According to our reconstruction, high temperatures – similar to those observed in the twentieth century before 1990 – occurred around AD 1000 to 1100, and minimum temperatures that are about 0.7 K below the average of 1961-90 occurred around AD 1600. This large natural variability in the past suggests an important role of natural multicentennial variability that is likely to continue.

    The data archive has a low-frequency time series from year 133 to 1925. I was curious what this series looked like, so I decided to “standardize” the series wrt Loehle (like I did with Ljungqvist) by rescaling and offsetting, and then plotted it here alongside Loehle.

  58. Good catch, LL. As more and more people understand what attenuation or amplification means, Loehle looks better and better. Despite the “opaque reactions” misfire from Tamino the (self-snip), if an author assumes that a reasonable reconstruction has been accomplished, the author would use the mean of the reconstruction with its scale; if one and others are not confident, the use of the calibration period, and then arguing calibration matching while ignoring that the confidence intervals in the calibration period CANNOT be zero, would be appropriate. If someone argues with this, you need only point out the truncation at the 1960s for Briffa, and the truncation at 1980 for others. Your ace in the hole is that, to make Briffa not have a MWP, the IPCC re-zeroed not to the calibration period, because they deleted it, but to the MWP of reconstructions with known attenuation or amplification problems, so that Briffa agreed with a cool MWP like the others.

    Note that those, like Tamino, who would make a very tight interpretation of the reconstructions restrict what MUST be tightened up by the models. And just what is Annan doing? Doing everything he can to loosen them. IF it is acceptable that the reconstructions can be from floor to ceiling, and the models can be from floor to ceiling, one CANNOT rule out the null theorem. This is falsification of the TAR and 4AR.

  59. if one and others are not confident, the use of the calibration period, and then arguing calibration matching while ignoring that the confidence intervals in the calibration period CANNOT be zero, would be appropriate.

    Maybe it is time for a “spaghetti graph” of standardized reconstructions which essentially replicate millennial variance.
