the Air Vent

Because the world needs another opinion

Another Mathematically Honest Reconstruction

Posted by Jeff Id on October 7, 2010

I learned a little more today.  Behind the scenes, Steve McIntyre had a polite conversation with Dr. Ljungqvist, who has recently performed a temperature reconstruction from proxies with results remarkably similar to Craig Loehle and Hu McCulloch’s work. Dr. Ljungqvist was kind enough to share the data with Steve. The internet made a bit of a stink about the fact that it visually matched Dr. Loehle’s much-maligned work quite well.  Climate science has learned to hate reconstructions with a medieval warm period, so Tamino even took the time to try to trick (and not the apparently "good" definition of the word) people into thinking the match was dishonest.

Dr. Ljungqvist’s paper uses the CPS (composite-plus-scale) method, which Mann08 also used and which is known amongst tAV readers to create variance loss.  Carrick and others were generous enough to provide me a copy of the paper, which describes the method as quoted below.

We use the common “composite-plus-scale” method for creating our multi-proxy reconstruction (von Storch et al. 2004; Lee et al. 2008). All records with less than annual resolution were linearly interpolated to have annual resolution before the records were normalized to zero mean and unit standard deviation, fitting the mean and variance AD 1000–1900, and then we calculated 10-year-mean values of the records. The arithmetic mean of all 30 records was then calculated to form a dimensionless index of Z-score units. This index was scaled to fit the decadal mean and variance over the period AD 1850–1989 in the variance-adjusted CRUTEM3+HadSST2 90–30°N instrumental temperature record (Brohan et al. 2006; Rayner et al. 2006) and adjusted to have a zero equalling the 1961–1990 mean of this instrumental record. The decadal correlation between proxy and instrumental temperature is very high (r = 0.95, r² = 0.90) and the 2 standard deviation error bars only amount to ±0.12°C in the calibration period AD 1850–1989. As would be expected from different sorts of proxy records deriving from different regions, there is a certain standard deviation between the decadal mean values of the records, as seen in Figure 2. This should, however, not be of concern for the accuracy of the reconstruction since the coherency between the records is rather stable in time back to c. AD 1000. The standard deviation is somewhat larger in the first millennium of the reconstruction, probably primarily because of the decreasing number of proxies covering this period, but even so this deviation is not much higher than in the calibration period. To account for changes in the standard deviation between the records in the error bars, we have increased the width of the confidence interval with the same percentage as the standard deviation between the records in a given decade exceeds the mean standard deviation during the calibration period AD 1850–1989.
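
For concreteness, here is a minimal R sketch of those steps as I read them. Everything in it is a hypothetical stand-in: the real paper uses 30 proxy records and the variance-adjusted CRUTEM3+HadSST2 90–30°N decadal series, and the actual implementation may differ in details.

    # Minimal sketch of the quoted CPS steps (hypothetical inputs only).
    # proxies: matrix of annual values (rows = years, cols = records)
    # instr:   decadal instrumental means for the calibration decades
    cps_recon <- function(proxies, years, instr, instr_decades) {
      # 1. normalize each record to zero mean / unit sd over AD 1000-1900
      fit <- years >= 1000 & years <= 1900
      z <- sweep(proxies, 2, colMeans(proxies[fit, ]), "-")
      z <- sweep(z, 2, apply(proxies[fit, ], 2, sd), "/")
      # 2. 10-year means of each record, then the arithmetic mean across
      #    records to form the dimensionless Z-score index
      decade <- floor(years / 10) * 10
      zdec  <- apply(z, 2, function(x) tapply(x, decade, mean))
      index <- rowMeans(zdec)
      # 3. scale the index to the decadal mean and variance of the
      #    instrumental record over the calibration period
      cal <- as.numeric(names(index)) %in% instr_decades
      (index - mean(index[cal])) / sd(index[cal]) * sd(instr) + mean(instr)
    }
    # toy usage: 30 fake random-walk records, AD 1000-1989
    set.seed(1)
    years   <- 1000:1989
    proxies <- replicate(30, cumsum(rnorm(length(years))))
    instr   <- rnorm(14)   # stand-in for 14 decadal instrumental means
    recon   <- cps_recon(proxies, years, instr, seq(1850, 1980, 10))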

CPS is a method in which the standard deviations of the proxies are matched to the standard deviation of measured temperature.  Other papers have combined this method with data sorting, or have used the standard deviation from the measured-temperature period only, and that combination creates hockey sticks from totally random data.  On first reading of this paper and the author’s comments, I assumed it used a similar method, but it does not.  Ljungqvist matched the variance of the entire timeseries to the variance of temperature rather than just the variance in the calibration period!!  This is a far more reasonable method of calibration and is sometimes used in paleo.

This makes all the difference in the world.
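
To see why, here is a hedged toy demo in R (all data invented). Scale pure red noise in a short calibration window, keep only the series that correlate with a rising "temperature", and the average grows a blade; scale everything over its full length with no sorting and it doesn't.

    # Toy demo: calibration-window scaling plus correlation sorting mints a
    # hockey stick from pure noise; whole-length scaling of all series doesn't.
    set.seed(2)
    n <- 1000; cal <- 901:1000                    # last 100 "years" = calibration
    temp  <- seq(0, 1, length.out = 100)          # rising fake instrumental record
    noise <- replicate(200, as.numeric(arima.sim(list(ar = 0.9), n)))
    # sorted CPS: keep correlating series, scale in the calibration window only
    keep <- apply(noise, 2, function(x) cor(x[cal], temp)) > 0.3
    cps_sorted <- rowMeans(scale(noise[, keep],
                                 center = colMeans(noise[cal, keep]),
                                 scale  = apply(noise[cal, keep], 2, sd)))
    # whole-length scaling, all series kept: no spurious blade
    cps_full <- rowMeans(scale(noise))
    matplot(cbind(cps_sorted, cps_full), type = "l", lty = 1,
            col = c("red", "black"), ylab = "composite (z-units)")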

Steve McIntyre shared the data with me by email, along with extensive code verifying the correctness of the series, but pointed out that some of the series are still top-secret data not yet released by the original collectors.  Therefore I will not be able to share the data for the replication below, and the original authors need to be encouraged to archive.  I can say the proxies look exactly the same as all the rest I’ve seen; I’m getting familiar enough to pick out the type of series just from the scribbles: sediment from borehole from tree ring and so on.  I suppose SteveM and the paleos consider that a minor thing, but it’s news to me.

Anyway, in starting this post my intent was to replicate the reconstruction and take a look at variance loss.  Ljungqvist left this quote in the vindication thread linked above.

Fredrik Charpentier Ljungqvist says:
September 28, 2010 at 7:16 am

A comment from the author:

Some remarks have been made suggesting that the amplitude of past temperature variability is deflated. It is indeed true and is discussed at length in the article. The common regression methods do deflate the amplitude of changes in the reconstructed temperatures. This reconstruction shares this problem with all others.

Of course, given the endless reconstructions we have all examined, the claim seemed quite reasonable coming from the author.  Thanks to the sharing of the data, and the simple methods, I was able to reproduce Ljungqvist’s result reasonably well below.

It’s not a perfect replication, but these are temperature proxies, so the two are close enough for my liking.  Some readers will recognize my standard lack of enthusiasm for more work beyond a certain point.  My recon (black line) actually fits a little worse in the known temperature range.  But consider these two points:

All proxies in the Ljungqvist reconstruction are scaled over their entire length to match the variance of temperature.

All proxies are used, none thrown away by correlation sorting.

These two facts make it impossible to create a difference in variance loss between the calibration period and the historic period. The proxies are scaled equally, and all of the data is used!

In other words, this is a mathematically honest reconstruction!!  It’s basically averaging!

What I don’t understand is the comment on variance loss by the author.  I’m going to have to write him an email tomorrow.  Any combination of timeseries is a form of dimensional reduction, and all of them cause variance loss, but in this case the loss is equal throughout the series.  Perhaps he doesn’t realize how bad the problem is with recons like Mann08, or perhaps he doesn’t realize that we are criticizing a differential in variance loss, not the 100% statistically reasonable variance loss of averaging.
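
A quick toy check of that last sentence (R, invented numbers): average thirty unit-variance noisy copies of a common signal and the composite is a damped copy of the signal, but the damping is the same at every date, so there is no differential between the calibration period and the history.

    # Averaging normalized noisy proxies attenuates the common signal
    # uniformly in time: variance loss, but no differential.
    set.seed(3)
    n <- 2000
    signal  <- sin(seq(0, 6 * pi, length.out = n))   # fake climate signal
    proxies <- replicate(30, as.numeric(scale(signal + rnorm(n, sd = 2))))
    composite <- rowMeans(proxies)
    coef(lm(composite ~ signal))               # slope well below 1
    cor(composite[1:1000],  signal[1:1000])    # early half
    cor(composite[1001:n],  signal[1001:n])    # late half: about the same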

Now there is more to the story but this is enough for today.  Just for a moment though, readers should consider this piece of evidence from the paper.

The decadal correlation between proxy and instrumental temperature is very high (r = 0.95, r² = 0.90)

That is an amazingly good result for proxy data, especially when the proxies are not preferentially sorted.  Many of the series were pre-calibrated to match temperature, but this result is not easily dismissed with a handwave.  I’m going to have to look deeper into this paper (and its subpapers) to see if the reason for the incredible match to temperature can be understood.  Wow.

Now we have two reconstructions, Craig Loehle’s and this one, which use mathematically similar and reasonable methods.  This says nothing about proxy quality, but it is very much telling that two methods which avoid differential variance loss match each other so well.

28 Responses to “Another Mathematically Honest Reconstruction”

  1. Jim said

    Does it use strip bark pine?

  2. Brian H said

    Does the variance loss imply a flattening of the paleo curve?

  3. TimG said

    How does the smoothing of the instrumental record compare to the smoothing of the proxy data?
    Is it reasonable to slap the instrumental on the end of the proxy data?

  4. tonyb said

    Jeff

    Very interesting post.

    Are you saying that Tamino was flat wrong and used some sleight of hand to defend his position?

    This reconstruction by any means using all sorts of material is one of the reasons I tend to prefer observations and historical records when writing my own climate articles.

    Look forward to part two. Good stuff

    tonyb

  5. Fredrik Charpentier Ljungqvist said

    From the author:

    No, I didn’t use any strip-bark pine in the reconstruction. Mostly because they usually come from high-elevation sites in generally semi-arid regions and may be considerably influenced by drought/precipitation besides temperature. Many tree-ring width records from the North American Southwest have been suspected of not always showing a linear response to warming in cases when a warmer climate also reduces the availability of water.

  6. Jeff Id said

    #5,

    Thanks for the comment. I wonder if you could describe what you see as the sources of variance loss in your reconstruction. Were you referring to the creation of the subseries, or to the variance loss across the whole trend created by the averaging?

  7. PaulM said

    “Another Mathematically Honest Reconstruction” – indeed – as opposed to one which puts a huge positive weight on the bristlecone pines in a small region of western USA, and very small or even negative (!) weight on other proxies (see latest Climate Audit post).

  8. Kenneth Fritsch said

    The decadal correlation between proxy and instrumental temperature is very high (r = 0.95, r² = 0.90) and the 2 standard deviation error bars only amount to ±0.12°C in the calibration period AD 1850–1989.

    You are looking at an average over 10 years in a time period from 1850 to 2000 or 16 data points. You are losing degrees of freedom. What are the estimates of the correlations for some of the individual proxies? What are the CIs for the correlation coefficient?
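
    (For what it’s worth, a rough answer to the last question: assuming r = 0.95 on n = 14 decadal points for AD 1850–1989 and the standard Fisher z-transform approximation, a quick R sketch gives the interval below. This is my back-of-envelope, not the paper’s calculation.)

        # Approximate 95% CI for r = 0.95 with only 14 decadal data points
        r <- 0.95; n <- 14
        z  <- atanh(r)                  # Fisher z-transform
        se <- 1 / sqrt(n - 3)
        tanh(z + c(-1.96, 1.96) * se)   # roughly 0.85 to 0.98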

  9. Phillip Bratby said

    It’s difficult to see what’s going on with the CRUTEM overlay. Can’t you produce another graph without it? Please.

  10. I’ve seen lots of reconstructions using mixes of proxies. Great to see Ljungqvist here. But as Phillip Bratby says, it’s difficult to extract the instrumental overlay.

    If this is not a stupid or irrelevant question: I would dearly love to see a collection of every single proxy used by everyone, as individual graphs (with a few identifying notes), just as one can look up GISS graphs. Even if there are issues of logarithmic scaling and normalization, these can surely be looked at and discussed. It’s the shapes I want to visually mine (yes, mine) for similarity / dissimilarity to each other, not just the shapes that match the instrumental period.

    I don’t have the skill to turn data into graphs and lack the time to learn. I’m sure a lot more people like me would really appreciate having access to a collection of all the individual proxy graphs. Moreover, I could then use them to do photoshop work to great visual effect and scientific insight, as I did with Yamal.

    Can anyone point me to a source of such material? or volunteer to assemble it all???

  11. To be fair, this reconstruction is also quite similar to Mann et al ’08 (EIV) and Moberg: http://rankexploits.com/musings/wp-content/uploads/2010/09/Mo-Lj-Reconstruction-Comparison-Uncertainty6.png

  12. Steven Mosher said

    Lucy.

    The McShane paper has all the proxies. If they are held as ts()
    then plot() will do what you want. Dead easy.

  13. Jeff Id said

    Zeke,

    Since we know that Mann suffers from so much variance loss, my thought is that the subseries used in the Ljungqvist reconstruction already have it incorporated. A number of them were pre-calibrated to temp. It is more than a bit confusing for me to see the match between these when this paper did the math reasonably and Mann’s did not.

  14. Jeff Id said

    #13, Also, the incredible match to temperature is a clue that the individual series may have issues.

  15. Jeff Id said

    Lucy,

    There are a number of places where proxies are plotted. For your own efforts, some of the turnkey code in the hockey stick posts has all of Mann08’s proxies in a time series for you. If you look back to August-September 2008 at CA, you can find a gif video Steve did of all the proxies, one at a time.

  16. M. Simon said

    Strip, bark, pine?

    How foolish.

  17. Fredrik Charpentier Ljungqvist said

    From the author:

    There was some question about why I stated that I think my reconstruction underestimates the true low-frequency variability. I write quite a lot about that in the article. For clarification, I think it is enough that I cite some sentences from my article:

    “Many available proxy records also end sometime during the 20th century and thus cannot be calibrated to the high temperatures during the last decades of the 20th century. This may result in an underestimation of the true temperatures in earlier warm periods.”

    “The amplitude of the temperature variability on multi-decadal to centennial time-scales reconstructed here should presumably be considered to be the minimum of the true variability on those time-scales. It is for several reasons likely that our reconstruction, together with most previous large-scale reconstructions, seriously underestimates the actual coldness of parts of the Little Ice Age (e.g. the 17th century) (Datsenko and Sonechkin 2008; Datsenko and Sonechkin 2009; von Storch et al. 2009). One circumstance that possibly has led to an underestimation of the true variability is that we must presuppose a linear response between temperature and proxy. If this response is non-linear in nature, which is often likely the case, our interpretations necessarily become flawed. This is something that may result in an underestimation of the amplitude of the variability that falls outside the range of temperatures in the calibration period. The true amplitude of the pre-industrial temperature variability could also have been underestimated because of a bias towards summer temperatures among the proxies. If the magnitude of cooling during the Little Ice Age in the extra-tropical Northern Hemisphere was more pronounced during the colder seasons of the year, and the relationship between the seasons have not been stationary in time, our reconstruction of annual mean temperature underestimates the Little Ice Age cooling.”

    “A major problem with many non-tree ring proxy records used in the reconstruction is their temporal uncertainty. For example, the seafloor sediments from the Bermuda Rise (Keigwin 1996) have an estimated dating uncertainty of ±160 years and the lake sediments from Lake Tsuolbmajavri (Korhola et al. 2000) of ±169 years. The dating uncertainty of proxy records very likely results in “flattening out” the values from the same climate event over several hundred years and thus in fact acts as a low-pass filter that makes us unable to capture the true magnitude of the cold and warm periods in the reconstruction (Loehle 2004). What we then actually get is an average of the temperature over one or two centuries.”

  18. John F. Pittman said

    Thanks Dr. Ljungqvist.

  19. Layman Lurker said

    #17

    Thanks for the clarification Dr. Ljungqvist. It seems your comment with respect to variance “deflation” refers to uncertainty in the proxies rather than bias of the method.

  20. Jeff Id said

    #17, “The author” ;)

    Thanks much. I think it is important that climate science note points like this very clearly. I have not covered the effects of temporal uncertainty here much, but others have. Your point on temporal signal loss was a suspicion of mine. Your writing above quite clearly implies that the MWP, being farther back in time and less accurately dated, could be even more affected by variance loss, but a non-mathematical person may completely miss the point.

    What surprised me about your comment on variance loss and CPS at the WUWT thread was that you applied correct math yet still described the effects of variance loss. It makes me wonder if the paleo guys understand just how bad the Mannian stuff is (sorry, no answer expected). It is very, very bad though. Your paper is more significant IMHO than the skeptic climate blogs understand yet (CA excluded, because Steve knows all the proxies so well).

    One thing I’ve done offline is to look at tree rings vs the rest and they surprisingly confirm each other relatively well. Perhaps there is more learning to be done? At this point, my understanding of your result will only come from individual papers and series methods.

    I’m thoroughly surprised to see reasonable agreement with papers that have very bad math. Maybe the temporal effects are more severe than I realized. I’m going to keep looking through them and see if I can figure it out.

  21. Jeff Id said

    After writing that, I have to point out that nobody (or almost nobody) here denies AGW. I called it a skeptic blog, but some call it lukewarmer. I’ve never said the models are wrong, though, so perhaps it is better described as an “I don’t know” blog.

  22. Søren Rosdahl Jensen said

    Jeff,
    I have a suggestion:
    Since Ljungqvist uses a different variant of CPS than Mann, could it be interesting to redo some of your CPS analyses with Ljungqvist’s method?

  23. J said

    Jeff,
    Have you checked this out?

    http://www.skepticalscience.com/new-remperature-reconstruction-vindicates.html

    I almost puked when I saw his reconstruction.

    It clearly begs for a counter-post.

  24. Søren Rosdahl Jensen said

    To test variance loss in the Ljungqvist approach compared to the standard CPS,
    I created 25 synthetic proxies, used the two variants of CPS on them, and plotted the result:

    As seen, the Ljungqvist result also suffers from variance loss, but not as much as the standard CPS. The simple average is better in that respect.
    For small levels of white noise the two methods give a much more similar result.
    The noise used to generate the proxies in the figure is ARMA(1,1) with parameters sigma = 0.2, phi = 0.84, theta = 0.75.
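
    (For anyone who wants to reproduce the experiment, here is a rough R equivalent of the test Søren describes; his own code, below in #27, is Scilab. Only the ARMA(1,1) noise parameters are his; the signal and all other choices are stand-ins.)

        # Synthetic-proxy comparison of standard CPS vs. the Ljungqvist variant.
        set.seed(4)
        n <- 1000; cal <- 851:990                  # pretend AD 1850-1989
        signal <- cumsum(rnorm(n, sd = 0.02))      # arbitrary slow "climate"
        noise  <- replicate(25, as.numeric(
                    arima.sim(list(ar = 0.84, ma = 0.75), n, sd = 0.2)))
        proxies <- signal + noise                  # signal recycles down columns
        # standard CPS: normalize each proxy over the calibration window only
        cps_std <- rowMeans(scale(proxies,
                                  center = colMeans(proxies[cal, ]),
                                  scale  = apply(proxies[cal, ], 2, sd)))
        # Ljungqvist variant: normalize each proxy over its whole length
        cps_lj <- rowMeans(scale(proxies))
        # rescale both composites to the signal's calibration mean/sd, compare
        rescale <- function(x) (x - mean(x[cal])) / sd(x[cal]) *
                                 sd(signal[cal]) + mean(signal[cal])
        matplot(cbind(signal, rescale(cps_std), rescale(cps_lj)),
                type = "l", lty = 1, col = c("black", "red", "cyan"))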

  25. Jeff Id said

    #24, Nice plots. What do you consider standard CPS?

  26. Jeff Id said

    Actually, it looks like you need to rescale them after the fact; all the CPS methods do that. My demos above didn’t, but I only discussed the ratio between the reconstructed and historic signal. What you show doesn’t have the correlation-sorting effects (unless you CPS on the calibration range only), so the result is much improved over Mann08/09.

  27. Søren Rosdahl Jensen said

    #25 and 26
    A general disclaimer: I am rather new to this, so I could well have made a mistake somewhere.
    I am not sure I understand what you mean by rescaling.
    Below is the CPS part of the code.
    I am not sure I understand what you mean by rescaling.
    Below is the cps part of the code.

    Standard CPS is done this way (Scilab syntax):

    // Standard CPS: normalize each proxy over the calibration window only,
    // then scale it to the instrumental mean Mu and std S_d (defined elsewhere)
    for i = 1:25
        y = Proxy(:,i);
        mu  = mean(y(1850:1989));
        s_d = st_deviation(y(1850:1989));
        proxy_unit1 = (y - mu) / s_d;      // unit variance and zero mean
        proxy_c = proxy_unit1 * S_d + Mu;  // calibrating
        Z_m(:,i) = proxy_c;
    end
    // Composite mean
    Z = mean(Z_m, 2);

    Ljungqvist is done this way:

    // Ljungqvist method: normalize each proxy over AD 1000-1900,
    // average first, then calibrate the composite mean
    for i = 1:25
        y = Proxy(:,i);
        mu_l = mean(y(1000:1900));
        s_l  = st_deviation(y(1000:1900));
        proxy_unit = (y - mu_l) / s_l;     // unit variance and zero mean
        Z_l(:,i) = proxy_unit;
    end
    // Composite mean
    Z_l = mean(Z_l, 2);
    scaled_Zl = Mu + S_d * (Z_l - mean(Z_l)) / st_deviation(Z_l)  // calibrating

    For plotting I subtract the mean from the composite series:

    plot(Z - mean(Z), 'r')
    plot(scaled_Zl - mean(scaled_Zl), 'c')

  28. [...] it is key to understanding why the hockey stick paleo graphs are bogus in general.  Some are honest in their math but still contain bad data, others by some of the more famous paleoclimatologists, I [...]


 