Understanding Growth Response Modeling

As we continued to argue on the last thread about whether it makes sense to sort trees by ‘sensitivity’ to temperature (it doesn’t), we left out one of the most important critiques of dendroclimatology. This has been discussed here before in the context of Dr. Craig Loehle’s publication on non-linear tree growth, but it is important that we not forget how critical this issue is in the context of linear regressions. I am glad to have experts around to keep us on track – Jeff Id

========================================

Guest Post by John F. Pittman.

Below is the growth response of a temperature-sensitive species that has been well documented. Different authors were able to control the nutrition, conductivity, spacing, etc., because these are unicellular plants. Actually it is two genera and multiple species. The curve represents an idealized single organism. This particular organism has a peak of about 91% of maximum growth response, occurring at about 28 C. So now we can discuss the errors and incorrect modeling.

[Figure: idealized growth response curve, percent of maximum growth versus temperature (C)]

The first constraint of a statistically correct biological model is to consider what happens as more individuals are added. The first region we will look at is the area from 5 C to 10 C: as one adds more and more specimens, the curve flattens towards the 0% line. The next is the area above 80%, which between 25 C and 33 C tends to flatten towards the 80% line. The last is that, as one adds more and more specimens, the curve tends to go asymptotic towards 40 C. The reasons have to do with the temperature response of proteins and enzymes, adaptation of the species with respect to time, acclimation of an individual with respect to time, and, of course, measurement variance.
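The flattening can be sketched numerically. This is a minimal illustration, not the actual experimental data: it assumes the cardinal temperature model of Rosso et al. as the response shape, with cardinal temperatures loosely matching those in the post, and shows that averaging many specimens whose cardinal temperatures vary lowers the peak and smears the cut-offs.

```python
import numpy as np

rng = np.random.default_rng(0)
temps = np.linspace(0.0, 45.0, 451)

def response(T, t_min, t_opt, t_max):
    """Skewed unimodal growth response (cardinal temperature model of
    Rosso et al.): zero outside (t_min, t_max), peak of 1 at t_opt."""
    r = np.zeros_like(T, dtype=float)
    inside = (T > t_min) & (T < t_max)
    Ti = T[inside]
    num = (Ti - t_max) * (Ti - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (Ti - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * Ti))
    r[inside] = num / den
    return np.clip(r, 0.0, None)

# One idealized specimen: dies below ~5 C, peaks at 28 C, dead by ~40 C
single = response(temps, 5.0, 28.0, 40.0)

# A population: each specimen's cardinal temperatures vary a little
pop = np.mean([response(temps,
                        5.0 + rng.normal(0.0, 1.0),
                        28.0 + rng.normal(0.0, 1.0),
                        40.0 + rng.normal(0.0, 1.0))
               for _ in range(500)], axis=0)

# Averaging lowers and broadens the peak and smears the cut-offs
print(single.max(), pop.max())
```

The individual curve hits 100% at its optimum; the population average never does, which is the flattening described above.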


Now we can discuss what has been missed, by some, in Jeff Id’s “Sometimes They Forget” post. First, there is no linear response. Second, any growth measurement has two answers: two different temperatures can produce the same growth. This was one of the points of Dr. Loehle’s paper. Also, for those not familiar with the creation of a skewed bell curve for biological study: the range of temperature can be larger or much smaller. Examples of this are fish that live in extremely hot trapped water systems in deserts, and fish that live in special niches in the Arctic.


We are going to assume that the sorting that went on, and goes on, is close to correct. One can assume the six-sigma response of certain trees to be correct, but if you understand the model, it means the situation is worse than what we will discuss. The first point is Jeff Id’s comments about the de-amplification or amplification of the signal. In his red noise examples it can be either. In the biological case it can only go one direction: de-amplification, when a positive linear assumption is made. The linear model will not detect the temperature going down, since it will be below the threshold of the method’s detection limits at the low end. It will not be able to detect the temperature change at the top of the bell for the same reason. Finally, by not using the correct curve, the higher temperatures will be read as lower temperatures, further compacting the response. This is why the handle of the “Hockey Stick” is broken. The red noise series of Jeff Id and Steve McIntyre will not have this compaction of the series if these trees really are temperature sensitive. Also, whereas Steve and Jeff’s results are a probability, using the linear assumption is a guarantee of compaction of the signal in the handle.
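The compaction can be demonstrated with a toy simulation; the response curve and every number below are hypothetical, not from any real chronology. "Ring widths" are generated from a unimodal response, a linear proxy model is calibrated on the rising limb (mimicking calibration against an instrumental period), and the reconstructed temperature range comes out smaller than the true range because warm excursions past the optimum fold back down.

```python
import numpy as np

rng = np.random.default_rng(1)

def growth(T):
    """Toy unimodal growth response: peaks at 20 C, zero at 5 C and 35 C."""
    return np.clip(1.0 - ((T - 20.0) / 15.0) ** 2, 0.0, None)

# "True" temperatures drift from 10 C up past the growth optimum
true_T = np.linspace(10.0, 26.0, 300) + rng.normal(0.0, 0.3, 300)
rings = growth(true_T)

# Calibrate a linear proxy model on the early (rising-limb) portion
calib = slice(0, 100)
slope, intercept = np.polyfit(rings[calib], true_T[calib], 1)
recon = slope * rings + intercept

# Warm excursions past the optimum are read back as cooler values,
# so the reconstructed range is compacted relative to the truth
print(np.ptp(true_T), np.ptp(recon))
```

The linear inversion never reports the true warm extremes: the reconstruction's range is a fraction of the true range, which is the guaranteed compaction argued above.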


The proof lies, believe it or not, in the first email of the original Climategate. In this email, Hantemirov, IIRC, stated that subfossil trees indicated that the tree line was some distance away some thousands of years ago, in a period generally known to be warmer than the current warm period. This tells us that the area of interest is somewhere, in our example, in the 0 C to 15 C region. The other part of the proof is the disbelief of one of the responders in Climategate II, questioning the use of the linear assumption. This indicates that the authors knew, or should have known, that the linear assumption should not be used. Finally, the sad or joyful part is that the divergence actually could be proof that one can use trees as a decent temperature proxy. There will still be de-amplification problems. However, a good decompression routine could be used to great effect on the initial results to obtain a more realistic temperature change profile. Finally, if you want to really understand why it matters, and why scientists should be skeptical of attribution, you need to read and understand the posts I wrote for Jeff Id when the first Climategate occurred.


John F. Pittman

81 thoughts on “Understanding Growth Response Modeling”

  1. John Pittman, while obtaining as much information on proxies as possible is always good, I think concentrating on one aspect (and I am not saying that your thread introduction does that) can make the picture look less complicated than it actually is. Your quadratic growth curve is one possible complication, although I would think you would have to apply it to specific temperature ranges and biological proxies.

    Obviously trees, for example, can respond to factors other than temperature as can be noted by looking at correlations of trees with temperatures and one another.

    It is also not as simple as losing the historical variation with regression methodologies as there are ways around that problem in my view.

    Your post does point to a general major problem with proxy use in temperature reconstructions, and that is getting the cart before the horse. Much of the analysis needed to validate proxy responses as thermometers has been left undone by those producing temperature reconstructions. In turn, I think much of this rush to judgment is caused by the urgency felt by too many scientists/advocates to influence policy on AGW.

  2. Generally what one does is propose a statistical model and sampling procedure to reduce or eliminate the other factors. Take the hypothesis that ironwood trees grow best on the eastern slope of a piedmont hill. A sampling procedure would have random and full bio-assay areas in both the eastern and non-eastern spatial extents. The procedure would specify how many specimens needed to be sought in each area for an area to count; age may be a consideration as well. A good idea would be a sampling design that could rule out shading from the sun, rather than the eastern aspect, as the actual driver of the ironwood trees’ response. Soil samples may be necessary, because it could be moisture or leached nutrients, which tend to be on the eastern part of that piedmont; a sampling protocol for what it is and is not would be called for. That is the most relevant complaint about the improper use of dendro, which has actually proven itself useful. A statistician would need to be employed to help determine what one’s sampling has shown, and what it has not shown, rather than the hand waving seen in the literature that Roman and others posted (and I have never seen hand waving done better than by Nick and Jim in response to Roman’s valid points on the last thread). My favorite quote was when Dr. Briffa, challenged about the divergence problem, responded that they were working on it. To make the claims made, this would have had to have been solved already.

  3. John, I find your post contains too much arm-waving and lacks sufficient information for me to understand how the various points you make relate to each other. I would venture that the details are extremely important here.

    You show a graph of growth reacting to temperature, but I have no idea if this reaction takes place over seconds, minutes, hours or years. Nor do I understand how it might relate to an extended season's growth for a tree, where the temperature varies during that time period over a variety of (growing and non-growing) temperatures.

    I don't understand why the curve “flattens” as one “adds more and more specimens”. Growth curves are supposed to represent average, not maximum, growth, so in that context your statement doesn't make sense to me.

    What is a “skewed bell curve”? I presume that you mean a curve similar to the one above. I would certainly describe it as unimodal and skewed, but it is not what I would call a “bell curve”.

    In this email, Hantemirov, IIRC, stated that subfossil trees indicated that the tree line was some distance away some thousands of years ago, in a period generally known to be warmer than the current warm period. This tells us that the area of interest is somewhere, in our example, in the 0 C to 15 C region.

    If you look at the growth curve over this particular range of temperatures, you will notice that (particularly from 5 to 15 degrees) the curve is about as linear as you can expect a real-world relationship to be. I would certainly not question the linearity assumption from looking at that particular growth curve.

  4. John, my comment was written before I saw your comment above. The use of the term “arm-waving” was purely coincidental. 🙂

    However, my point that understanding the details is important here still stands.

  5. The genera in the graph are nitrifiers. It is represented as an ideal specimen with the qualities I gave it. It is not a tree ring. You are correct about time. Where this shows up, in a way by its absence, is in the negative numbers. You would need time to determine how long it took to reach maximum, and the negative is how long it takes to die. That dimension is not shown; the experiments were run as respiration, NH3 reduction, and biomass generation, and time in respirometry was measured as depletion or some other attribute. To explain the graph: nitrifiers are known to almost completely stop growing at about 15 C, and below about 5 C they die. There is exponential growth from 15 C to about 27 C, peak growth is at 30 C to 33 C, somewhere around 35 C there is an exponential reduction in growth, and almost all die off by 40 C to 44 C. Due to phenotypic and genotypic expression, if one repeats these experiments numerous times, one gets the slightly skewed bell curve that is an average of the specimens. One of the factors, I suppose one could say, was arm waving: the authors mentioning thermal acclimation of some individuals or sub-species to temperature. But that is a known phenomenon, and it does not detract from how I use the graph at work. I run a nitrification system composed of several units, and I use this graph, or the knowledge of it, to run that system.

    I used this graph to indicate problems with the claim of temperature-sensitive trees. How is it defined? For nitrifiers, one can understand their general response, or even take a temperature and estimate how much reduction of an input of NH3 there would be, or how much time it will take to reduce that NH3, if you have developed rate equations as I have when designing a bioreactor. There the data for time is used. However, in these systems as usually run, the input is constant in NH3 concentration and flow, and one can predict the effluent quantities. The other part is that these particular nitrifiers are on fixed film, so one can assume a relatively constant number of nitrifiers. In these systems there is variance, but once you have the data, time to reach a concentration, loss of performance, etc., can be determined.

    And you need that temperature profile or keep it constant.
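On the rising limb only, standard wastewater practice (e.g. Metcalf and Eddy) corrects rate constants for temperature with the van't Hoff-Arrhenius form k_T = k20 · θ^(T−20). The k20 and theta values below are illustrative placeholders, not numbers from the post. Note that this form is monotone, with no optimum, which is exactly why any rising-limb model fails above the peak of the real curve.

```python
def nitrification_rate(T_c, k20=0.5, theta=1.07):
    """Specific nitrification rate constant (1/day) at temperature T_c
    (deg C), using the van't Hoff-Arrhenius correction
    k_T = k20 * theta**(T_c - 20). k20 and theta are placeholder values."""
    return k20 * theta ** (T_c - 20.0)

# Rate keeps rising with temperature: valid only below the optimum
for T in (10, 20, 28):
    print(T, round(nitrification_rate(T), 3))
```

A design calculation would pair this with a Monod expression for NH3 concentration; the point here is only the shape of the temperature dependence.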

    I realize the graph was not a bell curve, but I did state it was an idealized individual. Though I have to admit, I expect persons who deal with biological phenomena to know that the “bell” curve is what one tends to get. That is why the quote from Climategate II was mentioned. I really didn’t want to repeat chapters of biology. That is why I arm waved, but did include the role of enzymes and proteins, adaptation and acclimation.

    A talk about tree rings would be different, I imagine. 😉

    1. OK, I get it, thanks. What you were demonstrating is the reaction of a specific type of biological specimen to a complete range of temperatures, to indicate how the character of that response can change.

      Presumably, for tree rings, the growth process would be slower and for a given ring would integrate a function (derived from such a curve) applied to the temperatures during the growth period of a year. If that is the case, then one would expect some relationship with the mean temperature of the growth period, but the amount and form would not be obvious in a particular case. There could also be some carryover from the previous years which would also affect the result.

      1. And you need a baseline definition of what max growth is, and two independent variables, to solve the equation backwards for temperature. We engineers used to use the Buckingham Pi Theorem to “cheat” our way to an answer: figure out the physical relation AFTER we got the right answer. 🙂

      2. Also, if you derive the curve from “first principles” (biological in this case), one can easily understand that tree rings show a similar curve. At 0 K, no life; at 1000 K, no life. In between, at the edge of growing, a ring of no width, which is mentioned in the literature. Exponential growth: let’s use those six-sigma tree(s) as the best example. Slowing down and eventual death: let’s use the divergence problem as an example of this. Add bunches of different genotypic and phenotypic expressions and, voilà, some form of a bell curve. Unless one can show that the width or density holds a constant linear relationship to temperature in one direction, which, looking at all the variables such as precipitation, nutrients, crowding, shadowing, soil porosity, etc., is hardly a viable assumption. And I would definitely question anybody who hand waved these concerns away or used a linear relationship such as the one you poked at.

        1. I suspect that the zeros (roots?) of the response function are narrower than that. But the most interesting question is really where the critical points and possibly the inflection points of the response curve lie.

          1. You can use a constant (+1 or -1, i.e. (TGM-T)/abs(TGM-T)) to help, or an epsilon (+0.000001), to avoid depending on indeterminate answers.

  6. It seems to me that the greatest problems in assuming a linear response would occur if the temperature varies across a large range, or if the mean temperature is near a critical or inflection point. So in order to know whether the linear assumption is appropriate, one needs to know the species’ response to a range of temperatures, one needs to know the mean temperature the species would have encountered, and the range, to make sure that at no point would the local climate have crossed a critical point or inflection point of the response curve. The problem is that one is reconstructing the past climate’s range; one does not already know it. But to properly reconstruct the climate using tree rings or whatever with the linear model, one needs to know what the climate was ahead of time! One obviously cannot do that. Of course, one could work from the response curve rather than the linearity assumption. The problem is that the inverse of the response curve is not a function, and thus while one could apply it to the tree ring data, one would not necessarily get a unique solution for the climate at any given time.
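The non-uniqueness can be made concrete with a toy response curve (an illustrative parabola, not a fitted curve): inverting a single observed growth value returns two candidate temperatures, one on each limb, and nothing in the growth data alone says which is right.

```python
import numpy as np

def growth(T):
    """Toy unimodal response: peaks at 25 C, zero at 10 C and 40 C."""
    return np.clip(1.0 - ((T - 25.0) / 15.0) ** 2, 0.0, None)

temps = np.linspace(10.0, 40.0, 30001)
g = growth(temps)

# All temperatures consistent with one observed growth value
observed = 0.75
candidates = temps[np.abs(g - observed) < 1e-3]

# The matches split into two branches, near 17.5 C and 32.5 C
print(candidates.min(), candidates.max())
```

For this curve the exact solutions of g(T) = 0.75 are T = 25 ± 7.5, so a reconstruction must bring in outside information (the instrumental record, site metadata) to choose a branch.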

    1. Actually, there is a relationship, but be damned if I will tell the team for free. It would need lots of work that, according to their methods, has NOT been done. It is not linear, but it is computable. It is not inflection points, but fitting the curve and decompressing. A lot of this has already been done, but why tell those who are always yelling about peer-reviewed “literahchure” where and how to find it. Especially for those who mistakenly tell those with eight years of degrees that they need to study for eight years of degrees.

      Remember that temperature response in living organisms is one of the most studied phenomena, next to nutrients (hint: CO2), because it is easy. And what the IPCC states is 180 degrees opposite of what has been studied and is known.

      And if that is not enough, the Bern model can be shown to ignore what history and commerce have recorded for more than 100 years.

      Other gripes but got to go.

  7. “Different authors were able to control the nutrition, conductivity, spacing, etc, because these are unicellular plants.”

    Where is the graph for each of the above items when the others are controlled?

  8. They are in several publications. I don’t know how many are on-line, or how many different kinds of graphs are available. Most experiments were run in respirometers that controlled temperature to 0.1 C accuracy and were calibrated to a NIST thermometer. The respirometer I used was developed from NASA technology that measured pressure changes. To demonstrate the actual uptake, one would use a D.O. probe and metabisulphite, and convert to milligrams off the chart it drew. Look up Nitrosomonas, Nitrobacter, nitrifiers, and respirometry. I did this when it was on paper at a university library. Since then I have been studying, or actually running, experiments in real time in a wastewater system for 22 years. I don’t know what the literature states now, but I started out running at concentrations 2 to 3 times what the literature said one could, and am now up to 8 to 9 times what the literature stated in the 1980’s. The best all-round discussion and equations can be found in Metcalf and Eddy, Wastewater Engineering: Treatment and Reuse, IIRC.

    Also note that the graph is an idealized individual used for illustration. What one does in a respirometer is to set up controlled conditions, so there won’t be graphs, just statements that phosphate was at such, pH was controlled at such (since a pH of about 9.0 is ideal), etc. But to also use that graph, I could point out that the pH was too high or low, and that is why that individual maxed out at about 90%.

  9. “. . . the divergence actually could be proof that one can use trees as a decent temperature proxy.”

    Perhaps, but that is a pretty optimistic statement. You have to have the same trees across a decent amount of geographical space. You have to control for all the other factors: nutrition, humidity, sunlight, pests, disease, soil quality, etc. Also, this example is with simple unicellular organisms. It might work for large multicellular organisms, or it might not. Also, most growth occurs during a particular time of year for most trees. Therefore, even if you identify a growth spurt linked directly to temperature, it would not tell you anything about the overall temperatures for the entire year, just for the growing season. Lots and lots of challenges to be overcome. We are a long way from having decent treemometers.

  10. “. . . the divergence actually could be proof that one can use trees as a decent temperature proxy.”

    That statement, I would think, assumes that divergence is caused by, or related to, the down (right) side of the growth curve. I would also think that we have no current evidence for that assumption. I find it interesting to note that non-biological proxies can show divergence. Divergence could mean that the proxies simply are not responding to temperature, particularly if the proxies had a selection bias.

    Also your curve shows two temperatures for a given growth rate. How do you know which one to select without the instrumental record to guide you?

  11. climatereflections:

    Also, most growth occurs during a particular time of year for most trees.

    In addition, trees have more than one growth form, and a tree will switch forms depending on changes in the environment. Yamal specimen yad061 is a candidate for that.

  12. Bruce, here’s a pdf of a study comparing different influences on wood density. (Briffa was claiming for a while that it was superior to tree ring size as a temperature proxy.)

    Note that the main effects are precipitation related (especially periods of drought). The temperature effect never achieves significance in any of the samples studied.

    Of course if there is a multi-decadal correlation between temperature and precipitation, you might expect an interaction term to be significant between temperature and MXD, but you’d need to correlate between your “independent” variables as well as your subject measurement to glean anything from that. (In any case, I think the correlation between temperature and precipitation is very weak. There is probably a much larger correlation between precipitation and total solar exposure–something they didn’t control for.)

          1. These are intra annual statistics. Inter-annual observations say otherwise. That’s it.

          2. No. They are the result of a multivariate analysis where you track all of the causative factors influencing tree-ring (or MXD) growth. And what I was reporting was the conclusion from those measurements. When one does this sort of analysis, the results I summarized above are typical regarding the primary cause of tree-ring growth.

            There are two reasons why you might find trees that respond to temperature only. One of these is that you have “temperature limited growth” (that is the common argument for using certain trees, e.g. in the Yamal population, and not others). The second is that there is a coincidental relationship between temperature and precipitation that produces an apparent interannual correlation between temperature and tree growth.

            The problem with the latter type of relationship is that there is no guarantee the correlation will survive “out of sample”. The problem with the former is knowing, from metadata, when the tree is acting like a treemometer and when it is not (climate can change naturally, and this can make a tree that is temperature limited in sample fail to be temperature limited outside of the sampling interval).

            I’m also curious why you thought Layman Lurker’s exposé was a demonstration of your pet theory that trees make good treemometers.

          3. Carrick and Phi, my own take on the divergence issue changed when I did that post. If Schweingruber mxd (the dominant mxd data archived) are good thermometers up to 1960 and then diverge, then the difference between observed NH temps and the mxd should be stationary up to 1960 and then diverge. The simple trend adjustment circa 1900 renders this difference stationary. IOW, good thermometers after adjusting with a trend. The 1960 break had no relevance at all. Briffa says in the emails that divergence is an artifact of standardization methods. I am not sold on that yet, but have not had time to check that angle.

          4. Layman, thanks for your comments… You said

            The 1960 break had no relevance at all. Briffa says in the emails that divergence is an artifact of standardization methods.

            Of course you can make any claims you want, as long as you don’t have to prove them true using quantitative analysis.

          5. Anon, read the post Phi links to. The trend adjustment not only improves the fit of the mxd reconstruction to NH temp, it also renders the residuals stationary. This is true whether or not the reconstruction is truncated at 1960.

          6. Hi Layman,

            That anon was me. I did read your post. Regarding “making any claim you want to” I was of course referring to Briffa’s claim “that divergence is an artifact of standardization methods” not to what you did, which was quantitative and included sources and data etc.

            I don’t see how your post relates to the issue raised here: that the primary dependence of the tree ring and mxd metrics is on causative agents other than temperature.

            Briffa’s MXD data was pre-screened after all to conform to the temperature record over some period.

            If you choose another set of MXD data at random, I suspect it won’t be a good correlate with temperature. In order for trees to work as good treemometers, you have to screen for sites that are likely to be “temperature limited” and you need a metric for determining when these proxies cease to be temperature limited. (IMO this is all doable.)
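The screening step can be sketched with synthetic data; every series and the 0.3 cutoff below are invented for illustration. Each candidate proxy is correlated with temperature over a calibration window, and only those passing the cutoff are kept. The catch, as noted, is that chance correlations let some pure-noise series through, and nothing guarantees the survivors stay temperature limited out of sample.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years = 100

# Synthetic "temperature": a trend plus a slow oscillation
temp = np.linspace(0.0, 1.0, n_years) + 0.3 * np.sin(np.arange(n_years) / 5.0)

# Half the candidate proxies track temperature plus noise; half are pure noise
signal = temp + rng.normal(0.0, 0.3, (100, n_years))
noise = rng.normal(0.0, 1.0, (100, n_years))
proxies = np.vstack([signal, noise])

# Screen on correlation with temperature over a "calibration" window
calib = slice(50, 100)
r = np.array([np.corrcoef(p[calib], temp[calib])[0, 1] for p in proxies])
passed = r > 0.3

# Real responders mostly pass; chance correlations can let some
# pure-noise series through, which is the out-of-sample worry
print(int(passed[:100].sum()), int(passed[100:].sum()))
```

Any noise series that passes the screen contributes nothing but a coincidental calibration-period fit, which will not hold outside the screening window.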

          7. Carrick I presume? I see what you are saying and I agree. Wrt the Schweingruber mxd, I think that reconstruction was a composite of most, if not all, of the archived series. But I can’t check that right now. These series are obviously pre selected sites where temp limited growth is hypothesized.

          8. Yes, it’s me… what I’m interested in is how they do the temperature selection, whether it’s via correlation or site selection. With Schweingruber, the data are gridded, which I’m not very happy about (I’d rather have the raw series). It’d be interesting to see how sensitive your conclusions are to changing how it’s gridded and to the use of other data sets.

            I think there is a problem with the early temperature data, your analysis seems to confirm it, or at least that’s the way I read it, and I think it’s a mistake to use that data for reconstructions.

          9. IIRC, the Briffa “hide the decline” reconstruction did not screen by correlation like Mann 08 did.

          10. I do have a question that needed answering much earlier in the discussion, and it might merely show my ignorance of the subject. My question is: in light of some tree proxies using age-adjusted tree ring widths and some using maximum latewood density (MXD), and claims that divergence in these proxies derives from the adjustment process, how do the MXD and tree ring width measurements and adjustments differ? It is my understanding that MXD is measured more or less directly as a cell density with X-rays, and it is that density (lesser density implying a shorter and thus colder growing season) that is assumed to relate to temperature.

            If my assumptions here are correct I would think that MXD requires no age adjustment unless the density measured depends on the tree ring width.

          11. Kenneth Fritsch:

            If my assumptions here are correct I would think that MXD requires no age adjustment unless the density measured depends on the tree ring width.

            I think the measurement protocol is pretty well spelled out here. Not sure why phi thinks Layman’s article addresses how you actually collect MXD measurements.

            Here’s the salient text:

            Densitometric measurements
            The two trees were felled after the 4 years of growth monitoring, and disks were cut where the dendrometers had been located. From each disk, radial samples following the point dendrometer axis were sawn for densitometric analyses. The samples were conditioned to a 12% water content and resawn in the transverse plane to a thickness of 2 mm. X-ray negative photographs of the disks were obtained as described by Leban et al. (2004). The resulting X-ray picture was digitized and processed with CERD software (Mothe et al. 1998). The wood density profiles obtained are for each ring and are based on 100 measured positions.

          12. One note is that MXD stands for “maximum density”. MD or MED is mean density.

            Briffa’s claims have to do with MXD being a good treemometer. It’s not clear to me the physiology is understood well enough to make such a claim or that it’s based on anything other than pure univariate correlational study.

        1. Carrick, do you agree then with my assumption that MXD requires no tree age adjustments and that, unlike the tree ring proxies, it should be immune to artifacts arising from the Regional Curve Standardization method or other standardization methods used for adjusting tree ring widths? (Actually you do not adjust tree ring widths but rather divide the tree ring width series by the RCS curve to obtain an index.)

          When we see divergence in MXD series (and we do see plenty) it cannot be attributed to adjustments for tree age. I see Craig Loehle has posted here recently – I wonder what he thinks. I would think that MXD should on this account be a preferable proxy over tree ring widths.

          As for the LL effect, I think I need to restate it here and see if Layman Lurker agrees. The LL effect we are discussing here is the finding, generally, that the difference series between the regional instrumental temperature series and a tree proxy series (MXD) yields what appear to be deterministic trends, which can be both positive and negative and usually 2 or 3 in number. Different tree series result in different breakpoints for those deterministic-appearing trends. The latest-appearing trend has always been positive.

          The reason I asked about MXD adjustments is because I wanted to rule out an adjustment algorithm as the cause of the LL effect. Those linear trends in the difference series would appear to me to put suspicion on an artifact of some kind for the origin of this observation, but nothing simple comes to mind. I just cannot see how a divergence effect would be revealed as linear trends and give such concise breakpoints in the series.

          1. Hi Kenneth,

            I would agree with your assessment that, because of the lack of an adjustment, the divergence for MXD certainly couldn’t be explainable in those terms. I hadn’t thought about it that way.

            Craig’s explanation certainly applies though.

          2. Hi Kenneth. When I worked through that post, I was expecting to see white(ish) differences up to 1960 and then a trend thereafter showing the divergence. IOW, the trend break should have been at 1960, not circa 1900. I am convinced now that there is no 1960 divergence in the Schweingruber mxd. Period. I will happily concede this point if someone can demonstrate how this should be interpreted otherwise.

            The only question left for me is what the cause of the trend differences is. I agree with you, Kenneth, that it has to be an artifact of some kind. Two linear trends with a clean break at 1900? Not likely to be natural. I believe I demonstrated in the post that this difference is basically a straight line, and not stochastic noise that looks like a trend. One could actually model the full Briffa01 (untruncated) reconstruction using the Jones annual NH temp series, the trends I calculated in my post, and a properly modelled white noise time series. That seems incredible to me. We have seen “bodges” and “tricks” and “CO2 fertilization adjustments” and thermometers pasted onto uncooperative proxy series, so I don’t blame Jeff for being suspicious that temps have been knitted into the reconstruction somehow, as the trend-adjusted fit seems almost too good. However, my inclination is to look for artifacts in either the temperature series or the proxy reconstruction methods, as Briffa mentions in the emails. I have done some work since the post and I hope to continue exploring this. I am so busy it is nuts for me right now though.
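The kind of check described above can be sketched on synthetic data; the series, break year, and trend values below are invented for illustration and are not Layman Lurker's actual results. Fit separate linear trends on either side of a candidate break in the difference series and see whether removing the piecewise trend reduces the residuals to something noise-like.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 1995)
break_year = 1900  # hypothetical break year, for illustration only

# Synthetic proxy-minus-instrumental difference series: two linear
# trends joined near the break, plus white noise
diff = np.where(years < break_year,
                0.30 - 0.004 * (years - 1850),
                0.10 + 0.006 * (years - break_year))
diff = diff + rng.normal(0.0, 0.05, years.size)

# Fit a separate linear trend on each side of the candidate break
pre, post = years < break_year, years >= break_year
s1, i1 = np.polyfit(years[pre], diff[pre], 1)
s2, i2 = np.polyfit(years[post], diff[post], 1)

# Residuals after removing the piecewise trend should look like white
# noise if "two trends with a break" describes the series well
trend = np.where(pre, s1 * years + i1, s2 * years + i2)
resid = diff - trend
print(round(s1, 4), round(s2, 4), round(resid.std(), 3))
```

A real analysis would also scan candidate break years and test residual stationarity formally; this only shows the mechanics of the piecewise fit.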

            That will be my last comment on this as John has written a fine post and I don’t wish to get off on a side track discussion.

    1. Carrick, are you aware that trees store heavy metals in their trunks? Things like mercury (from power stations) and lead (from tetraethyl lead) are laid down at the hard/soft wood interface.
      Oddly enough, these heavy metals give high X-ray absorbance, and this technique has been used by ecologists to track environmental heavy-metal pollutants.
      As long as there are no changes in the atmospheric levels of heavy metals, then you can theoretically use X-ray density as a thermometer; if humans have been introducing metals like Hg or Pb into the atmosphere, well, then you can’t.

      1. Doc Martyn,

        “As long as there are no changes in the atmospheric levels of heavy metals then you can theoretically use X-ray density as a thermometer; if humans have been introducing metals like Hg or Pb into the atmosphere, well then you can’t.”

        The problem is that trees seem to suffer from a deficit of density, rather than an excess, from the early twentieth century.

    1. Remember fellas, one of the points I was trying to make is that you have to define what temperature sensitive is, and provide proof that you have it correct. Your arguments wrt Briffa ignore that it was his job, and Mann’s, Cook’s, etc., to define and demonstrate it.

      This is the main failing of dendrothermometry.

          1. Yep that is my point too. It’s the responsibility of the person using an instrument to demonstrate it functions as he claims, not the responsibility of his critics to show why it fails.

          2. But it IS their responsibility to respond and answer valid criticisms. That is another failing with this group. The up-down series, the cherry picking, the hand waving, etc. And what was Dr. Briffa’s response: we are WORKING on it. Yeah, right. Too bad he didn’t say and do “We are working with our critics on this problem, and would like to thank them for their concern and their input.”

    1. Eric’s fascination with long-term (70+ year) warming in the Antarctic is especially interesting, given that the IPCC itself admits that only the warming since 1970 is associated with anthropogenic forcing.

      So what’s his explanation for the warming of the Antarctic prior to 1970 (and the lack of warming post-1980)?

      I’d post there, but one gets tired of the disrespect that honest questions are greeted with (as opposed to the carte blanche treatment given to ad hominems against people and groups who they view as “enemies”).

    2. Sunshine… I wouldn’t be surprised if the answer is in very-long-period oscillations in the atmosphere-ocean system (e.g., the 56-year period that shows up in proxy data).

      It well could be that over periods of 25-30 years you could have the Antarctica cooling while the rest of the world warmed (in fact, if you go 1950-1975 or so, the evidence is it warmed while the rest of the world either stayed constant in temperature or even cooled.)

      1. Or that Greenland ice retreat occurred just as much in the 1930s. Just cycles. Nothing to do with CO2.

        “Long-forgotten aerial photographs of Greenland from the 1930s, rediscovered in a castle outside Copenhagen, have allowed researchers to construct a history of glacier retreat and advance in the area. ”

        “Analysis of the images reveals that over the past decade, glacier retreat was as vigorous as in a similar period of warming in the 1930s. However, whereas glaciers that spill into the ocean retreated rapidly in the 2000s, it was land-terminating glaciers that underwent the fastest regression 80 years ago.”

        http://www.nature.com/news/rediscovered-photos-reveal-greenland-s-glacier-history-1.10725

        1. Sunshine: “And temperature in parts of the USA peaked in 1921 or 1934.”

          Remember though that as you decrease the area that you are measuring over, the relative magnitude of natural fluctuations increases. (And as you increase the area, these natural fluctuations tend to cancel out.)

          Not sure looking at small regions is instructive about global temperature.

          (The argument over the impact of a 0.8°C change in global temperature is a completely separate issue from whether it is occurring and how much of that can be attributed to human causes.)
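          A quick way to see the cancellation effect (assuming, unrealistically, fully independent regions; real spatial correlation makes the cancellation weaker than this):

```python
import numpy as np

rng = np.random.default_rng(1)

n_years = 2000  # long synthetic record so the standard deviations are stable

# One small region: unit-variance natural fluctuations.
small_region = rng.normal(0.0, 1.0, size=n_years)

# A large area treated as the average of 100 independent regions.
large_area = rng.normal(0.0, 1.0, size=(n_years, 100)).mean(axis=1)

# Averaging N independent regions shrinks fluctuations by about 1/sqrt(N):
# roughly 1.0 for the single region versus roughly 0.1 for the average.
print(small_region.std(), large_area.std())
```

          So the same natural cycle that dominates a state-sized record can be nearly invisible in a global average, and vice versa.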

          1. To put it in simplistic terms, there are a bunch of ups and downs to the cycles. And the cycles are not in sync between regions. If you pick the right ones and lump them together you get pseudo-proof of a slight upward change in the average. But those individual cycles have nothing to do with CO2, and the CO2 signal can’t be seen in smaller areas because it just doesn’t exist.

            If you can’t see a CO2 response in state-sized regions, it doesn’t exist. Lots of those up-and-down cycles show up in the state graphs, but those are the same ups and downs you see in sunshine data, and would see if more sunshine data were available for easy graphing.

            http://sunshinehours.wordpress.com/2012/05/25/heathrow-sunshine-vs-tmax/

            http://sunshinehours.wordpress.com/2012/05/15/more-sunshine-in-the-netherlands/

            http://sunshinehours.wordpress.com/2012/03/13/ebro-observatory-spain-and-bright-sunshine/

          2. I think you’re confusing existentialism with measurability now. 😛

            If you want more solar data go here.

            30 years of photovoltaic measurements from 239 sites in the United States.

            Have a go, see what you find.

          3. Thanks. Got anything more recent?

            “There were two entries for Arkansas. Data ends in 1990. There are multiple columns. I chose the first: “Flat-Plate Collector Facing South at Fixed Tilt=0″.

            For July the correlation was .629 and .685. Considering that I am comparing local sunshine to the whole states temperature, not bad. Too bad there isn’t more recent data.”

            http://sunshinehours.wordpress.com/2012/05/30/arkansas-noaa-temperatures-versus-some-old-sunshine-data/

  13. Late to the party. Another complication is a recent paper (sorry, at home so don’t remember title) showed that other biases can occur due to mixing tree ages. In brief, the trees that live the longest will have been slow growing when young (as I also argued in several papers years back). In a mixed age population, the younger trees will be faster growing on average, and the oldest trees will be used to reach farther back in time to when they were young, but slow growing. This will give recent years a boost in assumed growth rate compared to the oldest times, and voila, a hockey stick–it also plays havoc with the RCS method because a single age correction curve does not work for the whole population.
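    A toy simulation of this selection effect (all numbers are invented; the “slow growers live longest” rule and the negative-exponential age curve are assumptions for illustration, and RCS is reduced to a single pooled age curve):

```python
import numpy as np

rng = np.random.default_rng(2)

n_trees, end_year, span = 400, 2000, 500
vigor = rng.uniform(0.5, 1.5, n_trees)        # per-tree growth rate; NO climate signal
lifespan = np.round(250 / vigor).astype(int)  # assumed rule: slow growers live longest
age_now = np.minimum(lifespan, rng.integers(10, 500, n_trees))  # age when cored

rows = []  # (calendar_year, ring_age, ring_width)
for v, a_max in zip(vigor, age_now):
    ages = np.arange(a_max)
    widths = v * np.exp(-ages / 150.0)        # negative-exponential age trend
    rows.append(np.column_stack([end_year - a_max + ages, ages, widths]))
rings = np.vstack(rows)

# RCS-style standardization: one pooled age curve for the whole population.
max_age = int(rings[:, 1].max()) + 1
age_curve = np.array([rings[rings[:, 1] == a, 2].mean() for a in range(max_age)])
index = rings[:, 2] / age_curve[rings[:, 1].astype(int)]

# Chronology: mean index per calendar year.
years = np.arange(end_year - span, end_year + 1)
chron = np.array([index[rings[:, 0] == y].mean()
                  for y in years if (rings[:, 0] == y).any()])
```

    With no climate signal in the simulated widths at all, the chronology still rises toward the present, because the early calendar years are populated only by the young rings of the slow-growing, long-lived trees.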

    1. Yes if the homogeneity assumption does not hold, then I believe RCS can create trend artifacts in a reconstruction. Two trends with a sharp break would have to be some type of modified RCS method. I don’t believe a force fitted negative exponential growth function could produce such an artifact.

    2. Once again, not properly defining what temperature sensitivity is and is not continues to raise its ugly head. I find it strange since faster growth in young specimens, even bacteria, is so well known that this should have been the first consideration of a what-is/is-not stat model. This growth spurt is used in bioreactors all the time to keep efficiency up in the culture.

      1. John, don’t you agree that trees are more complex than single-celled organisms? 😉

        Unlike single-celled organisms, they have multiple forms they can take on (shrub-like character versus tree-like, for example), strategies for dealing with damage from insects (limb loss, strip-barking, etc.), and growth patterns that change depending on how optimal their growing environment is.

        Small trees typically grow faster than big trees. Sometimes you have to wait a few years until they get their root system established (depends on soil prep and other things for planted trees as opposed to volunteers.)

        1. There is an answer to your point. It is that you have to define what temperature sensitivity is and is not. The other part is that one has to have enough samples of the “is” part to extract the signal despite certain changes. Drift such as temperature or other acclimation, and interference such as fertilization, are usually handled with error bars. Something I most definitely do not recognize in the work. Take the hide-the-decline deal: have you looked at how large the CIs would get when the calibration won’t calibrate? Literally floor to ceiling. How can one support the nonsense NICK does with a calibration error of this magnitude?

          1. Now you’re getting into stuff I haven’t looked at in a few years. (So I need to go back and review.)

            What I can tell you is there isn’t a unique growth-rate versus time curve for a given tree species in a given location at a given temperature. Tree growth shows hysteresis: how trees respond now depends on their accumulated history. So theoretically, plant two genetically identical trees of the same size, girth, root mass, etc., in exactly the same growing environment, and the two trees will in general have different growth patterns. Differences in the management practices of the growers the two trees came from will affect how they perform in the ground.
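            A toy state model of that history dependence (the dynamics are entirely made up; the point is only that growth depends on an accumulated reserve, not just on today’s temperature):

```python
import numpy as np

def growth_step(reserve, temp, opt=20.0, width=8.0):
    """One season: growth draws on reserves accumulated in past seasons."""
    response = np.exp(-((temp - opt) / width) ** 2)  # inverted-U temperature response
    growth = 0.5 * reserve * response                # growth limited by stored reserves
    reserve = reserve - growth + response            # photosynthesis replenishes reserves
    return growth, reserve

def run(temps, reserve=1.0):
    out = []
    for t in temps:
        g, reserve = growth_step(reserve, t)
        out.append(g)
    return out

# Two "identical" trees; the final season's temperature is the same,
# but the histories differ.
benign_history = [20, 20, 20, 20, 20]
cold_history = [5, 5, 5, 5, 20]

print(run(benign_history)[-1], run(cold_history)[-1])  # different final growth
```

            Same final-year temperature, different final-year growth, purely because of the different histories.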

            You really have to get into tree phenology to have a hope of really decoding the temperature record from the tree.

            I don’t think it’s possible to talk about it simply in terms of temperature sensitivity (and to the extent that it is possible, as some claim, each tree still gets assigned its own sensitivity… these behave as “weighting factors” which is why one very abnormal growing tree can significantly distort the reconstructed temperature for the sample population).

            Anyway that’s my take. If Craig is around (*nudge*), it’d be interesting to hear his comments.

            By the way, I’m pretty sure Michael Mann never took a phenology course. He just puts everything in the Mannomatic and lets it crank.

    3. “Once again, not properly defining what temperature sensitivity is and is not continues to raise its ugly head. I find it strange since faster growth in young specimens, even bacteria, is so well known that this should have been the first consideration of a what-is/is-not stat model. This growth spurt is used in bioreactors all the time to keep efficiency up in the culture.”

      John Pittman, sometimes I have problems following these discussions, but I think it is important to point out that the RCS is a strategy to remove the effect of tree age on tree ring sizes. It is my understanding that a number of trees are required that reached a given age in different past calendar years. The thinking behind this strategy is to obtain an average growth curve over different calendar years for given tree ages such that climate effects cancel out and you are left with the age effect. Obviously there are some assumptions that can only hold approximately in reality; however, an early growth rate is factored into the RCS.

      Craig’s problem with the RCS is that it does not account for variations in early growth between older trees and younger trees in these samples because, I think, most younger trees do not make it to being older trees and the faster growing young trees are particularly prone to an early demise. What I wanted was a link or reference to evidence for this and how we would know that younger trees are growing at a rate that indicates an early demise.

      1. I don’t think it is necessarily “the faster growing young trees are particularly prone to an early demise.” First, more young trees will die as compared to older trees. Next, consider that ring size and height scale roughly as a cubic function, while leaves and roots tend to scale as an area function. RCS, I think, is a clever attempt to find a way to standardize. But one of the problems is that the factors that tend to make a tree long-lived should be different from those in one that dies younger, or why else the difference? As to a reference, I will have to look, but it may take a while. I believe some of this work was done for forestry and led to the practice of culling trees at certain times or ages in monoculture pine forests. But it also highlights the non-stationarity that is probably a factor with the long-lived. I don’t have forestry references at home or work.

  14. Craig Loehle said
    May 28, 2012 at 9:54 pm

    Craig, do you agree that, given the MXD measuring process, what you say in your post applies only to tree proxies using tree-ring widths and the RCS age adjustment method (or some variation of it), and not MXD proxies?

    I have read in other papers what you suggest in the post above, but it needs to be made clearer why the older trees grew slower at the same age as the younger trees in the more recent part of the series. I guess it is because faster growing trees would have died before reaching old age. It would be nice to see a link to that evidence. It would assume that the living young trees are all or mostly all going to die before old age. How would one determine whether a tree’s growth rate in its early years will lead to an earlier demise?

  15. If John & Geoff agree, I would like to submit this analogy, either in long form here or elsewhere as suggested.

    There is a clear, simple analogy that I will use to clarify some of the argument above. It is the chemical analysis of a natural material for a constituent element that we’ll label as X for brevity. Let’s go find X.
    In our beginning, there is no known way to determine the quantity of X in any sample type. So, for
    Step 1, the chemist obtains or creates some pure or purified X and exposes it to a number of reagents selected with training and knowledge. In this example, the chemist finds that a reagent R changes from colourless to red when mixed with X. There is the basis for a colourimetric method.
    Step 2 is important for the dendro discussion above. The test is to confirm that X and only X causes R to change colour. This could be a limitless test, given that so many chemicals exist, so the specificity testing is reasonably limited to other components expected to be in the sample. For example, in the analysis of rocks, one would test those components known to form rocks, concentrating on those that have some similarity to X in either chemical behaviour or concentration. This step is often done substance by substance in large excess of the envisaged concentration of X, so that an ideal, simple, first order interaction says “R turns red only by reaction with X”, not by reaction with any other substance. (There are many books covering average major, minor and trace element abundances in rocks).
    Step 3 is to examine the relation between X and the colour change to red. In a typical ideal case, more X produces more red. This step is a multi-part process. First part, the reagent R is supplied in excess and many different concentrations of X are studied. A relationship between X concentration and red response is established via a colorimeter. An optimum amount of R is deduced, so that X gives the highest sensitivity. Let us say that there is a linear colorimetric response of maximum slope for a given R and various X at usual gross abundance. The next part is to show that this calibration, here the slope of the red response, is repeatable over time (with an eye on the stability of all reagents when bottles are opened, different batches of R, etc.). The next part is to investigate multivariate effects on the red. Does the calibration stay constant when another component S is added to the mix; and then if S and another substance T are together added to the mix, and so on until it becomes impractically large to do ANOVA-type analyses of the interactions (if any) between components. It can be at this stage that one looks for side effects, such as a precipitate forming with a certain combination of additives, upsetting the proper function of the colorimeter. To proceed, we shall assume that we have absolute specificity and best sensitivity.
    Step 4 involves the preparation of a set of calibration standards. These are commonly synthetic, sometimes as simple as various concentrations of X in water or in a stabilising environment such as a mineral acid or alkali. OTOH, the standards might need to be made in a solution that mimics the main features of the test sample after it is solubilised. When a final analysis is performed, it is common to insert a dozen or so standard samples at the beginning and again at the end of a run.
    Step 5 is confirmatory. It involves primary standards, materials that have concentrations of X derived by other laboratories, by ‘round robin’ exchanges and/or by different analytical methods.
    This analogy lacks the time dimension that is involved with palaeo reconstructions. However, it demonstrates some procedures that should be present in palaeo work, these being:
    1. A reproducible relation is established between the subject of the test (that might be tree wood density within a ring) and the variable for which we seek response (that might be temperature).
    2. The relation is tested for specificity, first to one possible confounding variable at a time, then in a multivariate case.
    3. The method is tested for precision (such as repeatability from day to day of a calibration graph) and accuracy (by comparison with results from different labs, different instrumentation).
    4. An estimate of confidence is made, usually before the method is adopted for commercial use. Criteria (if any) are formulated a priori, and with mechanisms understood, to reject any sample types that have been found not to conform with the calibration. An example might be that rocks with certain identifiable mineral content are atypical and unsuited, e.g. carbonate rocks.
    5. If, as the method becomes routine, it is discovered that some results are questionable, the work is stopped until the cause of the error is solved, or the test is abandoned as unreliable.

    Some things are NOT done. These include using a calibration curve taken from another laboratory using similar, but not the same, instruments. They include failing to immediately discontinue the method and disseminate information on finding an error after all of the above steps have been worked through. They include practices that can cause immediate dismissal, like the knowing reporting of a wrong result, the fabrication of results, or the adjustment of results when no reason is known. In other words, there is ACCOUNTABILITY in the knowledge that not just the test can fail – the operator or the whole laboratory can be failed by the market.
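    For concreteness, Steps 3 and 4 can be sketched numerically. The standards, slope, and sample reading below are all hypothetical; the sketch only shows the calibrate-then-invert logic and a crude residual-based precision estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration standards: known concentrations of X (mg/L)
# and the colorimeter's red response, with small measurement noise.
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
true_slope, true_intercept = 0.052, 0.010          # invented "truth"
absorbance = true_intercept + true_slope * conc + rng.normal(0, 0.002, conc.size)

# Step 3: establish the calibration line (red response vs concentration).
slope, intercept = np.polyfit(conc, absorbance, 1)

# Crude residual-based precision estimate for the fit.
resid = absorbance - (intercept + slope * conc)
sigma = resid.std(ddof=2)

# Step 4: read an unknown sample back through the calibration.
def estimate_x(reading):
    return (reading - intercept) / slope

unknown = estimate_x(0.218)  # hypothetical sample reading, about 4 mg/L here
```

    A real laboratory method would add the repeatability, interference, and round-robin checks described above before any unknown is reported.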

    HISTORICAL AND INFORMATIVE NOTE.
    Some of the most expensive rock and soil samples ever analysed systematically by many methods came from the Moon. After many months of lab work, the initial results were compared, notably first in a paper by George H Morrison 1971, see where I have filed his paper to my web site –

    Click to access MR%20Analysis%20Eval%20Morrison%201971.pdf

    In short, it was soon found that most laboratories entered the comparison far too optimistic about their capabilities, especially their estimates of precision and accuracy. These were among the best labs in the world at the time. Now, 40 years on, there is little to suggest that precision and error estimates would be much reduced. Mainly, this is because of the human element with its defiant belief that each scientist’s lab is better than the next. That continues in laboratories and in a wide range of science, not excluding dendroclimatology work. Indeed, much dendro work as reported is woeful by comparison, lacking discipline, proper calibration and ACCOUNTABILITY.
    Virtually all prominent dendrothermometric studies would fail the limits encountered in the analytical chemical laboratory, to the extent that when error bounds are properly calculated, the critical response curve cannot be separated from “no response”. Agreed, dendro is often a harder task, but that is no excuse for a departure from basic principles.

    1. Part of what should be included is interference. This occurs as the instrument(s) and the methodology(s) become known. An example of this was a physical instrument that measured NH3 concentration. It was found that as conductivity increased in the water sample, it would read 10% higher than the actual standard. Another was a physical NH3 device for air, but it drifted continuously, making calibration good only for short periods of time. This information should be shared and problems discussed openly.

      1. Thank you, John,

        That type of event was meant to be covered briefly in my original by “The next part is to investigate multivariate effects on the red. Does the calibration stay constant when another component S is added to the mix; and then if S and another substance T are together added to the mix”. It’s rather similar to saying that tree rings can be affected by T, rain, nutrients, insects, diseases, etc. These effects can combine in synergistic ways that are very hard to unravel by models in a lifetime of work.

        Apologies for the length above. Compression makes understanding harder. However, this is elementary, based on work I was doing undergrad from 1965 & postgrad in 1968, as were many other people. The processes were formalised in publications from my employer, the CSIRO. The overall scheme has stood the test of time and it is amusing to see the latter-day “approximate” scientists trying to justify excursions with “We are investigating that and will report.” BEFORE that point, one issues a precautionary notice and withdraws previous data until the fix is made and published. One does not let it persist in a globally important document with “high certainty” like AR4.

        The inverted U growth curve that you give is excellent, thank you. Have you ever seen some work from the mid 1980s where Alcoa, the aluminium company, enclosed a mature tree, including the root ball, in a suspended phytometer, east of Perth, Western Australia, from memory about 15 m high, to study inputs and outputs? I’ve been unable to find the results, though I saw the tree.

        Confirmation of one side of the inverted U is easy. We used floor sweepings from our urea manufacturing complex to poison weeds. Very fast, very effective.

        1. No, I haven’t seen the study. A lot of this was done on trees under forestry research. With bio-physical mechanisms such as stem propagation and density I am familiar, but only cursorily. My niche in biology was ecology and botany, though not the bio-physical part of botany.

        2. The inverted U curve was certainly known in 1980 when I was enrolled in 2nd year Forestry at UBC. I know because I recall quite clearly it being explained to me by Dr. Worral, whose current area of research interest is “Tree Physiology; Tree Growth Mechanisms and Climatic Influence” according to the website http://farpoint.forestry.ubc.ca/fp/ .

          1. Thanks JT. I will look at this paper this weekend. While in Biology in the ’70s and ’80s at a major university, biology students were expected to get some form of bell curve and explain skew if present, and demonstrate that they had attempted to find skew if not present. But the general U shape was expected. Those who in higher-level undergraduate courses gave linear least-squares results got an “F” on their work.
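            A small sketch of why a skewed bell curve defeats linear least squares (the curve parameters are taken loosely from the idealized organism in the head post: peak response near 91% at about 28 C, with a wider cold side; the exact widths are assumptions):

```python
import numpy as np

def growth(temp, opt=28.0, left_width=9.0, right_width=5.0, peak=91.0):
    """Skewed inverted-U response: peak ~91% at ~28 C, wider on the cold side."""
    width = np.where(temp < opt, left_width, right_width)
    return peak * np.exp(-((temp - opt) / width) ** 2)

temps = np.linspace(5.0, 38.0, 200)
g = growth(temps)

# Two answers: any growth level below the peak is crossed twice.
level = 60.0
mask = np.abs(np.diff(np.sign(g - level))) > 0
crossings = temps[:-1][mask]   # one crossing below the optimum, one above

# A straight line is a poor model for this response.
slope, intercept = np.polyfit(temps, g, 1)
pred = intercept + slope * temps
r2 = 1.0 - ((g - pred) ** 2).sum() / ((g - g.mean()) ** 2).sum()
```

            This is also Dr. Loehle’s two-answers point: any sub-peak growth level maps back to two temperatures, one on each side of the optimum, so a linear calibration cannot recover temperature from growth.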

Leave a comment