## Tamino’s Folly – Temperatures did Drop

Posted by Jeff Id on October 21, 2008

Ok, the truth is I didn’t care before whether the downward trend was valid. I’m just a bit annoyed that Tamino (at Open Mind) and Real Climate won’t let me post on their blogs about it. In this post I demonstrate the folly of his conclusions.

Tamino has made a couple of posts arguing that the last 10-year drop in temperature is not statistically significant, so it isn’t real. He went too far in his latest one and began claiming it was a tactic of some kind of creature called a denialist to confuse and confound the public.

Let’s see what Tamino has been saying on his blog link HERE.

Some of you might wonder why I make so many posts about the impact of noise on trend analysis, and how it can not only lead to mistaken conclusions about temperature trends, it can be abused by those who wish deliberately to mislead readers. The reason is that this is still a common tactic by denialists to confuse and confound the public.

I just hate bad science. First he points out how Bjorn Lomborg made some comments about temperature decreasing, after placing the ever more popular label of denialist on him, implying Lomborg’s statements were intended to confound and confuse the public. Here’s the main point of what Bjorn Lomborg said.

They (temperatures) have actually decreased by between 0.01 and 0.1C per decade.

Ok, so graphs like the one below are the reason Bjorn Lomborg is a denialist.

I copied this graph from Digital Diatribes of a Random Idiot – a great unbiased site for trends (link on the right). Note the slope of -.0082 (.01C/month units, or about -.00098 degC/year – thanks to the Digital Diatribes comment below) in the equation on the graph. Most of us know this is actual data and is correct; in fact every measure is showing similar results. The earth stopped warming – a very inconvenient truth. So Tamino, what’s the argument? Why are the evil and uncooperative denialists wrong?

Statistics of course.

Here come the numbers from Tamino.

The most natural meaning of “this decade” is — well, this decade, i.e., the 2000’s. So I computed the trend and its uncertainty (in deg.C/decade) for three data sets: NASA GISS, RSS TLT, and UAH TLT, using data from 2000 to the present. To estimate the uncertainties, I modelled the noise as an ARMA(1,1) process. Here are the results:

| Data | Rate (deg.C/decade) | Uncertainty (2-sigma) |
|------|---------------------|-----------------------|
| GISS | +0.11 | 0.28 |
| RSS  | +0.03 | 0.40 |
| UAH  | +0.05 | 0.42 |

All three of these show warming during “this decade,” although for none of them is the result statistically significant.

Ok, Tamino has calculated GISS, RSS and UAH – one ground measurement and two satellite. For those of you who don’t spend your afternoons and weekends digging into this: ARMA is a fancy-sounding method for what ends up being a simple process Tamino has used to estimate the standard deviation of the temperature. Sometimes it seems the global warming guys believe the more complicated the better, but no matter. He has a 2-sigma column, which represents about 95%. He then goes on to say that because the sigma of 0.28 or 0.40 is bigger than the trend, the trend is not statistically significant. He repeats the comment below.
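For the curious, the spirit of Tamino’s calculation can be sketched in a few lines. This is only an illustration, not his actual code: the ARMA(1,1) parameters and noise level below are made up, and it’s written in Python rather than whatever he used. It simulates trendless ARMA(1,1) “weather” and shows how widely fitted 10-year trends scatter:

```python
import numpy as np

rng = np.random.default_rng(0)

def arma11(n, phi=0.7, theta=0.3, sigma=0.1):
    """Simulate an ARMA(1,1) series: x[t] = phi*x[t-1] + e[t] + theta*e[t-1]."""
    e = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
    return x

# Fit a straight line to many simulated zero-trend 10-year monthly series
# and look at the spread of the fitted slopes.
months = np.arange(120)
slopes = []
for _ in range(2000):
    y = arma11(120)
    slope_per_month = np.polyfit(months, y, 1)[0]
    slopes.append(slope_per_month * 120)  # convert to degC per decade

print("2-sigma spread of 10-year trends (degC/decade):", round(2 * np.std(slopes), 3))
```

Even with zero underlying trend, individual decades show sizable apparent slopes – which is the legitimate point of Tamino’s earlier post.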

Let’s make the same calculation using data from January 1998 to the present:

| Data | Rate (deg.C/decade) | Uncertainty (2-sigma) |
|------|---------------------|-----------------------|
| GISS | +0.10 | 0.22 |
| RSS  | -0.07 | 0.38 |
| UAH  | -0.05 | 0.38 |

Finally one can obtain negative trend rates, but only for 2 of the 3 data sets. But again, none of the results is statistically significant. Even allowing this dreadfully dishonest cherry-picked start date, the most favorable…

Now Tamino claims to be a statistician, so I can’t see how he made such a simple boneheaded error, but if he wants to pitch softballs, I’ll hit ’em. Just to make sure he’s in good and deep, here’s one more quote.

I’ve previously said “Those who point to 10-year “trends,” or 7-year “trends,” to claim that global warming has come to a halt, or even slowed, are fooling themselves.” I may have been mistaken; is Lomborg fooling himself, or does he know exactly what he’s doing?

So, Mr. Lomborg, we’re all very curious: how did you get those numbers?

### Wrong turns everywhere

The first and really obvious error Tamino makes is referring to the short term variation in temperature as noise. Noise in the context of sigma is related to measurement error. How can we determine the measurement error of the three methods GISS, RSS and UAH? The graph of the three is below.

The first thing you notice from this graph is that the 3 measurements track each other pretty well. The signal is therefore **not completely noise**. So what is the level of noise? We have 12 measurements per year times 29 years, so we don’t need ARMA or other BS – we can simply subtract the data. I put the numbers in a spreadsheet and calculated the difference between RSS and GISS, RSS and UAH, and UAH and GISS. With 348 measurements for each type of instrument I was able to get a very good estimate of the standard deviation of the actual measurements. Again, no ARMA, just the difference between the graphs.
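The mechanics of that differencing trick can be sketched with synthetic data (the real GISS/RSS/UAH series aren’t reproduced here – the noise levels below are invented for illustration, and the sketch is in Python rather than a spreadsheet):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 348  # 29 years of monthly anomalies

# Three stand-in records: a shared "climate + weather" signal plus
# independent per-instrument measurement noise (all values made up).
signal = np.cumsum(rng.normal(0.0, 0.05, n))
giss = signal + rng.normal(0.0, 0.05, n)
rss = signal + rng.normal(0.0, 0.05, n)
uah = signal + rng.normal(0.0, 0.05, n)

# Differencing cancels the shared signal; what remains is instrument noise.
for name, d in [("GISS-RSS", giss - rss),
                ("RSS-UAH", rss - uah),
                ("GISS-UAH", giss - uah)]:
    print(name, "one sigma:", round(float(np.std(d)), 3))
```

Since the difference of two independent noises of SD s has SD sqrt(2)·s, each instrument’s own noise is the printed sigma divided by about 1.41.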

GISS – RSS: one sigma 0.099, two sigma 0.198

RSS – UAH: one sigma 0.101, two sigma 0.202

GISS – UAH: one sigma 0.058, two sigma 0.116

These are actual numbers. They are substantially lower than Tamino’s estimated two sigma, but still bigger than the 0.1 C per decade – although the two sigma for GISS – UAH is within a 90% confidence interval already!

This isn’t the end though. Tamino ended his discussion there implying shenanigans and other things of those who see a trend.

### Both of our standard deviation calcs are for a SINGLE measurement NOT a trend.

This is a big screw up. How a self-proclaimed statistical expert could miss this is beyond me. Anyway, none of us is universally right every day, but most hold their tongue rather than post a big boner on the internet. Most scientists realize that when you take more than one measurement of a value, you improve the accuracy. So, being a non-genius, I used R to calculate the statistical certainty of the slope when taken over 10-year trends. Thanks again to Steve McIntyre for pointing me to this software. I don’t love it, but it is convenient.

```r
library(nlme)  # gls() lives here

# Load the three difference series; column order is assumed to match
# the results quoted below (GISS-RSS, RSS-UAH, GISS-UAH).
t = read.csv("c:/agw/giss data/10 year variation.csv", header = FALSE)
x = 1:length(t[, 1])

y = t[, 1]
a = gls(y ~ x)
confint(a)
confint(a)[2, 1] - confint(a)[2, 2]  # width of the slope's 95% interval

y = t[, 2]
a = gls(y ~ x)
confint(a)
confint(a)[2, 1] - confint(a)[2, 2]

y = t[, 3]
a = gls(y ~ x)
confint(a)
confint(a)[2, 1] - confint(a)[2, 2]
```

This script loads the difference series (e.g. GISS-UAH), fits a line to each, and reports the 95 percent confidence interval (about two sigma) on the slope coefficient. The confidence on the slope of the trend is as follows:

GISS – RSS: two sigma 0.00108 degC/year

RSS – UAH: two sigma 0.001068 degC/year

GISS – UAH: two sigma 0.0005154 degC/year

Despite a standard deviation of 0.02, we have a slope measurement roughly twenty times more accurate: about 0.001 degC/year!
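The reason many points pin down a slope so tightly is textbook least squares: for independent noise of standard deviation sigma, the slope’s standard error is sigma / sqrt(sum((x − mean(x))²)), which shrinks rapidly as samples accumulate. A short Python sketch with illustrative numbers (not the actual difference-series sigmas):

```python
import numpy as np

sigma = 0.1           # per-measurement one-sigma noise (illustrative)
x = np.arange(120.0)  # 120 monthly samples over a decade

# Standard error of an ordinary least squares slope:
#   SE(slope) = sigma / sqrt(sum((x - mean(x))^2))
sxx = np.sum((x - x.mean()) ** 2)
se_slope = sigma / np.sqrt(sxx)

print("per-point sigma:", sigma)
print("slope one-sigma (degC/month):", se_slope)
print("slope two-sigma (degC/year):", 2 * se_slope * 12)
```

Even though each point is only known to ±0.1, the decade’s slope comes out known to a few thousandths of a degree per year.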

## Conclusions

1. We can say with a high degree of certainty that we know the trend of temperature for any ten-year plot to within 0.01 degC/decade.

2. We can say that temperatures have dropped this past decade, just as our eyes looking at the graphs had already told us.

3. We can also say that Tamino owes a few more apologies.

### He and Real Climate still don’t let me post on their blogs!

I wonder why?

## Raven said

Jeff,

Noise in climate science is weather – not measurement error.

Weather includes seasonal variations and events like El Ninos and La Ninas.

ARMA is an attempt to model weather noise.

I suggest you read through lucia’s blog if you want to find out more about modelling weather noise.

## Jeff Id said

Raven,

I have a full understanding of ARMA, ARIMA and FARIMA. I’ve been studying a bit.

I think you missed the point, by subtracting the three trends from each other I have isolated the measurement error from the weather. I then used the combination of ten years of values to determine the effect of the uncertainty on the slope.

## Raven said

Sorry, I guess I completely missed what you are trying to do.

So how do your uncertainties relate to the uncertainties that Lucia uses in her IPCC hypotheses tests?

## Jeff Id said

Raven,

It’s funny – CA did a similar analysis of the Santer paper, and I believe all of these have the same fault. They take the full variation as uncertainty. This inappropriately expands the confidence interval until the model matches.

## Chris H said

I have to agree with Raven’s first post – Tamino’s intention was to show that weather variations swamp any 10-year trend that you might see. To all intents and purposes, you can ignore measurement error, since weather variations completely swamp it (over a 10-year period), as you have actually shown.

And sadly, Tamino does have a point – we need more data before knowing whether Global Warming (not necessarily man-made) is continuing, or whether it has truly abated. Given that (at least until very recently) the Argo network was showing that the oceans have continued to warm (post 1998), while atmospheric temps have flat-lined, I am inclined to think that Global Warming is continuing apace.

One wonders what natural explanation GW could have, but historically speaking it may not be anything unusual – although this is hard to tell from “temperature proxies” (as you have demonstrated), human historical records seem to suggest that the MWP was real.

## Luis Dias said

Jeff, when you confuse “error noise” with “weather noise”, that’s a big red herring to me that you simply don’t have a clue of what you’re talking about. Sad. I thought this blog had some interest.

## vivendi said

If I understand correctly, Tamino has used the same basic data. Jeff is debating Tamino’s method of calculation. Neither Tamino nor Jeff has discussed the causes of the noise. So whose calculation is wrong?

## Jeff Id said

Chris,

This is the argument I expected. The confusion, however, is not on my part. The weather noise is accurately measured, and the variation is also accurately measured, so the trend is downward. There is a second point to my argument in Luis’s comment.

The big turd Luis stepped on was –

“I get a similar result, even with all the ‘weather’ noise included!!!”

I was going to do that tonight after a bunch of people stepped in it. I thought some people would figure it out though. The main difference is between one point and a bunch of data. Tamino used the SD of a single point to describe a trend – that’s a no-no.

What the three posters missed initially is that while the weather does vary, we cannot think of it as varying about some mean with the rest being noise. The variation is caused by natural processes which we measure. Tamino’s calc is faulty on two fronts: first, assuming the weather noise is the error, and second, using a single point to describe a trend.

The question is how well we know the trend, not what the future holds. A very common problem in engineering.

1 – We know the trend within a very small percentage of slope.

2 – We don’t know if it will continue for 30 years but the slope is currently negative.

## mugwump said

I have to agree with Luis at #6. The weather noise is real, and much greater than the measurement noise. Of course, the same reasoning can be applied at any scale to argue that even the trend over the 20th century may not be a trend at all but just an artefact of the “noise” of a longer term cycle.

Regardless, you are not doing yourself any favors with this one Jeff.

## Jeff ID said

Wow, I really can’t believe people don’t get this concept.

It’s so simple guys. How accurately do we know the slope?

We know it within +/- 0.001 degrees C/year.

Even with the full weather noise of the signal we know it within 0.002 degrees C/year. We’ll see that tonight. When you take a dozen measurements of a known error you improve the result. When you take 348 measurements you improve it a lot. When some of those measurements affect only the central portion of a linear fit, the effect on slope is very minimal.

I don’t understand why people don’t get this.

## Diatribical Idiot said

Jeff, great post, as usual.

A couple notes:

On the chart you have shown from my site, the -.0082 is not the slope on a per year basis, it is the monthly change in anomaly, where the anomaly is stated on a per 0.01 C basis. So, in terms of change per year in degrees Celsius, the actual slope needs to be divided by 100 and multiplied by 12. The actual annual temperature change, then, is -0.000984, which is essentially flat. Other shorter-term measures show more significant cooling, but the purpose of the graph above was simply to show how far back in the data one can go to show a non-warming trend.

I think one thing that may be confusing is that what you are showing by looking at the consistency in the measurements is that there is very little measurement error in the temperature data. This is different from acknowledging that the data itself has variability. The question is whether or not standard deviation is a good measure, and I think the answer is no. Tamino’s analysis falls apart for one of the very reasons he criticizes Lomborg. If you look at his designed example of an ARMA analysis, it is based on a linear trend and random noise. He demonstrates that, under this scenario, there can be periods that show a negative trend within a long-term increasing trend. And he’s right. But assuming that temperature follows this same pattern is absurd. There are numerous cycles impacting weather. We don’t have to agree on the impact of all these things to know that this is true. Even if estimated, a true standard deviation would account for the effect of known contributors to temperature. The fact that it may be difficult to quantify these contributors does not make the standard deviation measure any more correct.

To illustrate, suppose some process follows a sine curve. If we know that it follows the sine curve, we can properly estimate where the curve lies in the data set. Now, suppose the sine curve lies along an increasing trend line. The best fit trend line may well be true, but with a high standard deviation (or low r-squared) along the simple linear trend. If we properly recognize the sine curve, we can adjust the deviations from expected mean at each given time and dramatically reduce the error.
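That sine-curve point can be sketched numerically. This is toy data – the trend, amplitude, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 240)
# A known cycle riding on a linear trend, plus a little genuine noise.
y = 0.02 * t + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0.0, 0.02, t.size)

# Naive straight-line fit: the cycle gets lumped into the "noise".
resid_line = y - np.polyval(np.polyfit(t, y, 1), t)

# Least-squares fit that also models the known sine term.
X = np.column_stack([t, np.sin(2 * np.pi * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_full = y - X @ coef

print("residual SD, line only:", round(float(np.std(resid_line)), 3))
print("residual SD, line + sine:", round(float(np.std(resid_full)), 3))
```

Once the known cycle is modeled, the residual scatter collapses to the true noise level, exactly as the comment describes.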

Since Tamino’s exercise fails to consider any contributors to temperature, his standard deviation does not represent uncertainty at all. It represents a combination of uncertainty along with the impact of known (at least theoretically) contributors to temperature.

Not sure if this helps explain things or not, but I thought I’d give it a shot.

## Diatribical Idiot said

Sheesh. I really should proof my posts before submitting…

## WhyNot said

I have read this blog continuously since its inception and have not taken the time to review all the paleoclimatology papers, proxies, etc. I am not an expert climatologist. However, I do understand math, its meaning and the implications of its application to real world data.

This is an excellent blog. Why? Because the analysis of the data is correct and the methodologies used are correct. Therefore the results are correct.

So WhyNot begin to try to explain to all the people that severely missed the point. By the way, I did read Tamino’s blog, and his conclusion about the data is completely wrong. His results and conclusions on this particular analysis remind me of high school and definitely not college work. They get a result, don’t understand the meaning of the result or the methods used to produce it, and come to a conclusion that is wrong. However, it is typical of the vast majority of the population – very frustrating for me, as you might have gathered. Sorry, on to the point.

In most instances, there are several methods used to make the same type of measurement. As an example, let’s measure distance. One can use a ruler, a caliper, a laser, etc. Each will have its own accuracy, or if you wish you can call it a SD. If I take 100 people and use the same caliper to measure the diameter of a quarter and average that data, the average will be the diameter of the quarter within the accuracy of the caliper. Based on this data, I can then say the diameter is X +/- SD. The SD value will depend on 1, 2 … sigmas. Now do this to 1000 random quarters made within the last century. What will happen? The diameter X will vary – be noisy – however the SD will not change. That is a very significant and important point: the SD will not change.
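WhyNot’s quarter experiment is easy to simulate. A Python sketch with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
caliper_sd = 0.01  # caliper accuracy in mm (illustrative)

# 1000 random quarters whose true diameters vary slightly, each
# measured 100 times with the same caliper.
true_diam = rng.normal(24.26, 0.05, 1000)
readings = true_diam[:, None] + rng.normal(0.0, caliper_sd, (1000, 100))

per_coin_mean = readings.mean(axis=1)
per_coin_sd = readings.std(axis=1)

# The averaged diameters vary coin to coin, but the measurement SD does not.
print("spread of per-coin means:", round(float(per_coin_mean.std()), 4))
print("typical per-coin SD:", round(float(per_coin_sd.mean()), 4))
```

The means wander with the coins; the SD stays pinned at the caliper’s accuracy – the “SD will not change” point.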

So why then does Tamino’s 2-sigma SD change throughout varying periods? Very simply, it is NOT the SD of temperature, it is the SD of the temperature noise, and it is a garbage SD to use to make his point. I bet you say, “you just made our point”. To the contrary. Creating a trend line as shown in the first graph has mathematically accounted for the noise in the signal. The accuracy of the trend line is then entirely dependent upon the SD of the instrument or instruments. If the SD of the instruments is significantly smaller than the slope of the trend line, then we know the trend line to be accurate and TRUE. The slope is .008; the SD is .001.

Finally, the SD of weather is a CONSTANT and not a changing value. A changing value in SD indicates a lack of sample data. An analytical proof of this is beyond the scope of this reply.

Just to point out a very glaring yet unobserved point: if Tamino’s method of thinking is correct, then his SD of .28 to .42 would completely eradicate any notion of global warming, since in the second graph the worst anomaly is (sight averaged) around .45C. Using his logic that we cannot trend, we can definitely say that since 1979 temperature has been flat and there has been no global warming, because it is within the SD of weather.

WhyNot go figure that one out!!!!!

WhyNot “cherry pick”?

## WhyNot said

Accidentally posted before complete. I wanted to say;

WhyNot “cherry pick”, it seems to be the TREND in paleoclimatology.

## Chris H said

Jeff,

While it is possible that Tamino may have made some mistakes (I have not checked), I am still very much of the belief that you are missing a crucial point too. It seems to me that there are several classes of variations/noise/errors, and that you are mixing some of them up:

1. Measurement error, which your blog post quantifies as being quite small.

2. Weather noise, by which I mean “short term” effects (say 30 years or less), such as El Nino & the Pacific Decadal Oscillation (PDO). This is what most climatologies mean by “weather”.

3. Climate variation, by which I mean “long term” effects (say 30 years or more).

It is Climate variation that concerns Global Warming, but this is obscured by Weather noise in the short term (say a decade). This is what Tamino tried to prove (perhaps incorrectly), but my point is that you are trying to prove something ENTIRELY DIFFERENT.

## Chris H said

climatologies = climatologists

## Jeff Id said

There are two posts Tamino did. The first one I don’t disagree with and have been complimentary about. It involved the variations in weather modeled by ARMA and it was clearly and convincingly demonstrated that the noise level can give short term down trends in a longer cycle. I read it a long time ago, no argument here. He made some bad conclusions about the validity of the downtrend and failed to note that the downtrend could also be real. Unlike me, though, he cares which side wins. I only care about the bad science and corrupt government.

In his recent post he blasted Bjorn Lomborg for noting a downtrend claiming that it was a deliberate attempt by anti-warming guys to obfuscate the issue. I couldn’t disagree more. Tamino then presented some horribly flawed math to state his case.

I simply proved here that the trend is real and within the 95% confidence limits of the measurement noise level nothing more. Lomborg was correct, Tamino is wrong.

Actually in Tamino’s post he stated that the ten year trend is known with a certainty of .2 to .4 sd. I have clearly shown that the ten year trend is known to 0.001. I will show tonight when I have time that with the full (incorrect) variation included the trend is known to about 0.002 (haven’t completed the math yet).

I am making no claims about longer term trends being false, I only make the claim that this short term trend is true. I also make the claim that it is known to a high degree of certainty which is where besides the obvious failures in Tamino’s math, I disagree with him.

Does this mean the trend will not continue – no. Does it mean that the uptrend has stopped – no. Does that then mean the uptrend will definitely continue – no again. These issues are not addressed in any way by my post. It does mean that there is a statistically significant downtrend or flattening over the last decade.

This argument I make is therefore independent of the weather issue. That is why I am frustrated with some of the responses.

When people say there is a down trend or flattening this century THEY ARE CORRECT. Where they would go wrong is to claim from this data that the overall up trend won’t continue. They would be equally wrong to claim that the overall trend will continue as Tamino nearly does. Tamino went wrong by claiming it was false and the data was too noisy to detect it.

This is FALSE mathematically and logically.

BTW: I like #13, where he points out that by Tamino’s false 0.4 SD the entire global warming trend is invalid. Funny stuff really.

## omnologos said

I think one has to understand the warmers’ mindset.

Since they take as Truth that an increase in CO2 will bring warmer temperatures, given the fact that CO2 appears to be increasing, therefore in their minds temperatures must be increasing, regardless of what measurements say.

If measurements don’t show an increase in temperatures, then obviously that must be due to “noise”, that is defined as “everything that prevents temperatures from following the True upward Path”.

I have stopped arguing with warmers on this some time ago. I tell you, it is absolutely pointless.

## mugwump said

Jeff, forget about “measuring the trend” for a moment, because there really is no such thing as *the* trend. First and foremost, both yourself and Tamino are fitting (different) linear models to the data.

If the data are generated by a linear model plus noise, it is valid to make objective claims about trends based on a linear fit.

However, the temperature clearly is not generated by a linear model plus noise (that’s obvious just by looking at it). So a linear model does not match the underlying process, and therefore what constitutes a valid interpretation of the slope of a linear fit over a short period is up for debate.

You have shown that measurement noise is largely irrelevant to the interpretation of the linear fit, since it has little impact on the fitted line. Tamino is arguing that because the “weather” is highly variable, estimating the trend from a linear fit to a decade’s data is also invalid. Both of you are correct, but you are talking about different things.

Note that it doesn’t matter whether the weather or climate is signal or noise; either way it makes the underlying process nonlinear (either nonlinear signal or nonlinear noise).

A much more interesting approach to linear modeling of the temperature series is to throw in additional independent variables (ie, in addition to time): eg, variables representing El Nino events, Solar insolation, Volcanoes, etc. A linear function of those variables is much closer to the true temperature process. This is the approach taken by Douglass in this paper.
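That multi-regressor idea can be sketched with toy data. The “ENSO index” below is a made-up sinusoid, not a real index, and the coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 240
t = np.arange(n) / 12.0             # 20 years, monthly
enso = np.sin(2 * np.pi * t / 3.7)  # toy ENSO-like index (fabricated)
temp = 0.015 * t + 0.12 * enso + rng.normal(0.0, 0.05, n)

# Fit temperature against time alone, then time plus the ENSO index.
X1 = np.column_stack([t, np.ones(n)])
X2 = np.column_stack([t, enso, np.ones(n)])
b1, *_ = np.linalg.lstsq(X1, temp, rcond=None)
b2, *_ = np.linalg.lstsq(X2, temp, rcond=None)

print("residual SD, time only:", round(float(np.std(temp - X1 @ b1)), 3))
print("residual SD, with ENSO:", round(float(np.std(temp - X2 @ b2)), 3))
print("fitted trend with ENSO (degC/yr):", round(float(b2[0]), 4))
```

Adding the extra regressor soaks up variance the time-only fit had to call noise, tightening the trend estimate – the design choice behind the Douglass-style approach mentioned above.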

## Jeff Id said

When Bjorn Lomborg says 0.1 degC/decade, he is referring to a trend. Tamino is arguing his old post because he knows his latest was rubbish.

## Luis Dias said

Look, Jeff you seem a bright guy. You still don’t get it. When you write this:

…I can easily agree with you. But that’s not what Tamino is measuring. He’s measuring what he sees as weather variation. Because the weather variation is so great, one can easily cherry pick a starting date and a finishing date and get his own chosen trend. This is only possible due to what Tamino refers to as “weather noise”, and the question of whether speaking about “weather noise” is even possible is irrelevant to the point. It’s a technicality that doesn’t help you very much.

The fact that the climate has “decreased” its temperature “is obvious” only when you pick 1998 as the starting point. If you pick 2000, then it isn’t obvious at all, nor for most of the dates before or after. That’s why it’s called “Cherry Picking”. And it is, don’t deny it. Lucia has a lot of blog posts describing these problems in the correct way. Your 0.00000001 measurement error is completely irrelevant to the discussion at hand. The fact that you still think it is relevant is the red herring I am talking about.

Cheers.

## Jeff Id said

Luis,

Bjorn was attacked for his statement about temps dropping, Tamino showed some faulty math to prove Bjorn was wrong by saying temps have dropped this decade. I showed correct math to demonstrate Tamino’s mistake (something I would have considered doing on his blog if I was allowed to post there).

Tamino’s point about the normal weather variation causing downtrends (on a different Tamino post! than the one above) IS VALID. But I have not been able to reproduce his results yet.

This is entirely separate from the recent Tamino post where we KNOW TEMPS HAVE FLATTENED TO A HIGH DEGREE OF CERTAINTY. Weather noise or not.

Very frustrating.

When you made the comment in your first post that I don’t understand the difference between instrument and weather noise, it left me concerned that I didn’t word my post well enough. Now I have re-explained it several times and it isn’t getting through. The question of whether Bjorn is correct that temp trends have flattened within our measurement error is related to instrument measurement error only.

Whether you understand me or not, adding the weather variation affects my 95% confidence of 0.001 degree/yr by a factor of two. Tamino’s incorrect sigma is two times my sigma value, or 0.002 degC/year, and it makes no difference to the conclusion above!!

My entire calculation goes out of its way to isolate temperature trend uncertainty from weather noise. How is it possible that I don’t understand the difference? Sheesh.

BTW the 1998 date was picked by Tamino and Bjorn NOT ME. So I do deny I have done any cherry picking, unequivocally. In doing so you are indirectly attacking me for making conclusions related to longer term trends where NO SUCH CONCLUSION EXISTS.

Please read more carefully.

## Diatribical Idiot said

Jeff, not that you need my help at all, but I just commented on the Watts site with what I think is a pretty simple, more layman-ish boiling down of what a lot of people still seem to be missing about what you are showing.

By all means, let me know if I am off-track, but I think we’re on the same page. Hopefully it helps more than hinders.

## Jeff Id said

#23

Of course, you got the point well. No surprise considering your background.

Someone got a short response from Tamino regarding his mistake. He chose to bash Watts Up and me. I posted it there.

## Chris H said

@Jeff

“My entire calculation goes out of its way to isolate temperature trend uncertainty from weather noise. How is it possible that I don’t understand the difference?”

It seems that I may have misunderstood something in your analysis (I don’t claim to be an expert in statistics or climate), so I will defer further comment until/if I have time to closely re-read everything relevant.

## John F. Pittman said

Jeff, IMO, you needed to add a statement, that neither you nor Bjorn Lomborg made a projection or a prediction. I think too many people think that everything has to be, or is in reference to a projection/prediction. Of course, understandable in one way since Tamino was talking of ARMA and then you did. The other problem is that you stated “Tamino has made a couple of posts on how the last 10 year drop in temperature is not statistically significant so it isn’t real”. I think this helped set the stage that people misunderstood. They looked at the “statistically significant” and not the “so it isn’t real”. AGW/sceptic blinders I guess.

BTW, I get tickled when AGW bloggers go out of their way to try to differentiate the two words’ meanings, when, if you read what is said in the IPCC, if a CO2 scenario occurred as outlined, then the long term climate trend would have to match or the model would be wrong. Otherwise, they could not say what they do, and conclude that global warming of such and such is likely.

## hswiseman said

Hi Jeff,

I read your calculations as showing that the three measurement methods are non-spuriously correlated and, taken together, are an accurate method for taking the earth’s temperature over the last 10 years. This reasonably accurate temperature-taking method has a demonstrable negative trend over this time horizon. It is tempting to use this data to choose sides in the debate – I would prefer not to. The minor differences in RSS and UAH are dwarfed by the commotion created by those who want to bash Spencer while praising the merits of RSS. UAH/RSS set the lower temperature boundary, with GISS setting the upper limit.

It might be interesting to use this technique on the instrumental record taken as a set, together with satellite taken as a set, and then a combo of instrumental and satellite trend, which should give a fair representation of the total trend. (After this point I am just thinking out loud; take it for what it’s worth, if anything at all – there may be no magic in any of these numbers.) Do it for as long as the data allows (30 years has been referred to as a full climate cycle) and calculate a definitive long-term trend. One could then test the trend at points in time against something like a variable moving average or derivative of the trend (2 years/5/10/etc.) to find durations of the moving average or derivative that significantly correlate to the Sat/Instrument trend for a given duration (a 10-year moving average of the trend might generally correlate to a 7-year trend, or some such construction).

## Jeff Id said

Alright,

I take responsibility for apparently poor description of my post. Too many people assumed the wrong conclusions from it. I apologize for that.

I have posted a comment from digital diatribes of a random idiot which I think clears up the meaning a bit.

## Chris H said

@Jeff

Sorry, but after re-reading everything (inc. a couple of Tamino’s post), I still think that you & Tamino are trying to prove different things (i.e. talking at cross-purposes). Where you & he part company is when you say:

“The first thing you notice from this graph is that the 3 measurements track each other pretty well. The signal is therefore not completely noise.”

After this point you & he are talking about completely different things, and therefore you cannot claim to have disproven his point (since you haven’t actually addressed it at all, but rather have proven SOMETHING ELSE – something unrelated to what he discusses).

If you don’t like calling short-term variations “noise” then don’t. Call it (say) “weather variations” instead. But that does not stop ARMA from modelling “weather variations” in a convincing fashion – see here:

http://tamino.wordpress.com/2008/09/12/dont-get-fooled-again/

Just to repeat once more (before I give-up this particular discussion), your objection seems to be based on a particular (and highly specific) interpretation of the word “noise”. Where-as I (and seemingly Tamino) are taking a more practical view – if “weather variations” can be accurately modelled as ARMA, then it seems reasonable to label it “noise”. Just don’t get hung-up on that particular label when trying to extract it from real temperature data.

I have also posted this reply on your follow-up blog item:

https://noconsensus.wordpress.com/2008/10/23/a-better-explanation-of-taminos-folly/

## Bob B said

Chris H, Let me take a shot at what I think Jeff is trying to say.

The climate data could very well contain “noise”

Three data sets pretty much have measured the “same” signal+noise

Jeff is only now looking at the slope of the trend line produced by a great many samples of the signal plus noise. The data itself is accurate and not “noisy”.

From the many data samples one can accurately produce a trend line.

The trend line does indeed show a decrease in temperature over the last decade, which is what Lomborg asserts.

## Jeff Id said

Separate the two Tamino posts in your mind. The points are separate. Post #1 is not addressed by me. Post #2 is.

I’m going to list the errors in your comment.

1 – “Sorry, but after re-reading everything (inc. a couple of Tamino’s post)” —— First problem: I only address one post, not the entire body of Tamino’s website.

2 – “After this point you & he are talking about completely different things, and therefore you cannot claim to have disproven his point (since you haven’t actually addressed it at all, but rather have proven SOMETHING ELSE – something unrelated to what he discusses).” ——– AGAIN I HAVE DIRECTLY ADDRESSED TAMINO’S SECOND POST ONLY. A POST WHICH IS FOCUSED ENTIRELY ON WHETHER THE DOWNTREND IS MEASURABLE IN THE NOISE. IT IS MEASURABLE EVEN IN THE FULL NOISE. (CAPS FOR BOLD – NOT YELLING)

3 – “If you don’t like calling short-term variations “noise” then don’t. Call it (say) “weather variations” instead. But that does not stop ARMA from modelling “weather variations” in a convincing fashion – see here:” —— AGAIN, ARGUING MY POST WITH THE WRONG POST. I DON’T ADDRESS THIS POST AND ACTUALLY AGREE WITH IT.

4. “Just to repeat once more (before I give up this particular discussion), your objection seems to be based on a particular (and highly specific) interpretation of the word “noise”.” —— AGAIN NOT CORRECT. IT DOESN’T MATTER HOW YOU INTERPRET NOISE; YOU GET THE SAME RESULT AS ABOVE WITH THE FULL NOISE. I DIDN’T USE FULL NOISE BECAUSE IT IS NOT A CORRECT CALCULATION. I HAVE POINTED OUT REPEATEDLY ABOVE THAT FULL NOISE DOES NOT PREVENT THE EXACT SAME CONCLUSION. PLEASE READ IT.

5. if “weather variations” can be accurately modelled as ARMA, then it seems reasonable to label it “noise”. ——- I WANT TO ARGUE THIS POINT BUT IT WILL SIMPLY CONFUSE THIS DISCUSSION FURTHER. SO I WILL SAY AGAIN AND AGAIN, EVEN IF IT IS INCLUDED (INCORRECTLY) IT DOESN’T AFFECT MY RESULT. REALLY IT DOESN’T MAKE ANY DIFFERENCE TO THE CONCLUSION, NO DIFFERENCE, NONE, ZIP, NADA. READ ABOVE.

Please, please, stop mixing the posts together as an argument; it makes my head hurt. Look carefully at the second post and understand that he is saying we can’t know the trend because the math is too noisy. Then understand from my post that the math is clearly not too noisy.

That’s all there is here, nothing more. Nothing between the lines, no Miss Cleo crystal-ball prediction for the future, no argument about how hot or cold it is outside… just that.

## Jeff Id said

Thanks Bob,

I was writing my response. You did a better job than me.

## Diatribical Idiot said

“Just to point out a very glaring yet un-observed point, if Tamino’s method of thinking is correct, then his SD of .28 to .42 would completely eradicate any notion of global warming since in the second graph, the worst anomaly is (sight averaged) around .45C. Using his logic that we can not trend, then we can definitely say since 1979 temperature has been flat and there has been no global warming because it is within the SD of weather.”

I was pondering this point this morning, because I’m a loser who thinks way too much about these things. It is a very important point, and I think is easily misunderstood.

The use of the standard deviation to say that the trend can possibly be +/- the indicated amount is flat-out wrong. This is easily seen from exactly the example you provide: if it were true, it would matter not what the length of the trend line is or the number of observed data points. But that isn’t true. A trend of 0.1 with 360 data points and a given 2-sigma value is very different from a trend of 0.1 with 60 data points and the same 2-sigma value. The r-squared of the former will indicate a better fit than the latter.

The uncertainty calculation actually reflects the uncertainty of the individual data points. 2-sigma is really a measure of the confidence level that an individual data point is reflective of the fitted trend line. It is not a confidence interval of the trend line itself. This is like saying that the standard deviation about the mean is a confidence interval of the mean. That makes no sense. SD measures dispersion about the mean, and says nothing about the mean itself. If one data point is close to the trend line, you can state with a high degree of confidence that it is a reasonable value given the trend line. The further away it gets, the less likely that one point is reflective of the trend line.
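To make the distinction concrete, here is a minimal sketch in plain Python (synthetic numbers only, not any of the actual temperature series) that fits an ordinary least-squares line and reports both the residual standard deviation and the standard error of the slope. The residual SD stays near the noise level regardless of series length, while the slope’s standard error shrinks as points are added:

```python
import math
import random

def ols_with_errors(y):
    """OLS fit of y against x = 0..n-1; returns the slope, the residual
    standard deviation, and the standard error of the slope."""
    n = len(y)
    x = list(range(n))
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    resid_sd = math.sqrt(sum(r * r for r in resid) / (n - 2))
    slope_se = resid_sd / math.sqrt(sxx)  # shrinks as n (and sxx) grow
    return slope, resid_sd, slope_se

random.seed(0)
# Synthetic series: small trend plus white noise with sigma = 0.13
for n in (60, 360):
    y = [0.001 * i + random.gauss(0.0, 0.13) for i in range(n)]
    slope, resid_sd, slope_se = ols_with_errors(y)
    print(n, round(resid_sd, 3), round(slope_se, 5))
```

In both runs the residual SD sits near 0.13, but the slope’s standard error for the 360-point fit is far smaller than for the 60-point fit: dispersion about the line and uncertainty of the line are different quantities.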

So, Tamino should actually welcome a correction here, because his own results could easily be misapplied to a longer trend line with similar results.

A good way to see this would be to run a large number of stochastic simulations assuming a trend-line slope, and modeling the dispersion about that trend line with the sigma values that Tamino shows as the Normal sigma parameter. You would see a range of probable trend lines doing this, and absolutely none of them would approach the outer range of his 2-sigma value as he uses it.

## Jeff Id said

#33

It’s really good stuff, isn’t it? When I wrote this post I was laughing because I thought people would get it and pile all over Tamino; instead I got a big discussion about his old post.

I should have called it – “Tamino Single Handedly Wipes Out All of Global Warming”

Maybe I will.

## Mika said

But the 2-sigmas in Tamino’s post are the uncertainties of the trends: “So I computed the trend and its uncertainty (in deg.C/decade)”.

And indeed, given the noise model, the uncertainties in Tamino’s post seem to be very close to those given by statistical software.

## Diatribical Idiot said

Well, I did a real quick estimate myself as follows:

2000-current GISS slope = 0.09659 (in terms of 0.01 degrees Celsius per month).

Starting at January 2000, I calculated the y-intercept, took the squared differences between the trend-line values and the observed values, divided by n, and took the square root, which gives a sigma-ish value of 0.1311. 2-sigma is 0.2623. I may have used an additional month compared to Tamino, but the results are nevertheless in line with his findings.

And my measurement above is a measure of the dispersion about the trend line. It is not appropriate to use it as a confidence interval for the trend line. It is appropriate to assess the reasonableness of a given observation as it relates to the trend line at a given point.

So, unless the actual CI of the trend line is coincidentally about the same as the CI of the observed values, I’m not sure that he did what he thinks he did.
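Plugging the numbers from that comment into the naive white-noise formula for the slope’s standard error makes the gap explicit. (The count of 106 monthly points, January 2000 through roughly October 2008, is an assumption, and autocorrelation is deliberately ignored here; accounting for it widens the interval, but the two quantities being compared remain very different things.)

```python
import math

sigma_resid = 0.1311   # dispersion about the trend line (0.01 C/month units)
n = 106                # approx. months, Jan 2000 - Oct 2008 (assumed)

# For evenly spaced x, sxx equals n*(n^2 - 1)/12
xbar = (n - 1) / 2
sxx = sum((i - xbar) ** 2 for i in range(n))

# Naive white-noise standard error of the slope itself
slope_se = sigma_resid / math.sqrt(sxx)
print(round(slope_se, 5))  # a couple of orders of magnitude below sigma_resid
```

So a 2-sigma interval built from the residual dispersion (0.2623) is nothing like a confidence interval for the slope, which on this naive calculation would be hundreds of times narrower.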

## Jeff Id said

Mika has a good point about the 2-sigmas, but I also noticed the same thing as #36. The calc seems more in line with a single-point two-sigma value.

Mika, if the sigma of a single point has a value similar to Tamino, how come software is returning that kind of uncertainty for the slope of 340 individual measurements? It doesn’t really make sense.

Can you send me the code for your calcs?

I suspect you may be confusing the shift in slope of the ARMA random-noise model with what we actually know from temperature measurement. These are entirely different things from Bjorn’s comment, which relates only to how well we know the actual trend of the last decade. The answer is that we know it quite well; we just don’t know where it will go next. That is my argument with Tamino.

He actually makes the claim in his comments that we don’t know the trend for the last decade; this is absolutely false. What we don’t know is the future trend. Everything for the last 100 years is reasonably well known (with some consideration for the instruments).

BTW: I was able to replicate Tamino’s ARMA post (with a fake linear upslope added in) pretty reasonably. Still no problems with that one. If you add in a fake sine wave or any kind of “real” change in longer-term direction, you get the same results; Tamino left that little point out.
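A rough stand-in for that replication (an illustrative sketch only: AR(1) noise rather than Tamino’s ARMA(1,1) fit, with assumed parameters) shows that an injected linear upslope survives autocorrelated noise and is recovered by a plain least-squares fit:

```python
import random

def ar1_series(n, phi, sigma, rng):
    # AR(1) noise: x[t] = phi * x[t-1] + white noise
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def fit_slope(y):
    # Ordinary least-squares slope of y against x = 0..n-1
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx

rng = random.Random(7)
true_slope, n = 0.002, 360  # assumed upslope and 30 years of monthly points
noise = ar1_series(n, phi=0.7, sigma=0.1, rng=rng)
series = [true_slope * i + e for i, e in enumerate(noise)]
print(fit_slope(series))  # recovered slope, close to the injected 0.002
```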

## Mika said

Jeff, the problem is probably that you are modelling weather + measurement error as white noise. The function gls in R requires you to specify how the residuals are correlated. See the documentation of gls (argument correlation): “Defaults to NULL, corresponding to uncorrelated errors.”

To reproduce Tamino’s work, you need to do something like

library(nlme) # gls() and corARMA() come from the nlme package
fM <- gls(temp ~ time, correlation = corARMA(p = 1, q = 1))

I didn’t get exactly the same results for the 2-sigmas as Tamino, but they were pretty close. The best-fit trend rates were a bit different, and actually OLS (using function lm or not specifying the autocorrelation in gls) gave rates closer to those reported on Tamino’s site.
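For intuition about why the corARMA fit widens the uncertainties relative to lm: with AR(1) residuals of lag-1 autocorrelation rho, the ordinary least-squares trend standard error is inflated by roughly sqrt((1+rho)/(1-rho)), the effective-sample-size adjustment commonly used for autocorrelated climate series. A small sketch with assumed rho values:

```python
import math

def ar1_inflation(rho):
    """Approximate factor by which AR(1) autocorrelation widens an
    OLS trend standard error (effective-sample-size adjustment)."""
    return math.sqrt((1.0 + rho) / (1.0 - rho))

# e.g. a lag-1 autocorrelation of 0.6 exactly doubles the naive
# white-noise uncertainty on the trend
for rho in (0.0, 0.3, 0.6, 0.9):
    print(rho, round(ar1_inflation(rho), 2))
```

This is only a rule of thumb for AR(1); an ARMA(1,1) structure like Tamino’s will give somewhat different numbers, but the direction of the effect is the same.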