the Air Vent

Because the world needs another opinion



Anomaly Aversion

Posted by Jeff Id on March 17, 2010

This is just a short post on why people like Roman and Tamino are interested in calculating offsets for temperature anomalies, and why I'm interested in applying the method to global temperature.  I hope it can put to rest some of the anomaly aversion that exists among skeptics.

I’ve already finished the global temperature calculations and am just working on documentation and verification.  My guess, though, is that some readers don’t understand the obsessive effort put into offsetting anomalies.   First, there are plenty of people who consider themselves data purists and sometimes state that only raw temperature should be used to calculate trend.  In their view, anomaly is a processing step and is therefore suspect.  While purity and minimal processing are fantastic goals, the anomaly calculation is required to compute accurate trends when data start and stop at different points in the seasonal cycle, and it is equally critical when data are missing.  Consider that a typical temperature curve looks like Figure 1.

Figure 1 - Typical GHCN temperature station

You can see the gaps in the series.  When you consider that we’re trying to detect tenths of a degree, it’s not difficult to imagine that if gaps mostly happen at the tops of the pseudo-sine wave, your trend will be altered.  Therefore, it’s appropriate to find another method.  In this post, ‘anomaly’ is calculated by averaging all the Januaries together, all the Februaries, and so on, and subtracting the corresponding monthly average from each reading.  The result of the anomaly calculation is shown in Figure 2.
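For readers who want to see the arithmetic spelled out, here is a minimal sketch of that monthly-anomaly calculation (in Python rather than the R used later in this post, and with invented numbers): average all Januaries together, all Februaries, and so on, then subtract the appropriate monthly mean from each reading.

```python
# Minimal sketch of the monthly anomaly calculation described above.
# Data are invented for illustration; NaN marks a missing month.
import math

def monthly_anomalies(series):
    """series: list of (month_index 0-11, temperature or NaN)."""
    # Average all the Januaries together, all the Februaries, etc.
    sums = [0.0] * 12
    counts = [0] * 12
    for month, temp in series:
        if not math.isnan(temp):
            sums[month] += temp
            counts[month] += 1
    means = [s / c if c else math.nan for s, c in zip(sums, counts)]
    # Subtract each reading's own monthly mean.
    return [temp - means[month] if not math.isnan(temp) else math.nan
            for month, temp in series]

# Two years of a fake station with a strong seasonal cycle, plus 0.5 C of
# warming in year two; February of year two is missing.
year1 = [(m, 10.0 + 8.0 * math.sin(2 * math.pi * m / 12)) for m in range(12)]
year2 = [(m, t + 0.5) if m != 1 else (m, math.nan) for m, t in year1]
anoms = monthly_anomalies(year1 + year2)
# Every present month's anomaly is -0.25 in year one and +0.25 in year two:
# the seasonal cycle has vanished and only the warming remains.
```

The seasonal sine wave drops out entirely, which is exactly why gaps do far less damage to the trend after anomalization.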

Figure 2 - Typical GHCN temperature station anomaly

The data gaps are closer to the mean temperature of the series now, and when you consider the vertical scale, they will have a reduced effect on trend compared to Figure 1.   Think of anomaly as the deviation of each January from the average January and each February from the average February.  If 1 C of true warming occurs and we have a complete record that ends in the same month it starts, we will get very close to the same trend with either raw data or anomaly methods.

The result is still not exactly the same between raw data and anomaly because the trend is a least-squares fit, minimizing the sum of squared deviations from the fitted line (that’s another topic), but it’s going to be close enough considering the other errors in the dataset.  Very close, in fact.   The anomaly is a better method in any case because the extreme hot and cold months of summer and winter won’t have as great an influence on the least-squares trend.

That said, one of the properties of anomaly is that each series is centered (average = 0) on the vertical scale over the timeframe in which it was calculated.  In this case, the anomaly is calculated for the entire length of the available data, so it’s centered around the mean of the entire series.  In HadCRU temperatures, there is no code for offsetting anomalies.  They are simply averaged together to create a global trend.  So why would Roman, Tamino and guys like Ryan, Nic, SteveM and even I spend so much time working on the proper combination of temperature anomalies to look at trend? It’s because there is a substantial improvement in trend accuracy to be made.

Below is the simplest example I could think of: a perfect linear trend measured by two temperature stations.   Both stations measure exactly the same temperature trend, and the record covers a period of 100 years. One station starts in 1900 at an absolute temperature of 0.001 C and increases linearly at 0.12 C/decade until 1980. The second starts in 1960 but is at a higher altitude and measures the same noiseless value as the first plus an offset of 0.5 C; its trend is of course the same, and it continues to the year 2000.

Figure 3 - Two perfect temperature stations with an offset of 0.5 C

To say it another way, both stations have exactly the same slope, with an offset.  The bottom station of Figure 3 is the same as the top except that it’s been cut short and has had an offset of 0.5 C added to it, as you would expect from two nearby stations at different altitudes.  If we take a simple average, we get Figure 4. Sorry for the title.

Figure 4 - Station 1 and 2 raw averaged

We have defined the trend of both stations as 0.12 C/decade, yet when the raw series are averaged, the offset due to altitude creates an inaccurate and more positive trend of 0.179 C/decade.
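The inflation is easy to check numerically. Here is a sketch (Python rather than R, mirroring the post's construction of the two stations) that rebuilds the two noiseless series and fits a least-squares line to their raw average:

```python
# Sketch: two noiseless stations with identical 0.12 C/decade trends, the
# second offset by +0.5 C, averaged raw. A Python stand-in for the post's
# R construction; exact window endpoints differ slightly from the R code.
import numpy as np

months = np.arange(1, 1201)            # 1200 months spanning 1900-2000
temps = months / 1000.0                # 0.001 C/month = 0.12 C/decade
station1 = np.where(months <= 960, temps, np.nan)        # ends ~1980
station2 = np.where(months > 720, temps + 0.5, np.nan)   # starts 1960, +0.5 C

avg = np.nanmean(np.vstack([station1, station2]), axis=0)  # simple raw average
slope_per_decade = np.polyfit(months, avg, 1)[0] * 120
# slope_per_decade comes out near 0.18 C/decade instead of the true 0.12:
# the altitude offset alone has inflated the trend.
```

The small difference from the post's 0.179 figure is just the slightly different window endpoints; the point is the same, a purely instrumental offset masquerading as extra warming.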

If you can read a bit of code, both stations are derived from the exact same series, the second with a 0.5 C offset added.  So we really do know the true trend is exactly 0.12 C/decade.

```
pp = (1:1200)/1000                    # create a 1200-month series called pp
pp = ts(pp, start=1900, deltat=1/12)  # turn it into monthly data from 1900-2000
pg = window(pp + 0.5, start=1960)     # second series starts in 1960 - 0.5 C OFFSET
pp = window(pp, end=1980)             # first series ends in 1980 - 20 years of overlap with the same values
oo = ts.union(pp, pg)                 # join into a two-column time series
```

The following code centers the two series about their means as would happen with an anomaly calculation.

```
cm = colMeans(oo, na.rm=TRUE)   # calculate the mean of each series
oo[,1] = oo[,1] - cm[1]         # center column 1 to simulate anomaly
oo[,2] = oo[,2] - cm[2]         # center column 2 to simulate anomaly
```

It subtracts the mean ‘cm’ from each column of ‘oo’; series 1 and 2 are plotted in Figures 5 and 6. Note the perfect 0.12 C/decade trend of each series (plotted individually below) despite the raw average calculated in Figure 4.

Figure 5 - Station 1 offset to be centered around zero

Also take note that the midpoint is centered at the halfway point in the series.  It’s the same with an anomaly calculation: the series become centered around zero.

Figure 6 - Station 2 offset to be centered around zero

So now that we have centered the series by a process similar to an anomaly calculation, what would the trend from a simple no-offset average look like?  This next step is equivalent to the method of the Phil Jones CRUtem series and likely equivalent to GISS.   I’ve never heard of an offset being used in the GISS series, but I haven’t verified it myself; say I’m 99 percent confident that GISS uses simple averages of anomalies as well.  Since we know that a simple average of raw temperature stations causes problems, what does a simple average of anomalies mean?

Figure 7 - Anomaly Temperature Series Averaged

In this case our two perfectly continuous 0.12 C/decade trends get sawtooth steps in them.  So, if we just average up-sloped anomaly series together, the trend is reduced. I hope those of you who may not have considered this effect take the time to think about it.  In every single case where the true trend in the data is an upslope, the introduction and removal of temperature series in simple anomaly averaging works to REDUCE the trend of the TRUE data.  It’s a very important point, for a couple of reasons.
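The reduction can be verified numerically. A sketch (Python rather than R, rebuilding the same two noiseless stations defined in the R code above) centers each series about its own mean, averages, and fits a trend:

```python
# Sketch: center each station about its own mean (simulating the anomaly
# step) and average; the fitted trend drops well below the true 0.12.
import numpy as np

months = np.arange(1, 1201)
temps = months / 1000.0                                   # true 0.12 C/decade
station1 = np.where(months <= 960, temps, np.nan)
station2 = np.where(months > 720, temps + 0.5, np.nan)

# Center each series about its own mean, like the colMeans subtraction above.
s1 = station1 - np.nanmean(station1)
s2 = station2 - np.nanmean(station2)
avg = np.nanmean(np.vstack([s1, s2]), axis=0)
slope_per_decade = np.polyfit(months, avg, 1)[0] * 120
# slope_per_decade is roughly 0.06 C/decade, about half the true 0.12:
# the steps where the series start and stop pull the trend down.
```

With rising data, the entering series is centered low relative to the record and the exiting series high, so every start and stop injects a downward step into the average.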

Next, though, I looked at the RomanM version of offset series calculations.  This method ensures that the series are regressed to constant offset values that re-align the anomalies.  Think about that.  We’re re-aligning the anomaly series with each other to remove the steps.  If we use raw data (assuming up-sloping data), the steps in this case were positive with respect to trend; sometimes the steps can be negative.  If we use anomaly alone (assuming up-sloping data), the steps from added and removed series always act to reduce the actual trend.  It’s an odd concept, but the key is that these are NOT the TRUE trend, as the true trend in this simple case is of course 0.12 C/decade.

So let’s use the RomanM data “hammer” on our non-seasonal data.  It’s non-seasonal because it’s a perfect linear trend, and Roman’s method goes a step beyond the concepts in this post in that it offsets each series not with a single value but with twelve offsets, one per month.  In the example above, each month has a perfect noiseless trend and therefore the same offset for each month, so his sophistication is not required.  However, his stuff hammers this nail just fine.  The code uses a least-squares fit to calculate the best match of one series to another and determines which value to add to each series for the best mutual match.  It’s an offset calculator.

As with a proper hammer, the code call is very simple.

```
offsetvalues = temp.combine(oo)    # call Roman's offset function
plt.avg(offsetvalues$temp, main.t="Row Average", x.pos=1910, y.pos=0.2)  # plot it
```

So when we calculate the best possible offsets and average the series together, the result is shown in Figure 8.

Figure 8 - Offset Temperature Average

And there we go, a perfect trend by offsetting the two series to match.
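To show the principle behind the offset step, here is a sketch (Python rather than R) of the simplest two-series case, where the least-squares offset reduces to the mean difference over the overlap. Roman's temp.combine solves the general problem for many series with twelve monthly offsets; this is only the toy version:

```python
# Sketch of the offset idea: shift the second series by the mean difference
# over the overlap before averaging. This is the simple two-series case;
# Roman's temp.combine handles many series and twelve monthly offsets.
import numpy as np

months = np.arange(1, 1201)
temps = months / 1000.0
station1 = np.where(months <= 960, temps, np.nan)
station2 = np.where(months > 720, temps + 0.5, np.nan)

overlap = ~np.isnan(station1) & ~np.isnan(station2)       # the 1960-1980 window
offset = np.nanmean(station1[overlap] - station2[overlap])
avg = np.nanmean(np.vstack([station1, station2 + offset]), axis=0)
slope_per_decade = np.polyfit(months, avg, 1)[0] * 120
# offset comes out to -0.5, and slope_per_decade recovers the true
# 0.12 C/decade exactly: the steps are gone.
```

Once the altitude offset is estimated from the overlap and removed, the average is a single unbroken line and the fitted trend is the one we built in.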

Now if you’ve followed the logic above, consider this point.  Phil (Climategate, Warming Is Doom, 22 million dollars of grants) Jones has not used offset anomalies.  I’m an engineer, and as my portion of our recent Antarctic publication, and self-appointed skeptic of the month for November 09, I calculated the continental trend using area-weighted offset anomalies.  This method increased the trend from 0.05 (simple average) to 0.06.  Consider that Ryan and Nic employed a sophisticated iterative algorithm with the intent of determining the proper offsets and weights for the Antarctic surface station anomalies based on weather patterns.  And finally, consider that IF offset anomalies are not used, and your data has a natural upslope, YOU ALWAYS get a lower trend from a simple anomaly average.  We’re busted!!  The skeptics/denialists/disinformation spreaders are actively and endlessly working to increase the trend in surface temperature.

So, we are forced to realize that this is the method for creating the proper trend, as Tamino did a good job of showing, and in the case of a general upslope in the data, the trend is definitely going to be greater than with simple averaging.  Knowing further that the Climategate boys are head-over-heels advocates for massive warming, and knowing that models predict more warming than temperature measurements do, what does it mean when they don’t figure out how to do a proper anomaly offset, but the evil denialist skeptics do?  How would Santer 09 read if CRU were done with offset anomalies?

However, the offset methods are a more accurate representation of the temperature trend.  Again, skeptics of AGW must remember the lesson that the advocate crowd has thrown to the wind: we do not get to choose the results of the math!! Personally, I would much rather work with good math and true results, no matter what they say, than the Mannian hockey stick.

1. joshv said

Sure, as long as you realize that you are creating a statistical quantity: the trend in the average of anomalies. This is not a temperature trend. It’s the trend in the average of anomalies. One cannot point to the trend in the average of anomalies and say “the Earth’s temperature is increasing x deg/decade”. One can only point to the number as defined by its statistical process and say that the number generated by that process is increasing by x deg/decade.

I am not arguing that the trend in the average of the anomalies is a bogus number, but fundamentally the test of a statistical product is reality. Does this statistical trend mean anything? Does it predict any observable quantities? Does it correlate with local measurements of temperature and other climate data?

2. Nick Stokes said

Phil Jones describes what they use in Jones and Moberg 2003. They use the standard Climate Anomaly Method, which Jones also used in his pioneering ’80s papers. The key is to use a standard common base period, and stick to it. The difficulty is, of course, that some stations may be missing data there.

In your case you could choose the period 1960-1980. If you subtract each station’s average for that period, they will then of course superimpose exactly, and with no sawtooths when you add them.
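A quick sketch (Python rather than R, rebuilding the post's two synthetic stations) of the common-base-period idea Nick describes, anomalizing both stations against 1960-1980 instead of each series' full record:

```python
# Sketch of the common-base-period (CAM-style) idea: subtract each station's
# average over the shared 1960-1980 window rather than over its full record.
import numpy as np

months = np.arange(1, 1201)                 # 1200 months, 1900-2000
temps = months / 1000.0                     # true 0.12 C/decade
station1 = np.where(months <= 960, temps, np.nan)
station2 = np.where(months > 720, temps + 0.5, np.nan)

base = (months > 720) & (months <= 960)     # common 1960-1980 window
s1 = station1 - np.nanmean(station1[base])
s2 = station2 - np.nanmean(station2[base])
# Over the base period s1 and s2 superimpose exactly, so the average has
# no sawtooth and the fitted trend is the true 0.12 C/decade.
avg = np.nanmean(np.vstack([s1, s2]), axis=0)
slope_per_decade = np.polyfit(months, avg, 1)[0] * 120
```

This works here precisely because both stations have complete data in the base period; the difficulty Nick notes is what happens when a station is missing data there.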

3. curious said

Nice post Jeff – thanks for a simple graphical explanation of the issue. I’d seen Roman’s posts but not taken the trouble to bottom them out.

4. Jeff Id said

#2, Yup, Series which have partial data in the window will also create the sawtooth. Using this method, it really doesn’t matter if you only have partial data. The anomaly window doesn’t matter either as I understand it, but that’s outside the scope of this post. Roman really did come up with a good solution.

5. Jeff Id said

#1, This is an attempt to create a temperature trend of the best possible resolution. While anomaly isn’t strictly temperature, it is able to determine temperature trend.

6. papertiger said

Suppose you had a positive trend of 0.179 C/decade +/- 0.336, that is to say the error bar is wider than the trend.

Would this be an up slope, a down slope, or is it too vague to tell?

7. Jeff Id said

#6, Still awake, but barely. Trend is trend; the confidence interval determines how confident you are that shorter-term variance is not causing the trend.

8. M. Simon said

Chiefio is doing some very good work on this. He points out that a station starting in a cool period can bias the slope up. And one starting in a warm period biases it the other way.

Now if you KNEW what the underlying signal form was for certain (you are feeding a sine wave with known distortions into the apparatus say) you might be able to tease out the trends to a small fraction of the sine wave.

But here you need to ESTIMATE the underlying signal. And from that ESTIMATE the trend. Depending on the S/N it may or may not be possible.

Now you get the complication that you have a number of underlying signals of unknown frequency and phase and you have a problem on your hands.

Good luck. You will need it.

9. Jeff Id said

#8, As I understand it, the pseudo-sine wave fits better or worse depending on the time period you look at. In areas where the fit is worse, the standard deviation of the average increases and the error bars would correspondingly increase. Whether the signals have a frequency or not doesn’t matter then; it’s all part of the error in trend as calculated by the SD of the anomalies by year.

10. Al S. said

I’m not understanding the fig 3 graph–the slopes aren’t really the same, but the text seems to say they are.

11. Jeff Id said

#10, An email warned me of that impression, check the Y scale. I have to make two separate graphs to make it work, perhaps tomorrow morning I’ll get it done.

12. steven mosher said

Nice work. The brilliant thing about Roman’s method is you actually don’t have to do “infill”.

I think there is an interesting test there.

I think we can also say that you don’t have to do TOBS. There is an interesting test there as well.

13. AndyL said

“our recent Antarctic publication”

What’s the latest? Have I missed this in all the excitement?

14. Geoff Sherrington said

Two intractable problems remain.

First, some of the data from other countries is pre-processed to various degrees over various times by the host country before being sent to the GISS and NOAAs and CRUs of the world.

Second, at least one significant country hints at deleting outliers by methods whose methodology is unclear. It seems a bit circular for the home country to delete data because it is spuriously anomalous (let’s say, when compared with adjacent sites) and then for you and Roman to fill it back in again with averaging approximations.

Australia has created frequent un-numbered versions of some important stations as restudy of early adjustments proceeds. I do not think (though I am not sure) that GISS and CRU have used each new revision, though there is evidence that they have recorded some, perhaps as monthly data. Indeed, there are examples where data no longer used by Australian authorities because of unacceptable quality are still used by the global collectors. Can’t overcome that with math massages. It needs the clock to go back to zero again and the whole world to be treated consistently.

I’m sorry this is such a negative post. Silk purses and sow’s ears. But I do applaud the aim to get better results from the existing data than was done before.

15. joshv said

“#1, This is an attempt to create temperature trend of the best possible resolution. While anomaly isn’t strictly temperature, it is able to determine temperature trend.”

No, I’ll say it again, it’s the trend in the average of the anomalies. It is a statistical product, not a temperature trend.

Really, I get the points you are making about combining multiple stations, seasonal variability and noise. Don’t get me wrong. I get it, I really do. But in the end it’s a statistical combination of data. It’s not data itself. It is not a temperature trend. If it’s useful at all it must be checked against real world observables.

16. UFO storm « TWAWKI said

[…] Anomalies and climate data, Climate trends in data, South America – a new world cup in climate hockey, The decline – adjusted to be the new warm, Alarmist’s reviewed in Nature, Greenhouse gas theory does not give us a catastrophe, […]

17. Jeff Id said

#15, Claiming it’s not a temperature trend because of an unstated effect is a bit extreme. Perhaps you can explain what the difference is between this data and a trend in temperature, because right now I have a difficult time imagining an improvement on it.

#14, Geoff,

What you point out are problems in the available data. You mention infilling; certainly if data is available it is preferred to have it, although elimination of spurious data may have been necessary. The available data is a side issue from this post; what this method does is combine what we have in a high-quality fashion.

18. RomanM said

#2 Nick Stokes

Phil Jones describes what they use in Jones and Moberg 2003. They use the standard Climate Anomaly Method, which Jones also used in his pioneering ’80s papers. The key is to use a standard common base period, and stick to it. The difficulty is, of course, that some stations may be missing data there.

You are missing the entire point here. If there are no missing values, simple averaging works. You don’t need anomalies or CAM or RSM or FDM or any complicated machinery. It is exactly the missing value problem that all of these methods attempt to accommodate.

Read some of what the Jones and Moberg paper you referenced has to say about CAM:

The major disadvantage of CAM is that stations must have enough years with data within the 1961–90 period in order to be used, although even this constraint can be overcome by judicious use of neighboring series and other periods (e.g., 1951–70 and 1951–80; see Jones).

Improvements to the base period should also mean that monthly averages for 1961–90 will sum to zero for many more grid boxes than was evident in the earlier (Jones) analyses. In this earlier study normals were calculated based on at least 20 years within the 30-yr period. As the 1981–90 period was more likely to contain missing temperature data, normals calculated for 1961–90 were often biased slightly cold, so anomaly averages (for 1961–90) calculated for the hemispheres were slightly positive (by 0.01°–0.05°C depending on the month).

Unless ALL of the values are available for the common period, we either have to throw away data and scramble in an ad hoc “judicious” fashion or we end up with possibly biased results. What kind of a solution is that? I’d like to see them calculate proper sampling standard errors from ANY of these methods.

In your case you could choose the period 1960-1980. If you subtract each station’s average for that period, they will then of course superimpose exactly, and with no sawtooths when you add them.

Why does one need to be ad hoc in such an analysis? I doubt that you have bothered to understand what is going on in the method Jeff is using.

Why does one not use a single year where all stations have no missing values as the anomalization period? Why use twenty or thirty years? You and I both know the answer. The fewer the years in the common period, the more the uncertainty and the higher the autocorrelation produced in the anomalies. So optimally, using the longest common record would be the best choice. In practice, this may actually be the empty set, so that might not work.

We can calculate anomalies in pairwise fashion using all years for which common data are present for each pair, however, the anomalies calculated for a station at a given time could depend on which pair was used. Although looking at the initial model that I gave in my post does not immediately indicate this, the relationship to the pairwise sum of squares shows that the estimation method chooses “optimum” anomalizing values for each station to minimize the overall pairwise differences. When all of the stations are present for a period such as CAM uses, the method will utilize not only the information from that period, but also information from all other times when only some of the station values are present to estimate results.

I would suggest that you could be more constructive in your comments by trying to find and address possible flaws in what Jeff is doing rather than arm-waving a methodology that has many warts. Indicating how, in a specific example case, one can somehow “adapt” the other method in that limited case is not particularly relevant either.

19. Bad Andrew said

I agree with Joshv.

Measuring is measuring. Calculating is calculating. The twain will always be different.

Andrew

20. RomanM said

Geoff, this is a different issue which can not easily be fixed by analyzing the data. However, I see a glimmer of light on the topic.

Several years ago, when I became interested in the topic, all changes to the temperature records were referred to as “quality control”. This referred not only to data cleaning (which may be necessary), but to what has now become known as “homogenization”. The latter is actually ill-defined data manipulation done in a subjective manner and based in many cases on possibly spurious relationships.

The fact that these have become separate issues is a good thing which IMHO occurred as a result of pressures to open the analytic processes to public scrutiny. I think what remains to be done in this direction are the further steps of fully documenting the QC application and making the unhomogenized data accessible. I don’t see any problem in people providing metadata on necessary adaptations (TOBS, station changes, etc.), but they should leave the way to accommodate these problems to the user of the data.

21. mrpkw said

OK, I was with you until figure # 3
Where/what is the .5C offset from/for?

22. AMac said

Re: joshv (Mar 18 08:03) #15 —

I’m still getting my feet wet on this stuff, but I don’t understand these lines of objection to “anomalies.” It seems to me that, when discussing rising/falling/steady temperatures, “anomalies” are a superior concept to “temperatures”.

Assume I buy five digital recording thermometers from Radio Shack that are precise to within 0.1 C, and properly calibrated and thus accurate to within 0.1 C. I place them around my yard. They’ll likely give somewhat different readings. What’s the “right” temperature at the moment? I dunno. If I do a 24-hour integration or take hi-lo measures, what’s the “right” average temp for that day? Dunno again.

But if I’m interested in changes over time — say, I’m curious whether my yard has gotten warmer or colder since March 2005 or March 1910 — then I can use an anomaly-based method to address that question, without getting stuck on solving how to calculate or pick the “right” temperature from my five starting instruments.

Of course, there are a lot of potential problems, with respect to holding locations steady, changes in shading, nearby construction, thermometer replacements…

But these don’t seem like disadvantages for anomaly methods as compared to temperature-not-anomaly methods, because the same caveats would hold true in either case.

Am I missing something?

23. kdk33 said

As long as anomalies are calculated for individual stations - the January reading for station A is “anomalized” using the average station A January reading - then yeah, OK.

But filling in missing data by inference from nearby data (space or time); calculating complicated global averages from non-uniformly distributed readings (space and time); correcting existing readings by inference from other readings… the whole exercise devolves to worthless.

Long-lived, quality data stations with no adjustments. These are the only data that matter. Anomalies calculated directly from these - yeah, OK.

Just my opinion, and I’m not a climate scientist, nor am I doing the work.

24. kdk33 said

and if the answer turns out to be:

the claimed temp increases are smaller than the real world measurement error, then that’s the answer.

25. John Knapp said

Jeff

Interesting post. Very informative. A question, though. I notice that Roman’s technique uses months to get the anomalies. However, the calendar month and the seasonal month do not match up year to year. There is a seasonal drift over time; clearly this is true every 4 years because of leap year (i.e., this year’s January data is Jan 1-31 and three years from now the comparable period would be Dec 31-Jan 30, though I may have shifted the day the wrong way). I vaguely remember reading that there is a seasonal drift in relation to our calendar over longer time periods as well. In months that are undergoing rapid seasonal changes, it would seem to me that this might add a spurious temperature signal one way or the other. Do you think that these effects would average out over the year, or that they would be too small to worry about, or that they don’t exist at all?

26. mrpkw said

# 25
I would suspect that there would be no effect in the long term; otherwise wouldn’t we be having March in mid-summer over a period of a few thousand years?

27. Carrick said

Kdk33:

But filling in missing data by inference from nearby data (space or time); calculating complicated global averages from non-uniformly distributed readings (space and time); correcting existing readings by inference from other readings… the whole exercise devolves to worthless.

But none of the standard reconstructions do this.

Go over to this thread and search for “blending”. Lots of good content there.

28. Carrick said

AMac:

I’m still getting my feet wet on this stuff, but I don’t understand these lines of objection to “anomalies.” It seems to me that, when discussing rising/falling/steady temperatures, “anomalies” are a superior concept to “temperatures”.

That of course is my impression too.

Climate cares mostly about temperature trends, not temperature. You can do full temperature reconstructions, and I suspect accurately, but it would be a lot more work.

Then when you did a least squares fit to:

T(y) = T0 + trend * y

and T0 absorbed all of your hard work, you’d probably say, “…”

29. Bad Andrew said

“Climate cares mostly about temperature trends, not temperature.”

Carrick,

Perhaps I am misunderstanding, but don’t things in our world happen at absolute temperatures?

And the climate doesn’t ‘care’. ???

Andrew

30. Kenneth Fritsch said

Jeff ID, the work that you and RomanM are doing is important in making incremental sense of the temperature data sets. I have been working with the GHCN adjusted data set on a 5 x 5 degree grid basis using, of course, temperature anomalies, and for periods of time which maximize the number of grids available for analysis. Currently I am concentrating on the 1950-1990 period.

Being (too) impatient I have skipped directly to what I find the critical result that we are all, in the end, looking for: temperature anomaly trend and its confidence intervals. I started by calculating all the trends for the stations with complete or near complete records over the period of interest for the entire globe. I intended to go from there to doing a bootstrap using some probability distribution in conjunction with the bootstrap and calculating CIs.

I am a little uncertain about using trend regressions and expecting a normal distribution of such trends. What I found was that grid trends did not fit a normal distribution using the Shapiro-Wilk (SW) statistic. Within a grid, the stations’ normality could often not be rejected using SW, but I found on plotting these trend distributions that they often showed 2 or 3 distributions combined. I went back and artificially constructed some combinations of normally distributed trends using various means and standard deviations. What I found was that SW is not efficient at rejecting normality where multiple distributions are combined and the standard deviations are large - as they were for stations within a grid.

I have proceeded to taking all the trend data for stations within the more populated grids and comparing them all pairwise: regressing one station’s temperature anomalies against the other’s, and then regressing the station differences versus time. What I found to this point is that station-to-station correlation can be high by the measure of r or R^2, but there can still be statistically significant differences in the trends.

Plotting the station differences has sometimes shown a good correlation over a beginning time period, and in fact a near one to one correspondence, and then the differences will diverge with the shape (peaks and valleys) remaining the same for the paired stations. This situation provides an apparently good correlation but with significantly different trends.

I need to do much more analysis, but when someone shows good correlations between nearby stations (and without the CIs) I would caution against allowing that relation to imply anything about the differences in (the all important) trends.

31. Chuckles said

#22

Amac, I think you might need to think about your example? You have postulated multiple thermometers covering your area of interest, all calibrated and stationed as you want them, and under your control.

Well, you just read off whatever you want, whenever you want, and use the real data.

The right temp is what you read, it is real, you have no need of averages or anything else.

32. Tonyb said

Great post Jeff and very thought provoking

I’ve never liked the concept of a global temperature, so I tend to concentrate on individual stations, in particular those with a very long temperature record - prior to 1850.

http://climatereason.com/LittleIceAgeThermometers/

Concentrating on the small rather than the big picture makes me see all the things that can affect an individual station, which after all the data must ultimately come from each time in order to calculate an anomaly.

A global anomaly is merely a local station times (say) 1000

However, all an individual station is designed to do is measure the microclimate around it. Move the station and the microclimate is likely to be completely different. Looking at the older stations, I note that some of them have physically moved up to 15 times (the record before I got bored). So how does the anomaly calculation take into account the 15 different microclimates recorded and relate that back to the original record from which the trend will start?

This makes me think that calculating the anomaly is rather meaningless, as the calculation surely assumes that the station data is consistent, but this is clearly not so.

Surely the anomaly is only practical in a perfect world where the station has proper consistent readings, doesn’t move and doesn’t have other factors to influence it, such as a large city inconveniently growing up around it?

I think the idea is a mathematician’s dream, but I’m not sure that in the real world it can be attained. Is station movement to a different microclimate accounted for in the calculations?

Tonyb

33. kdk33 said

Carrick

You’re claiming that the global average surface temperatures are calculated using raw data from stations uniformly distributed around the globe that have been in existence from the beginning to the end of the record and haven’t been adjusted/corrected/homogenized/blended/whatever.

My understanding is exactly opposite. But I only know what I read and I don’t read everything and I’m not a climate scientist.

34. AMac said

It seems to me that, at worst, anomalies are “non-inferior” to temperatures.

Suppose my five thermometers give average readings as follows.

March 2005: 10.0 – 10.0 – 10.5 – 11.0 – 11.0
Average of all 5 averages: 10.5
I’ll define “anomaly baseline” as this March 2005 average, i.e. 10.5
Thus, March 2005 anomaly: 0.0

March 2010: 9.5 – 9.5 – 10.0 – 10.5 – 10.5
Average of all 5 averages: 10.0
March 2010 anomaly: -0.5

Change over the 5-year period by raw temps is (10.5 – 10.0) = -0.5
Change over the 5-year period by anomalies is (-0.5 – 0.0) = -0.5

You can point out that I’m complicating matters by adding unneeded arithmetic steps. But that doesn’t invalidate the anomaly method, that’s just a practical consideration. It might be outweighed by other practical considerations.

You could also point out that a temp of 10.0 means something in terms of absolute temperature while an anomaly of -0.5 does not. But as long as I retain the history of how I calculated the baseline anomaly, I still have the information. (10.5 – 0.5 = 10.0).

There are other, separate arguments, many concerning transparency. Examples are accessibility of records, how, when, and by whom adjustments were made, infilling, compensation for UHI, site quality, station drops and adds, flagging “obviously” bad data…

But it seems to me that such procedures can be handled well or poorly, whether or not we are talking about temperature or anomaly. At least when the records are reliable, consistent, and stable.

As Jeff Id and Roman M (and Lucia and others) have recently shown, there are major methodological advantages to using anomalies to look at temperature trends over time, given the deficiencies of the historical records.

So why not use these methods?
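
AMac’s arithmetic above is easy to check mechanically. A quick sketch (the numbers are his; the code and variable names are mine):

```python
# AMac's five-thermometer example: the 5-year change comes out the
# same whether we difference raw averages or anomalies.
march_2005 = [10.0, 10.0, 10.5, 11.0, 11.0]
march_2010 = [9.5, 9.5, 10.0, 10.5, 10.5]

avg_2005 = sum(march_2005) / len(march_2005)   # 10.5
avg_2010 = sum(march_2010) / len(march_2010)   # 10.0

baseline = avg_2005                # "anomaly baseline" = March 2005 average
anom_2005 = avg_2005 - baseline    # 0.0
anom_2010 = avg_2010 - baseline    # -0.5

print(avg_2010 - avg_2005)         # -0.5
print(anom_2010 - anom_2005)       # -0.5
```

As AMac says, the extra arithmetic loses nothing so long as the baseline is retained.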

35. AMac said

Ugh, belated proofreading of AMac #34

Change over the 5-year period by raw temps is (10.0 – 10.5) = -0.5

36. Jeff Id said

#30, Thanks Kenneth,

I don’t deserve much credit for this one, Roman did the cool part. My hope was that people would understand that anomaly isn’t the enemy of trend, and realize why offsetting anomalies must be done for a proper trend. Also, it’s an interesting point that climate scientists, so desperate for trend to match models, haven’t figured out that they can get more trend in a justifiable fashion this way. I think it points to a lack of statistical understanding.

When we did the Antarctic work, I spent weeks trying to figure out the best offset method – it had to be very simple, as it was a confirmation of a result, not a new paper. The trend went from 0.05 in a simple average to 0.06 C/Decade, but how is it that I can figure that out so easily? How is it that the flaws in Mann’s 08 and 09 papers are equally transparent? I’m not that good; a lot of climate scientists say -no opinion- but it only takes a second. So maybe they really are that bad.

37. Anomaly Regression – Do It Right! « Statistics and Other Things said

[…] for several years, but until now I have not found a particularly relevant time to do it.  In his recent post at the Air Vent, Jeff Id makes the following statement: Think about that.  We’re re-aligning the anomaly series […]

38. RomanM said

If one is intent on calculating anomalies, one is likely to be better off calculating them after combining the series rather than before, since the requirement of a “common anomaly period” can be relaxed substantially, thereby allowing more data to be used in the estimation procedure.

However, Jeff might want to read the post that sent the automatic pingback comment in #37 before throwing up his hands on underestimating trend … or it may be worse than we thought. 😉

39. vjones said

Very clear explanation. I’ll make an effort to try to follow Roman’s code now (although any code is foreign to me). Playing with anomalies myself (here) certainly made me appreciate the strength of using them, but also highlighted how stations with a large temperature variation can suffer more error with missing months.

40. Chuckles said

#34,

AMac, absolutely; as usual, I agree with most everything you say. My caution was about your example. Your example posits a perfect situation with complete data, known calibrations, etc.

Anomalies and their usage are to compensate for shortfalls, errors, omissions etc. You don’t have any, therefore it is insane to use anything but the real data.
So with your 5 thermometers in your back yard, you could map them directly to carefully sited positions and do any and all analysis you wanted on all the readings. If you need 12 readings a day, you take them. If you want to take 10 readings for each one and average them, go ahead.

You don’t need an artist’s impression of the data, you have the real thing.
When this is not the case, yes, absolutely, we need the best methodologies we can get.

41. Nick Stokes said

Re: RomanM (Mar 18 08:56),
Roman, what I was pointing out is that the sawtooth effects, to which Jeff devoted considerable discussion, are due to combining anomalies calculated with reference to different base periods. All it proves is that you can’t do that. You must use the same period.

And yes, that brings difficulties – you often can’t find a common base period where all stations have adequate data. That’s a problem, as I noted. But still, you have to use a common base period.
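
The step Nick describes is easy to construct numerically. This is a toy sketch of my own (not anyone’s actual processing code): two stations sampling the identical linear trend, each anomalized over a different base period, no longer line up in their overlap.

```python
# Two stations sample the same linear warming; each is anomalized
# against its own first 30 months rather than a common base period.
trend = 0.01                              # degrees per month (made up)
s1 = [trend * t for t in range(0, 60)]    # station 1: months 0-59
s2 = [trend * t for t in range(30, 90)]   # station 2: months 30-89

base1 = sum(s1[:30]) / 30                 # baseline over months 0-29
base2 = sum(s2[:30]) / 30                 # baseline over months 30-59

a1 = [v - base1 for v in s1]
a2 = [v - base2 for v in s2]

# Months 30-59 are identical in the raw series, but the anomalies
# differ by a constant step equal to the difference in baselines:
step = a1[30] - a2[0]
print(round(step, 3))                     # 0.3, i.e. base2 - base1
```

Splicing a1 and a2 therefore injects a 0.3-degree sawtooth that has nothing to do with the underlying trend – which is why a common base period, or an explicit offset fit, is required.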

42. RomanM said

But still, you have to use a common base period.

I believe that Jeff’s point (with which I heartily agree) is that this is simply NOT necessarily the case.

43. Jeff Id said

Roman’s method is insensitive to the variation of anomaly over time. As I understand it, he found the best fit for the coexisting data. The base period is no longer an issue.

In his current post linked above in #37, he’s got a good point about the stair-step nature of anomalies which I had not considered. Until you feed perfect data into an actual anomaly calculation, you don’t see the effect (visually) because there is too much noise.

44. Nick Stokes said

Re: RomanM (Mar 18 16:11),
Well, a common base period is necessary for the correct application of the CAM. And as I said, it’s simple and it fixes the sawtooth problem in this case. Jeff’s alternative is not simple.

45. RomanM said

Nick, I think that you are missing the forest for the trees.

It may very well be necessary for the CAM, but the method that Jeff is using can deal with many cases combining series where the CAM fails due to the lack of a common overlap period for all series AND it can deal with this one too. It is not being proffered as solely a solution to the “sawtooth” problem.

46. steven mosher said

RE 20.

Thanks Roman. The method for accounting for missing data and changes such as TOBS is best left to an analyst. By that I mean preserving all the raw records and the metadata that motivates these adjustments. Visit Lucia’s site, where the issue of TOBS has now come up again. I’ve been pointed to an interesting paper by Vose on this. Anyways, nice work. I’m sure some may continue to defend an inferior method, when the best choice is just to pick the better method and show that the inferior method is not a substantial issue. I can’t explain why this approach is not taken without discussing sociology. It’s much easier to say “great method Roman, we’ll use it,” but others seem to opt for “we’ll use an inferior method and wave our arms that the difference makes no difference, without actually calculating the difference. We realize that the result of our obstinacy creates an erosion of trust, but trust us.” Or something like that.

47. Nick Stokes said

Re: steven mosher (Mar 18 18:15),
Steven, this post does not establish that the CAM is inferior. It misapplies the method using varying anomaly periods, and compares that with Roman’s method.

48. Jeff Id said

#47, Nick, There is nothing misapplied here. I did a fake anomaly calculation to demonstrate the effects. Roman’s method doesn’t require anomaly periods because it finds the best offset by month.

What I mean is that even if you have a baseline period, steps can and do get created. Many stations are way too short for a baseline, but this method can use the info if you calculate offsets. It’s a demonstration of why we would use offsets at all.

49. TGSG said

Thanks for this post. I hated the “anomaly” idea before; I can like it now that I know why it’s being used.

50. Peter Dunford said

Forgive me, I am groping my way towards understanding here.

I think I finally get why you use anomalies. Figures 3 & 4 show that the trend of the average of a number of individual data series becomes the trend of the individual data series PLUS (I think) the average of the differences between the start / end points of the various series of data! I notice also that the 0.5 degree step between the series translates into 0.06 overall in the trend, which is the step difference times by the trend. With some reasonable rounding.

I’ve always been anti the use of anomalies instead of actual values. I think it is because anomaly maps tend to use a very small range of temperatures to display variation. Small, that is, in relation to the variation of actual temperatures. They therefore assume, in my opinion, undue significance compared to actual cyclical / natural variations in temperature, and thus appear more significant than they probably are.

Back to the topic.

You say that from a proper combination of anomalies you can get a significant improvement in trend accuracy.

But you also say that:

the introduction and removal of temperature series in simple anomaly averaging, works to REDUCE the trend of the TRUE data.

If I understand, this means that the more discrete elements the various series are broken up into (i.e. the more sawteeth in the data), the more the trend is reduced – not the removal of some of the series?

But am I reading this right? The implication of the above and of figure 7 seems to be that the more series you add (or the more individual series are broken down into sub-series), the flatter the trend becomes. So, if you could increase the number of series being averaged towards infinity, then the trend of all the series would tend towards horizontal, towards zero? The 0.12 degrees per decade becomes forever closer to zero the more series you add?

Or, does it NOT trend that far, merely towards the “true” trend?

But the implication of Fig. 7 is that the results of loads of data series with the same rate of rise per decade, when improperly averaged together, will not show the underlying trend of the data. If every thermometer on the planet was showing a 0.12 degrees C per decade increase, average them all together and you can get, for example, 0.06 degrees C / decade?

51. Jeff Id said

#50,

“I notice also that the 0.5 degree step between the series translates into 0.06 overall in the trend, which is the step difference times by the trend.”

This is just a coincidence, had the step occurred in the middle or directly at an end, it would give a different trend reduction.

“The implication of the above and of figure 7 seems to be that the more series you add (or the more individual series are broken down into sub-series), the flatter the trend becomes.”

I like to use extremes as you do to grasp ideas. If there were 10,000 stations all only 1 year long the anomaly method would center them all around zero. When you averaged them, there would be no trend. If you were able to calculate a proper offset, your trend would be recovered. So you are right that the trend would tend toward zero slope.

The implication of averaging a lot of short series with an upslope is absolutely a reduction in actual trend. Offsetting by Roman’s method can restore trend, so why don’t the climate warministas do it?

It’s an easy target for climatologists to increase trend, and what’s more it’s mathematically correct.
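
Jeff’s extreme case can be simulated directly. The sketch below is my own construction (noise-free for clarity): many short records drawn from one warming trend keep the trend when pooled raw, but lose nearly all of it once each record is centered on its own mean.

```python
import random

random.seed(0)
true_trend = 0.012           # degrees per time step (made-up value)

# 200 short (12-point) records scattered along a ~100-year timeline
records = []
for _ in range(200):
    start = random.randrange(0, 1188)
    times = list(range(start, start + 12))
    temps = [true_trend * t for t in times]   # noise-free for clarity
    records.append((times, temps))

def fitted_slope(points):
    """Ordinary least-squares slope through (t, y) pairs."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((t - mt) * (y - my) for t, y in points)
    den = sum((t - mt) ** 2 for t, _ in points)
    return num / den

# Pooling the raw readings keeps the true slope...
raw = [(t, y) for ts, ys in records for t, y in zip(ts, ys)]

# ...but centering each short record on its own mean destroys it.
centered = []
for ts, ys in records:
    m = sum(ys) / len(ys)
    centered += [(t, y - m) for t, y in zip(ts, ys)]

print(round(fitted_slope(raw), 4))       # ~0.012, true trend recovered
print(round(fitted_slope(centered), 4))  # ~0.0, trend all but gone
```

An offset fit like Roman’s restores the lost trend by estimating where each short record sits relative to the others, instead of forcing every record to mean zero.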

52. Top Posts — WordPress.com said

[…] Anomaly Aversion This is just a short post on why people like Roman and Tamino are interested in calculating the offset for temperature […] […]

53. RomanM said

#52

WordPress seems to have this as #99 on today’s WordPress Top Posts – just behind Anderson Cooper of CNN.

Way to go, Jeff! 🙂

54. SteveBrooklineMA said

I can see Nick’s (#47) point. Your example subtracts different-period means from the two series, and so is not representative of CAM. That being said, I think Roman’s method is clearly superior. It works without requiring an overlap period, as Roman states above, e.g. in #18. I disagree with Nick about Roman’s method not being simple. It is simple, principled, and easier to work with than CAM. It’s based on well-known least squares theory, and is even easy to code.

I don’t know if it is clear from the text, but it is not necessary to put anomalies into Roman’s code. You could put in the raw data. You will get Figure 8 out either way, up to a shift, i.e. the slope will be the same.

A final point: Tamino’s method would also produce Figure 8. The difference in weighting between Tamino’s and Roman’s methods makes no difference in this toy example.
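
To make the least-squares point concrete, here is a minimal sketch of my own reading of the idea (not Roman’s actual R code): treat every reading as a shared monthly value plus a per-station offset, build one design matrix, and solve for everything at once, fixing one station’s offset at zero to pin down the arbitrary constant.

```python
import numpy as np

rng = np.random.default_rng(1)
months = 24
true_signal = 0.01 * np.arange(months)      # shared monthly values
offsets = [0.0, 5.0, -3.0]                  # per-station offsets (made up)

# Hypothetical data: 3 stations with ~20% of months missing at random
data = np.full((3, months), np.nan)
for i, off in enumerate(offsets):
    present = rng.random(months) < 0.8
    data[i, present] = true_signal[present] + off

# Design matrix: one column per month plus one per station offset
# (station 0's offset is dropped to fix the arbitrary constant).
obs = [(i, j) for i in range(3) for j in range(months)
       if not np.isnan(data[i, j])]
A = np.zeros((len(obs), months + 2))
y = np.empty(len(obs))
for r, (i, j) in enumerate(obs):
    A[r, j] = 1.0                   # shared value for month j
    if i > 0:
        A[r, months + i - 1] = 1.0  # offset for station i
    y[r] = data[i, j]

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
combined = coef[:months]            # recovered common series
print(np.round(coef[months:], 2))   # recovers the relative offsets [5, -3]
```

No common overlap period is required here – only enough pairwise overlap to tie the stations together.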

55. hjbange said

Good Climategate summary article, in, of all places, Penthouse.

Just like in my younger days, when I subscribed to Playboy for its articles.

http://penthousemagazine.com/features/an-inconvenient-fraud/

56. steven mosher said

Nick, I predict that even if CAM were proven to be inferior – logically, statistically, empirically, morally, pragmatically, computationally – someone (not you, of course) would argue that Roman’s method should not be used. Even if Roman’s method showed more warming with a tighter CI, some people would argue that CAM was OK. Such is the nature of the debate for some people. Not you, I trust. You might be a hard opponent to convince, but in the end you will agree that the best method should be used.

57. Tim L said

Well, if your results are a higher trend, that is fine, because you need to add/subtract the UHI factor, which will bring it back to its correct amount!
Look at it this way… we know that many errors exist in their massaged data, so there may indeed be errors that cancel each other out!!!!
Two wrongs don’t make it right!!!!

58. Peter Dunford said

Would this explain the drop in the number of thermometers in GHCN – to prop up the trend?

59. Jeff Id said

#57, That’s right. The math is the math, we don’t get a choice, but data quality is a real and separate issue.

#58, I don’t think the GHCN dropout was to prop up trend. I think it’s just monitored that pathetically. There is no excuse to scream for trillions in taxes, yet not monitor the instruments which measure the primary ‘justification’ for the taxes. Where are the climate scientists on this issue? They have blogs, papers and officials’ ears; they should be screaming for better data monitoring.

60. M. Simon said

Jeff #59,

They have data and methods they are comfortable with. What happens if they change?

People might get the impression that they are not always “right”. Bad for funding.

Don’t worry. It is the same in physics but physics is harder to understand.

Take inertia. F=ma, right? Well, no. That is empirical. What is m? Well, Feynman comes at it from Maxwell – m is at least in part electrodynamic. He comes at it from quantum electrodynamics – same result, so it seems. It has never been resolved. So some guys – Woodward, and also March and others – are trying to do experiments. And some fault may or may not be found with their experiments. Now what do “real” physicists say? It is all crackpottery. Why are they even wasting their time with this?

So here we have physicists capable of doing math that makes my head hurt, and they can’t get past the simplicities of high school physics. Feh.

You want my opinion? The whole scientific enterprise is rotten to the core. All of it.

So who do I trust? The empiricists. Engineers. Which is not to say scientists are not useful from time to time. But their claims are far beyond their real knowledge.

61. M. Simon said

Let me make clear that March and Woodward are doing experiments to try to detect the electrodynamic nature (in whole or in part) of mass.

62. Chuckles said

#61,

M.Simon,

Trying to detect the electrodynamic nature of mass?

I didn’t even know they were Catholic. Should electrify the liturgy.

I’ll get my coat…

63. Keith MacDonald said

Excellent post – I’m wondering if it has anything in common with Nick Barnes’ work?
See: http://clearclimatecode.org/

64. Thermal Hammer « the Air Vent said

[…] trend for the whole dataset.  The offsets are required to align anomalies with each other and represent a substantial improvement over typical global instrumental temperature […]

65. sky said

Maybe something’s wrong with my browser, but the trends don’t look “identical” in Fig. 3, with the second station visibly trending upward more strongly! In any event, if both strictly linear time series are “anomalized” with respect to the same “base period,” their anomalies should be identical in the overlapping stretch. The offset completely drops out in this idealized example.

In any real case, of course, one does NOT have linear trends as an inherent feature of the data series. It is the very-low-frequency components that determine the fitted “trend.” When considered on a global basis, those components vary considerably from region to region. Kenneth Fritsch is entirely correct in pointing out that linear trends can diverge strongly without affecting the overall correlation between station records. This is because linear trends add relatively little to the total variance, which varies strongly over the globe.

66. David Jones said

You say “this next step is … likely equivalent to GISS” and then describe something that is not like GISTEMP (which you call GISS).

Has your understanding improved since this post, or would you like a recap?

67. Jeff Id said

#66, Feel free to discuss it David, this blog is about learning. Expecting me to ask is a little silly though. I put this thing up here, it gets read by thousands of people. Of course I want critique – duh?

68. David Jones said

Er, I wasn’t expecting you to ask, I was genuinely asking whether you would like to know more about how GISTEMP combines temperatures and anomalies. There’s no need for me to explain if you’ve already learnt it.

If you want a critique, then my chief criticism is that Figures 4 and 7 illustrate a procedure that, as far as I know, no one uses or recommends, and that your notes about GISTEMP suggest a lack of understanding that I hope you’ve now corrected for yourself.

69. Jeff Id said

This post was a very simple demonstration of why offsets are required for anomaly combination. You are critiquing the point. CRUTEM does almost exactly this method; they just calculate the anomaly over a shorter window. That minimizes the effect, but inside that window these effects occur. From your statements on GISTEMP it sounds like it also suffers from the same problems.

70. micro6500 said

Interesting, I’ve not been here before. I went about it in a different fashion: I create basically a zero-offset string for every station, by just taking the day-to-day change in min and max, and I calculate it based on a solar-cycle period from min temp to min temp; then I calculate a rising and a following night falling value. Then I average them together by specified areas. I’ve added about any combination of geo box analysis you’d want (4 points), and you can go down to any cell size you want, do it in bands, etc. I’m working to take 1×1 grid results and then aggregate them into larger areas. I’ve also added wet and dry enthalpy, solar forcing at TOA for each station, and adding temps while converted into SB flux values, then converting back to temp (this is really interesting), and I include all the other attributes in the GSOD dataset (wind, rain, dew point, etc.).
What I end up with is a whole bunch of zero-reference temperature change strings that describe every station. I don’t infill, so I have a parameter for how many days a station has to have per year to be included.
Then I generate reports for both daily records of change and annual averages of a collection of stations. Plus I concentrated on min and max changes, not average temp. Oh, I’ve also added some interesting things like climate sensitivity of the extra-tropics based on the seasonal variability; it gives you a good idea of how effective solar actually was at warming the surface, and it goes through a very large swing at the higher latitudes.

The best place to start is here, your presentation is far better than mine, good job.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/