# Roman on Methods of Combining Surface Stations

This weekend I’m a bit busy again; however, Roman M has written an interesting post on combining surface station anomalies.

## Combining Stations (Plan B)

In his post on combining station monthly temperature series, Tamino proposes a method for calculating an “offset” for each station. This offset is intended to place the series for all of the stations at the same level. Such an accommodation is necessary because there are often missing values in a given record – sometimes for complete years at a stretch. Simple averages in such a situation can provide a distorted view of the overall picture.

In his proposed “optimal” method, Tamino suggests a procedure based on least squares to calculate a set of coefficients for shifting each station record up or down before calculating a combined series for the set of stations. The starting point is a sum of squares (which for calculational and discussion purposes I have rewritten in a slightly different form):

$SS = \frac{1}{2}\sum\limits_{t = 1}^{N} \sum\limits_{i,j = 1}^{r} \delta_i(t)\,\delta_j(t)\left( (x_i(t) - \mu_i) - (x_j(t) - \mu_j) \right)^2$

Here,

i (and j) = 1, 2, …, r = the station identifiers for a set of r stations

t = 1, 2, …, N = the time (i.e. year and month) of the observation

x_i(t) = the observed temperature at station i at time t

µ_i = the offset value which is subtracted from station i

δ_i(t) = indicator of whether x_i(t) is missing (0) or available (1)

In essence, the sum of squares measures the squared differences between the “adjusted” temperatures of every pair of available stations at each time, with each pair weighted equally. The estimated offsets minimize this sum. The mathematically adept reader will recognize that the solution for the individual µ’s is not unique; the differences between them, however, are. Tamino sets µ_1 = 0 to pick out a specific solution.
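This minimization can be sketched in a few lines of numpy. This is my own illustrative sketch, not code from either post: the function name and the use of `np.nan` to play the role of the δ indicators are assumptions. Each pair of stations available at a given time contributes one residual (x_i − µ_i) − (x_j − µ_j), and µ_1 is pinned at 0 to select a specific solution.

```python
import numpy as np

def fit_offsets(x):
    """Estimate station offsets mu minimizing the pairwise sum of squares.

    x : (N, r) array of temperatures, with np.nan marking missing values
        (np.nan plays the role of delta_i(t) = 0 in the formula).
    Returns mu with mu[0] pinned at 0, since the solution is unique only
    up to a common additive constant.
    """
    N, r = x.shape
    rows, rhs = [], []
    for t in range(N):
        present = np.flatnonzero(~np.isnan(x[t]))
        # one residual (x_i - mu_i) - (x_j - mu_j) per available pair
        for a in range(len(present)):
            for b in range(a + 1, len(present)):
                i, j = present[a], present[b]
                row = np.zeros(r - 1)       # unknowns are mu_2 .. mu_r
                if i > 0:
                    row[i - 1] = 1.0
                if j > 0:
                    row[j - 1] = -1.0
                rows.append(row)
                rhs.append(x[t, i] - x[t, j])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.concatenate(([0.0], sol))
```

When the stations really do differ only by constant levels, this recovers those levels exactly even with scattered missing values, since every residual then pins down a pairwise difference of offsets.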

Read the rest here

## 22 thoughts on “Roman on Methods of Combining Surface Stations”

1. Clark says:

If you were to replace an empty reading at station A with an existing reading from station B, you would first have to establish that there is some correlation between stations A & B… no?

But if there is a correlation, why do you need to substitute it? I could only imagine that it would be to complete the grid. But since I’ve never seen any error functions calculated or displayed for gridded data, why don’t you just ignore the missing data point?

Where I learned how to display engineering data (sorry, I wasn’t a scientist), you always had to explain your “uncensored” data first. Then you could go on to show anything that you subsequently did with the data. Given the state of climate science today, this principle ought to be followed reverently.

As a denier, I’ve never understood the lack of original data display or the lack of error function evaluations. Pardon my naïvety.

2. BarryW says:

I find the idea of combining stations troubling, since you’re trying to ‘guess’ at adjustments to correct for issues like different altitudes. But what most people are trying to discern is the overall trend. What if each station time period during which nothing changed were treated as a unique station, and a trend line computed for that subset? Then, instead of combining each anomaly, you would use the set of trends to determine the overall trend for a time period. It wouldn’t account for UHI, but would it allow you to combine the data without adjusting the actual values? I’m not sure whether this is mathematically valid, but I wanted to throw it out there.

3. Ausie Dan says:

I like Clark’s idea of just ignoring any stations with missing data for a given month (and reducing the divisor by N, the number of missing stations for that month). This will distort the average “global temperature”, but as long as N is small the distortion will be undetectable amongst data from 1000-plus stations.

When you do have data for that station, you will get a minute tick up or down in temperature, depending on whether that station is above or below average. (If there are many holes in the data set, then that station should be terminated.)

The keepers of the three global temperature records muck about with the raw data so much that it becomes alphabet soup, even if there is no conscious or unconscious bias in their work.

4. Kenneth Fritsch says:

How can we test the RomanM method of combining station series given that some of the data are missing against other methods that might be applied to the same problem?

5. Carrick says:

Kenneth Fritsch:

How can we test the RomanM method of combining station series given that some of the data are missing against other methods that might be applied to the same problem?

Same approach that Chad is using: namely, use Monte Carlo simulation to gauge the effect of missing data on your algorithm.

6. Geoff Sherrington says:

One of the problems with missing data, as with outliers, is this: the data compiler might have taken out a value because it was extraordinarily high or low, or short or long. If you replace it with a calculated guess like an average value, that rather defeats the purpose, n’est-ce pas?

The only thing to do with missing values – provided they are not rigged to be missing – is to not include them in the calculation. Anything more is a guess.

7. RomanM says:

This is not about “replacing the empty reading”, “trying to ‘guess’ at adjustments to correct for issues like different altitudes”, or “adjusting the actual values”. It is about solving the problem that is caused by missing values in the records.

Let me give a simple example. Suppose you have three station series in a region where we would like to calculate the average temperature. The values in the series look like:

Station 1: 0, 0, 0, …
Station 2: 2, 2, 2, …
Station 3: 10, 10, 10, …

The average temperature is 4 at each time point. Now what will happen if any of the measurements at a given time is unavailable? If we just average the remaining two values, our result will be 1, 5, or 6, depending on which value is missing. Nothing has changed in the actual average (it is still 4) – the error results solely from the fact that the value was not available.

The problem can be solved by noticing that the difference between each station and the average is constant in time. Station 1 is always 4 below the average, station 2 is 2 below, and station 3 is 6 above. We shift all the data up (or down) by these amounts (only within this calculation process – the original archived data will remain unchanged):

Station 1: 0+4, 0+4, 0+4, …
Station 2: 2+2, 2+2, 2+2, …
Station 3: 10-6, 10-6, 10-6, …

The shifted data still average 4 at each time. However, if one (or even two!) values are missing at a given time, the average of the remaining shifted values is still going to be 4, regardless of which values are missing.
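Roman’s three-station example can be checked in a few lines (my own sketch, using the numbers from the comment):

```python
import numpy as np

levels = np.array([0.0, 2.0, 10.0])   # stations 1-3, constant in time
true_mean = levels.mean()             # 4.0 at every time point
offsets = true_mean - levels          # +4, +2, -6

# suppose station 3's reading is missing at some time point
available = np.array([True, True, False])

naive = levels[available].mean()                 # 1.0 -- purely a missing-data artifact
shifted = (levels + offsets)[available].mean()   # 4.0 -- unchanged
```

Dropping either of the other two stations instead gives 5 or 6 for the naive average, while the shifted average stays at 4 in every case.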

In reality, the station values will vary depending on local conditions, instrument variation, etc. The values (0, 2, and 10) written above would then represent mean station values. What my example indicates is that this method can produce statistically unbiased estimates of the average temperature of the stations. This means that adding incomplete records to the procedure can usefully lower the uncertainty of the end result without introducing an uncorrectable bias, as simple averaging of the available results would do.

Error bars can be calculated for the combined result at each time point. A further byproduct of this methodology is the ability to go back to each series and examine how the series differs from the unbiased combined result over time to see if there are irregularities in the station record.

8. Kenneth Fritsch says:

Roman, your example uses the station temperatures. What if one were using temperature anomalies, or do we assume that missing temperatures preclude an anomaly calculation?

9. RomanM says:

#9 Kenneth

I haven’t examined this in detail, but from a quick glance at the math, I can see that there are some difficulties with using anomalies.

From the model for the data:

x_i(t) = τ(t) + µ_i(m(t)) + ε_i(t)

so that the anomaly for a given month looks like (where ave(x_i(T)) means “the average of x_i over a time period T”):

anom_i(t) = x_i(t) – ave(x_i(T)) = τ(t) – ave(τ(T)) + ε_i(t) – ave(ε_i(T))

The offsets disappear, since anomalies are calculated separately for each station-and-month combination. Now, if the same time period T is used for all of the calculations, then the term ave(τ(T)) is just the same constant for every month and every station and is not a problem to deal with. The only complication that will arise is in calculating error bounds for the estimates, because the “error” structure has become more complicated.
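To spell out the intermediate step (my addition, using Roman’s model and notation): averaging the model over a common baseline period T within one calendar month m gives

$ave(x_i(T)) = ave(τ(T)) + µ_i(m) + ave(ε_i(T))$

since µ_i(m) is constant over that period. Subtracting this from x_i(t) therefore removes µ_i(m) exactly, which is why the offsets drop out of the anomaly.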

However, if it is not possible to calculate the anomalies relative to the same time period (when all stations must have available observations simultaneously), then you have now introduced a bunch of new values which are calculated from the temperature “signal” and this creates a new set of difficulties to solve, if possible.

My opinion is that it ain’t gonna work so good.

10. RomanM says:

Darn, all those subscript i’s are no longer subscripted…

11. RomanM says:

#10

Aarrgh. Didn’t think about it enough: ave(τ(T)) now becomes the new “offset”. However, the error term structure is still complicated. I guess that I’ll have to run some comparisons.

12. EW says:

If I may…during my layperson’s readings about combining station records or homogenizations, I found this:

http://www.met.hu/omsz.php?almenu_id=omsz&pid=seminars&pri=12&mpx=1&sm0=0&tfi=stepanek

where the author uses some sort of homogeneity testing, as used in climatology before combining data, and he has developed software called “AnClim” for homogenization and time-series analysis. Is that relevant to the purpose of this article?

13. RomanM says:

#13 EW

No, they have made a whole project out of it with so many moving parts that it is only peripheral to the simpler matter at hand.

14. RomanM says:

#9 Kenneth (again)

Re-thought the whole thing from the viewpoint of the original Tamino SS (after-dinner wine 😉): the anomalising constant gets combined with the original offset to form a new one when the SS is minimized. This gets subtracted from the anomaly to give the original result. The end result is the same as for the raw temperatures. This would not be the case for the “single” offset.

(Note to self: Stop thinking in public…)

15. Re 13: I downloaded AnClim a while back, I believe… can’t remember too much about it, except that he had a nice presentation that explained a lot

16. EW says:

I went through Stepanek’s PhD thesis (in Czech) to learn something about the homogenization subject, and got the impression that constructing reliable regional time series from station data is a lot of work: metadata about equipment changes and station moves must be consulted, station data must be processed through the cited homogenization test, and series that don’t pass should be thrown out…

Therefore I’m constantly puzzled by infilling, adjusting and grid-averaging algorithms used for getting global temperature anomalies, which do no such things. It makes me rather sceptical.

17. Ok, not to detract from the math (Combining Stations (Plan B) looked neat to me) but Geoff Sherrington over at CA clued me in to look at what Warwick Hughes has been up to. Well, this applies:

A graphing of nearest neighbors. I point it out just so anyone who doesn’t want to do Roman’s math can follow along with their own ‘bouncing ball’, maybe even for their own locality.

My take on this recent development is that Phil lost the data but a friend of Warwick’s found it via inter-library loan. Is CRU clown school or what?

18. RomanM says:

#17 EW

Therefore I’m constantly puzzled by infilling, adjusting and grid-averaging algorithms used for getting global temperature anomalies, which do no such things.

IMHO, you are making an “apples and a whole fruit basket” comparison here. No one denies that the process of going from a collection of raw station records to a single combined temperature series is a complex process which involves many necessary steps. What I have looked at is basically just one of those steps – taking a set of series which may have already been treated as you describe and turning it into a single regional record. The method that I am looking at would be integrated into the overall procedure and would take any previous treatment of the data into account.

I would add that in fact this algorithm can also help to evaluate the effect of station changes on the recorded temperature series and can be of value in diagnosing possible UHI effects. For example, comparison of a given station to the combined series of the remaining stations could provide information as to whether there may be a trend or a quantum change in the station data which is not present in the combined series.
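That diagnostic can be sketched quickly. The function name, the toy data, and the crude before/after step test below are all my own, purely illustrative; the idea is Roman’s: compare one station against the combined series of the rest and look for a trend or level shift the others don’t share.

```python
import numpy as np

def station_vs_rest(x, k):
    """Difference between station k and the average of the remaining stations.

    x : (N, r) array of temperatures, np.nan for missing values.
    A trend or step in the returned series suggests a problem at station k
    that the other stations do not share.
    """
    others = np.delete(x, k, axis=1)
    return x[:, k] - np.nanmean(others, axis=1)

# toy check: station 0 jumps by 1.0 halfway through (e.g. a station move)
rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.1, (100, 4))
x[50:, 0] += 1.0
d = station_vs_rest(x, 0)
step = d[50:].mean() - d[:50].mean()   # crude before/after comparison
```

In practice one would compare against the offset-combined series rather than a plain mean of the rest, and use a proper changepoint test rather than a before/after split, but the difference series is the starting point either way.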

19. Kenneth Fritsch says:

(Note to self: Stop thinking in public…)

In my view, thinking in public takes someone who is sure of oneself and I think it adds to the blogging experience.

I would make an exception for myself from this observation as I often think out loud in hopes of someone setting me straight on a topic with which I am not so familiar. I am long past the age when I would embarrass myself with a question.

20. Kenneth, I think aloud all the time on blogs. It’s like brainstorming: hopefully not a brain drizzle or a brain fart, but in those cases just excuse yourself.