the Air Vent

Because the world needs another opinion

Trends in the Antarctic

Posted by Jeff Id on September 6, 2009

To further the documentation of the offset calculation, I’ve put together the R code with comments and a couple of blink comparisons of the raw anomaly data and the offset anomaly data.

Offset area 100 index

Click to see Blink Comparator

The index value is simply the serial number of the satellite gridcell being reconstructed. Remember, the algorithm works by looking at each satellite gridcell and finding the closest station with non-missing values. The offset is calculated from the overlapping values of the new station and the nearest already-offset station, matching the means of the earliest overlapping values up to a maximum of 60 months. Each color in the graph above is a different surface station at a different location, so the offset calculation is quite critical in determining an appropriate trend.
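As a rough illustration of what the offset does (a minimal sketch with invented numbers, not the reconstruction code further down), the offset is just the difference of the overlapping means:

old = c(0.2, 0.4, 0.1, 0.3) #anomalies of a station which already has an offset applied, overlap months only
new = c(1.1, 1.3, 1.0, 1.2) #anomalies of the newly appearing station over the same months
offset = mean(old) - mean(new) #shift needed to put the new station on the old station's level
new + offset #the adjusted new series now has the same mean as the old one over the overlap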

Offset area 1000 index

Click to see Blink Comparator

Sat index 1000, chosen at random just as 100 was.

The differences in trend and algorithm were discussed in this post: Area Weighted Antarctic – Offset Reconstructions

The R file can be downloaded HERE.

——

I’ll try to describe the various sections to make the operation of the simple reconstruction clear. The auto-formatting in WordPress would drive a monk crazy, so if you’re having difficulty, download the R file above. If you don’t have R (which is free and easy to download and use), the file is plain text, so you can open it in Notepad or something similar.

First, the circle distance function, which calculates the great circle distance over the Earth from lat/lon coordinates. This function assumes a spherical planet and returns the distance in kilometers between a coordinate and a coordinate vector.

circledist = function(lat1, lon1, lat2, lon2, R=6372.795) #R is the Earth radius in km
{
  pi180 = pi/180

  y = abs(lon2 - lon1) #wrap the longitude difference into the 0 to 180 degree range
  y[y > 180] = 360 - y[y > 180]
  y[y <= -180] = 360 + y[y <= -180]
  delta = y*pi180

  fromlat = lat1*pi180
  tolat = lat2*pi180

  #haversine formula for the central angle between the two points
  theta = 2*asin( sqrt( sin((tolat - fromlat)/2)^2 + cos(tolat)*cos(fromlat)*(sin(delta/2))^2 ))
  circledist = R*theta #great circle distance in km
  circledist
}
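As a quick sanity check (the McMurdo coordinates here are approximate and only for illustration), the distance from the South Pole to McMurdo is about 12.15 degrees of arc, or roughly 1350 km:

circledist(-90, 0, -77.85, 166.67) #returns approximately 1351 km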

The next section is simply a quality control (QC) check to determine whether two stations have the same coordinates. Since we’re looking for the closest station with available data, it is important that stations not sit exactly on top of each other; otherwise we couldn’t choose the closest match.

gnd.coord=all.idx[2:3] #columns 2 and 3 of the station index hold the station lat/lon
surfcgrid=gnd.coord[-form.list[[11]],] #separates stations which are only on the satellite grid from the rest
statnumber=length(surfcgrid[,1]) #number of on-grid ground stations

##################################################################
##### this section ensures that none of the selected stations have exactly the same coordinates
##### this is only a qc check
##### From this I modified the Clean Air and Scott stations to use higher resolution versions of their actual coordinates
##### this fixed the two overlapping coordinate instances and ensured that no station was exactly on top of any other
##################################################################
## C-like brute force function to check all stations against each other
samecoord=array(FALSE,statnumber)
for(i in 1:statnumber)
{
  for(j in 1:statnumber)
  {
    if(i != j)
    {
      if (surfcgrid[i,1]==surfcgrid[j,1] & surfcgrid[i,2]==surfcgrid[j,2])
      {
        samecoord[i]=TRUE
      }
    }
  }
}

if(sum(samecoord)!=0)
{
  print("Error, more than one station at same coordinates")
}

There were 4 stations detected with duplicate coordinates. Clean Air and Amundsen_Scott are both South Pole stations; however, by careful examination of the histories you find that Clean Air is a special station located about a mile away and designed to have the absolute minimum pollution. It’s an interesting story if you do some searching in Google on Clean Air Antarctic. My solution was to change the Clean Air coordinates to the more accurate versions (more significant figures) found elsewhere, lat/lon -89.99, 179.99.

McMurdo and Scott were the same story and also have interesting histories. I changed the Scott coordinates to lat/lon -77.51, 166.45, which locates the data slightly differently from McMurdo, and the function above runs without detecting any additional problems on the 64 on-grid ground stations our group has decided to work with.
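A minimal sketch of the coordinate fix (the row numbers below are placeholders; the real rows would be looked up in surfcgrid by station name):

cleanair.row = 5 #placeholder row index for Clean Air
scott.row = 42 #placeholder row index for Scott
surfcgrid[cleanair.row,] = c(-89.99, 179.99) #higher resolution Clean Air lat/lon
surfcgrid[scott.row,] = c(-77.51, 166.45) #Scott lat/lon, now distinct from McMurdo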

This next function creates a 5509 x 600 matrix (gridcell x month) filled with the index of the closest ground station with data, for every month, for each satellite gridcell used in Steig08. It looks simple, but idiot that I am, the first version was three times longer for the same function. It takes a while to figure out the best way to do things in R, and I still have a couple of chunky for loops which make this algorithm slow enough that it takes 10 minutes to run.

###################################################################
## This function looks through all satellite grid points and determines
## the closest available ground station from each satellite grid point
## some points may be equidistant to two or more stations
## This function takes a long time
## The indices of the identified stations for each gridcell of each month
## are stored in stationindex
###################################################################
stationindex=array(0,dim=c(5509,600))

for(j in 1:600)
{
  grndmask=!is.na(Yo[j,]) #TRUE for stations with data in month j (Yo is the raw station data)
  for (i in 1:5509)
  {
    dist=circledist(surfcgrid[,1],surfcgrid[,2],sat.coord[i,2],sat.coord[i,1]) #distance from every ground station to satellite gridcell i
    stationindex[i,j]=which(min(dist[grndmask])==dist)[1] #index of the nearest station with data (first match if tied)
    #more than one station can have the same distance from a point
  }
  print(j) #debug line which gives progress of the algorithm
}

The result, as I explained above, is an array ‘stationindex’ which holds the index number of the nearest ground station for each month at each grid point. In the second blink comparator above, satellite gridcell number 1000 has 5 different ground stations used to create the series. The ground station indices are shown in the legend and are easily named, except that the legend doesn’t fit in the plot when the list gets long.
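If you want to see which stations feed a particular gridcell, a one-liner along these lines (using the stationindex array built above) counts how many months each ground station was the nearest one with data; for gridcell 1000 it should list the handful of station indices shown in the legend:

table(stationindex[1000,]) #station index versus number of months it was the nearest station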

The next section is more complex and is also far shorter than the original version. Fortunately, despite its increased complexity, it runs in a couple of seconds. It works through a 600 x 64 array (month x surface station) where much of the data is not available (NA). By starting at the first month, Jan 1957, and setting the offset of every surface station with data in that month to zero, we have a group of data which is defined. As the algorithm progresses, new stations become available. Before they can be used, an offset is calculated from the overlapping data between the new station and an already-offset older station. The maximum accepted overlap is 60 months of values. This was chosen in the previous post linked above, and the result is relatively insensitive to a range of maximum overlap values.

#######################################################################
## This function calculates the offsets for each ground series based on overlapping series data
## It works by starting at the first month of data (Jan 1957); each series with values in that first
## month gets an offset of zero. The algorithm then steps forward in time; as new series come in, the
## closest station having an already defined offset is located and the overlapping data of that station
## is used to offset the new station. Each station receives an offset only once, whether its values
## later drop in or out.
#######################################################################

maxoverlap=60 #maximum number of months of overlapping data to be used in calculating offsets
offset=rep(NA,ncol(Yo)) #Offset vector initialized to NA

offset[!is.na(Yo[1,])]=0 #initialize surface stations with data in first month to zero offset

for (i in 1:600) #cycle through each month
{
  mask= is.na(offset) #mask is TRUE for stations which do not yet have an offset
  newmask=((!is.na(Yo[i,]))-!mask)>0 #TRUE for stations with data this month that do not yet have an offset
  if (sum(newmask)>0) #if one or more new stations have started then
  {
    for(j in which(newmask==TRUE)) #more than one new station may have started, so loop over each new station j
    {
      dist=circledist(surfcgrid[j,1],surfcgrid[j,2],surfcgrid[,1],surfcgrid[,2])
      #dist is a vector of distances from the new station to every station

      cand=which(!mask) #stations which already have an offset
      cand=cand[order(dist[cand])] #sorted nearest first
      cand=cand[!is.na(Yo[i,cand])] #offset stations can drop out later, so keep only those with data this month
      stationindexb=cand[1] #nearest already-offset station with data

      newstation=Yo[,j] #new station series, for easier reading
      oldstation=Yo[,stationindexb]+offset[stationindexb] #nearest offset station series with its offset applied
      maskb= (!is.na(newstation) & !is.na(oldstation)) #T/F mask of mutually existing monthly values

      if(sum(maskb)>maxoverlap) #if the overlap exceeds the maximum allowed
      {
        oldvals=oldstation[maskb][1:maxoverlap] #truncate to the earliest maxoverlap months
        newvals=newstation[maskb][1:maxoverlap]
      }
      else
      {
        oldvals=oldstation[maskb] #else accept whatever overlap is available
        newvals=newstation[maskb]
      }
      #print(sum(maskb)) #debug routine/sanity check
      offset[j]=mean(oldvals)-mean(newvals) #offset is the difference between the overlapping (unfiltered) anomaly means
    }
  }
}
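A quick sanity check after the loop (my own suggestion, not part of the original code): every station that reports data at some point should now have a finite offset, and the stations with data in January 1957 should have an offset of exactly zero.

sum(is.na(offset)) #0 if every station reported data at some point and so received an offset
range(offset[!is.na(Yo[1,])]) #should be 0 0, first-month stations were pinned at zero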

After that, the offset data and non-offset data can be used to create reconstructions of the whole continent. Yoc is the offset version of the station data used for the offset reconstruction.

Yoc=t(t(Yo)+offset) #Offset vector is now calculated, add it to the surface station record
Yoc=ts(Yoc,start=1957,deltat=1/12) #make time series

rec.area=array(NA, dim=c(600,5509)) #Allocate reconstruction memory

for(j in 1:600) #for each month assign the nearest station's offset anomaly to the reconstruction
{
  rec.area[j,]=Yoc[j,stationindex[,j]]
}
rec.area=ts(rec.area,start=1957,deltat=1/12) #Make time series for temp reconstruction

rec.area.allslope=rep(NA,5509) #allocate slope memory

for (i in 1:5509) #calculate trends; the factor of 10 converts per-year slope to per-decade
{
  rec.area.allslope[i]=coef(lsfit(time(rec.area),rec.area[,i]))[2]*10
}
mean(rec.area.allslope) #print mean of slopes

#plot results
par(oma=c(1,1,1,1),mar=c(1,1,3,1))
labels=c("","Area Weighted Reconstruction\nShort records offset by nearest station 60 Month Overlap")
plt.map(dat=rec.area.allslope,rng=.7,labels=labels)
#savePlot("c:/agw/offset area/Antarctic Area Temperature Trend.jpg",type="jpg")
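To look at a single gridcell the way the blink comparators at the top of the post do, here is a minimal sketch (my own plotting suggestion using the objects defined above, not the code that generated the posted images):

idx = 1000 #gridcell index, as in the second blink comparator
offset.series = rec.area[,idx] #offset reconstruction for this gridcell
raw.series = ts(Yo[cbind(1:600, stationindex[idx,])], start=1957, deltat=1/12) #same nearest stations, no offsets
plot(offset.series, col="red", ylab="Anomaly (C)", main="Gridcell 1000, raw vs offset nearest-station series")
lines(raw.series, col="gray")
legend("topleft", legend=c("offset","raw"), col=c("red","gray"), lty=1)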

The results of all this fanciness are a change in continent-wide trend from 0.052 C/Decade for simple anomaly averaging to 0.067 C/Decade for the offset version, plus or minus about 0.10 C/Decade. So for all that work we managed to raise the central warming trend for the Antarctic by 0.015 C/Decade.


12 Responses to “Trends in the Antarctic”

  1. rob r said

    Is that the warming trend from the very beginning of the temperature records (circa 1957)? If the record just happened to start by chance in a natural (non-anthropogenic) decadal-scale cool period, could there be a slight bias in the 0.067 C/Decade (plus or minus about 0.10 C/Decade) trend?

  2. michel said

    Jeff, I posted on Lucia wondering if she would address this, and wonder if you would too? Is the Arctic actually warming? There is I think a recent article to that effect, and Tamino has posted his usual stuff arguing that it is. If you have time to do a critical objective account it would be much appreciated.

  3. michel said

    Sorry… forgot to say great post – can’t wait to read your next one!

  4. Jeff Id said

    I’ll check out Tamino’s recent advocacy as soon as I get a chance. I’ve read the article and SI and have strong opinions about this paper. It’s basically mathematical artifacts and cherry-picked data compiled together from other math-artifact papers. The whole thing is a ridiculous pile of mush based on unproven proxy data that doesn’t even look like temperature; it is such a large pile that it is hard to take apart. The main paper is based on dozens of other results which contain some of the same math errors. How do you argue against all of it?

    A better question might be: how do you argue for it, though? The correlation value has been abused beyond its limit and the primary names in paleoclimatology have made it into an art form. The fact that it makes peer review relates to the second question.

  5. Ryan O said

    Yet, amazingly, the same people who use correlation to sort through proxy data refuse to admit it in evaluating the reconstructions.

  6. Tom Fuller said

    Jeff, (thanks for stopping by and commenting on examiner.com), would you be able to put up a chart that compares the proxy findings with other data sources that are perhaps a bit more orthodox? It might be useful, if it’s easy to put together.

  7. Jeff Id said

    Tom,

    You’re asking the right question. The problem is that nobody has invented an orthodox method and the current methods are worse than people understand.

    You are a reasonable reporter who is looking for a clear perspective of the situation; however, there is a group of scientists who intentionally make the situation less clear. It may sound like tinfoil-hat time, but in reality these people are very well paid in both salary and fame for making these papers. I will be doing a post soon on the result and have downloaded some data from the paper already, but other things are taking too much time. CA is an honest source for info in the meantime. We would tear SteveM up otherwise. hehehe

  8. timetochooseagain said

    Tom, actually the thing is that the Steig et al. results really didn’t differ that much from previous results, never mind the hype:
    http://www.worldclimatereport.com/index.php/2009/01/30/antarctica-again/

  9. Kenneth Fritsch said

    Is that the warming trend from the very beginning of the temperature records (circa 1957)? If the record just happened to start by chance in a natural (non-anthropogenic) decadal-scale cool period, could there be a slight bias in the 0.067 C/Decade (plus or minus about 0.10 C/Decade) trend?

    The warming trend (as determined by the trend being statistically greater than 0 at the 95% confidence level) does not exist if one starts the series as published by Steig et al. (2009) much beyond 1965, as I recall. Of course, the trend of 0.067 C/decade plus/minus 0.10 C/decade is not statistically different from 0 for the 1957-2006 period. That trend with those CLs would probably not get you a publication in Nature, and I would hope certainly not a front-cover depiction.

  10. Kenneth Fritsch said

    I should add that starting in 1957 can be rationalized by noting that that date coincides with the start of temperature records at manned stations in Antarctica. I think if there were a cyclical character to Antarctic temperatures, then starting in a valley like the 1957-1965 period presents a biased view. Of course, with only 50-plus years of records and with considerable missing data, determining multi-decadal cyclical character is difficult.

    We do know, from none other than Steig from his ice core data, that the warmest decade of the twentieth century in West Antarctica was 1935-1945.

  11. timetochooseagain said

    10 - The Antarctic Oscillation was in a positive phase then; it could have something to do with it.

