Closest Station Antarctic Reconstruction

Update down below:

---

In my last alternate reconstruction of Antarctic temperature I used the covariance of the satellite data to weight surface stations. While that reconstruction is reasonable, I found that it spread the trends too far from the stations. This prompted me to look for a way to weight stations by area as accurately as I can. The algorithm I employed uses only surface station data, laid onto the 5509 grid cell locations of the Steig satellite reconstruction.

This new reconstruction was designed to give as good a correlation vs. distance as possible and the best possible area weighting of trend. It can't make a good-looking picture, but for the first time we can see the spatial limitations of the data. The idea was to manipulate the data as little as possible, so that where the trend comes from is as clear, simple and properly weighted as possible.

The algorithm I came up with works like this:

1. Calculate the distance from each of the 42 surface stations to the 5509 satellite grid points and store them in a 5509 x 42 matrix.

2. For each of the 5509 points, find the closest station and copy its data to that location. If there are missing values, infill them from the next closest station, looking farther and farther out until all NAs are infilled.

This is what the spatial distribution of trends looks like.

Figure 1: Reconstruction spatial trend by distance weighting, 1956–2006

You can see how the trends are copied to the points of each polygon from each surface station. There's quite a bit of noise in the graph, but similar temperatures appear to be grouped reasonably well together.

The code for the above plot takes about 20 minutes to run.

#calc distance from surface stations to sat grid points
#dist[i, j] = great-circle km from grid cell i to station j (5509 x 42)
#note: this masks R's built-in dist() function
dist = array(0, dim = c(5509, 42))

for (i in 1:42)
{
  dist[, i] = circledist(lat1 = Info$surface$lat[i], lon1 = Info$surface$lon[i],
                         lat2 = sat_coord[, 2], lon2 = sat_coord[, 1])
}

Circledist is Steve McIntyre's great circle distance function, with slight modifications.

circledist = function(lat1, lon1, lat2, lon2, R = 6372.795)
{
  pi180 = pi / 180

  # longitude difference, wrapped into [0, 180] degrees
  y = abs(lon2 - lon1)
  y[y > 180] = 360 - y[y > 180]
  delta = y * pi180

  fromlat = lat1 * pi180
  tolat = lat2 * pi180

  # haversine formula for the central angle, then arc length
  theta = 2 * asin(sqrt(sin((tolat - fromlat) / 2)^2 +
                        cos(tolat) * cos(fromlat) * (sin(delta / 2))^2))
  R * theta
}
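
A few quick sanity checks (my own examples, using the default R = 6372.795) confirm the great-circle output:

circledist(90, 0, 0, 0)     # pole to equator, ~10010.3 km (a quarter circle)
circledist(90, 0, -90, 0)   # pole to pole, ~20020.6 km
circledist(0, 0, 0, 180)    # half the equator, ~20020.6 km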

Then I wrote a function to return the closest station whose distance is greater than a value 'mindist' that I pass in. On the first call for grid cell 'ind', mindist is set to zero and the closest station is returned. If the closest station has missing data, I infill what it does have, then pass that station's distance as mindist and get the second closest station back. The process repeats until all values are filled.

getnextclosestdistance = function(ind = 0, mindist = 0)
{
  tdist = dist[ind, ]   # distances from grid cell 'ind' to all 42 stations

  # drop every station at or within 'mindist' km
  while (min(tdist) <= mindist)
  {
    tdist = tdist[-(which(tdist == min(tdist))[1])]
  }

  # index (in the full station list) of the nearest remaining station
  which(dist[ind, ] == min(tdist))[1]
}
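
For illustration (my example, not from the run itself), the first two calls for grid cell 1 would look like this:

s1 = getnextclosestdistance(1, mindist = 0)            # nearest station to cell 1
s2 = getnextclosestdistance(1, mindist = dist[1, s1])  # next nearest beyond it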

This is the loop that fills the array.

recon = array(NA, dim = c(600, 5509))            # 600 months x 5509 grid cells
recon = ts(recon, start = 1957, deltat = 1/12)

for (i in 1:5509)
{
  lastdist = 0
  # pull from ever more distant stations until no NA's remain in this cell
  while (sum(is.na(recon[, i])) > 0)
  {
    dd = getnextclosestdistance(i, mindist = lastdist)
    lastdist = dist[i, dd]
    mask = is.na(recon[, i])
    recon[mask, i] = anomalies$surface[mask, dd]
    print(paste(i, lastdist))
  }
}
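
As an aside, the same infill can be done with one precomputed station ordering per grid cell, which should run much faster than twenty minutes (a sketch, not the code used for the figures):

recon2 = ts(array(NA, dim = c(600, 5509)), start = 1957, deltat = 1/12)
for (i in 1:5509)
{
  for (dd in order(dist[i, ]))        # station indices, nearest first
  {
    mask = is.na(recon2[, i])
    if (!any(mask)) break             # column fully infilled
    recon2[mask, i] = anomalies$surface[mask, dd]
  }
}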

After that, all that's left is the plotting code by RomanM, SteveM and Jeff C, which I've shown before.

The next graph is the trend calculated from all 5509 grid points.

Figure 2: Total reconstruction trend by distance weighting

The trend is again positive, at 0.052 C/decade; this time it is on the outer edge of the stated 95% confidence interval of Steig09, 0.12 +/- 0.07 C/decade.
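
Since the satellite grid cells are roughly equal area, a plain mean over cells is already close to area-weighted. A minimal sketch of one way to get the continent-wide number (the published figures use the RomanM/SteveM/Jeff C code mentioned above, which may differ):

avg = ts(rowMeans(recon), start = 1957, deltat = 1/12)   # continent-mean anomaly
fit = lm(avg ~ time(avg))                                # slope in C per year
coef(fit)[2] * 10                                        # trend in C per decade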

As before, I also looked at the trend from 1967 – 2007.

Figure 3: Reconstruction spatial trend by distance weighting, 1967–2006

Figure 4: Closest-station reconstruction trend, 1967–2007

So from this reconstruction, temperatures have dropped since 1967 at an average rate of 0.031 C/decade. These results are similar to my previous reconstruction, which looks like this.

The Antarctic, an engineer's reconstruction:

Figure 5: Previous reconstruction, total trend

Figure 6: Previous reconstruction, spatial trend 1956–2006

And from 1967 – 2007

Figure 7: Previous reconstruction, trend 1967–2007
Figure 8: Previous reconstruction, spatial trend 1967–2006

While I was initially happy with the engineer's reconstruction, I found that station trends were not well localized by linear correlation weighting (the correlation vs. distance was not good). While peninsula station information stayed localized, the rest of the continent spread widely.

The trends shown match my last reconstruction reasonably well, but in my opinion these are of superior quality.

Certainly Antarctic temperatures have been flat, or cooling/warming insignificantly, for the last 40 years, while 50 years ago lower temps were recorded, which creates a very slight upslope in the 50-year trend. This is consistent with the growth in sea ice over the last 30 years, among other observations.

The warming in the Steig 09 paper seems to be an artifact of the mathematics more than an actual trend. Amundsen-Scott is the South Pole station. Its surface measurement is visually clean and has a downtrend for the full length of the record. This cooling is represented by the blue polygon in the center of the Antarctic in this reconstruction.

TCO keeps asking me if I'll post a trend higher than Steig's. Every reconstruction I've done has reduced the trend from Steig 09. Every change, no matter how small, has resulted in a trend reduction from Steig 09; even the attempt to match Steig 09 resulted in a slight trend reduction. I'll say it now for the first time: in my opinion the paper is flawed and has an exaggerated warming trend due to bad mathematics. Temperature distributions on the continent are a result of artifacts in RegEM and are not supported by the natural weather patterns as they were presented.

Here's a clear example. Steig's paper shows warming across the entire Antarctic, yet here is a plot of the ground data at the South Pole.

Figure 9: South Pole surface temperature, 1957–2007

A reconstruction cannot ignore a trend this strong. So TCO, it isn’t up to me. As Gavin likes to say, the data is the data. This data just cannot support Steig’s conclusions.

---

Update, hat tip to David L. Hagen:

Here is an independently generated version of the Antarctic temp trends, done in 2008, which looks amazingly similar to Fig 8.

This map of Antarctica shows the approximate boundaries of areas that have warmed or cooled over the past 35 years. The map is based on temperatures in a recently-constructed data set by NCAR scientist Andrew Monaghan and colleagues. The data combines observations from ground-based weather stations, which are few and far between, with analysis of ice cores used to reveal past temperatures. (Illustration by Steve Deyo, UCAR.)

Article here.

Climate Models Overheat: "Computer analyses of global climate have consistently overstated warming in Antarctica, concludes new research" (5/10/2008)

34 thoughts on “Closest Station Antarctic Reconstruction”

  1. What you have constructed is a pixelated Voronoi diagram. It used to be used to estimate ore reserves from drill-hole data. I’m sure there must be R and Mathematica packages to construct it directly.
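
    A minimal sketch of that suggestion, assuming the deldir package (it builds planar tessellations, so lat/lon coordinates near the pole would be better projected first):

    library(deldir)                                   # Delaunay/Voronoi package
    vt = deldir(Info$surface$lon, Info$surface$lat)   # tessellate the 42 stations
    plot(vt, wlines = "tess")                         # draw only the Voronoi tiles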

  2. It seems like you’re throwing a lot of correlation away, some of which could be relevant to solving the problem. Sure, if all that matters is the nearest station, fine, your approach will work. But if there are higher level inferences (such as weather patterns), then you have cut those out of the equation. If you have a good algorithm, it should converge to the trivial solution (nearest neighbor weighting) anyhow if that’s all that really correlates. But if there’s more to it than that, it will be retained.

  3. There might be correlations to non-nearest neighbors that are significant. If so, you will see it in the size of the correlation coefficient. If correlations with faraway stations do not happen physically, you should just see this with low coefficients, not needing to clip them out of the reconstruction arbitrarily.

  4. “I’ll say it now for the first time. In my opinion the paper is flawed and has an exaggerated warming trend due to bad mathematics. Temperature distributions on the continent are a result of artifacts in RegEM and not supported by the natural weather patterns as they were presented.”

    This may very well be the case. But nearest neighbor constraints may not be the best way to look at the problem either.

    One can say that as performed, the overall algorithm gave the wrong answer without having to throw out the fundamental concept of what they were trying to do.

    In this most recent work, you’ve almost gone to the extent of just doing a station only reconstruction. (Yes, there is still some input from the sat observations, but you’ve thrown away a lot, by only looking at nearest neighbor correlations.)

  5. The algorithm doesn’t care about correlation; it just applies the nearest surface station to each gridcell.

    Think of it like area weighting of surface stations: sum(area_1 * station_1 + ... + area_n * station_n) / sum(area_1 + ... + area_n). (A short sketch of this follows below.)

    All data is used as-is; this makes it very similar to the other reconstruction, but in that case the area assignment was determined by correlation to satellite data rather than distance from the thermometer.
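
    Since the grid cells are roughly equal area, the weighting reduces to counting cells per station. A minimal sketch, assuming a hypothetical vector station_trend holding the 42 station trends:

    nearest = apply(dist, 1, which.min)   # owning station for each of the 5509 cells
    w = tabulate(nearest, nbins = 42)     # cell count = relative area per station
    sum(w * station_trend) / sum(w)       # area-weighted mean trend (station_trend is hypothetical)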

  6. Jeff, in looking at your output maps above, something occurred to me about the climate of the Antarctic peninsula.

    The biggest problem I see with Antarctica, in either your reconstruction or Steig’s, is the treatment of the continent as a single climate zone, when in fact the climate of the peninsula has a significantly different set of temperature and precipitation norms than the majority of the main continent.

    Going back to basic climatology one can recall the Köppen climate classification system. Antarctica has been classified as EF

    EF = Ice Cap Climate – all twelve months have average temperatures below 0°C (32°F).

    There has been some discussion that the Aleutian peninsula might be better served if newly classified as EM (Maritime Polar). This would separate relatively mild marine locations, such as Ushuaia, Argentina and outer Aleutian Islands like Unalaska, from the colder continental climates. The mean annual temperature for Unalaska is about 38°F (3.4°C), ranging from about 30°F (−1.1°C) in January to about 52°F (11.1°C) in August, with about 250 rainy days a year.

    Contrast that with interior Alaska temperatures, which are not moderated by the presence of the sea. Fairbanks, for example, has a mean annual temperature of 26.9°F (−2.8°C) and 106 rainy days.

    Using the Unalaska-to-Fairbanks comparison, the Antarctic peninsula would be a candidate for this new “Maritime Polar” (EM) classification, IMHO.

    In support of that, here is a seasonal temperature map submitted to Wikipedia by Stoat’s William Connolley.

    Note how in winter the Antarctic peninsula is completely at the other end of the temperature scale from the interior, just as we see in the Unalaska-to-Fairbanks comparison. In the summer the effect is less, but the Antarctic peninsula agrees mostly with the sea temperature band surrounding the Antarctic continent.

    Another piece of supporting evidence that the Antarctic peninsula climate is vastly different from the interior continent is precipitation, the other half of the Köppen climate classification system.

    Here is a map of Antarctic precipitation:


    This is a map of average annual precipitation (liquid equivalent, mm) on Antarctica

    Note once again that in terms of precipitation the Antarctic peninsula is also vastly different from the interior: the peninsula gets 400–600+ mm of precipitation while the interior gets 0–100 mm. The peninsula is an outlier when compared to the rest of the continent.

    As Köppen understood, places that are connected geographically and politically aren’t always connected by a common climate. Now add another factor that you pointed out in this article:

    https://noconsensus.wordpress.com/2009/02/15/aws-gridded-reconstruction/

    Note that the majority of the weather stations in Antarctica are on the peninsula, in your grid cell C: a total of 11. No other place in Antarctica comes close in the number of weather stations. Further, that grid cell also happens to be the one where the climate diverges most from the interior of Antarctica.

    So why is the obviously different Antarctic peninsula climatic zone being considered in the Steig study at all? The answers are: 1) it is connected geographically to the continent, so that the statement “Antarctica is warming” comes out true; 2) treating the Antarctic peninsula climate zone as an outlier would likely ruin the premise of the study in the first place.

    Of course the counter-argument would be: “Antarctica is classified as one climate zone, thus our analysis is robust.” But my counter-argument would be that we could likely find the same result from a study of the USA if the majority of the weather stations were based in the Florida Keys and south Florida, with the remainder around the coastal cities and maybe a few in the interior. Could we accurately derive the climate trend of the USA from such an arrangement? Methinks not.

    To test this, I’d like to see what happens when the interior and the peninsula are treated as separate climate zones. You could pick a delineation line right at the base, or go further out the peninsula; I doubt it would make much difference given the station weighting. Produce separate outputs showing the continent versus the peninsula.

    I’ll bet the results will be obvious and telling.

  7. Your precipitation plot is pretty interesting. I did a post on the sea ice trends by gridcell at the beginning of Feb.

    https://noconsensus.wordpress.com/2009/02/07/gridded-antarctic-sea-ice-trend/

    The graphics aren’t very good because I didn’t know R as well then, but look at the second graph, where the ice decreased.

    Also, from the temperature plot you can see that the temps are ocean-based. There is only a slight wave in the isotherm lines over the peninsula.

  8. Jeff:
    Along the same lines as what you are doing, but to account for spatial correlations, you could use a kriging algorithm. There is probably one available for R.
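
    A minimal hand-rolled sketch of ordinary kriging in base R (a toy planar version with a made-up exponential variogram; a real analysis would use great-circle distances and a fitted variogram, or a package such as gstat):

    krige_one = function(obs_xy, z, new_xy, sill = 1, rng = 500)
    {
      gam = function(h) sill * (1 - exp(-h / rng))              # exponential variogram
      n = nrow(obs_xy)
      A = rbind(cbind(gam(as.matrix(stats::dist(obs_xy))), 1),  # obs-obs semivariances
                c(rep(1, n), 0))                                # unbiasedness constraint
      h0 = sqrt(colSums((t(obs_xy) - new_xy)^2))                # obs-to-target distances
      w = solve(A, c(gam(h0), 1))[1:n]                          # kriging weights (sum to 1)
      sum(w * z)                                                # predicted value at new_xy
    }
    # stats::dist is called explicitly because the post's code masks dist with a matrix;
    # usage: krige_one(cbind(x, y), z, c(x0, y0)) returns the interpolated value at (x0, y0)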

  9. David,

    You’re right that I could. There are a variety of these methods which do the job. My goal was to show as clean and undisturbed a representation of the trend as could be done. Interpolative methods in this case would hide the limitations of the data without improving the quality of the trend.

  10. Jeff – good couple of posts. I took a few days off from Antarctica and have been reading over this post and the engineer’s post. The thing that strikes me is that the same range of trend values keeps coming up. Going all the way back to the gridded RegEM input in February, we keep getting .05 to .07 deg C/decade regardless of what’s been tried. Like what you show here, the slope is always flat (or slightly negative) from 1967 forward.

    I understand TCO’s point about seeing some posts where the trend increases, but there have not been any. I’m sure we could torture the data to get one, but what’s the point? All the methods used so far (area weighting RegEM input, area weighting RegEM output, increasing number of PCs, distance weighting infilling value, etc.) have been chosen to improve the fidelity of the recon, not change the trend. The fact remains that all of the methods do the same thing in reducing the trend overall with no warming from 1967 on.

    Why is this happening? I think it is becoming very clear that the early speculation was correct; the trends from the island and peninsula stations (20 of the 42) were smeared across the continent by Steig’s RegEM methodology. I think all of the alternate methods we have tried end up doing the same thing, i.e. reducing the influence of those stations in proportion to the area they represent. The resulting recon is just what everyone thought it was before the flashy picture on the cover of Nature: warm peninsula, cold continent.

  11. Jeff,

    I’m not sure it’s from the peninsula. I think it’s from inverted correlation, but I have no evidence yet to show it. I’ve spent some time single-stepping in R to try to understand the math, and IMO you wouldn’t want it determining the outcome of a product that actually needs to function. There’s no verification whatsoever in the paper, and whenever we’ve looked, trends change.

  12. #13 & #14

    Jeff Id / Jeff C – you are both right. I think it is the inverted correlations (and the rank 3 constraint) which cause the trend to be smeared spatially, but I agree with Jeff C. that the warming temps used in the smearing must come from the peninsula, particularly pre-1982. If you adjusted the temp trends for the peninsula downward by half and re-did your replication, my hunch is you would see a dramatic drop in warming for the reconstruction. I think Nic L’s test run adding a single pseudo-series shows both the spatial smearing issue you speak of and the fact that the temps for this smearing are sourced from input data with no recognition of geographical limitations.

  13. Sorry, but my post here is reversed. I will separate it out as much as I can.

    “It’s like they maximized the trend with bad math. I guess I shouldn’t be surprised.”

    Jeff, you are correct.

    First, we need more input (from others) on what you have here.
    Second, what is the distance (length) of the peninsula? Do a ring around the Antarctic (ocean & peninsula) and plot its temps vs. the continent.
    We could then demonstrate a slight warming away from the pole and a slight cooling over the pole, and get some real discussion on why that might be; see below.
    This would help address the complaints of #9.
    wattsupwiththat said
    April 12, 2009 at 8:29 pm
    #9
    Well we see bias here don’t we??? lol
    I thinks he likes to see cooling some where? Me thinks so. LOL

    After seeing your honest math and approach, I get a scent of what might be an answer to the temps. Yes, here it goes: the oceans are the source of the temps we have seen.
    70% of the earth, so it would be 70% of the forcing, and it would explain the melting of the Arctic ice in the summer and the low build in the winter. Now as the oceans cool we will see the reversal. What causes ocean heating and cooling? A different study.

    T….C….O…. reconstruction

    Were there any negative correlations to nearest neighbors and did you throw those away or retain them?

    It (would) seem (as) you’re throwing a lot of correlation away, some of which can be relevant to solving the problem. Sure, if all that matters is the nearest station, (then)fine, your approach will work. But if there are higher level inferences -(such as weather patterns)-, then you have cut those out of the equation. If you have a good algorithm, it should converge to the trivial solution of nearest neighbor weighting, anyhow if that is all that really correlates, But if there is more to it, then it will be retained.

    4. There might be correlations to non-nearest neighbors that are significant. If so, you will see it in the size of the correlation coefficient. If correlations with faraway stations do not happen physically, you should just see this with low coefficients …not needing to clip them out of the reconstruction arbitrarily.

    “I’ll say it now for the first time. In my opinion the paper is flawed and has an exaggerated warming trend due to bad mathematics. Temperature distributions on the continent are a result of artifacts in RegEM and not supported by the natural weather patterns as they were presented.”

    This may very well be the case. But nearest neighbor constraints may not be the best way to look at the problem either.

    One can say that as performed, the overall algorithm gave the wrong answer without having to throw out the fundamental concept of what they were trying to do.

    In this most recent work, you’ve almost gone to the extent of just doing a station only reconstruction. (Yes, there is still some input from the sat observations, but you’ve thrown away a lot, by only looking at nearest neighbor correlations.)

    Does it flow better? TCO has a point in trying to maintain the correlations, but forgets the large amount of satellite data thrown away due to cloud cover and detection in the supplemental processing (80%?). Jeff, can you determine the actual amount of tossed data?

    #1: this is the name I remember from schooling (1644, René Descartes).
    He is credited as the father of analytical geometry.
    http://en.wikipedia.org/wiki/Analytical_geometry
    This is the math class I hated the most of ALL the classes I attended! 224 credit hours, and those 4 were a very big headache!!!!

    Typo… below fig. 4
    So from this reconstruction temperatures have dropped since 1967 at an average rate of 0.31 C/Decade. These results are similar to my previous reconstruction which looks like this.
    should read?
    So from this reconstruction temperatures have dropped since 1967 at an average rate of (minus add)0.31 C/100years(Decade remove?). These results are similar to my previous reconstruction which looks like this.

    My thinking: more work on the peninsula vs. continent; separate these out as facts. Get the writing cleaned up, and show the old map by Steig et al. vs. yours.

    Just now I had a thought: the CO2 resonance factor may be bad in the E-model if it is applied to 100% of the globe.
    Does water emit the correct IR value for a CO2 molecule to resonate? 2380
    http://en.wikipedia.org/wiki/Infrared_spectroscopy
    “Water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment.” So does water send out IR?

  14. “I understand TCO’s point about seeing some posts where the trend increases, but there have not been any.”

    That was not precisely my point (in this case), although I would DEFINITELY level it against McIntyre, who says he is using his blog as a scratchpad, showing trials, but only shows them in one direction and tries to build PR. My point was subtly different here. It was more for Jeff to watch out for bias in hypotheses (google the method of multiple hypotheses for a nice article). IOW, does he basically already suspect the reconstruction is no good and look for ways to shoot it down (only)? And I was really trying to pin down just one essential alleged flaw, the negative correlations. Think about it this way: if we just change a single flaw (for each of the ones that Jeff alleges), what does it mean if that single factor helps make the trend bigger? I wanted to make sure that Jeff was not solely looking for ways to drive the trend down, and to get him to at least CONSIDER that he might run a variation and said variation might drive the trend up.

    “All the methods used so far (area weighting RegEM input, area weighting RegEM output, increasing number of PCs, distance weighting infilling value, etc.) have been chosen to improve the fidelity of the recon, not change the trend. The fact remains that all of the methods do the same thing in reducing the trend overall with no warming from 1967 on.”

    1. I think more PCs is a fundamentally different thing than the area weighting. With more PCs, you are keeping more information in the problem (removing a constraint). With area weighting, you are adding constraints and might be gutting the method of teleconnection correlations.

    2. Throwing in the 1967 comment is very kitchen sink. The authors already recognize very little recent warming. Let’s stick to the methodology and settle that one way or another instead of lumping everything together and having a fallback position and all that. I hate seeing our side do that so often. It’s so bloggerly amateur lawyer gamish instead of analytical dissaggregatorish.

    “Why is this happening? I think it is becoming very clear that the early speculation was correct; the trends from island and peninsula (20 of the 42 stations) were smeared across the continent using Steig’s RegEM methodology. I think that all of the alternate methods we have tried end up doing the same thing, i.e. reducing the influence of those stations in proportion to the area they represent.”

    Could be. But there is a difference between reducing the influence any which way you can (area) and having a good reason for it (PCs). IOW, it is not a priori “good” to reduce the influence of the peninsula. Heck, if there is some bizarre correlation with the interior, you would want to keep it. A little more agnosticism would do you good.

    “The resulting recon is the just what everyone thought it was before the flashy picture on the cover of Nature; warm peninsula, cold continent.”

    This sounds like a flourish. Who cares what “everyone thought it was”. A lot of people thought it was “no one knows”. Some still do. I actually agree that the cover presentation (and perhaps the times chosen) have some aspects of PR (e.g. not showing recent behavior, not correlating with CO2, etc.) That said, this is a separate issue. Don’t lump it all together like a little hack blogger internet maggot.

  15. Jeff, this sort of polygonal decomposition is standard practice in calculating ore reserves. Another standard practice (which is reflected in some more general statistical methods) is to “cut” extreme high values (the “nugget” effect) as the area of influence of a “nugget” is usually smaller than the nominal area of influence.

  16. #17

    TCO, the area weighting issue – whether it is input data or output – has nothing to do with interfering with legitimate correlations, even if they are on opposite sides of the continent. Properly weighted pre-1982 input data means that RegEM understands that 20 of the 42 surface stations originate from less than 5% of the continent, not 48%. Rank 3 processing and the issue of HF correlations vs. trend (qualifier: still awaiting Jeff’s post showing the disconnect between HF and linear trend correlations) have to do with properly constraining the spatial distribution of trend and eliminating spurious distance correlations (shown by the scatterplots of recon vs. real data).

  17. #19, Thanks for reminding me. I actually forgot and got sidetracked with these other things, that could be a good project for tonight.

    I also owe data to someone.

    BTW, you’re right about correlations; I have no idea what I could possibly be leaving out in this method. People didn’t like my much prettier positive-correlation analysis, so I moved on to this.

    Steve M points out that individual stations can bias the reconstruction and it may be reasonable to eliminate some of the fliers. Anthony Watts also had a similar suggestion about the peninsula.

    Incidentally I also redid this analysis last night but instead of looking to the next closest station for infilling missing values, I just left them missing. The trend dropped even further to something like 0.04.

    There’s a lot of discussion on this post at WUWT.
    http://wattsupwiththat.com/2009/04/12/a-challenge-to-steig-et-al-on-antarctic-warming/#comments

  18. Layman:

    It’s a good comment. Unfortunately I don’t follow it all (honestly my fault, not yours), as I have been lazy and the presentation is hard to follow across threads. I guess I thought Jeff was using area weighting to mean that he essentially just uses nearest-neighbor effects. If instead you are making some pre-gridding of the data and allowing correlations (if significant) across the continent, but constraining the number of correlators, that might be fine. It’s at least different from what I thought he was doing.

    There’s probably some trade-off of number of stations in an area versus usefulness of the information. For instance, if you did a good political survey and had 100,000 people from the West 50% (pop based) and 200,000 from the East 50% (pop based), you probably would just want to make an overall prediction based on adding the independent estimates of each part (let’s assume that the math works out so that 1000 people gives a good survey in general). However, if you had 999 people surveyed in the West and 1 in the East, you would likely want to base your entire prediction on the West survey and on the historical correlation of West to East. As an exercise for the reader, one can imagine various permutations.

    This whole thing sounds trivially simple, so I’m sure someone has thought through how to amalgamate the available data for the best prediction even when some areas are under/over-sampled. But not me, since I’m lazy and uneducated. 😉

    P.s. I’m still worried about any method where you throw out the negative correlations.

  19. #21, Some day you should let us know your background.

    I think you’re missing the point of this method. It doesn’t use correlation at all. It simply averages the data together based on area weighting. The area in this case was determined by the closest station to each of the 5509 gridcells.


    In the engineer’s reconstruction, it also simply averaged based on area weighting, combined with correlation weighting. The area used (which gridcell applied) depended on a positive correlation to the surface record. I then weighted each trend by its correlation to that point.

    Negative correlation wasn’t thrown out; it was used as an indicator that the gridcell was not related to the faraway temperature station. As you get farther away, correlation drops, and my method deweighted the trend according to the quality of the correlation.

    You can tell a negatively correlated surface station is far from the gridcell by the distance vs. correlation plots. Using negative correlation wouldn’t make any sense in the case of my other reconstruction, because it would flip the temp curve upside down and add it to the unrelated gridcell.

  20. #22

    WRT the “engineer’s reconstruction”, have you tried progressive smoothing of the monthly anomaly data for your weighting coefficients yet? At some point – seasonal, 2 year, 5 year or whatever – it should filter enough HF noise and converge on the linear trend. It may take different time scales of smoothing for different regions, but you could get more meaningful spatial and temporal data processing still ultimately connected to trend instead of HF.

  21. By converging, I mean correlations of the “x” time-scale smoothed data should become closely related to the corresponding linear trend correlations. (A sketch follows below.)
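
    A minimal sketch of the suggestion (a k-month centered moving average before correlating; k = 24 is an arbitrary example):

    smoothed_cor = function(x, y, k = 24)
    {
      f = rep(1 / k, k)
      xs = stats::filter(x, f, sides = 2)   # centered k-month moving average
      ys = stats::filter(y, f, sides = 2)
      cor(xs, ys, use = "pairwise.complete.obs")   # ignore the NA ends
    }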

  22. Just a reminder: we skeptics have been extremely critical of using correlates that just correlate on a long trend but not with wiggle matching. The likelihood of spurious regression goes up, since you’re essentially lowering the degrees of freedom.

  23. #22:

    I’m not necessarily reacting to “this method”, but in some cases just to things that you’ve said in the text. For example, if you said to throw the negative correlations out, but you actually threw high-frequency out (of both positive and negative), my crit would be with your remarks (the logic of them) more than with your specific analysis.

    It’s actually a fair amount of work to parse through an entire post (as I did with the earlier one, where I had lots of comments). Even worse is when there is stuff from other posts, etc. It’s one of the reasons why finished papers (at least white papers) are easier to react to, and why we should not expect working scientists to read these blogs as if they were journals.

  24. Jeff Id
    Your Fig 1 and Fig 8 are very similar to the figure published with:

    Climate Models Overheat: “Computer analyses of global climate have consistently overstated warming in Antarctica, concludes new research” (5/10/2008)

    “We can now compare computer simulations with observations of actual climate trends in Antarctica,” says NCAR scientist Andrew Monaghan, the lead author of the study. “This is showing us that, over the past century, most of Antarctica has not undergone the fairly dramatic warming that has affected the rest of the globe.”

    Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models

    Climate Models Overheat Australia, AGU; and
    Climate Models Overheat Antarctica, New Study Finds, May 07, 2008, NCAR.
    See: Antarctica Temperature Trends figure.

    Monaghan, A. J., D. H. Bromwich, and D. P. Schneider (2008), Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models, Geophys. Res. Lett., 35, L07502, doi:10.1029/2007GL032630.

  25. My post was too long before; I’m still in learn mode!

    Typo… below fig. 4
    So from this reconstruction temperatures have dropped since 1967 at an average rate of 0.31 C/Decade. These results are similar to my previous reconstruction which looks like this.
    should read?
    So from this reconstruction temperatures have dropped since 1967 at an average rate of (minus add)0.31 C/100years(Decade remove?). These results are similar to my previous reconstruction which looks like this.

  26. Jeff, this is very good. I think Jeff C (#13), in his final para, is exactly right. The various reconstructions you have here all show a peninsula warming of about 0.4–0.5 degrees per decade, which is consistent with the observational data. Steig et al had 0.11, which is completely inconsistent with the data, by about a factor of 5.
    IMHO your work has now proved that Steig et al got their false result by spreading the peninsula warming over a wide area, as many of us have been saying for some time. When are you going to write this up as a paper?

    I think Fluffy is right, there is a 0 missing in the text below fig 4.
    But the overall trend for the whole continent is a rather meaningless number. The main thing is that the peninsula is warming and the rest of the continent isn’t – which of course was all well known and agreed until Steig et al came along.

