the Air Vent

Because the world needs another opinion

AWS Gridded Reconstruction

Posted by Jeff Id on February 15, 2009

Guest post by Jeff C

Jeff has done an interesting and impressive recalculation of the automatic weather station (AWS) reconstruction from Steig et al. 2009, currently featured on the cover of Nature.

Warming of the Antarctic ice-sheet surface since the 1957 International Geophysical Year

Jeff is an engineer who realized that the data was not weighted according to location in the original paper. He has taken the time to come up with a reasonable regridding method which more appropriately weights individual temperature stations across the Antarctic. It’s amazing that a simple, reasonable gridding of temperature stations can make so much difference to the final result.

—————————————————————–

Jeff Id’s AWS reconstructions using his implementation of RegEM are reasonably close to the Steig reconstructions. The latest difference plot between his reconstruction and Steig’s is quite impressive. Removing two sites from his reconstruction that were erroneously included in initial attempts (Gough and Marion) gives us this chart:

difference

It is clear Jeff is very close, as the plot above has virtually zero slope and the “noise level” is typically within +/- 0.3 deg C except for a few outliers (that’s the Racer Rock anomaly at the far right, as we are using the original data). Although not quite all the way there, it is clear Jeff has the fundamentals correct as to how Steig used the occupied station and AWS data with RegEM.
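For readers who want to replicate the comparison, the difference plot above boils down to subtracting the two reconstructions month by month and checking that the residual series has near-zero slope and small scatter. A minimal Python sketch (illustrative only; the actual work here was done in R and Matlab):

```python
def residual_check(recon_a, recon_b):
    # Month-by-month difference between two reconstructions; returns the
    # largest absolute residual and the OLS slope (deg C per month) of the
    # residual series. Near-zero slope plus small residuals = close agreement.
    diff = [a - b for a, b in zip(recon_a, recon_b)]
    n = len(diff)
    mean_x = (n - 1) / 2
    mean_y = sum(diff) / n
    cov = sum((x - mean_x) * (d - mean_y) for x, d in enumerate(diff))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return max(abs(d) for d in diff), cov / var
```

With the two AWS reconstructions as input, outliers like Racer Rock would show up in the first return value while the second stays near zero.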

I duplicated Jeff’s results using his code and began to experiment with RegEM. As I became more familiar, it dawned on me that RegEM had no way of knowing the physical location of the temperature measurements. RegEM does not know or use the latitude and longitude of the stations when infilling, as that information is never provided to it. There is no “distance weighting” as is typically understood as RegEM has no idea how close or how far the occupied stations (the predictor) are from each other, or from the AWS sites (the predictand). Steig alludes to this in the paper on page 2:

“Unlike simple distance-weighting or similar calculations, application of RegEM takes into account temporal changes in the spatial covariance pattern, which depend on the relative importance of differing influences on Antarctic temperature at a given time.”

I’m an engineer, not a statistician, so I’m not sure exactly what that means, but it sounds like hand-waving and a subtle admission that there is no distance weighting. He might be saying that RegEM can draw conclusions based on the similarity of the temperature trend patterns from site to site, but that is about it. If I’ve got that wrong, I would welcome an explanation.

I plotted out the locations of the 42 occupied stations used in the reconstruction below. Note the clustering of stations on the Antarctic Peninsula. This is important because the peninsula is known to be warming, yet only constitutes a small percentage of the overall land mass (less than 5%). Despite this, 15 of the 42 occupied stations used in the reconstruction are on the peninsula.
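To make the imbalance concrete: in an equal-weight average (which is effectively what unweighted input amounts to), the peninsula’s influence follows its share of the stations rather than its share of the area. A quick back-of-envelope calculation using the figures above:

```python
stations_total = 42
stations_peninsula = 15
area_share_peninsula = 0.05  # "less than 5%" of the land mass, per the post

# Share of an equal-weight station mean contributed by peninsula stations
weight_unweighted = stations_peninsula / stations_total

# How overweighted the peninsula is relative to its area share
overweight_factor = weight_unweighted / area_share_peninsula

print(f"peninsula share of an equal-weight mean: {weight_unweighted:.0%}")
print(f"overweighting vs. area share: ~{overweight_factor:.1f}x")
```

Roughly a third of the input comes from under a twentieth of the continent, a factor-of-seven overweighting before RegEM ever runs.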

42-locations

Location of 42 occupied stations that form the READER temperature dataset (per Steig 2009 Supplemental Information). Note clustering of locations at northern extremes of the Antarctic Peninsula.

I decided to see what would happen if I applied some distance weighting to the data prior to running it through RegEM.

DISCLAIMER: I am not stating or implying that my reconstruction is the “correct” way to do it. I’m not claiming my results are any more accurate than that done by Steig. The point of this exercise is to show that RegEM does, in fact, care about the sparseness, location and weighting of the occupied station data.

I decided to carve up Antarctica into a series of grid cells. I used a triangular lattice and experimented with various cell diameters and lattice rotations. The goal was to have as many cells as possible containing occupied stations, but also to have as high a percentage of the cells as possible contain at least one occupied station. I ended up with a cell diameter of about 550 miles with the layout below.

regrid

Gridcells used for averaging and weighting. Cell diameter is approximately 550 miles. Value in parentheses is the number of occupied stations in the cell. Note that cell C (northern peninsula extreme) contains 11 occupied stations, far more than the other cells. Cells without letters have no occupied stations.
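The sorting of stations into lattice cells amounts to a nearest-cell-center assignment by great-circle distance. The sketch below is a hypothetical Python illustration, not the author’s actual code (which was R/Matlab); real coordinates would come from the READER station metadata.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance in miles between two (lat, lon) points.
    r = 3959.0  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def assign_to_cells(stations, cell_centers):
    # Map each station name to the index of its nearest lattice-cell center.
    assignment = {}
    for name, (lat, lon) in stations.items():
        dists = [haversine_miles(lat, lon, clat, clon) for clat, clon in cell_centers]
        assignment[name] = dists.index(min(dists))
    return assignment
```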

I sorted the occupied station data (converted to anomalies by Jeff Id’s code) into groups that corresponded to each gridcell location. If a gridcell had more than one station, I averaged the results into a single series and assigned it to the gridcell. Unfortunately, 14 of the 36 gridcells had no occupied station within them. Most of these gridcells were in the interior of the continent and covered a large percentage of the land mass. Since manufacturing data is all the rage these days, I decided to assign a temperature series to these grid cells based on the average of neighboring grid cells. The goal was to use the available temperature data to spread observed temperature trends across equal areas. For example, 17 stations on the peninsula in three grid cells would have three inputs to RegEM. Likewise, two stations in the interior over three grid cells would have three inputs to RegEM. The plot below shows my methodology.

shaded-cells

Shaded cells with single letter contain occupied stations. Cells with two or more letters have no occupied stations but have temperature records derived from average of adjacent cells (cell letters describe cells used for derivation). Cells with derived records must have three adjacent or two non-contiguous adjacent cells with occupied stations or they are left unfilled.
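The averaging and infilling rules can be sketched in Python. This is a simplified illustration, not the actual code: it requires three filled neighbors before infilling and omits the post’s exception allowing two non-contiguous filled neighbors.

```python
import math

def average_cell_series(station_series):
    # Month-by-month mean across the stations in one gridcell, skipping
    # missing values (NaN); a month with no station data stays NaN.
    out = []
    for month_vals in zip(*station_series):
        present = [v for v in month_vals if not math.isnan(v)]
        out.append(sum(present) / len(present) if present else math.nan)
    return out

def infill_empty_cells(series_by_cell, neighbors, min_filled=3):
    # series_by_cell: {cell_id: monthly anomaly list, or None if no station}
    # neighbors: {cell_id: list of adjacent cell_ids}
    # Empty cells get the month-by-month mean of adjacent filled cells,
    # provided at least `min_filled` neighbors have data; otherwise they
    # are left unfilled (like the two blank cells in the post).
    filled = dict(series_by_cell)
    for cell, series in series_by_cell.items():
        if series is not None:
            continue
        donors = [series_by_cell[n] for n in neighbors.get(cell, [])
                  if series_by_cell[n] is not None]
        if len(donors) < min_filled:
            continue
        filled[cell] = [sum(vals) / len(vals) for vals in zip(*donors)]
    return filled
```

The point of the structure is that each cell contributes exactly one series to RegEM, regardless of how many stations it contains.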

I ended up with 34 gridcell temperature series. Two of the grid cells I left unfilled as I did not think there was adequate information from the adjacent gridcells to justify infilling. Once complete, I ran the 34 occupied station gridcell series through RegEM along with the 63 AWS series. The same methodology was used as in Jeff Id’s AWS reconstruction except the 42 station series were replaced by the 34 gridcell series.

For comparison, here is Steig’s AWS reconstruction:

antarctic-aws-recon-steig

Calculated monthly means of the 63 AWS reconstructions using aws_recon.txt from Steig’s website. Trend is +0.138 deg C per decade using the full 1957-2006 reconstruction record. Steig 2009 states the continent-wide trend is +0.12 deg C per decade for the satellite reconstruction. The AWS reconstruction trend is said to be similar.

And here is my gridcell reconstruction using Jeff Id’s implementation of RegEM:

jeff-c-recon1

Calculated monthly means of the 63 AWS reconstructions using Jeff Id’s RegEM implementation and the averaged grid cell approach. Trend is +0.069 deg C per decade using the full 1957-2006 reconstruction record.

Although the plots are similar, the gridcell reconstruction trend is about half of that seen in the Steig reconstruction. Note that most warming occurred prior to 1972.
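For anyone reproducing the trend numbers above: the trend is just an ordinary least-squares slope fitted to the monthly anomaly series and rescaled to deg C per decade (slope per month times 120). A Python sketch, assuming a complete monthly series with no gaps:

```python
def decadal_trend(monthly_anoms):
    # OLS slope of a monthly anomaly series, rescaled to deg C per decade.
    n = len(monthly_anoms)
    mean_x = (n - 1) / 2          # mean of month indices 0..n-1
    mean_y = sum(monthly_anoms) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(monthly_anoms))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var * 120.0      # 120 months per decade
```

Applied to the 1957-2006 monthly means (600 points), this is the calculation behind the +0.138 and +0.069 deg C/decade figures.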

Again, I’m not trying to say this is the correct reconstruction or that it is any more valid than that done by Steig. In fact, beyond the peninsula and coast, the data is so sparse that I doubt any reconstruction is accurate. This is simply to demonstrate that RegEM doesn’t realize that 40% of the occupied station data came from less than 5% of the land mass when it does its infilling. Because of this, the results can be affected by changing the spatial distribution of the predictor data (i.e. occupied stations).


49 Responses to “AWS Gridded Reconstruction”

  1. Bernie said

    Jeff & Jeff:
    Nicely done.
    The caveats and limitations are also a nice touch.

  2. Ian said

    Posted on CA also.

    Well done JJ09. Not sure, and really don’t care, whether it halves the slope or doubles it; this is proper science: look at a paper, find a few station errors, consider the whole approach, notice the difference between coastal and inland sites (east/west is significant), and then do the proper science. Why the hell can’t the professionals do the same….

  3. Crashex said

    A very interesting post.

    One editorial remark, I’d call that a ‘hexagonal’ lattice, not a ‘triangular’ lattice.

    Why not an “RS” cell for the one that was left blank on the west coast?

    Since temperature is a continuous parameter [i.e. 3 degrees always exists somewhere between 1 degree and 5 degrees], would the value for each cell be better estimated as a value along a line stretched between any two adjacent stations, with some correction for the position on the line, as opposed to the average? Just a wild idea. Obviously somewhat more difficult to model and not necessarily any more accurate, but lines radiating out from the pole station to each coastal station would fill in all the intervening cells.

  4. TCOis banned...why? said

    It seems like you are doing double infilling, by creating temps for the gridded areas where there was no station prior to RegEM. I wonder how much of your results are from that? Also, why not just compare direct distance based infilling to RegEM?

  5. Pierre Gosselin said

    So, basically it’s choose the calculation and gridding method that gives the result you want.
    The claimed warming is so small that it’s beyond the margin of error – insignificant. If the CO2 greenhouse gas hypothesis were true, then Antarctic warming ought to be accelerating, and not decelerating as you assert.

  6. Bill Illis said

    Great material Jeff C and Jeff Id.

  7. Chris H said

    Very neat. Although I can think of one argument Steig might use against this analysis: By combining multiple stations into a single grid cell value, you are reducing the amount of information that RegEM has to work with, and therefore *of course* it will produce a different result.

    Personally I would actually be much more inclined to believe the above reconstruction versus Steig’s, even if that is not what you are claiming. But yes, the sparseness of the data does make one wonder how good any such reconstruction will be. I’d be interested to know which grid cells are most important to RegEM’s trend, and which have little effect.

  8. Chris H said

    @Pierre Gosselin
    I would disagree with how you arrived at your conclusion. The approximate 0.1C (per decade) trend is only “beyond the margin of error” because they chose to state the trend on a “per decade” basis. If they did it “per year” then it would be 0.01C, which is certainly not measurable, yet the source data has not changed. The reality is that this trend is based on about 50 years of (RegEM) data, so the actual trend is really 0.5C per 50 years – something which should be measurable with enough accurate measurements.

    Valid reasons for disputing the trend could include:
    * Non-satellite data is very sparse (both temporally & spatially).
    * Non-satellite data may be unreliable (see diaries of stations getting snowed-over, etc).
    * Satellite data shows cooling overall (only peninsula has continued warming) IIRC (don’t quote me!).

  9. Jeff C. said

    “One editorial remark, I’d call that a ‘hexagonal’ lattice, not a ‘triangular’ lattice.”

    I believe triangular is the correct terminology as the cell to cell centers form a triangle. I could be wrong on that.

    “Why not an “RS” cell for the one that was left blank on the west coast?”

    I had decided that if there were only two adjacent cells with measured data, they needed to be non-adjacent to infill the cell. The thinking was that if I was going to average only two cells, they should be on opposing sides of the cell, not skewed over to one side. This only affected that one cell.

    There are probably other methods of weighting that are more precise but my main objective was to see if weighting makes a difference at all. I’m working on making the code a little more user friendly to allow easy changes to the weighting methodology.

  10. NeedleFactory said

    Crashex wrote in #3:
    One editorial remark, I’d call that a ‘hexagonal’ lattice, not a ‘triangular’ lattice.

    The central points of the hexagons form triangles; thus
    a “triangular lattice” gives rise to a “hexagonal tiling.”

  11. Jeff C. said

    “It seems like you are doing double infilling, by creating temps for the gridded areas where there was no station prior to RegEM. I wonder how much of your results are from that? Also, why not just compare direct distance based infilling to RegEM?”

    I agree regarding the double infilling, but I’m not sure how else to proportionally weight stations like Vostok or South Pole (Amundsen-Scott). These are the only true interior stations and have no others in proximity for around 700 miles or so. Huge portions of the interior have no temperature measurements and would thus be under-represented in the reconstruction without infilling the interior cells. If RegEM allowed us to assign a weighting to unequally-sized cells (based on the square miles represented) we could do that, but it doesn’t.

    I’ll try some variations on the infilling and post the results. Probably won’t get it done until tomorrow, as my wife is already upset with me for putting this together on Valentine’s day.

    Thanks all for the feedback. I would like to try and refine this over time and

  12. MattN said

    I agree with Pierre. .069 is nothing but noise.

    AGW theory requires that cold regions over land away from moderating oceans will warm the quickest (like Antarctica). And that clearly is not happening.

  13. MattN said

    BTW: http://news.yahoo.com/s/ap/eu_belgium_antarctic_polar_station

  14. Jeff C. said

    One other comment regarding infilling and weighting. We are really talking about two types of infilling here. The first type is infilling temperature values for missing dates; the second type is infilling temperature values for missing locations. My contention is that RegEM was designed to do the former, not the latter. However, it was being used for spatial infilling despite having no knowledge of the locations of the predictor data (which are biased to the peninsula).

    By weighting the data input to RegEM, I was attempting to de-embed the spatial infilling from the final results.

    I’m not trying to say this is the correct reconstruction, just that RegEM is sensitive to the changes.

  15. Larry Huldén said

    I don’t know if it is relevant, but in the upper “hexagonal” map the two lowest hexagons seem to be placed too far to the left. They don’t fit the others in the constellation. If they were “correctly” placed one step to the right, the stations would however still fit within the hexagons.

    Good job !!

    best wishes
    Larry Huldén
    Finnish Museum of Natural History

  16. John F. Pittman said

    from TCO Y

    Isn’t this what RegEM does? Create infill data that is the same. It does not contain new information.

    From #7

    Are you really reducing information? The size of an extant parameter, which may be and should be influenced by the spatial distance from the sea or from the south pole, mountains, or altitude, should have an equalizing function of some sort, lest the small number of total stations for a continent be unduly influenced by a small area with many measurements. Using an average in a grid cell is a simple approach. If you have enough of these averaged points, losing one or two should not matter, and this could be tested. If it does matter, it would indicate that it should also matter for the Steig paper, since proximity to either the coast or the pole, as well as altitude, are known to affect temperatures. Or am I missing something?

  17. John F. Pittman said

    It seems like you are doing double infilling, by creating temps for the gridded areas where there was no station prior to RegEM. from TCO Y

    Isn’t this what RegEM does? Create infill data that is the same. It does not contain new information.

    From #7 By combining multiple stations into a single grid cell value, you are reducing the amount of information that RegEM has to work with, and therefore *of course* it will produce a different result.

    Are you really reducing information? The size of an extant parameter, which may be and should be influenced by the spatial distance from the sea or from the south pole, mountains, or altitude, should have an equalizing function of some sort, lest the small number of total stations for a continent be unduly influenced by a small area with many measurements. Using an average in a grid cell is a simple approach. If you have enough of these averaged points, losing one or two should not matter, and this could be tested. If it does matter, it would indicate that it should also matter for the Steig paper, since proximity to either the coast or the pole, as well as altitude, are known to affect temperatures. Or am I missing something?

    Reposted. Sorry lost some info trying to use the blockquote.

  18. Jeff Id said

    Jeff,

    I’ve been thinking, the peninsula stations and the island stations can’t have much to do with the interior of the continent. I wonder what will happen if these gridcells are not used in the reconstruction. In the last gridcell graph that would include cells A, B and C. I don’t know if that leaves enough data to run the analysis, but presumably if the reconstruction is valid, it should give a similar result.

  19. Layman Lurker said

    #18 Or maybe Jeff(s), just weight only the peninsula stations?

  20. #18. Why don’t you see what happens without A,B, C and D. The “vortex” S of 70S is surely what’s of interest here.

  21. Terry said

    I’m just wondering, would it not be easier to use a proprietary gridding software package such as SURFER, where you can choose the gridding method, e.g. triangulation, inverse distance, nearest neighbor, etc.? It just requires x,y,z ascii files or xls. There is also a freeware package called Quickgrid that is not as clever as Surfer but still useful.

  22. John N. said

    Re Chris H. #8 http://noconsensus.wordpress.com/2009/02/15/aws-gridded-reconstruction/#comment-2394

    I think you are correct. It seems to be wrong to claim any trend for a time period by simply dividing a longer period by 10 years for a decade, 100 years for a century, etc. because the uncertainty is not provided on the decadal level.

    Seems like a decadal trend should include assessing each decade in a dataset, then assessing mean of all decades, and providing the SD to determine whether the mean decadal change is meaningful. It could be done with multiple starting years to avoid cherry picking start/end years.

    Otherwise don’t we lose the ability to consider the error on e.g. a decadal scale?

    So either we say the anomaly is n for x period, or report for smaller periods by breaking the data into smaller periods and assessing, with error bars.

    Jeff C., this should be simple to do, but I would be happy to show what I mean with the data if you posted/emailed it.

    Of course I could be missing some elemental concept of regression analysis (not my strength.)

  23. Matt said

    I have a similar question as others about dicing up the data. Apart from the whole continent warming, it would be interesting to see what a color-coded “warming” map would look like (i.e. your version of the cover of Nature!). Is western Antarctica warming as much as Steig claims? i.e. are the R, S, EST grid cells warming? That seems to be the central “surprise” of Steig’s paper.

  24. Harry Eagar said

    Mighty impressive. I think even Al Gore could understand this.

  25. Jim G said

    Great work.
    Regardless of the actual trend, it does show that a more homogenous approach shows that the trend is significantly lower than the trend calculated by Steig, et al.

    The error that I see is that by averaging from the outside in, it may give a slight, or perhaps significant warming bias. Since the coldest areas are likely to be locations that don’t have stations.

  26. Tim L said

    that reduced it down but……

  27. Jeff C. said

    Re #18 and # 20

    I tried a few variations deleting the peninsula cells. The impact to the mean trend was small, less than +/- 0.01 deg C/decade. At first I was surprised, until I realized that I had already “watered down” any disproportionate weighting of the peninsula stations with the gridding. Deleting three or four grids out of 34 total doesn’t have much of an effect.

    A better way of evaluating the impact of these would be to return to the original 42 station series and selectively delete stations to see the impact. In addition to the peninsula, there are stations included on the South Orkney Islands, South Georgia Island and two islands nearly as far north as New Zealand (cells U and V on the second map). None of these would seem to be good predictors of interior continental trends.

  28. Great inspiration. However, I suggest there is still an inherent bias. You brilliantly looked to fit a honeycomb over the sites so that as many cells as possible contained stations. But I see that many of the cells encompass maritime stations, and landlocked stations are short-changed. Earlier NASA temperature maps suggest that the majority of the continental area is cold and cooling, the opposite of the minority maritime peninsula and other maritime locations that are warmer and warming. Now these two opposite tendencies are exactly what I would expect from a “Svensmark” effect cooling the ice fields while the peninsula enjoys a strongly downwelling current from the tropics that have warmed. I’d expect mid-continent to reflect mid-continent and not anything maritime. The increase of sea ice seems like a giveaway that cooling has been outweighing warming.

  29. Jeff Id said

    Jeff

    I deleted all the peninsula stations and ran it raw. Matlab didn’t converge for me until over 80 iterations, and the output was garbage. I reduced the number of PCs and found the same result for regpar=2, but it would converge with regpar=1. I got a negative slope for the average.

    How many iterations did your gridded version go through before the convergence threshold was met?

  30. Jeff C. said

    #29 It converged right off the bat using the standard settings (30 iterations). Presumably it is related to both the number of series and the number of available data points in the series. Since the peninsula stations have fairly complete records, deleting them significantly reduces the amount of data RegEM has to work with.

  31. DWPittelli said

    Great post!

    It does seem that you are still heavily overweighting coastal regions, as well as the peninsula and islands of course.

    Not only are most of the sites right on the coast, skewing the coastal hexagon readings from what they would be if measured even, say, 100 miles from the sea, but the next ring of hexagons to their interior is almost entirely dependent on coastal data. It would be interesting to see the trend for “interior Antarctic” temperatures, 3 stations which collectively ought to be seen as at least as important as all the coastal sites put together, if one wants to guestimate temperatures for the continent as a whole.

  32. Reminds me of the old joke: It’s late at night and Joe comes across a drunk searching the ground at an intersection.
    “Lose something?” says Joe.
    “Yeah, lost my keys.”
    “Right around here?”
    “No, I lost ‘em a block down the street, but the light is much better here.”
    The data is so much better over on the peninsula.
    Please stop me before I metaphor again!

  33. hswiseman said

    Congrats to Jeff Id on being added to the CA Blogroll! It has been impressive watching you develop from your early analysis work here which didn’t fully engage the complexity of the endeavor, to the more subtle and interesting work of late.

    At the risk of sounding dumb, what is the relationship between RegEM and the statistical tests for autocorrelation? If autocorrelation is low, is it still valid to use RegEM to infill?
    Is Jeff C’s work a backdoor version of infilling using autocorrelation, as one would assume higher autocorrelation where station density was higher, and Jeff C gives these stations more weight?

  34. Chris H said

    @myself who said “Very neat. Although I can think of one argument Steig might use against this analysis: By combining multiple stations into a single grid cell value, you are reducing the amount of information that RegEM has to work with, and therefore *of course* it will produce a different result.”

    After sleeping on it, I decided that although I don’t understand how RegEM works, it is clear that with enough stations located close together (as on the peninsula) RegEM would have a good chance of finding spurious correlations from one or two stations, and then use those spurious correlations to incorrectly generate data. i.e. This is a case of giving RegEM *too much* information!

    Jeff C’s idea of gridding means that the above RegEM problem is greatly reduced – although for those stations with no neighbours, it still does not stop RegEM latching on to one or two of THOSE stations which happen to have spurious correlations.

    BTW, by “spurious correlations” I mean recorded temperature with poor accuracy (and probably few data points) happening to correlate with satellite temperature trends elsewhere (in the calibration period). RegEM will then assume this (spurious) correlation holds before the calibration period (where no satellite data exists), and generate wrong data (which could have a positive or negative trend).

  35. Chris H said

    Perhaps a better way to express my last post is to say that gridding means we supply RegEM with one high-quality data set, instead of many low-quality ones.

    I therefore think that outside the peninsula, where most stations are ‘on their own’, it would make sense to group adjacent stations together – effectively using a larger grid. We will then be losing spatial resolution, but gaining better data to feed RegEM.

    After all, RegEM surely obeys GIGO (Garbage In, Garbage Out).

  36. Pierre Gosselin said

    The amount of temperature data available from Antarctica is so scant that the margin of error has to be hugely significant. Can I tell someone the AVERAGE depth of a lake with only a few measurements? I could, but not without specifying big error margins. No one can conclusively say what is going on in Antarctica, as Steig and the media seem to be doing.

    Has Steig assigned a plus/minus margin of error to his newly discovered warming trend?
    How about a study on the thickness (volume) growth of the Antarctic sheet? That’s the important metric with regards to costal adaptation issues, is it not? Last I knew is that the sheet is thickening.

  37. Jeff Id said

    #33 You ask tough questions about RegEM; from the papers I’ve read, there has been little exploration of the actual performance of RegEM. Spatial correlation drives the weighting, so it has to affect the result. That’s about as far as I get.

    Since it hasn’t been fully explored in literature, we get to do some ourselves. What could go wrong!!

    One thing I want to do is take sets of known data from nearby temp stations eliminate some data and see what is reproduced.

    #35 Chris

    I think gridding helps, but I still can’t agree with the paper’s premise of an imputation/extrapolation that doesn’t recognize spatial location explicitly. While RegEM probably works sometimes, I bet in other cases you can put good data in and get garbage out. The more infilling you do, the more likely it is.

    GIGO redefined – Good Data in Garbage Out.

    #36 As far as confidence limits, I can’t even read them because of the method used. It would waste brain space otherwise utilized for watching cartoons or something. —

    If you develop a method which limits the variance by regularization, where that regularization is defined by coefficients and odd weightings of data, and you then calculate confidence limits based on AR1 analysis, what you get is a nice-looking confidence interval where a small change in setup creates an entirely different trend well outside the original confidence interval, with similarly narrow confidence limits. I’m not a statistician, but it seems the basic stat values are incapable of representing the confidence level of the processing procedure used on the data.

  38. Pierre Gosselin said

    Thanks for the reply.
    I’m no mathematician – so I’ll have to chomp on that for a while.
    I think I get your drift though. Sounds like the method used by Steig is on awfully thin ice.

  39. Pierre Gosselin said

    hswiseman
    You got to admire people who can comprehend this stuff and add to it. I’m just barely hanging on their coattails!

  40. It would be useful to compare these gridded results with GISTEMP which, after all, uses essentially the same data. I did such a comparison as my first Steig post – it has a retrieval script for GISS gridded data which also gives info on the GISS grid.

  41. Peter D. Tillman said

    Nice work, guys.

    It seems like a more polished version of this should be submitted as a comment on Steig’s article in Nature.

    Perhaps McKitrick could help if you’re not familiar with the mechanics — or even provide an academic co-author if you prefer.

    Best wishes,
    Pete Tillman
    Consulting Geologist, Arizona and New Mexico (USA)

  42. Jeff C. said

    Pierre – Steig realizes that his calculations have a large MOE. In the paper he states:

    “We find that West Antarctica warmed between 1957 and 2006 at a rate of 0.17 +/-0.06 deg C per decade (95% confidence interval). Thus, the area of warming is much larger than the region of the Antarctic Peninsula. The peninsula warming averages 0.11 +/-0.04 deg C per decade. We also find significant warming in East Antarctica at 0.10 +/-0.07 deg C per decade (1957–2006). The continent-wide trend is 0.12 +/-0.07 deg C per decade.”

    These are for the satellite reconstruction, not the AWS recon, but the methodology is supposed to be comparable and Steig says the two recons have similar trends. He doesn’t give the precise values for the AWS recon in the paper.

    Of course, those MOEs were not included on that pretty picture on the cover of Nature. Also, keep in mind that the trend value (e.g. 0.12 deg C/decade for the continent) presumes that the data massaging was done correctly in the first place. I show that using a distance weighting halves that number, although the MOEs are probably comparable to Steig’s values.

  43. I am totally bemused by the concept of “infilling” data that do not exist, and then treating these artificial points as if they were real. Primarily I cannot understand the point of it. So I guess I’m stupid, because everyone else here writes fluently about it.

    The maps appear to show just three stations in the interior of Antarctica. Surely these are the only source of acceptable data for the continental mass. We can merely bemoan the lack of further data, but have to make do with what is available. However, there has been some great maths going on, using somewhat different assumptions or algorithms, and these have resulted in reconstructions of the interior climate.

    Steig’s, displayed as Antarctic AWS Recon. Trend Steig, is the subject of some comment or criticism, and his general techniques have been modified by Jeff to produce another plot, with the same name except for “gridcells” which is derived from the method combining real and other imputed data.

    Much as I have misgivings (or lack of understanding) about the background behind the plots, I am fascinated by them. The plots have a certain similarity but are nonetheless clearly different. I would very much like to be able to access the final numbers (monthly averages) that make up these plots, with the objective of producing behaviour patterns that could reveal /clearly/ just how similar or different the plots are.

    Is there any chance that these data sets could be made available for download as an ASCII file?

    Robin

  44. Michael D Smith said

    I liked the comparison that someone posted on WUWT – with such procedures someone might be able to determine the stock price of webvan or pets.com in 1957…

    A point I made was that I’ve seen a lot of “correlation mining” in the team’s work, and I wondered whether they run the programs with different parameters until the highest temperature slope is obtained. While I realize this is time consuming, I have a lot of background writing back-solving routines like this. I suspect that’s exactly what was done – tweak the parameters until the highest obtainable result is given. If this is what happened, I suspect the RegEM parameters used may or may not make sense with other data sets. Your thoughts? Wish I could help but I don’t have matlab…

  45. Jeff Id said

    #43 and #44,

    Steve almost has RegEM working in R. At that point I would recommend trying it out, it isn’t too bad to learn and you can do a lot with it.

  46. I asked Gavin a few questions on realclimate you may or may not find interesting about RegEm and the TIR data. He responded quite civilly.

  47. Jeff C. said

    #40 Steve – I’ll take a look at this over the next few days. I’ll also plot out the trends for the individual stations on the map to see how gridding affects the West/East balance. Unfortunately, my process uses R, Excel, Matlab and back to R, so it’s not simple to replicate. Looking forward to the R version of RegEM.

  48. #44. the choice of regpar=3 drops out of the sky. It’s worth looking whether regpar=3 yields a higher slope than regpar=1 or 2.

  49. WhyNot said

    #42 Jeff ID – If I understand the quote correctly, the continent-wide trend is +.12C/decade +/- 0.07C with a confidence of 95%, correct? Let’s see, the max trend is then +.19C/decade and the min trend is +.05C/decade, and the +/- error bar is almost 60% (.07/.12)? Peak-to-peak variance is .14C/decade?

    I took the same data used by Steig, ran it through my little pea brain and my results are +.19C/decade +/- .11C/decade. My confidence level is 99.9% positive that it is full of BS. The peak-to-peak variance is greater than the nominal??? The scientific community can draw conclusions based on this work???

    Will somebody please give me a dart board and a blindfold!!!
