RSS and UAH have differing 30 year trends which lie outside my calculated measurement error for the data. Subtracting the two measures leaves a step located at about 1992. Since they both use the same source data, the question becomes: which series is right?
After reading several papers and exchanging some short emails with some smart people, I have come to attribute the bulk of the trend difference to this single step in the data, which corresponds to the time when satellite NOAA-12 began adding data into the trend.
Below is a graph of the RSS-UAH data where the step is quite visible. The flatness of the slope on either side of the step is a good indicator that most of the data is in good agreement between the satellite processing algorithms.
The graph below is a plot of the raw data and a filtered difference and the overall trends of the data. You can see the trends are crossed and divergent.
The correction is applied at the center of the circle above; the green line is again the difference between RSS and UAH. The correction method this time improves on my last one in that more data is taken into account, producing a more accurate trend. To match the GISS data to the satellite record, the curve was first detrended linearly over a 14 year window in the range shown by the gray box in the graph below. This was done with a linear least squares fit: the fitted slope is subtracted from the data, and the residuals are then multiplied by the lower troposphere satellite-to-GISS amplification factor of 1.23. Detrending the satellite data and overlaying gives the good match to the amplitude of the curve seen below, as predicted by climate models. The small green section is the region identified by Dr. Christy as being in question due to the transition between NOAA-11 and NOAA-12 (Oct 1991 to March 1992); because we want to change the data as little as possible, this is the area I focused on. GISS should be a good metric for correcting the trend, as it is comprised of hundreds of temperature measurements, creating a smoother series. Large sections of GISS data would not have been as useful due to UHI effects, but since we are looking at trendless data over such a short section, it should work well.
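The detrend-and-rescale step can be sketched in a few lines (a minimal illustration on synthetic data; the 168-month window length and the 1.23 amplification factor come from the post, everything else is made up for the example):

```python
import numpy as np

def detrend_and_scale(anomaly, amplification=1.23):
    """Remove a least-squares linear trend from a monthly anomaly series
    and rescale the residuals by the surface-to-troposphere amplification
    factor (1.23 per the post)."""
    t = np.arange(len(anomaly))
    slope, intercept = np.polyfit(t, anomaly, 1)   # linear least squares fit
    residuals = anomaly - (slope * t + intercept)  # trendless data
    return residuals * amplification

# Synthetic 14-year (168-month) GISS-like window: small trend plus noise
rng = np.random.default_rng(0)
giss_window = 0.015 / 12 * np.arange(168) + 0.1 * rng.standard_normal(168)
scaled = detrend_and_scale(giss_window)
resid_slope = np.polyfit(np.arange(168), scaled, 1)[0]  # near zero by construction
```

The residual slope is zero up to floating-point error, which is the "trendless" property the overlay comparison relies on.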
The same procedure was repeated again for RSS.
Please note that the graphs are allowed to diverge outside the slope match area as seen in the endpoints.
The next step was the big difference from my last calculations: I used the means of half year windows (6 monthly values) on either side of the defect region in the trendless data from all three datasets. I assumed the trendless GISS was correct over this short timeframe and corrected RSS and UAH to match. Below is what the corrections look like.
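A minimal sketch of that windowed-mean step correction, assuming the defect region is marked by start/end indices (the names and the synthetic step are placeholders, not the actual analysis code):

```python
import numpy as np

def step_offset(series, defect_start, defect_end, window=6):
    """Estimate the step across a defect region as the mean of `window`
    monthly values just after the region minus the mean of `window`
    values just before it."""
    before = series[defect_start - window:defect_start].mean()
    after = series[defect_end:defect_end + window].mean()
    return after - before

def correct_step(series, reference, defect_start, defect_end, window=6):
    """Shift the post-defect portion of `series` so its step across the
    defect matches the step in the trendless `reference` (GISS here)."""
    delta = (step_offset(series, defect_start, defect_end, window)
             - step_offset(reference, defect_start, defect_end, window))
    corrected = series.copy()
    corrected[defect_end:] -= delta  # remove the excess step
    return corrected

# Usage on a toy series with an artificial 0.5 step at index 20
ref = np.zeros(40)            # trendless GISS stand-in
uah = np.zeros(40)
uah[20:] += 0.5               # spurious step
fixed = correct_step(uah, ref, defect_start=18, defect_end=20)
```

After correction the step in the toy series relative to the reference is gone, which is the behavior shown in the corrected-curve graphs.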
These curves were added directly to the RSS and UAH data. The RSS correction was again of greater magnitude (the same as in my previous attempt), but this time the corrections fell in line with Dr. Christy’s analysis of the RSS and UAH data as compared to radiosonde (weather balloon) data. According to this GISS analysis, the step shifted UAH downward by 0.001 C and RSS downward by 0.037 C. The step falls near the center of the data length today, so the net trend is strongly affected by small changes at this point.
The corrected curves look like the graph below.
The RSS and UAH curves now lie very nearly on top of each other. The step in the difference between the metrics (green line in the circle) is no longer visible.
The corrected slope for UAH is 0.126 C/decade, down from 0.127 C/decade.
The corrected slope for RSS is 0.136 C/decade, down from 0.157 C/decade.
After homogenization the difference in trend is only 0.01 C/decade, which is within my stated measurement error from my previous ARMA analysis posts of about 0.02 C/decade at 95%. This instrumental error level is substantially smaller than any other error I have read on any blog or in any paper; that’s because it is related only to the noise level of the instrumentation, as computed by ARMA difference analysis of the GISS and UAH residuals. The actual slope error created by non-random effects can lie outside this limit. Still, since these measures use the same data as a source, we would expect very tight agreement, as the homogenized data shows.
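I can't reproduce the full ARMA analysis here, but the flavor of an autocorrelation-aware trend uncertainty can be sketched with the common AR(1) effective-sample-size adjustment (an illustration of the idea only, not necessarily the method used in the earlier posts):

```python
import numpy as np

def trend_ci95(series, per=120):
    """Least-squares trend (per decade for monthly data, via `per`=120)
    with a 95% half-width widened for lag-1 autocorrelation of the
    residuals (AR(1) effective sample size)."""
    n = len(series)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                # effective sample size
    se = resid.std(ddof=2) / (t.std() * np.sqrt(n_eff))
    return slope * per, 1.96 * se * per

# Example on synthetic monthly data: ~0.12 C/decade trend plus noise
rng = np.random.default_rng(1)
series = 0.001 * np.arange(360) + 0.05 * rng.standard_normal(360)
trend, half_width = trend_ci95(series)
```

Positively autocorrelated residuals shrink the effective sample size and widen the interval, which is why naive trend error bars on temperature series are usually too tight.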
This is another confirmation that the RSS data is in error at the point in question, the same conclusion reached by Dr. Christy using sonde data in his paper.
Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements
Published in
JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 112, 2007
I believe that my work has now resulted in a much more accurate correction than my first attempt. This result is supported in a couple of ways: first, it matches Dr. Christy’s result in the paper above; second, it compares favorably to the uncorrected trends after the step.
UAH after step — 0.123 C/Decade
RSS after step — 0.129 C/Decade
After digging into the data for endless hours, I now believe the UAH trend is the best of the three metrics. Until I find a reasonable explanation for the short term temperature variations in the sat data being 1.23 times greater than in GISS (a factor predicted by models), I also believe the 30 yr satellite temperature trends need to be divided by 1.23 to achieve a suitable match to GISS ground measurements. That places the heavily corrected GISS metric, at 0.183 C/decade for the last 30 years, well outside a reasonable difference from the satellite measures!
As I understand it today, if the UAH trend, as confirmed by the short term variation and the models, is accurate, then the GISS trend should be only 0.103 C/decade. What percentage of that is man made is anyone’s guess.
Just one question. How does Hadcrut compare?
I.e., if you do the same step transformation using Hadcrut instead of GISS, do you get similar results?
It is noticeable that from 2003 Hadcrut and UAH are in very close agreement regarding trend (see this WFT graph), while RSS seems to have a somewhat steeper slope.
(Note in the graphs I have subtracted 0.2 from all Hadcrut values to overlay it)
As Jeff has noted before when examining the differences between the satellite metrics, something changes around 2000 (give or take). There are some rhythmic swings in the differences, perhaps due to the UAH annual temperature signal which Jeff and Tamino have explored. Interestingly, just eyeballing the 2 yr filter difference graph from 2003 on, there are a couple of tendencies: 1) the differences seem to decrease (with a recent uptick); 2) the overall average difference would be positive rather than negative. This may help to account for remaining differences in the 30 year slopes and also the post-2003 slopes.
Jeff, if you have not already, please review David Stockwell’s recent post here:
http://landshape.org/enm/hansens-regression-to-zero/#more-1443
It deals with solar irradiance and points out an acknowledgment by Dr. Hansen of a paper by Tung, K.K., J. Zhou, and C.D. Camp suggesting that solar irradiance forcing is underestimated. I have not had time to look at the paper, but Stockwell suggests that the problem it points out has to do with heat transfer to the oceans.
According to Stockwell, the paper zeros in on the 11 year solar cycle vs. global temp trends.
When looking this over I recalled your post:
https://noconsensus.wordpress.com/2008/10/25/an-orbital-heating-signal-from-solar-input/
As you pointed out in your post, the signal you identified is common to all 3 metrics referred to there and coincides with the earth’s distance from the sun. As Dr. Svalgaard said, any such solar factors should be corrected for in the metrics. If there has been correction, then it would seem there is a common residual signal in the 3 metrics, which your post and the thread discussion implied. Perhaps this is further evidence of the underestimated solar forcing discussed in Stockwell’s post?
#3 I will check out the link tonight. I still think year to year variation could be a good method to evaluate the change in heat absorption by the climate system.
so we get a whopping 1.0C per 100 years………….. the sky is falling the sky is falling!!!!
LOL
#5
I think most AGW guys agree that the total trend is not entirely man made, with many putting it at say 60-70%. My guesses are less, but they are just guesses. So let’s say 0.7 C in 100 years above the baseline climate variation, as though we knew what that was. It seems to me the Pacific is doing a pretty good job freezing the US this year, so how much of the 30 yr trend is ocean? I don’t think anyone knows. I am hoping for a bit more global warming today though, ’cause it’s pretty damn cold here!
… Here the roadside snow banks are 4 to 6 FEET high and the road commission will need to get out the “WING” plows to cut down and move over the tops of the banks so that the front plow and grader can dump and slide the snow. 40 years ago we saw them out every year; maybe only 2 times in the last 30 years.
I understand cooling is not necessarily responsible for the large amounts of snow, BUT given the latent heat of forming ice, there is a whole lot of heat going somewhere to freeze this volume!!!!!!!
Thank You for an honest forum !!!!!!
Not a speck nor a spot But a
tiny Tim!
Francis #1
I haven’t had time to do that yet. It will take a bit but I’ll try it. I don’t know the history of Hadcrut as well as GISS so I don’t know how sat data is integrated, if at all. So far I’ve ignored it because as I understand it they don’t publish their algorithms. So it could be a smart 4 year old with a crayon for all I know.
Dr. Christy had some interesting comments for me by email including a new paper to read and I want to do a post on that next.
Hi Jeff
One of your best posts! Nevertheless, I am not quite ready to bow down to the sat data. UAH and RSS get pretty darn close once you zero out the error as you have shown, and together form a mutual admiration society that can diss the ground based record. But you do run into the issue of data independence: UAH and RSS are joined at the hip data-wise. If you use bristlecones you always get hockey sticks, and the multitude of studies using bristlecones with the same result cannot be held up against each other for mutual validation. CA is plowing this ground as we speak. The same goes for satellite data unless the underlying satellite data and the processing methods used stand up to scrutiny and calibrate to a tolerance against actual, accurate, contemporaneous measurements. I don’t believe that the radiosonde work has made anyone very happy or truly confident. A really solid piece of satellite calibration would render ground based temperature measurement almost irrelevant. I would love to see it.
SIDE NOTE- My observation is that the scientists who perform actual observational gathering and discovery work are eager to discuss their work, answer questions and debate methods and conclusions. It is the analysts and historians who seem to get much more hissy.
Thanks for noticing, this took more work than the last ten posts.
“I don’t believe that the radiosonde work has made anyone very happy or truly confident. ”
I learned that the sonde data is full of steps due to instrument type changes. I would like to use the same process on sonde data because it visually seems that without the step it would be an excellent match for sat.
You have to be a glutton for punishment to play with the radiosonde data.
Roll the SR-71 out of mothballs and chase the Sats around for a few days with a full instrument pack. Or do it yourself with a bunch of weather balloons tied onto your lawn chair (hint, bring a radio and a bbgun).
I’m a slow learner. Actually, Dr. Christy sent a paper which shows two distinct areas of discontinuity in the sonde data. I was going to go after those and see what happened.
There is a fundamental problem with measuring temperature by microwave emission from the 90 GHz oxygen band that can’t be completely resolved: the inversion of the data to temperature is ill-posed. That is, there are an infinite number of solutions that will produce the same readings. The reason is that the data is noisy and does not come from one altitude or pressure level. The weighting functions showing how broad a range of pressures is actually sampled are on the RSS site. The solutions can be constrained somewhat by smoothness, but you need a best guess, i.e. a current radiosonde sounding, to get the best result in a particular region. Or you can assume that the lapse rate is constant, which is what RSS does, IIRC. Then the temperatures are a simple linear combination of the various sensor readings. UAH uses a more complex method that combines the readings at different angles, but there still has to be some assumption about lapse rate somewhere that could in principle be wrong. So it’s not possible to simply abandon the instrumental surface data.
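The ill-posedness is easy to demonstrate: two different temperature profiles can give the identical reading once integrated against a broad weighting function (toy numbers here; the real MSU weighting functions are on the RSS site):

```python
import numpy as np

# Toy weighting function over 5 pressure layers (sums to 1);
# stands in for a broad MSU channel weighting function.
w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

profile_a = np.array([290.0, 280.0, 265.0, 250.0, 235.0])  # K

# Perturb the profile in a direction orthogonal to w: the weighted
# "brightness temperature" w . T is unchanged, yet the profile differs.
null_dir = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
null_dir -= (null_dir @ w) / (w @ w) * w  # project out the w component
profile_b = profile_a + 5.0 * null_dir

reading_a = profile_a @ w
reading_b = profile_b @ w  # identical to reading_a
```

Any profile component in the null space of the weighting functions is invisible to the instrument, which is why a first-guess profile (e.g. a sonde sounding) is needed to pin down a unique solution.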
Hypothetically, a radiosonde data set which was free of data inhomogeneity could be compared to surface data in the same manner as the GISS / UAH comparison. Certain types of error could then be identified or ruled out as a source of linear trend bias. If discontinuities can be properly accounted for, this would leave things like lapse rate assumptions and other systematic errors to explain any trend gaps with other metrics. If it turns out after the exercise that the linear slopes are consistent with satellite, then it would be very hard to argue systematic bias, as there would be no evidence to support it.
DeWitt Payne
I’m not familiar with your lapse rate terminology. If I understand you, your point is that because the sat data has a wider measurement region, the unequal distribution of temperature in the LT profile needs to be resolved for accurate measurement. Is that right?
If that’s what you’re saying, wouldn’t it average out over a long term trend?
I don’t want to abandon surface data, but from Anthony Watts’ surfacestations project, the CRN1 and 2 stations show good agreement with each other as they have little UHI effect. We could abandon the stations between buildings and next to air conditioners. It would need to be a global project to finish the job, though. Sounds a lot cheaper than abandoning oil to me.
#14 I think that’s the best way to “correct” the trends.
Jeff,
Would it average out over time? I don’t know, but the burden of proof that it does rests with the people who perform the calculations that result in the brightness temperatures near 60 GHz, not 90 btw, being converted to actual temperature.
Lapse rate is the rate of change of temperature with altitude. It’s by convention a positive number even though the temperature decreases with altitude, mostly. The 1976 standard atmosphere uses a constant lapse rate of 6.5 K/km. The dry air adiabatic lapse rate is about 10 K/km. Moist adiabats have a lapse rate that varies with altitude with a low (5 K/km or less) lapse rate at low altitude reflecting the increased energy content of moist air.
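As a quick numeric sketch of those lapse rates (constant-lapse-rate arithmetic only, not a radiative transfer calculation):

```python
def temp_at_altitude(t_surface_k, altitude_km, lapse_rate=6.5):
    """Temperature (K) at altitude under a constant lapse rate (K/km).
    The 1976 standard atmosphere uses 6.5 K/km in the troposphere;
    the dry adiabatic rate is about 10 K/km."""
    return t_surface_k - lapse_rate * altitude_km

# 288.15 K (15 C) surface: standard vs. dry adiabatic at 5 km
t_standard = temp_at_altitude(288.15, 5)
t_dry_adiabatic = temp_at_altitude(288.15, 5, lapse_rate=10.0)
```

The two assumptions differ by more than 15 K at 5 km, which gives a sense of how much the assumed profile matters to the retrieval.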
To quote from A First Course in Atmospheric Radiation, Grant W. Petty:
The problem is further complicated by the high degree of vertical overlap between adjacent weighting functions, resulting in even fewer degrees of freedom of the data, and noisy data. You have to have a first guess of the temperature profile and you have to put limits on deviation from the first guess to obtain physically realistic profiles. These are all potential sources of systematic error that will never go away. For mid to long range (greater than 3 days) weather forecasting, the problems are manageable. For use as climate data?????
Thanks for the excellent definition. Years of aeronautical engineering and I don’t remember the term. Of course I understand the weighting functions and even see how the LT is derived from the other channels in RSS.
It seems to me that over the time span of months, the lapse rate would have little effect on trend. I just have a hard time imagining the atmosphere holding one lapse rate for a substantial time and then switching to another. I do take your point, though: cycles could possibly play havoc without balloons to back up sat observations. I have an open mind, though; when you say
“The problem is further complicated by the high degree of vertical overlap between adjacent weighting functions, resulting in even fewer degrees of freedom of the data, and noisy data.”
By noisy data, you sound like you may have seen the data quality from the raw channels? I haven’t as yet.
I’m not disagreeing with you, except perhaps for the nitpick that it seems to me there are more variables and more degrees of freedom in an already indeterminate dataset. The last thing I want to do is defend a dataset, but I don’t mind giving GISS a kick or two. Still, I see the net noise level of the sat data is reduced from GISS; this of course could be due to massive processing and filtering, but from my perspective it is also possible the raw data is less noisy than GISS.
Blogging always has the problem that you don’t know people’s background. I was having a discussion with a Smith on Watts Up and I think he is actually credited with inventing the CCD camera. How’s that for amazing. If you have further insight, the thousand readers who stop by without commenting each day may be interested as well.
Jeff,
The problem is that the models do predict the tropospheric temperature profile will change over time. That’s where enhanced tropospheric warming, otherwise known as the tropical tropospheric hot spot, comes from. So using a method which assumes, AFAIK, a more or less fixed temperature profile may be a problem.
The data are noisy and there is massive averaging and smoothing. One of Vinnikov et al.’s (http://www.sciencemag.org/cgi/content/abstract/302/5643/269) arguments against the UAH algorithm was about how they did the smoothing, IIRC. Vinnikov’s method had its own problems, though. I don’t think anyone really believes his high trend (0.22 to 0.26 C/decade in the article linked) is correctly calculated.
I messed up the link syntax. Here’s the full URL:
http://www.sciencemag.org/cgi/content/abstract/302/5643/269
I think I hit CTRL instead of shift.
“The problem is that the models do predict the tropospheric temperature profile will change over time. That’s where enhanced tropospheric warming, otherwise known as the tropical tropospheric hot spot, comes from. So using a method which assumes, AFAIK, a more or less fixed temperature profile may be a problem.”
If they assume a fixed measurement profile and the temperature varies, changing the strength of the calibration level in the troposphere, the balloons would easily correct the sensors with a scalar factor. I believe that is what they do: calibrate the sensor readings to balloon data.
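That kind of scalar calibration amounts to a one-parameter least-squares fit, which can be sketched as follows (synthetic data; I don't know the actual UAH/RSS procedure, so this only illustrates the idea):

```python
import numpy as np

def scalar_calibration(sensor, balloon):
    """Least-squares scale factor k minimizing ||balloon - k * sensor||.
    Closed form: k = (sensor . balloon) / (sensor . sensor)."""
    return (sensor @ balloon) / (sensor @ sensor)

# Synthetic example: sonde readings are 1.05x the sensor plus noise
rng = np.random.default_rng(2)
sensor = rng.standard_normal(100)
balloon = 1.05 * sensor + 0.01 * rng.standard_normal(100)
k = scalar_calibration(sensor, balloon)
```

With low noise the recovered factor lands very close to the true 1.05, but a single scalar obviously cannot correct a profile shape that changes over time, which is the concern raised above.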
Jeff,
As a relative newcomer to the “science” of climate change, forgive me if the following has been noted before.
Looking at the color maps generated by GISS Surface Temperature Analysis:
it is obvious that the color scheme is designed to give the impression of more warming. This has been done simply by manipulating the color bars showing negative anomalies.
The range -0.2 to -0.5 shows a white color, the same color as the range -0.2 to +0.2. Therefore, any white on the maps could in fact be a negative anomaly.
The range +0.2 to +0.5 is a yellow color = a warming anomaly.
To show a more representative color map, GISS needs to either show a light blue color for the range -0.2 to -0.5, or show a white color for the range +0.2 to +0.5.
But of course this would spoil the effect of having more yellows (+ve) than blues (-ve).
DJA
Welcome, you have a skeptical mind. Do yourself a favor and don’t make decisions based on what appears to be a biased graph. These manipulations are well known, and I don’t think you’re wrong by any stretch. I’ve been at it for 6 months on the Air Vent and quite a bit before this. The data is a mess, the conclusions are overstated, and anyone who tells you they know the answer about global warming is, well, slow or… not particularly honest.
Check out this link
https://noconsensus.wordpress.com/2008/09/11/ten-things-everyone-should-know-about-the-global-warming-hockey-stick/
and this less popular one
https://noconsensus.wordpress.com/2008/12/06/ten-global-warming-myths/
Filtering lets you see some things and hides others. Plotting the raw monthly anomaly differences RSS-UAH global TLT, the thing that stands out most to me is the discrepancy in the response to the big El Ninos in 1983, 1987 and 1998.
On the satellite thing, my main point is that you can’t rely on satellite data only. Radiosonde and surface (strictly speaking near surface, at least over land) data are important as cross checks. None of the systems are truly fit for the purpose of measurement of small temperature changes over long times. That’s not what they were designed to do. So saying one is superior to the others is wrong. They all have different flaws and disagreement between them should be expected.
Would you agree that UAH is superior to RSS in that the trend doesn’t have this big of a step?
Even if my first analysis method was right, it showed that UAH didn’t have the same magnitude error as RSS. After the step is removed, they are almost the same data.
Jeff,
Ignore my graph above. Somehow I copied UAH Tropics Land data when I wanted global. Not surprisingly, El Nino is bigger in the tropics. Here’s the graph using the correct data with an exponential smooth (alpha=0.2). I think there may well be more differences in cross calibration between satellites than just 1992, although that is clearly the biggest. The post 2002 difference looks particularly odd. That’s when AMSR-E went into service with an Advanced MSU with a lot more channels than the older satellites.
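For reference, an exponential smooth with alpha=0.2 is just the standard recursive filter:

```python
import numpy as np

def exp_smooth(x, alpha=0.2):
    """Simple exponential smoothing: s[t] = alpha*x[t] + (1-alpha)*s[t-1],
    seeded with the first observation."""
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

# Example: a step input relaxes toward the new level at rate alpha
smoothed = exp_smooth(np.array([0.0, 1.0, 1.0, 1.0, 1.0]))
```

A small alpha suppresses month-to-month noise but also smears sharp features like the 1992 step over several months, which is worth keeping in mind when eyeballing smoothed difference plots.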
If forced to pick, I would go with UAH. As I said about the Cryosphere Today seasonal Arctic ice extent data, step changes bother me a lot.
Jeff,
Out of curiosity I plotted some of the regional data differences, specifically NH, SH and Tropics (0 to 82.5, -70 to 0, and -20 to 20 in RSS) land and ocean. The step change in 1992 barely shows up in the NH and Tropics data but is obvious in the SH data. So logically, the big effect should be in the SoExt data (-70 to -20). Yes and no. There is a feature in 1992, but the behavior before and after is not constant. The difference between RSS and UAH appears to be far more complex than a simple step change.