Subsampled Confidence Intervals – Zeke

Sub-sampling is hard to argue with.  Zeke has done a post on global temperatures at the Blackboard, taking random subsets of roughly 500 stations at a time and looking at the extremes of the resulting averages.  It presents a set of very tight error bars that capture weather variance, sampling error, and any other random events which affect the measurements.  The error bars don’t incorporate any systematic bias, but there is an amazing amount of detail in the result.

Figure 1: Global temperature extremes using 5 percent of available stations selected at random 500 times
Method:

To test if this is in fact true, we can randomly select subsets of stations to see how a global reconstruction using only those records compares to other randomly selected stations. Specifically, I ran 500 iterations of a process that selects a random 10% of all stations available in GHCN v3 (which ends up giving me 524 total stations to work with, though potentially much fewer for any given month), created a global temperature record with the stations via spatial gridding, and examined the mean and 5th/95th percentiles for each resulting month.
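Here is a rough sketch, in Python, of the kind of resampling Zeke describes. This is my own illustration rather than his code: the station arrays, the simple 5-degree gridding and the cosine-of-latitude weighting are stand-ins for whatever he actually used, but the resampling logic is the point.

```python
import numpy as np

def gridded_global_mean(anoms, lats, lons, grid=5.0):
    """Average one month's station anomalies within lat/lon cells, then take
    a cos(latitude)-weighted mean over the occupied cells.
    anoms: 1-D array (NaN = missing); lats/lons in degrees."""
    ok = ~np.isnan(anoms)
    cells = {}
    for a, la, lo in zip(anoms[ok], lats[ok], lons[ok]):
        key = (int(np.floor(la / grid)), int(np.floor(lo / grid)))
        cells.setdefault(key, []).append(a)
    if not cells:
        return np.nan
    cell_means = np.array([np.mean(v) for v in cells.values()])
    centre_lats = np.clip([(k[0] + 0.5) * grid for k in cells], -89.9, 89.9)
    return np.average(cell_means, weights=np.cos(np.radians(centre_lats)))

def subsample_envelope(anoms, lats, lons, frac=0.10, n_iter=500, seed=0):
    """Draw a random fraction of stations, grid them into a global mean for
    each month, repeat n_iter times, and return the mean and the 5th/95th
    percentiles across iterations. anoms: (n_stations, n_months) array."""
    rng = np.random.default_rng(seed)
    n_stations, n_months = anoms.shape
    k = max(1, int(frac * n_stations))
    runs = np.full((n_iter, n_months), np.nan)
    for i in range(n_iter):
        pick = rng.choice(n_stations, size=k, replace=False)
        for m in range(n_months):
            runs[i, m] = gridded_global_mean(anoms[pick, m], lats[pick], lons[pick])
    lo, hi = np.nanpercentile(runs, [5, 95], axis=0)
    return np.nanmean(runs, axis=0), lo, hi
```

Running that over every month gives the mean and the 5/95 envelope that Figure 1 plots.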

The plot above uses only 5% of the data, but the point of this exercise is that Zeke has proven beyond any shadow of doubt that we do have enough station data to determine temperatures to an effectively consistent level.  Whether the data are clean enough to give a high-quality trend is another question, as systematic bias is also perfectly recorded in the above record.  Note how the uncertainty expands both in the past and in recent years due to the lack of station data at both ends.

Back when I still worked with numbers, I performed a similar analysis on the Ljungqvist proxy data.  The method is just too simple and direct to argue with.

 

Here is what is interesting.  These are the error bars incorporating weather noise due to sampling error.  This is entirely different from Pat Frank’s weather noise discussed in previous posts, because Zeke’s example includes all the local correlation and distant de-correlation of weather patterns.  Every possible random variation is included, and the 95/5 percent extremes are still as narrow as shown in Figure 1 above.  I wrote at the Blackboard to see if he would mind projecting those to the total dataset.  It should be possible to estimate the true (everything included) error per station from his result, and that would allow a projection of a very narrow confidence interval onto surface station data.  The reason that gets me excited is that it would be based on reality instead of the complex estimates used in the standard CRU CI projections.
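For what it is worth, here is the back-of-envelope version of that projection I have in mind. It is a sketch only: the square-root-of-n scaling assumes roughly independent station contributions, and spatial correlation between stations will make the real shrinkage weaker.

```python
import numpy as np

def project_sigma(subsample_means, n_sub, n_full):
    """Project the spread of n_sub-station subsample means to a full network
    of n_full stations. This assumes the spread scales like 1/sqrt(n), i.e.
    roughly independent station contributions; spatial correlation between
    stations makes the real shrinkage weaker, so treat the result as an
    optimistic (narrow) bound rather than a finished confidence interval."""
    sigma_sub = np.std(subsample_means, ddof=1)
    sigma_full = sigma_sub * np.sqrt(n_sub / n_full)
    return sigma_full, 1.96 * sigma_full  # sigma and an approximate 95% half-width

# Example: a ~500-station envelope projected to a ~5000-station network would
# shrink by sqrt(500/5000), roughly a factor of 3, under this idealization.
```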

I like real, but that doesn’t make the CRU uncertainty inaccurate.

Zeke’s result makes the CRU uncertainty estimates look a little weird, but not bad.  From the CRU website, the paper Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett and P.D. Jones, 2006: Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850. J. Geophysical Research 111, D12106, justifies the following confidence intervals:

Figure 2: Confidence intervals from Brohan et al. (2006). Click for larger image.

Sorry for the size of the image; the PDF has the full-size version.  Considering that the number of stations in the total dataset of Figure 2 is 20 times greater than in Figure 1, the CRU intervals look visibly reasonable, except for the weird lack of expansion of the CI in the most recent and most distant years, which are known to have less data.  It leaves me wondering just how that bodge was applied, because it has to be a mistake of some sort, as there is a lot less data at both ends.

Some may wonder why I think it is alright to put weather noise in Zeke’s CI yet not in Pat Frank’s study.  The answer is that the weather noise Zeke incorporated represents the temperature differences due to global sampling error.  Zeke determines the error bars which say: if you don’t measure all of the weather (incomplete gridding), how much effect does it have on the uncertainty of your final average?  In Pat’s work, the error due to weather was the total variance of the different stations.  In other words, comparison of Pat’s and Zeke’s methods shows that the difference in temperature between two stations doesn’t limit our knowledge of the average, but the density with which the weather patterns are measured does.
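A toy illustration of that difference, using made-up numbers rather than real station data: the spread across individual stations (the total weather variance) is huge, while the spread of the global mean across repeated 500-station subsamples is tiny.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic month: 5000 "stations" sharing a 0.3 C signal plus 2 C of
# station-level weather noise. Purely illustrative numbers.
n_stations, signal, sigma_weather = 5000, 0.3, 2.0
stations = signal + sigma_weather * rng.standard_normal(n_stations)

# Total station-to-station spread (the kind of variance Pat's analysis used).
print("station-to-station sigma: %.2f C" % stations.std())

# Spread of the *mean* across 500 random 500-station subsamples
# (the kind of sampling uncertainty Zeke's envelope shows).
means = [stations[rng.choice(n_stations, 500, replace=False)].mean()
         for _ in range(500)]
print("sigma of subsample means: %.3f C" % np.std(means))
```

Real stations are spatially correlated, so the true subsample spread is larger than this independent-noise toy suggests, but the two numbers differ by well over an order of magnitude either way.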

Anyway, Zeke’s post should put to rest any concerns people may have about sampling density being insufficient for discerning average global temperature.  Sampling quality, systematic bias and missing global coverage in the distant past are other matters entirely.

47 thoughts on “Subsampled Confidence Intervals – Zeke”

  1. From what I can understand this is a homogenized data set. How does that affect the numbers? It would seem that you wouldn’t have a random set: if you pick a station that has been homogenized, that station has been affected by its neighboring stations, so it isn’t truly random. That might explain why the CI is tighter with fewer stations.

  2. This still uses the GHCN data, which is known to have been adjusted, as shown by raw data from New Zealand, Australia, some US sites and maybe many other sites around the world. There is little sense in reanalysing adjusted and selected data. One has to go back to the original raw unadjusted data if it is available (it should be in NZ, OZ, the USA and many parts of Europe, e.g. the Netherlands, Sweden, Norway). If the raw data does not give an accurate picture, then the concept of a global temperature before the satellite era needs to be abandoned.

  3. “Zeke determines the error bars which say: if you don’t measure all of the weather (incomplete gridding), how much effect does it have on the uncertainty of your final average?”

    I have a problem with how the size of the intervals following this principle seems to have been determined, namely that at no point do we have complete gridding, so we are taking subsamples of subsamples, which if you ask me must clearly result in an underestimate of the uncertainty in how close we are to the “true” average of a perfectly sampled network.

    Although maybe I just misunderstand the method, so I’ll go see what Zeke has to say.

  4. 4-Maybe I wasn’t clearly articulating my problem. The way I see it, the existing station population is already a sample of the whole. In order to address the question of how much uncertainty this results in, one would need to address the actual subsampling over time, and for that would not a globally complete sample at some point in the record be necessary to subsample from? The question Zeke seems to be answering is not how much uncertainty there is in the estimation of the “true” average from historically incomplete data; rather, he is asking how much worse the situation could be if we in turn further subsampled that data. Well, okay, interesting I suppose, but not the issue as stated.

    Thinking about it, I guess one could say that this would overestimate the uncertainty, although I am not sure. I will take my revised questions to Zeke and get his opinion.

  5. Andrew,

    I think you’ve got it right. If we only had 500 stations, these would be the error bars. But (there is always a but, right?) the actual error would be smaller, and with some fairly basic calculations I think it would be possible to estimate the correct sigma per station, which could then be expanded to the whole dataset. IOW, we could estimate the uncertainty using actual subsamples of the data and project it to the complete set.

  6. Yup, thinking about it, these are smaller populations on top of an already smaller-than-perfect population. The uncertainty in the real population would be smaller. My initial mistake was thinking that this estimate of the uncertainty would combine with the problem from actual subsampling (which was silly, clearly I shouldn’t comment so early in the morning). Anyway, seeing how large the uncertainties are with five hundred stations, I can’t believe that Hansen said he could get a meaningful result with just sixty (can’t think of the reference at the moment). Thank goodness we have a lot more than that!

  7. Andrew,

    What is important in the climate issue is the evolution free of weather. We could determine the uncertainties on the basis of 5-year averages and they would probably be significantly lower. Perhaps the claim of Hansen with 60 stations is not so ridiculous after all. I should add, though, that I do not think the result, good or bad, would be representative of real global anomalies.

  8. “Global temperature extremes using 5 percent of available stations selected at random 500 times.”

    A nit pick, but I think this caption could be misleading or misconstrued. With the station numbers heavily concentrated in North America and Western Europe, the 5 percent of available stations means little when we consider spatial uncertainty. Also the uncertainty in a global mean temperature will certainly be less than it is for regions of the globe and particularly for those that are not well sampled.

    Zeke has done an interesting calculation/analysis here and, since I have not read the details, I am not sure what he was attempting to answer. Certainly his error bands are smaller than Pat Frank’s – as would be expected.

    To estimate the uncertainty of less-than-complete spatial and temporal station coverage, I think you must look at grids (or at least this is one of the alternative methods) with near-complete coverage and then look at randomly selected stations over time and space. Uncertainty from incomplete coverage will vary according to the diversity of the geography of the grid, and depending on whether one uses stations at greatly higher elevation or stations in close proximity to large bodies of water.

    Zeke’s analysis as I understand it can be used to show the increasing uncertainty from lesser spatial coverage. I am guessing that the uncertainty increases in the early and late instrumental period because a random selection of 5 percent of the available stations yields a smaller number. It might be interesting to vary the number of stations randomly selected and look at the increase/decrease in uncertainty from a given period of the time series.

    I am also curious whether the random selection that Zeke made had any weighting for area or was it a straight draw from all available stations.

  9. 1. Scientific Conclusion from 1st Figure:

    a.) Global temperatures increased (~1880 – 1940)
    b.) Global temperatures decreased (~1940 – 1975)
    c.) Global temperatures increased (~1975 – 2000)
    d.) Global temperatures unchanged (~2000 – present)

    The above conclusion directly falsifies AGW claims of “Industrial CO2-Induced Global Warming:”

    Atmospheric CO2 does NOT follow this trend.

    2. Political Conclusion from History & Current Economic Collapse:

    a.) In 1972 world leaders (conservative/liberal, left/right, Republican/Democratic politicians) started manipulating the results of government-financed studies secretly for a noble cause (in their opinion):

    a-1) 1945: A nuclear bomb vaporized Hiroshima;
    a-2) 1951: General MacArthur wanted A-bomb to end Korean War; fired instead.
    a-3) 1962: Cuban Missile Crisis convinced politicians:
    a-4) All life – including their own – would be destroyed unless they found a “Common Enemy” to

    i.) Unite Nations;
    ii.) End Nationalism;
    iii.) Abolish Nuclear Weapons!

    b.) Background/Consequences of 1972 Solution:

    b-1) 1952: The US Psychological Strategy Board employed Henry Kissinger;
    b-2) 1971: Henry Kissinger secretly visited China to find a solution;
    b-3) 1972: Henry Kissinger took President Nixon to China to meet Chairman Mao;
    b-4) 1972: “Global Climate Change” identified as “Common Enemy”?
    b-5) 1972: NASA Grant terminated [Third Lunar Science Conference, vol. 2 (1972) 1927; Nature 240 (1972) 99].
    b-6) 1974: “Another Ice Age” Coming (Time Magazine)

    http://www.time.com/time/magazine/article/0,9171,944914,00.html

    c.) Background/Consequences of 1980’s Solution

    c-1) 1974: “Another Ice Age” did not happen.

    c-2) 1975: Data from other meteorites confirmed [Nature 240 (1972) 99]: Fresh supernova debris formed the Sun and its planets [Science, 190 (1975) 1251; Nature 262 (1976) 28; Science 195 (1977) 208; Nature 277 (1979) 615; Icarus 41 (1980) 312; Meteoritics 15 (1980) 117; Geokhimiya 12 (1981) 1776].

    NOTE: The 1981 article was published in RUSSIAN. Was the secret 1972 solution being re-negotiated?

    c-3) 1983: Data from Apollo Mission showed Earth’s heat source – the Sun – is a supernova remnant, NOT a stable H-fusion reactor [Meteoritics 18 (1983) 209]. That paper correctly predicted excess Xe-136 in upcoming Galileo probe of Jupiter [Meteoritics 33 – A97 (1998) 5011].

    c-4) 1985? Secretly-negotiated new “Common Enemy” would pinch off tail-pipe of Western economies and level the economic playing field:

    AGW or “Industrial CO2-Induced Global Warming.”

    c-5) 1987: Former movie actor, President Reagan, delivered a stunningly accurate forecast at the base of the Brandenburg Gate:

    ”Mr. Gorbachev, tear down this wall!”

    c-6) 1989: East Germany opened the Berlin Wall.

    c-7) 1990: Reagan did TV public relations with symbolic hammer swings at remains of the Berlin Wall amid talk of an evil empire and praise from other world leaders (NY Times, 16 Sept 1990):

    d.) Background/Consequences of Climategate and Economic Collapse

    d-1) 2009: Data manipulation revealed in e-mail messages

    d-2) Today: USA economy is on brink of collapse

    d-3) 2011: Belatedly NASA makes and releases a new video that undercuts any remaining credibility in AGW and the Standard Solar Model (SSM) of Earth’s heat source as a stable H-fusion reactor:

    http://www.physorg.com/news/2011-07-dark-fireworks-sun.html

    When the present world leaders are all gone, we still need to unite nations, end the threat of mutual nuclear annihilation, and establish a cozy, caring, (and DEMOCRATIC) world government (controlled by the people).

    Thanks for letting me share.

    With kind regards,
    Oliver K. Manuel
    Former NASA Principal
    Investigator for Apollo

  10. Kenneth,

    I agree with you. That’s kind of why I asked on Lucia’s thread if he felt it could be expanded to the full dataset using the sigma per station of each month. It is a little confusing to me but I think the sigma would be valid for that purpose.

  11. 8-We aren’t trying to stamp out the effects of “weather” on the average, but rather the effects of missing some local “weather” on estimating an average “weather” for the entire globe. The real point is that the uncertainty in a sixty station series from undersampling would be HUGE.

    But five years is, IMAO, much too short to eliminate “weather” from the data anyway.

  12. Within the error bars … the 1930s/40s were as warm as the 2000s. If UHI were properly dealt with, the graph would look very different.

  13. I can’t believe that Hansen said he could get a meaningful result with just sixty (can’t think of the reference at the moment). Thank goodness we have a lot more than that!

    The source on that is not Hansen. It’s Shen. Gavin quoted the Shen paper: 60 optimally placed stations,
    based on EOF analysis as I recall.

  14. 14-Thanks, I knew it was a statement by someone at GISS, and I think it’s a safe bet that Hansen would agree with Gavin on this point, since he was originally responsible for GISTEMP. I also did not know that they referenced a specific paper by someone else on this. I thought it was connected to Hansen’s paper on the spatial coherence of anomalies, which allows him to interpolate over the Arctic.

    To be fair, I have a feeling that I labor under a very different impression of what a meaningful result is than whatever the paper being quoted assumed. I can imagine that for them it merely entails being able to get a curve that looks vaguely like the “truth” from more stations; for me, meaningful requires uncertainties that don’t stretch out much more than the signal, and I suspect they would if one really only used sixty stations, based on the size of the “uncertainty” induced by much larger samples in Zeke’s exercise. Of course, “optimal” placement is probably also an important element of this idea, but the loosest interpretation I can think of (that the geographic spread of the sixty be uniform around the world, and not just a random selection of stations which would very likely be in the US or Europe) is easily dealt with by gridding stations and requiring that grids too near a selected station not be part of the sample for selecting the remaining stations, which Zeke has, I believe, done. So the uncertainties in a record with sixty stations, compared with the amount of data we actually have, would undoubtedly be quite large.

  15. How many stations from each random station selection were out in the ocean? How many were in the Southern Hemisphere, especially pre-1930?

    What happens to the uncertainty when the real global SST is included?

    When will we know the real SST history? They haven’t even sorted out the bucket adjustments properly yet. For much of the last 150 years the Southern Ocean and the South Pacific are hardly even sampled. Then there is Antarctic as well. How confident are we of the trend down there since say 1850? How real can the 150 year trend in the “global” average ever be?

    Nice work Zeke but I suspect it is based on inadequate data. That is just my humble opinion though.

  16. “I can’t believe that Hansen said he could get a meaningful result with just sixty (can’t think of the reference at the moment). Thank goodness we have a lot more than that!

    The source on that is not Hansen. It’s Shen. Gavin quoted the Shen paper: 60 optimally placed stations,
    based on EOF analysis as I recall.”

    My point was that a random sample from stations concentrated in a few regions of the globe would produce much more uncertainty than a smaller number of well placed stations. That is why at the Blackboard I suggested that Zeke use a region well populated with stations and then look at repetitive sampling with different sample sizes.

    Jeff mentioned the Ljungqvist proxies, and that brought to mind how Zeke’s exercise could relate to uncertainty in temperature reconstructions. Setting aside for this exercise the assumption that proxies can quantify changes in temperature over time, think of the sparse spatial coverage of the proxies in a reconstruction and the fact that the essential relationship of correlation over distance gets degraded by the noise in the proxies (3 times more white noise in the Ljungqvist proxies compared to instrumental temperature measurements would account for the degradation of the distance correlations in the proxies compared to the instrumental station data).
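A quick synthetic check of that attenuation effect (this is not the Ljungqvist data; it simply reads “3 times more white noise” loosely as a noise standard deviation three times the signal’s, to show how much distance correlation such noise costs):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Two "nearby" series sharing a common signal, with true correlation ~0.7.
common = rng.standard_normal(n)
x = np.sqrt(0.7) * common + np.sqrt(0.3) * rng.standard_normal(n)
y = np.sqrt(0.7) * common + np.sqrt(0.3) * rng.standard_normal(n)
print("underlying correlation: %.2f" % np.corrcoef(x, y)[0, 1])

# Add independent white noise with sigma = 3 x the signal sigma to each series.
s = 3.0
xn = x + s * rng.standard_normal(n)
yn = y + s * rng.standard_normal(n)
# Expected attenuated correlation: 0.7 / (1 + s**2) = 0.07
print("noisy correlation:      %.2f" % np.corrcoef(xn, yn)[0, 1])
```

Noise of that size knocks a distance correlation of roughly 0.7 down to roughly 0.07, which is the kind of degradation being described.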

  17. #15

    The justification for extrapolating 1200 km at the pole is based on:
    1. correlation studies (1200 km is fine at the pole);
    2. understanding the physics.

    Ask yourself this question: assume
    A. you have a station at 80N 30 and one at 80N 210,
    B. both those stations show a zero trend, and
    C. heat gets transported to the pole.

    Will the trend at the pole be…
    higher, lower, or the same?

    WRT Shen: it’s been a while since I read it, so I’ll not comment.

  18. How many stations from each random station selection were out in the ocean? How many were in the Southern Hemisphere, especially pre-1930?

    You mean on islands? Propose a testable hypothesis. If you exclude “ocean” sites, what do you expect?
    SH? Again, what is your hypothesis and how do we test it?

    What happens to the uncertainty when the real global SST is included?

    I’d probably look at the SST separately since it measures a different thing. But again, you have to propose a testable hypothesis. The big concern of course is that SST uncertainty will be greater. I’m not convinced that SST has to be sampled as widely as the land. My thinking would go like this: look at the range of air measures over land,
    look at the range of SST measures. The variance of SST is likely to be much smaller. The spatial coherence of
    SST is likely to be higher for physical reasons. It would be interesting to look at the correlation length in SST.

    When will we know the real SST history? They haven’t even sorted out the bucket adjustments properly yet. For much of the last 150 years the Southern Ocean and the South Pacific are hardly even sampled. Then there is Antarctic as well. How confident are we of the trend down there since say 1850? How real can the 150 year trend in the “global” average ever be?

    We will never know the real history of SST, or of anything else for that matter. Buckets are a problem, but one can always do parametric studies. The real trend isn’t that critical to the fundamental question of sensitivity.

  19. 21-Yeah, I’ve heard the basis; it’s not terrible, but I must admit that the potential for missing certain spatial variability bothers me. I do NOT think that it necessarily biases the trend one way or another, although based on how trends and variability have tended to scale in the higher latitudes, one could possibly argue that the bias from simply interpolating over the Arctic would be to underestimate change and variability; but one cannot be sure what precisely happened. I think in areas we’ve missed, the anomalies are as likely to be higher than one would estimate as lower, so it is not obvious to me that this creates any bias, but it does contribute to uncertainty. It’s worth noting that sometimes missing data could potentially be filled in, in which case we could reduce uncertainty, perhaps by stepping up collection and keying efforts. In the case of the Arctic, unfortunately, there is no data to collect, which is why the best we can think to do is interpolate/extrapolate.

  20. “the justification for extrapolating 1200km at the pole is based on.
    1. correlation studies. 1200km is fine at the pole.
    2. understanding the physics.”

    Mosher, I think you can do better than this. It comes across as handwaving.

    Correlation of temperatures decreases over distance, but as you imply here that decrease can be different for different regions of the globe. Decreasing correlations means increasing uncertainty of the extrapolated temperatures as indicated from TTCA above. The hooker for longer term trends is that the correlations can change over time or that we do not have a way of measuring whether the correlations have changed or not. While not completely covered by satellite, the Arctic is sufficiently covered to do some correlation studies of Arctic temperatures with those 1200 km away. That gives a better picture of spatial uncertainty over a relatively short period of time, but does not answer the question whether the correlation might change as we go back in time.

    I find it curious that I cannot find studies by mainstream climate scientists that make use of the satellite records for these analyses. It is as though these scientists do not trust the satellite record – or maybe they do not trust that there is nearly a one-to-one correlation between troposphere and surface temperatures.

  21. Mosh

    I do not agree. The real trend is what really happened. It is a property of real data that has suffered minimal corruption, or has at least been cleaned up in a way that is transparent to everyone and I like your work on this issue. There must be a relationship between the climate sensitivity and the real trend, and don’t try to impress me with the hidden heat that is somehow sneaking to the bottom of the oceans without being measured.

    At present the supposed climate sensitivity implies a system that should be warming faster than what appears to be happening in the real world. If this continues for more than a few more years then it will become a very large problem for climate science and for the various GCMs.

    The climate sensitivity depends on the nature of the various feedback loops. Are these loops positive or negative, clockwise or anticlockwise? Right now you do not know the climate sensitivity. If you think you do, it’s just a figment of your imagination. Are low clouds a positive or a negative feedback?

    How much of the upward trend in air temperature is due to natural internal oscillations in the climate system? That can’t be answered with confidence until we can agree on what the actual trend has been.

    The real trend in surface temperature is a constraint on the possible range in climate sensitivity. In this context I find it rather interesting that with not much more than a stroke of the pen the recent (60 yr) trend in SST has suddenly changed, and further that the uncertainty in the early part of the record has suddenly increased.

    I know that SST is not the same as air temp. The two are not the same over the ocean. But they are related and the SST has a substantial effect on the air temp over much of the planet.

    Over much of the ocean there are few islands. The history of temperature measurement at many of those islands is not a long one, particularly in the less inhabited areas in the bottom half of the Southern Hemisphere. So I don’t find the argument that these stations can represent all that empty ocean to be very convincing at present.

    I am not fixated on any particular number for climate sensitivity. Like most of us I want to advance the science towards a more confident answer, to firm up on the range of possibilities and then reduce the standard error. My prediction is a smaller number than the so called consensus.

  22. Speaking of Sea Surface Temperature versus actual air temperature over the ocean, I have wondered if there has been any further work since this paper:

    http://www.agu.org/pubs/crossref/2001…/2000GL011167.shtml

    Examining the “marine air temperature” for how it connects with the sea surface temperature measurements.

  23. Steven Mosher,

    you claim that you can get approximately the same trends with any grouping of stations. I think y’all have done a reasonably good job of showing that. Now, when are you going to show that all these stations are NOT significantly different from areas away from anthropogenic contamination, which just so happens to be the majority of the surface area of the earth?

    Remember Dr. Spencer’s work that showed that lower density areas generally had higher trends than higher density areas?? Sure explains how GISS can average rural and urban and increase the urban trend.

    http://www.drroyspencer.com/2010/03/the-global-average-urban-heat-island-effect-in-2000-estimated-from-station-temperatures-and-population-density-data/

  24. Kenneth:

    “Correlation of temperatures decreases over distance, but as you imply here that decrease can be different for different regions of the globe. Decreasing correlations means increasing uncertainty of the extrapolated temperatures as indicated from TTCA above. The hooker for longer term trends is that the correlations can change over time or that we do not have a way of measuring whether the correlations have changed or not. While not completely covered by satellite, the Arctic is sufficiently covered to do some correlation studies of Arctic temperatures with those 1200 km away. That gives a better picture of spatial uncertainty over a relatively short period of time, but does not answer the question whether the correlation might change as we go back in time.”

    In the paper you referred me to some time ago, the correlation distance did change with latitude. The further north you go, the LONGER the distance. Now, of course,
    it is logically possible that:
    1. Above 85 north, things suddenly change.
    2. In the past it was different.

    However, we have no basis for concluding that. What we do know is that the trends have a latitude dependence. As we go from the equator toward the pole the trends get higher,
    and we have a physical basis for understanding why this should be the case. Is it possible that from 85N to 90N this reverses? Is it possible that the pole cools while the stations
    at 80-85N warm? It’s logically possible. So I’ll repeat your choices:

    A. Assume the pole warms as much as the stations at 80-85N.
    B. Assume the pole warms MORE
    C. Assume the pole warms less.

    This actually brings up an interesting issue. From 85N to the pole it is mostly ice or ocean. Should we even be including estimates from land-based stations there?
    Should we use SST under the ice? That would be relatively constant (at the ice/water interface), with no trend.
    The other issue to consider is that the land north of 80N is something like less than 2%. So you could impute a century-long ZERO trend and the final answer doesn’t
    change in any way that changes the fundamental science.

    Maybe I should just include the buoy data. But the history question is very tough to answer.

  25. “Rob R said

    July 17, 2011 at 7:44 pm
    Mosh

    I do not agree. The real trend is what really happened. It is a property of real data that has suffered minimal corruption, or has at least been cleaned up in a way that is transparent to everyone and I like your work on this issue. There must be a relationship between the climate sensitivity and the real trend, and don’t try to impress me with the hidden heat that is somehow sneaking to the bottom of the oceans without being measured”

    See Paul K’s work at Lucia’s, or Held’s writings. It’s taken a long time for me to wrap my head around it. I could still be wrong. However, if the effects from GHG warming take a long time to materialize, if the ECR is a centuries-long process, then looking at the last 150 years can only get you a TCR, a transient climate response. So you only see the fast feedbacks in the century-long process; slow feedbacks take longer to accumulate.

    The last great ice age resulted from a 7.5 watt differential INTEGRATED over millennia.

    One way to look at this would be to take Schwartz’s paper and perturb the temperature record to see how the computed sensitivity changes.

  26. “Rob R said

    “Over much of the ocean there are few islands. The history of temperature measurement at many of those islands is not a long one, particularly in the less inhabited areas in the bottom half of the Southern Hemisphere. So I don’t find the argument that these stations can represent all that empty ocean to be very convincing at present.”

    You don’t understand how islands get used.

    In my system an island station is WEIGHTED by the fraction of land. Let’s take a cell at the equator:

    5 degrees x 5 degrees (roughly 110 km per degree). Go figure the area.

    Let’s take an island in that box: 10 km by 10 km.

    The SST has a temp.
    The island has a temp.

    They DO NOT get averaged. They get WEIGHTED by AREA and then added.

    So if the SST is 30C and it’s 90% of the grid, and the land is 35C and 10% of the grid, you don’t average them. You compute
    a contribution to the grid temp by AREA: 30×0.9 + 35×0.1 = 27 + 3.5 = 30.5C for that grid.

    I can also drop islands. No change.
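A minimal numeric sketch of the area weighting described above, using the hypothetical 30C / 35C values from the example (an illustration of the weighting idea, not anyone’s actual gridding code):

```python
# Area-weighted combination of SST and land within one grid cell,
# using the hypothetical values from the example above.
sst_temp, sst_frac = 30.0, 0.90    # ocean covers 90% of the cell
land_temp, land_frac = 35.0, 0.10  # island covers 10% of the cell

cell_temp = sst_temp * sst_frac + land_temp * land_frac
print(cell_temp)  # 27.0 + 3.5 = 30.5
```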

    Here is the challenge: find a compelling argument that we have missed out on recording some cooling stations, or that we have systematically
    selected stations that are skewed warm (UHI is one such argument). Absent a THEORY, there is nothing for me to test.
    There are METHODOLOGICAL DOUBTS, that is, the doubts that one always has about ANY measuring; there are purely skeptical, philosophically
    skeptical doubts; but there are not any doubts that can be tested.

    That’s a very subtle point. For example, what evidence do you have that leads you to believe that the SH MUST BE sampled more? What we know is that it is sampled less than the NH. So what; the NH is oversampled.

    At this stage doubts need to be put on a rational testable footing.

  27. Kuhnkat.

    I’ve done that kind of study many times.

    With respect to Roy’s work, I could not duplicate it.

    No code, no data.

    I attempted to do something similar and could not duplicate his results.

    Some notes:

    1. The GSOD data he uses has not been quality controlled.
    2. The land mask he used: not so sure about that one.
    3. The metadata he relied on to adjust for lapse rate: he relied on station inventories. BAD IDEA.
    4. Population density data. He refers to a web site; from there he could have used GPW or GRUMP.
    Both of these datasets have issues, sometimes large issues. They also capture different
    population dynamics, like daytime population versus nighttime population (weird but true).
    This is one reason why population density is a very suspect proxy for UHI. People don’t cause UHI;
    the changes people make to the LAND are what causes UHI. Population CAN BE correlated
    with these land changes or not. It depends on the part of the world you are in: tall buildings
    versus sprawling shanty towns.
    5. You get a better sense of urbanity from data like ISA (impervious surfaces) or from MODIS 500-meter data.
    6. There are better population products than the ones he used: in the US, very accurate historical 1 km data;
    for the world, LandScan data. But LandScan is behind a paywall.

    So, in a nutshell: Roy’s work as it stands was not reproducible (no code, no data), and his data sources as CITED have
    various issues. When I looked at the same problem using available quality-controlled sources I could not find the effect.

    I’m by no means claiming certainty here. I’m just not interested in wild goose chases. Been there, chased the goose.

    I will lay down the rules. If you have a hypothesis, AND IF YOU ARE WILLING TO GIVE UP YOUR BELIEFS IF YOUR TEST FAILS, then I am willing to test
    your hypothesis.

    So, make a hypothesis and propose a test, but you have to promise to give up your beliefs if the test you propose fails.

    I am more than willing to say there could be a “small town” UHI effect; the literature supports such a thesis. It hasn’t been tested
    completely. So, propose a test, but you’ll have to agree to die by the sword you propose to use.

  28. 1) I don’t think UHI contamination can be detected retroactively.

    2) But for any town that has data from the 1930s all the way to the 2000s, the UHI is the difference between 1934’s temperature and the current one. 🙂

  29. 29-“The last great ice age resulted from a 7.5 watt differential INTEGRATED over millennia.”

    With respect, this appears to be based on Hansen’s and similar analyses of the Ice Ages that “prove” high climate sensitivity. Frankly, it is not really appropriate to look at the “global mean TOA radiative forcing” for changes clearly resulting from heterogeneous (spatially and temporally) climate forcing like Milankovitch effects. Hansen’s calculations, which gave a value very close to yours, made NO reference to accounting for Milankovitch: he attributes the Ice Age, bizarrely, to changes in ice sheets, vegetation, and CO2, all of which were caused by the temperature change itself, which makes them feedbacks. That arguably obligates Hansen and others to explain why the climate sensitivity for ice ages is actually 37 degrees for a doubling of CO2, which is physically ridiculous (the aerosol forcing cited by Hansen, which is not significantly different from zero (0.5+-1), is the only one not obviously a feedback to me, and for a 5 degree mean change that gives a whopping 10 degrees per watt per meter squared). Their models CANNOT explain how Milankovitch causes the Ice Ages, which means we DON’T know the forcing that caused the Ice Ages within the current paradigm of “global mean TOA radiative forcing”.

  30. Mosh,
    You said:

    If the effects from GHG warming take a long time to materialize, if the ECR is a centuries-long process, then looking at the last 150 years can only get you a TCR, a transient climate response. So you only see the fast feedbacks in the century-long process; slow feedbacks take longer to accumulate.

    Well, there are multiple issues with the “transient response” estimates. The accumulation of heat in the oceans was claimed (as recently as 2005, Hansen et al.) to be ~0.75 watt/m^2, with vast “committed warming” due to that rapid accumulation. In light of Levitus et al., Argo, and recent warming estimates for the deep ocean, that value appears to be too high by a factor of at least 2, and probably closer to 3. The current explanation is that aerosol effects are greater than previously thought (China and India), or maybe heat is in fact accumulating but not being accurately measured in the ocean (how this doesn’t show up in thermal expansion I do not know!).

    Isaac Held’s suggestion is that sensitivity will appear modest for the next 100-120 years but then increase as the behavior of the ocean (specifically, the rate of heat absorption) changes due to warming at high latitudes. Plausible? Sure. Testable? Nope. And IMO, that is a common problem with almost all of climate science: altogether too many untestable projections. I tried today (in the most polite way I could) to ask Stefan Rahmstorf two questions at Real Climate about his recent projection of rapidly accelerating sea level rise (1 – 2 meters by 2100). I noted that his most recent projection (Vermeer & Rahmstorf 2009) was for a ~0.6 cm/year rate of rise by 2020 and ~0.74 cm/year by 2030, while the 1993 – 2011 satellite-measured rise appears essentially constant in rate @ ~0.31 cm/year. I asked if he thought the satellite data was accurate, and if so, did he expect the measured rate of rise to start accelerating soon. You can probably guess what happened to my comment: first, a long time awaiting moderation, then… gone to the bit bucket. So it seems to me that Stefan Rahmstorf does not want to address the apparent discrepancy between measurements and his projections of sea level rise. The authors of that paper also editorialized in their conclusions about how a rapid rise in sea level confirmed the pressing need for rapid and drastic forced reductions in fossil fuel consumption; comments I found both odd and inappropriate in a scientific paper.

    IMO, these fish are too slippery by half. Any time there is an apparent conflict between reality and their projection of high climate sensitivity (or resulting catastrophe), there are always multiple excuses for that discrepancy, none of which are testable, and those excuses always remain consistent with an estimate of high ECS. You seem to give them the benefit of the doubt on most issues; I honestly don’t think they are deserving of that benefit. We need testable predictions, not arm waves.

  31. Mosh,

    RE my comment #34

    My Real Climate comment appears to have come back from the bit-bucket! No answers to the two questions so far.

  32. Steven Mosher,

    Do you have a problem contacting him for the data?

    You apparently have no problem discussing UHI studies from Jones and Wang with missing data.

  33. Steven Mosher,

    I should have looked before responding. In Dr. Spencer’s post are links to the data sources he used:

    NOAA’s International Surface Hourly (ISH) weather data
    http://www7.ncdc.noaa.gov/CDO/cdo

    1 km gridded global population density data
    Socioeconomic Data and Applications Center (SEDAC).
    http://sedac.ciesin.columbia.edu/gpw/

    USAF 1/6 deg lat/lon percent water coverage dataset
    http://www.drroyspencer.com/global-percent-water-coverage-grid-at-one-sixth-deg-res.dat

    I realize he didn’t post his code or the list of station pairs he actually used so I guess I am wasting my time here. For some reason I thought someone just MIGHT be interested in finding out what is happening out there rather than chanting the Climate Science Mantra.

  34. Steven Mosher,

    Have you discussed this with Dr. Spencer? It would be good for us to know how Dr. Spencer handled questions about this.

    By the way, where is your data and your code showing that you couldn’t duplicate Dr. Spencer’s work? Negative results are important too.

  35. Mosh

    Anything longer than transient and we can certainly adapt to it quite easily. Even transient change on the scale you are talking about is “multi-generational” in human terms, so we can probably adapt to that as well.

    I did not say there were any “cooling islands”. I suspect there are quite a few that are warming less quickly than is often supposed (like the entire South Island of NZ, for which I have all the climate station temp data, most of which is not in GHCN).

    I am well aware of the energy changes required to cause glaciations and the time scales involved (I am a geologist specializing in the Late Quaternary period, living in an area with evidence of many cycles of glacial advance/retreat, with moraines no more than a 15 minute drive from home). Over the next few years I should have co-authored several papers in the peer-reviewed literature on glacial chronology for the area where I live.

    We do not need greenhouse gases to explain the majority of the climate shift from glacial to interglacial and back. That hypothesis is beginning to unravel.

  36. Hi Jeff,

    I used Zeke’s plot on the global temperature record to introduce my summary of the Historical Roots (1945-2011) of Climategate.

    http://dl.dropbox.com/u/10640850/20110722_Climategate_Roots.doc

    If this posting works, I’ll also put it on Blackboard.

    Who would have guessed nine years ago that we were battling the “Evil Empire” when attending the 2002 SOHO/GONG Conference on Local and Global Helioseismology, in Big Bear, CA?

    What a strange, strange world we live in!

    Despite the politicians, I am pleased to report that

    Today all is well,
    Oliver K. Manuel

  37. Sorry for the o/t, but for some reason, my browser (Firefox 5.0) doesn’t display the comments. They pop up briefly, then disappear. Any ideas?

  38. Oliver K. Manuel,

    I appreciate your posts (but, regarding one of your posts elsewhere, Kissinger’s Bilderberg meetings seem to belong to a different influencing Bilderberg group than the one behind the “Bilderberg Continuum Atmosphere” (BCA); cf. http://articles.adsabs.harvard.edu//full/1968SoPh….3….5G/0000005.000.html and, f.i., http://www.bilderbergips.org/index.php?lang=en&content=meetings).

    (Thanks, Jeff, that you let me post this on your site where I don’t have to register.)

    Keep up your good work altogether.

