the Air Vent

Because the world needs another opinion

In Search of Cooling Trends

Posted by Jeff Id on September 2, 2010

This is a repost from the ‘digging in the clay’ blog by Verity Jones, re-posted by request. I’m not sure that looking for cooling stations produces quite the result they were hoping for, but the work is fairly thorough and readers may find it interesting. I’m certain the globe has warmed — a little. What makes this interesting to me is that quite a few stations haven’t captured the same trend.

Anyway, it’s well written; here you go.

————-

by Verity Jones and Tony Brown (Tonyb)

Back in October Tony asked me to help with a big idea.  Searching the Norwegian climate site Rimfrost (www.rimfrost.no), Tony had found many climate stations all over the world with a cooling trend in temperatures over at least the last thirty years – which is significant in climate terms.  You see, Tony had a grand vision of a website with blue dots on a map representing these “cooling stations”, where clicking on a dot brought up a graph of the data and its wonderful cooling trend.  Would this not persuade people to look again at the notion of worldwide global warming?

Figure 1. Map showing stations on Tony’s “Cooling List” – stations which appear to have a cooling trend (>30 years) to present (data source: http://www.rimfrost.no Oct-Dec 2009; Earth image source: Dave Pape)

I asked Tony how many stations he had in mind. “Oh, two hundred or so…”  He suggested breaking it down into bite-sized chunks and sending me sets of ten at a time.  I was to verify the source by comparing the data with that on the GISS site and/or with national met agencies where available, and to produce graphs to a standard template.
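The Rimfrost and GISS processing itself is not shown in the post; as a rough illustration of the bookkeeping, here is a minimal sketch in Python with made-up numbers (the function names and the 30-year threshold are my framing, not the authors’ actual workflow):

```python
# Minimal sketch: fit a least-squares trend to annual mean temperatures
# and flag a station as "cooling" if the slope is negative over a record
# spanning at least 30 years.
import numpy as np

def trend_c_per_decade(years, temps):
    """Ordinary least-squares slope, converted to degrees C per decade."""
    slope, _intercept = np.polyfit(np.asarray(years, float),
                                   np.asarray(temps, float), 1)
    return 10.0 * slope

def is_cooling(years, temps, min_span=30):
    years = np.asarray(years, float)
    if years.max() - years.min() < min_span:
        return None  # record too short to call in climate terms
    return trend_c_per_decade(years, temps) < 0

# Made-up example: a slight cooling trend plus year-to-year noise.
rng = np.random.default_rng(0)
yrs = np.arange(1955, 2010)
t = 14.0 - 0.01 * (yrs - 1955) + rng.normal(0, 0.4, yrs.size)
print(trend_c_per_decade(yrs, t), is_cooling(yrs, t))
```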

We were concerned that this could be seen as ‘cherry-picking’; nonetheless it was an attractive idea.  In many cases it meant cherry-picking not just the stations but also the start dates of each cooling trend.  Despite these reservations we decided to go ahead, although ultimately we have not completed the project – partly for these reasons, but also because this is a case where the journey became more important than the destination, and it is worth sharing.

The first ten (Set 1) of Tony’s target stations – which, I should say at this point, seemed to be a randomly chosen set – were:

  • Brazil – Curitiba (1885-2009), cooling 1955-2009
  • Canada – Edmonton (1881-2009), cooling 1886-2009
  • Chile – Puerto Montt (1951-2009), cooling from 1955
  • China – Jiuquan (1934-2009), cooling all years
  • Russia – Kandalaska (1913-2009), cooling 1933-2009
  • Iceland – Haell (1931-2009), cooling all years
  • India – Amritsar (1948-2009), cooling all years
  • Morocco – Casablanca (1925-2009), cooling all years
  • Australia – Adelaide (1881-2008), cooling all years
  • USA – Abilene, Texas (1886-2009), cooling 1933-2009

The comparisons in many cases were not straightforward.  While many series matched GISS data, some of the graphs in Rimfrost used unadjusted data, others homogenised data.  For some, such as Kandalaska, there was a close but not exact match to either GISS data set.  The data for Haell was clearly from the Icelandic Met Office, but I could find no match for Edmonton to any GISS series or to data from Environment Canada (although, having since looked further at Canadian data, I am not entirely surprised). The first set took much longer than we had anticipated; however, I drew the graphs to a template and prepared to start on Set 2.

Tony also wanted a ‘spaghetti’ graph of the anomaly data for the first set, and this is where it got most interesting.  In fact we were blown away by what the graph looked like.  Taking these ten locations from across the globe and superimposing the anomaly data produced a sine wave-like pattern (Figure 2), with distinct cooling from the early 1940s to the mid-1970s followed by warming to the present; for many of the locations the older data was warmer than, or at least as warm as, the present.  I had seen this before with many individual stations, but it really impressed me to see the pattern matching across such far-flung locations.

Figure 2. “Spaghetti graph” of anomalies for the ten stations in Set 1.
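A minimal sketch of how such a spaghetti overlay might be produced (the 1961-1990 base period is my assumption; the post does not say which baseline was used):

```python
# Sketch: convert each station's annual means to anomalies against its own
# base-period mean, then overlay all stations on one axis. The `stations`
# dict is a placeholder for the Rimfrost/GISS series used in the post.
import numpy as np
import matplotlib.pyplot as plt

def anomalies(years, temps, base=(1961, 1990)):
    """Anomaly = temperature minus the station's own base-period mean.
    Assumes the record covers the base period."""
    years = np.asarray(years)
    temps = np.asarray(temps, float)
    in_base = (years >= base[0]) & (years <= base[1])
    return temps - temps[in_base].mean()

def spaghetti(stations, base=(1961, 1990)):
    """stations: dict mapping name -> (years, temps)."""
    for name, (yrs, t) in stations.items():
        plt.plot(yrs, anomalies(yrs, t, base), lw=0.8, label=name)
    plt.axhline(0, color="k", lw=0.5)
    plt.ylabel("Anomaly (deg C)")
    plt.legend(fontsize=6)
    plt.show()
```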

But in the meantime there were other developments.  Tony knew I was interested in putting the GHCN v2.mean temperature data from stations all over the world into a database.  As usual, this exceeded my own knowledge and capabilities, but I had made a start and was learning as I went along.  Tony, whose contacts and connections never cease to amaze me, put me in touch with a computer professional – a database, web and mapping expert – who was well known to commenters on The Air Vent, Climate Audit and WUWT as “KevinUK”.  Kevin was also keen to put climate data into a database.

By now it was the end of November.  Kevin and I rapidly established a good rapport by email and VoIP and, with really only a few pointers from me to GHCN and GISS data files (and probably lots of hindrance), he built a fully functional database.  Not only that, but he set about writing software to plot graphs and calculate trends from the data and to put the whole lot on an interactive map – all in a period of about six weeks.  It is still a work in progress, fixing glitches and preparing Version 2.0; for more information see the blog post Mapping Global Warming and the website itself: www.climateapplications.com.

I did deliver 40 graphs for Tony in the end, but I was quite slow about it (and that “sine wave” pattern kept showing up again and again, and stuck in my mind). Tony had moved on to researching other climate projects, and Kevin’s maps meanwhile showed so much more than we ever could.  With the “sine wave” climatic pattern in mind, the following maps (focussing on North America and Europe) show how climate has cooled, warmed, cooled and warmed again since 1880.

Figure 3. Maps showing temperature trends at weather stations for defined periods. Cooling trends are shown by blue colours: dark blue>blue>light blue>turquoise>pale turquoise. Warming trends are shown by reds: dark red>red>light red>orange>light orange. For full legend see: http://diggingintheclay.wordpress.com/2010/01/18/mapping-global-warming/

So is this “sine wave” the true climate signal?  It would seem so, although we can’t expect it always to be so regular.  Choosing stations that are more closely geographically located does give a more homogeneous shape to the wave.

Figure 4a (left) Anomaly data for a subset of Arctic stations; Figure 4b (right) Anomaly data for four US stations.

Figure 5. Anomalies of unadjusted data for stations in Madagascar

The pattern is most extreme in the high Arctic – Figure 4a shows the graph for six stations above 64N, where the magnitude of change is +/- several degrees Celsius.  Further south (e.g. Figure 4b – four stations in the US) the magnitude is smaller, and close to the equator (Figure 5, Madagascar) it is smaller still.

A final point – with the exception of the Madagascar graph, which was prepared for a blog post (link), all these graphs came from different sets (the first 40 stations for which data was examined). Although the original data was chosen for its cooling trend, in many cases that trend results from temperatures in the period 1930-1940 being warmer than at present.

The wave pattern is still present in many data sets worldwide, no matter what the overall trend.  In some, the onset of warming or cooling is later or earlier, depending on location – as would be expected with the oceans moving warmth around the globe.  In others, however, the wave pattern is not present, or is obliterated by something – should it be present in these sets or not? Is it wiped out by anthropogenic effects on the temperature record, such as the growth of cities and even small rural communities through the otherwise cooling 40s, 50s and 60s?

For us the take-home message of this study was simply how widespread and consistent the wave pattern is, and this, ultimately, is very convincing of the veracity of the arguments against CO2 as a primary cause of current warming.  From the physics I don’t doubt it has a role in warming, but its role needs to be disentangled from the large-magnitude natural climate swings that are clearly present all over the world – a pattern that is not widely disseminated.


35 Responses to “In Search of Cooling Trends”

  1. John Norris said

    I am guessing there is a solar index dataset that could be smoothed into that same sine wave.

  2. Brian H said

    Even the IPCC data and graphs show the same pattern, though they have fiddled with the slope.

  3. Steven Mosher said

    One day I went hunting through all of GHCN for the ‘coolest’ and the warmest.

    The coolest had that pattern you’ve displayed. Weird.

    Since I have all the stations in a big zoo data structure it’s pretty easy to do, and then other R packages can output into Google Earth format.
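    For readers without R, a rough Python analogue of that last step might look like this (not Mosher’s code; the simplekml package and the placeholder station record are my assumptions):

```python
# Sketch: write station trends as clickable placemarks in a KML file
# that Google Earth can open.
import simplekml

stations = [  # placeholder records: name, latitude, longitude, C/decade
    {"name": "Example station", "lat": 67.1, "lon": 32.4, "trend": -0.05},
]

kml = simplekml.Kml()
for s in stations:
    # KML expects (longitude, latitude) ordering.
    pnt = kml.newpoint(name=s["name"], coords=[(s["lon"], s["lat"])])
    pnt.description = f"Trend: {s['trend']:+.2f} C/decade"
kml.save("station_trends.kml")
```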

    Thanks Jeff, Tony did say he had emailed you. I kind of drew a line under this mentally a long time ago (December?) because there were too many ‘issues’ with doing what Tony wanted to do with it, but it has coloured my thinking since, and it is good to get it out for discussion.

    #1 John,
    Tonyb posted this link: http://www.heliogenic.net/2010/03/26/scafetta-on-the-60-year-temperature-cycle/ in comments under the original posting.

    #3 Steven,
    Averaging all data and spatially weighting globally almost obliterates the wave because, as said in the post, it has different amplitude and onset in different parts of the world (and probably by altitude too) (also see: http://diggingintheclay.files.wordpress.com/2010/07/latitude-bands.png). That is one of the reasons why Kevin and I think the concept of a global average temperature is so flawed. Yet if you look at groups of stations in a small geographical area, it is not unusual for some of them to have this pattern (and not be warming overall) while others don’t and are warming. Why?

    I agree with your comments elsewhere about the metadata and the overall quality and placing of stations, and that could hold the key. I have read somewhere recently (and will find it again) an informed opinion that UHI starts to kick in significantly at about 2000 persons, and if that is the case it would cover many stations labelled as ‘rural’.

  5. tonyb said

    Verity

    That study of the effect of UHI on even small communities may have been within my article carried here – see the graph:

    https://noconsensus.wordpress.com/2009/11/05/invisible-elephants/

    I have written numerous times on global temperatures – I think that sticking together a hugely variable number of moving thermometers of variable quality, many of which have been artificially warmed, and then believing they represent any sort of meaningful global average that can be parsed to a fraction of a degree, is not helpful.

    I agree with Jeff – I think the world has been generally warming (since around 1698) as part of the longer cycles (on which a 60-year one is superimposed) which presumably produced the LIA, the MWP, the Roman optimum and other notable periods of warm and cold.

    But within that general trend are wheels within wheels, so some places are cooling whilst the majority warm (and vice versa in other climatic epochs).

    Whether a single station shows up as being in a cooling, neutral or warming phase depends on the point in its cycle at which you intersect it. This also assumes the station records are old enough to show the trends in the first place. These nuances are lost in the general warming signal when all records are lumped together, and further muddied by the variable quality of the data.

    As well as the natural cycle of warming there are undoubtedly other factors superimposed – UHI has a big effect on individual station records whilst the allowance made for it is small. CO2 may well have had a small effect also, but it is by no means the only game in town, despite the IPCC torturing their models to prove otherwise.

    http://www.appinsys.com/GlobalWarming/SixtyYearCycle.htm

    The IPCC do not appear to include these short cycles – let alone the longer ones – within their computer models, and it is misleading for them to talk of ‘global’ warming without reference to the areas that do not show that trend.

    Yes, as observed by John Norris, it would be interesting to superimpose the solar index dataset over this.

    Tonyb

  6. Geoff Sherrington said

    A useful analogy might be the human body. It has cyclicities (like pulse, respiration and perspiration). Each can have variable amplitude and period, so they are not pure sinusoids. There are conditions when some work harder than others, and there are conditions that make several cyclicities anomalous at the same time (exertion, for example, increases heart, respiration and perspiration rates). However, even if you can model the input of various cyclicities into another variable (say body temperature) and put a % next to each contribution to variance, the predictive power remains very low. This is because of the usual old reason: too many unconstrained variables. (E.g. the thought experiment here would be different in dry air than in moist air. It would differ between clothed and unclothed bodies. How many extra such factors exist that we have not thought about?)

    We know that climate has a number of cyclicities, but we do not know how to combine them into equations with predictive use. Am I being pessimistic to propose that we shall never reach this goal, because there are far too many incidental variables to quantify and consider – and even then, we do not know when to incorporate a particular one, such as an El Nino?

    We have climate cyclicities because we are in a dynamic system. It is interesting that some correlate reasonably well with others, but the interest could be in the ones that seem to correlate with nothing much. These might be a result of a recording system that has not operated long enough to show adequate cycles. The short record of gamma ray flux at some prominent wavelengths intercepting the earth had some cyclicities, but I’m not sure that anyone knows now what they mean, or ever will.

    Likewise, although we have instrumental temperature records for 150 years or so, I remain unconvinced that any person knows the full cause of temperature cyclicities, or ever will. Put more optimistically, by the time we quantify enough side variables and explain their mechanisms and interactions, we will have lost much of the current interest and will be working rather harder on something productive like fusion power.

    The current interest in temperature arises, in part, because we have so many measurements. More data equates to more scope for control freaks. The same situation applies to natural nuclear radiation, where highly sensitive and cheap instruments can easily detect ambient levels of many forms. If the Geiger-Müller counter had not been so sensitive when invented 100 years ago, we would have far less fear of ionising radiation.

  7. A C Osborn said

    Nice work by Kevin; it basically confirms E.M. Smith’s work at his Chiefio site.

  8. steveta_uk said

    Tony/Verity,

    Dr. Spencer’s UHI graph appears to show it has an effect at surprisingly low population densities.

  9. tonyb said

    Geoff

    I agree with your comments and your excellent analogy.

    You say:

    “The current interest in temperature arises, in part, because we have so many measurements. More data equates to more scope for control freaks.”

    We think we know far more about the climate than we really do, and while we do not even properly attempt to factor in the known parameters, such as ‘cycles’ of varying lengths, there is no hope of taking the random aspects of the climate into account.

    Climate science in its present form is very new, yet it tries to express scientific certainties by insisting that the climate can be captured in mathematical formulae. To do this we assign a level of preciseness to aspects of the data that simply don’t warrant that amount of precision.

    For example, there is the preciseness with which we insist that a surface temperature has changed by, say, 0.476 degrees, when in reality we can do no more than say ‘it might have changed by anything up to a degree or so, or it may not, but who knows?’ Hardly likely to encourage more grants, is it?

    The level of certainty apportioned to surface ‘global’ temperatures I find bizarre, but that is as nothing compared to SSTs. These records are for the most part haphazard and completely unreliable (due to methodology), and very sparse, especially those going back to 1850 or so.

    Yet we have CRU making money by selling material based on this data.

    Whether or not we can ever know how the climate works is debatable, but it is certain that our current level of knowledge doesn’t begin to warrant the sweeping statements made by the IPCC, which impact national government policies, which affect us all.

    tonyb

  10. Geoff,
    excellent points all. Yes, the complexities are many and varied. What is interesting to examine, though, is how the cycles captured at a regional level in some stations are lost through the ‘outreach’ of homogenisation.

    steveta_uk,
    Thanks, although I’m sure I have seen a reference elsewhere recently also.

  11. slimething said

    It would appear this brings into question the justification for extrapolating and/or interpolating temperature from one station to a region 1200 km away, no?

  12. Brian H said

    Slimey;
    Indeed. Considering that the radiative surface of the planet is a patchwork of widely different emitters which respond very differently to diurnal cycles and changes in humidity (desert sands vs. forest, e.g.) and have widely differing chemistries, it is inane to do that kind of interpolation.

  13. Steven Mosher said

    #3 Steven,
    Averaging all data and spatially weighting globally almost obliterates the wave because, as said in the post, it has different amplitude and onset in different parts of the world (and probably by altitude too) (also see: http://diggingintheclay.files.wordpress.com/2010/07/latitude-bands.png). That is one of the reasons why Kevin and I think the concept of a global average temperature is so flawed. Yet if you look at groups of stations in a small geographical area, it is not unusual for some of them to have this pattern (and not be warming overall) while others don’t and are warming. Why?

    1. Of course the average erases these aspects – by definition. You misunderstand the meaning of the word average. Very simply, if I want an estimate of unsampled places I use an average. That is the whole purpose.

    2. The pattern. A pattern without a physical explanation is simply that: something that humans are wired to “recognize” or find in data. It is almost impossible for us NOT to find patterns. When I get some more time I’ll look at it some more. One could say (for example) that this is what the world would have looked like globally WITHOUT a GHG-induced warming.

    “I agree with your comments elsewhere about the metadata and the overall quality and placing of stations, and that could hold the key. I have read somewhere recently (and will find it again) an informed opinion that UHI starts to kick in significantly at about 2000 persons, and if that is the case it would cover many stations labelled as ‘rural’.”

    Population is a poor proxy. It’s what that population DOES that makes the difference. As entities we don’t give off that much heat. It’s what we do to the landscape.

  14. #11 – exactly! While many of the correlations are very good, there are also cases where, over much shorter distances, they are poor.

    #13 No, I don’t misunderstand the meaning or purpose of averaging. My point, rather hastily written, is that the averaging (homogenisation) all too readily combines data from stations where this wave pattern exists with data from those that do not have it, and unless we know the reasons for the differences the process is seriously flawed, albeit the best we can do for the present.

    “One could say (for example) that this is what the world would have looked like globally WITHOUT a GHG-induced warming.”

    Er, just so. Consider that since, say, 1940 there have been many anthropogenic changes in the world that may have affected local temperature records – changes in agricultural practices, for example – that may have the same effect that is ascribed to CO2. How do we disentangle those?

    The cooling trend beginning in ~1940 and running to ~1970 is important. Indeed it may have been reduced by man’s activities rather than caused by them (as global dimming); it is widespread, yet it completely misses many places close to others which are cooling. CO2 should not do that.

  15. JR said

    The cooling trend beginning in ~1940 and running to ~1970 is important. Indeed it may have been reduced by man’s activities rather than caused by them (as global dimming); it is widespread, yet it completely misses many places close to others which are cooling. CO2 should not do that.

    That’s the clue. Think outside the box.

  16. BarryW said

    WRT cooling trends: Look at the CRN data.

  17. Steven Mosher said

    #11 – exactly! While many of the correlations are very good, there are also cases where, over much shorter distances, they are poor.

    The correlations are more than a function of distance. They change by season, by latitude and by direction. Hansen oversimplifies it, but the choice of 1200 km does not have an appreciable effect.
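    One way to check this claim from station data would be to correlate pairs of anomaly series and bin the correlations by separation (a sketch; the station inputs are placeholders for GHCN/GISS series, not the HadGHCND analysis cited later):

```python
# Sketch: pairwise correlation of annual anomaly series vs. great-circle
# distance, to see how correlation decays with separation.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (mean Earth radius 6371 km)."""
    p1, p2 = np.radians([lat1, lat2])
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2)**2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def corr_vs_distance(stations):
    """stations: list of (lat, lon, anomaly_array), arrays aligned on years."""
    out = []
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            la1, lo1, s1 = stations[i]
            la2, lo2, s2 = stations[j]
            r = np.corrcoef(s1, s2)[0, 1]
            out.append((haversine_km(la1, lo1, la2, lo2), r))
    return sorted(out)  # inspect how r falls off past ~1200 km
```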

  18. Steven Mosher said

    #13 No, I don’t misunderstand the meaning or purpose of averaging. My point, rather hastily written, is that the averaging (homogenisation) all too readily combines data from stations where this wave pattern exists with data from those that do not have it, and unless we know the reasons for the differences the process is seriously flawed, albeit the best we can do for the present.

    #####
    The process is not flawed for its intended purpose. If you want the average height of humans, you average humans. Noting that men are taller than women (on average), while true, says nothing about the average height of all humans. I tell you that I measured a human. That is all you know. And I tell you that the average of 10,000 humans is 5’6”. Now, guess the height of the human I measured. You can of course desire to know whether I measured a male or a female. But that’s not the question. You can point out that males are taller and that your guess would be closer if you knew, but that is not the question. When I found this ‘wave’ in the data, one thing I asked myself was this: “what if all unsampled places were like the wave?” That’s really the concern. Conversely one can ask “what if all unsampled places are like the warming trend?” You can actually do that test and see the effect. Perhaps I’ll post up on that when time permits.
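    The bounding exercise described here could be sketched as follows (every number is a placeholder, not a measured value):

```python
# Sketch: how much the global trend could move if the unsampled fraction
# of the globe behaved like (a) the sampled mean, (b) the "wave" stations,
# or (c) the strongest warmers.
sampled_trend = 0.15   # C/decade over the sampled area (made up)
scenarios = {
    "like the sampled mean": 0.15,       # made up
    "like the wave (no net trend)": 0.0,
    "like the strongest warming": 0.25,  # made up
}
f_unsampled = 0.2      # unsampled fraction of the globe (made up)

for label, t in scenarios.items():
    g = (1 - f_unsampled) * sampled_trend + f_unsampled * t
    print(f"unsampled {label}: global trend = {g:.3f} C/decade")
```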

    “One could say (for example) that this is what the world would have looked like globally WITHOUT a GHG-induced warming.”

    “Er, just so. Consider that since, say, 1940 there have been many anthropogenic changes in the world that may have affected local temperature records – changes in agricultural practices, for example – that may have the same effect that is ascribed to CO2. How do we disentangle those?”

    Well, there are several ways. First is with historical land use data, which exists. Second is to understand that land use changes don’t happen over 70% of the globe – the water. Third is to compare the UAH and RSS tropospheric trends with the surface trends. So we can bound the problem. When we bound it in this way we see that land use change, while real, does NOT explain the warming seen in the troposphere or in SST. That doesn’t mean the effect is zero, but it’s not 0.6 C. You don’t need to disentangle all the factors to estimate the impact of their sum.

    “The cooling trend beginning in ~1940 and running to ~1970 is important. Indeed it may have been reduced by man’s activities rather than caused by them (as global dimming); it is widespread, yet it completely misses many places close to others which are cooling. CO2 should not do that.”

    First thing you have to do is test “the cooling trend” to see if it is in fact statistically significant. Again, seeing patterns is what your brain has to do. Deciding whether such a pattern can arise by chance is a whole different matter. Also, I’d look at GCM output to see if I found consistent coherent cooling patterns within a field that is generally increasing. Warming is not predicted to happen monotonically or uniformly, so finding non-uniform patterns in the field really doesn’t impinge upon the theory. CO2 causes warming. We know physically why this happens. How that warming manifests itself over time is an open question; polar amplification is a prediction, for example. But to my knowledge there is no prediction that the warming will happen monotonically or absolutely uniformly. For example, if the theory did predict uniform warming then polar amplification would be a point against the theory.

    One last thing: in the attribution studies the output of GCMs is actually tested against the observations on a grid cell basis. That is, if a grid cell is not observed in the record, then that grid cell output of the GCM is not used. If you run a GCM with no CO2 forcing you don’t match the observed record. If you do include the forcing, your match with the observed record improves.
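    The significance test suggested above might look like this in outline (a sketch with synthetic data; it ignores autocorrelation, which would widen the real uncertainty):

```python
# Sketch: OLS trend with a two-sided p-value for a station segment.
import numpy as np
from scipy import stats

def trend_with_significance(years, temps):
    res = stats.linregress(years, temps)
    return res.slope * 10, res.pvalue  # C per decade, p-value

# Synthetic ~1940-1970 segment: slight cooling plus noise.
rng = np.random.default_rng(1)
yrs = np.arange(1940, 1971)
t = 10.0 - 0.015 * (yrs - 1940) + rng.normal(0, 0.3, yrs.size)
print(trend_with_significance(yrs, t))
```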

  19. slimething said

    #17 The correlations are more than a function of distance. They change by season, by latitude and by direction. Hansen oversimplifies it, but the choice of 1200 km does not have an appreciable effect.

    Is there a reference for a comprehensive analysis to support that? Preferably not GISS.

  20. Brian H said

    #13;
    What the averaging is doing, amongst other things, is suppressing valuable information about causation. (Don’t, by the way, denigrate human “pattern recognition” — it’s the basis of all understanding and communication!) The way in which warming occurs is of the essence here. Causality can only be determined by tracing the finest-grained detail and variation available, and seeing the mechanisms at work. How about some study of the daily temperature swings over deserts, where there is CO2 and little humidity? Those sub-averages would be informative, but they are suppressed by the crude sampling and modelling now relied on by the IPCC et al.

    Using bulk “global” averages is almost useless here. Deviations from the longer baselines are actually quite small, and sloppy averaging (with the very peculiar characteristic that identified errors all seem to be on the up-side) means that it is very likely there is no signal to decipher.

    This is the statistical procedural question, at base. A “good” average depends on fair sampling, and Hansen et al. have violated just about every possible guideline for doing that. How would you feel about a human height figure that systematically excluded concentrations of orientals, and heavily sampled Nordic and East African regions? And extrapolated those sample points 1200km? Or even filled in Japan and China by averaging Australia and Russia?

    Finally, as G&T and others point out, the grain or grid size demanded by any realistic modelling of a nonlinear system like the planet’s climate is many orders of magnitude smaller than is actually, or even potentially, available.

  21. Steven Mosher said

    #19

    Is there a reference for a comprehensive analysis to support that? Preferably not GISS.

    Yes.

    http://hadobs.metoffice.com/hadghcnd/HadGHCND_paper.pdf

    daily data from thousands of daily stations. The simple fact is this: within 1200 km, stations are pretty well correlated. Not perfectly, of course. I used to rant about the 1200 km thing as if it made a huge difference. It doesn’t.

    And if you don’t like that analysis, the data is open. Code’s easy.

  22. Steven Mosher said

    Brian:

    “#13;
    What the averaging is doing, amongst other things, is suppressing valuable information about causation. (Don’t, by the way, denigrate human “pattern recognition” — it’s the basis of all understanding and communication!) The way in which warming occurs is of the essence, here. Causality can only be determined by tracing the finest-grained detail and variation available, and seeing the mechanisms at work. How about some study of the daily temperature swings over deserts, where there is CO2 and little humidity? Those sub-averages would be informative, but are suppressed by the crude sampling and modelling now relied on by the IPCC et al.”

    Well, averaging does not suppress that information about causation, because causation is UNOBSERVABLE. Causation is inferred by either the process of induction or abduction. I do not denigrate human pattern matching. However, anyone who has studied cognition understands that it is both a strength and a weakness. You are not making very clear points about the “study” of causation. It’s unclear what temperature swings over deserts have to do with anything. I do know that looking at temperature trends at deserts you find the same trend as the rest of the globe. At least US deserts; I looked at that 3 years ago when somebody was going on and on about deserts. That dog didn’t hunt. Basically, I’ve spent an inordinate amount of time looking at airports, deserts, mountains, valleys, cities, towns, high population, low population, coast/no coast. You name it.

    At one point I was going to take all the cooling stations and do a regression on the metadata. Still might, but the metadata needs work first. Basically I was going to take the stations that were heating the most and those that were cooling the most and see what popped out of a stepwise regression against the metadata. But that would be cheating.

    So it’s incumbent on people to start with a physical THEORY, and then test that. Starting with the data and fussing about with patterns is so Mannian.

  23. Steven Mosher said

    Brian:

    “This is the statistical procedural question, at base. A “good” average depends on fair sampling, and Hansen et al. have violated just about every possible guideline for doing that. How would you feel about a human height figure that systematically excluded concentrations of orientals, and heavily sampled Nordic and East African regions? And extrapolated those sample points 1200km? Or even filled in Japan and China by averaging Australia and Russia?”

    1. Hansen did not do the sample.
    2. I’m not sure what you think is SYSTEMATICALLY excluded from the sample.
    3. Well, if you had KNOWLEDGE that the part of the sample being excluded was in fact different in character, then you would have an issue. But with GHCN, for example, we know this:
    A. If we expand the sample using OTHER databases, the answer doesn’t change.
    B. If we expand the sample to GLOBAL coverage with UAH, the answer doesn’t change. UAH doesn’t find large-scale pockets of unsampled COOLING over 1979-2010. Now, if UAH showed all of Africa cooling (FOR EXAMPLE) and we had no thermometers in Africa, then I’d be worried about the lack of thermometer sampling in Africa. But nice try with the oriental analogy. The difference is this: in your height bias you have some evidence that the unsampled area (China) actually has shorter people. You have KNOWLEDGE of the metric in the area where you didn’t sample. That knowledge is what gives you concern. With GHCN you have certain areas that are either sampled in other databases or NEVER sampled, so the question is what you estimate about areas that have never been sampled.
    C. You can test the sensitivity to sampling by resampling. I wasted a whole week on that. No difference.
    D. You can test the sensitivity by infilling with min/max values. That’s interesting; I may write it up.

    Finally, I don’t think you understand how the 1200 km ‘infill’ works.

    Let me put it to you this way.

    You have a 110-year record at 80N 30E (for example). Let’s say that record has ZERO trend for 110 years. Directly opposite, over the top of the globe at 80N -150E, you have another 110-year record showing zero trend.

    Now, estimate the trend at 90N. Zero?

    Logically you have these options:

    1. Refuse to estimate
    2. Estimate a warmer trend
    3. Estimate a cooler trend
    4. Estimate the same trend

    Now, let’s look at how CRU infills. Imagine a 3×3 square:

    1 1 1
    1 X 1
    1 1 1

    X has never been sampled. Guess X. Are you going to guess 63? Or -3675? Nope. CRU infills a cell if it is surrounded. Now what supports this? Suppose you have other cells:

    2 2 2
    2 2 2
    2 2 2

    So we can look at other cells that are surrounded and see if they differ from their neighbors.

    3 3 3
    3 -5 3
    3 3 3

    would make us pause and question infilling. If we looked at UAH and found places that had this pattern over 30 years we would wonder: how does one patch of the atmosphere have a persistent negative trend while surrounding cells have persistent positive trends (kinda like an eddy, or a big blue spot in earth’s atmosphere)? Go figure: heat moves. Does it mix perfectly, down to the finest scale? Nope. Does it mix over larger scales? Yes. Uniformly? No. Again, polar amplification.
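    The infill-and-check idea in the 3×3 example can be sketched directly (a toy illustration, not CRU’s actual code):

```python
# Sketch: estimate an unsampled centre cell from the mean of its 8
# neighbours, and validate the method on cells we DO have by hiding
# each one and comparing the estimate to the truth.
import numpy as np

def infill_center(block3x3):
    """Mean of the 8 neighbours of the centre cell."""
    b = np.asarray(block3x3, float)
    return (b.sum() - b[1, 1]) / 8.0

def cross_validate(grid):
    """Hide each interior cell in turn and infill it from its neighbours."""
    errs = []
    for i in range(1, grid.shape[0] - 1):
        for j in range(1, grid.shape[1] - 1):
            est = infill_center(grid[i - 1:i + 2, j - 1:j + 2])
            errs.append(est - grid[i, j])
    return np.array(errs)  # large errors would argue against infilling

grid = np.array([[3., 3, 3], [3, -5, 3], [3, 3, 3]])
print(infill_center(grid))  # 3.0 -- far from the actual -5: the "pause" case
```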

  24. Steven Mosher said

    Brian:

    “Finally, as G&T and others point out, the grain or grid size demanded by any realistic modelling of a nonlinear system like the planet’s climate is many orders of magnitude smaller than is actually, or even potentially, available.”

    Well, that’s largely untrue. “Realistic” is the problem word here. I can model the climate (say, a couple of aspects) with very simple equations. The answers will be inexact. Let me give you an example from RCS. The equations to predict the radar return from a complex surface are nasty, and we can pretty much show that they never get the exact right answer. However, they get close. I can mesh the surface to really fine detail and still be off, but I know the answer is close to -xyz dBsm.
    A less finely grained model may miss [snip] as [snip] changes over the surface, and if I don’t model [snip] I’ll miss potential issues like [snip], but it gives me a good first-order estimate. Put another way: all physics is models, and all physics is imperfect. GCMs are OK. Imperfect. No physics predicts cooling. So I look at a big flat metal plate. No physics will tell me that this plate will have ZERO return. A first-order model may predict a 10 sq meter signature; a more detailed model may predict 10.23; a higher-detail model may predict 9.976. In the field I measure 10.12 +- 0.23. The fact that the best models DON’T match the field test isn’t very interesting. I knew they wouldn’t. But I also knew that no physics predicted a zero return. I have a model; it’s imperfect, but it gets me in the ballpark. The better the model, the tighter the ballpark. At the limit I have a tiny ballpark. When the ballpark gets really tiny, we call the residual “error”. When the ballpark is bigger, we know we have missed certain processes in our model.

  25. Brian H said

    Steven;
    You’re being far too forgiving of the sampling and models (or yourself?). The desert temperature swings indicate swift radiative loss of heat, which doesn’t happen when water is present in the atmosphere, much less cloud cover, however thin. Thus CO2 does a lousy job of “back-radiating” all on its own.

    And as far as determining causation, of course I meant that observing mechanisms at work in detail is only possible when you actually look at detail, not large overall results. You need to get up close to determine actual physics.

    And similarly for grid sizing; that the range of sizes so far tried all come close to each other means little, since the granularity of the processes is many orders of magnitude smaller than the smallest available. So all those available are almost equally poor.

    As for selectivity, it’s pretty damn hard not to believe, as the Russians suggest, that the coldest and most variable stations have been selectively excluded. Using the average of a seacoast station and an Amazon rainforest station to infer the state of a location in the Andes in between is simply unconscionable – ludicrous in fact. If you insist that such duplicity is so ungentlemanly that you refuse to entertain the notion, then fine. But few skeptics are so warm and fuzzy-hearted. Sorry if that offends you.

  26. Al Tekhasski said

    There is an opinion that all exercises in averaging the current data make no sense. Of course, anyone is free to average everything with anything. But let’s consider the subject from the perspective of a mechanical dynamical system. What we have is a wildly varying spatio-temporal (and three-dimensional!) field of air temperature around the globe. Then we have certain locations where this field is sampled several times a day (sometimes only twice).

    Now consider a particular location (a met station). As weather goes by, warm air shines back at the surrounding surfaces and wind mixes up the boundary layer, and we get a time series of temperature. Let’s also assume that the thermometers are properly and regularly calibrated. The time series looks like bull piss: it goes 10-20 C up and then 10-20 C down, day-night, season to season. However, does that mean it is a random series of numbers subject to statistical manipulation? No. If you set another thermometer in close proximity and in the same enclosure, it will show exactly the same sequence, plus or minus random readout uncertainty (of about 0.5 C). What we have is a spot record of a complex temperature field. If you could go 100 years back in time into exactly the same conditions and start a record from another thermometer, it would produce exactly the same curve. So this individual station record is not random (red or otherwise coloured) noise; it is a fully deterministic record at one point in a dynamical system. Therefore, the characteristics of this record (higher moments, autocorrelation, 100-year trend, etc.) are very accurate numbers and are not subject to any substantial “uncertainty” or “statistical significance”. The record is a well-defined characteristic of this particular spot (unless senseless “adjustments” were made to it).

    Now let’s consider another station nearby, say 50 km away. It sees nearly the same weather, with high variance of signal. Obviously, if a cross-correlation is calculated, it will be very high. As Hansen and Lebedeff found, stations correlate well up to a 1200 km radius, which is a trivial result. But they (or somebody else) make the mistake of assuming that this correlation must also hold for the other statistical characteristics of individual stations, such as the 100-year trend.

    Let’s look at the data for 100-year trends, in my lovely Texas for example. I use
    http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=33.17&lon=-99.75&datatype=gistemp&data_set=1
    and check Haskel vs. Albany or Abilene, Crossbyton vs. Lubbock, Ada vs. Pauls Valley, etc. All these pairs are about 30-50 miles apart, yet their long-term trends are diametrically opposite. Since these nearby stations see the same (allegedly increased and measurable) backradiation from alleged man-made CO2 increases, they should all exhibit an uptrend. They do not. It is obvious that other changes in the surroundings caused this divergence in century-long temperature trends.

    What does this mean? It means that the station spacing is too coarse to capture the essential variations in the global field. It means that even a regular grid of stations 50 km apart is not adequate in the sense of the Nyquist-Shannon-Kotelnikov sampling theorem. To have a 25×25 km grid, one needs about 800,000 stations around the globe, and even that will not guarantee the stability of the estimates; a finer grid would be required if one is to follow scientific practice.
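    As a back-of-envelope check of that figure (taking a mean Earth radius of $R \approx 6371$ km and one station per 25×25 km cell):

$$
N \approx \frac{4\pi R^{2}}{(25\,\mathrm{km})^{2}}
  = \frac{5.1\times 10^{8}\,\mathrm{km^{2}}}{625\,\mathrm{km^{2}}}
  \approx 8.2\times 10^{5},
$$

    i.e. roughly 800,000 stations, as stated.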

    Therefore, averaging a temperature field that is undersampled (by a factor of 100!) at pre-selected and biased station locations makes no sense.

  27. Al Tekhasski said

    Steven, you wrote: “I have a model; it’s imperfect, but it gets me in the ballpark. The better the model, the tighter the ballpark. At the limit I have a tiny ballpark.”

    You are judging climate models by the wrong metric, taken from an inadequate model: a one-time radar return. Climate is very different; it is an infinitely recurring “return” with feed-forwards and feed-backs. If the model gets the long-term “return characteristics” wrong, the infinitely recurring return will accumulate errors and the climate model will “explode”, which happens all the time.

    The major problem with climate models is that they incorporate the stabilizing dissipative terms in an artificial way. The models do not initially have the natural molecular viscosity that naturally and correctly stabilizes the “infinite recurring” (even with computational artifacts). Climate models begin with the ideal inviscid fluid model of meteorology, and only after several transformations do they incorporate an artificial (parameterized) dissipation that makes the model run stably. This mathematical trick causes very subtle changes in the number and quality of the modes involved, and in their long-term behavior, which is exactly the main interest in climate modeling. Therefore, you have no ballpark whatsoever for the most interesting feature of the model.

  28. Steven Mosher said

    Al

    “Al Tekhasski said

    September 5, 2010 at 2:26 pm
    Steven, you wrote: “I have a model; it’s imperfect, but it gets me in the ballpark. The better the model, the tighter the ballpark. At the limit I have a tiny ballpark.”

    You are judging climate models by the wrong metric, taken from an inadequate model: a one-time radar return. Climate is very different; it is an infinitely recurring “return” with feed-forwards and feed-backs. If the model gets the long-term “return characteristics” wrong, the infinitely recurring return will accumulate errors and the climate model will “explode”, which happens all the time.”

    Well, I’m not judging GCMs by the wrong metrics. The metric proposed was ‘realism’. What I am pointing out is that we have physically “unrealistic” models all the time that produce useful output. In fact science itself is a physically unrealistic model, whatever that word ‘realistic’ means. The metric I am proposing is usefulness, or skill. Do they in fact get things “right” – things that matter? For example, I may do a simulation of an airplane at high AOA, and I will NEVER get the nose vortex modelled “realistically”. But I can get a pretty good idea of the regime in which it forms. I can never get the buffet characteristics exactly right, but I can estimate about when they will kick in. The test of any model is not its ‘realism’. Now, we believe that realistic models should do “better”, so we pursue ‘realism’. Along the way, we have useful ‘false’ results. We never actually get to the ‘truth’, whatever that means (besides perfectly useful).

  29. Steven Mosher said

    Al;

    “Since these nearby stations see the same (allegedly increased and measurable) backradiation from alleged man-made CO2 increases, they should all exhibit an uptrend. They do not. It is obvious that other changes in the surroundings caused this divergence in century-long temperature trends.”

    The lacunae in logic here are astounding. Nothing in the theory predicts or prevents local variation from the overall trend; you are assuming that supposition into the theory. Observing these local phenomena doesn’t logically contradict a theory that makes no such predictions. I may very well predict the average flow rate of a river. I am not predicting that the rate will equal the average at every location. You can point to an eddy and say “look here, the flow is backwards”. But since I never made a prediction that PRECLUDED these local phenomena, pointing them out is an exercise in a strawman takedown.

    Further, it’s not “obvious” that changes in the surroundings “caused” the effect you saw. That’s a hypothesis, not a conclusion. Is it a testable hypothesis? Dunno, you haven’t formed it very well.

  30. Steven Mosher said

    Al:

    “What does this mean? It means that the station spacing is too coarse to capture the essential variations in the global field. It means that even a regular grid of stations 50 km apart is not adequate in the sense of the Nyquist-Shannon-Kotelnikov sampling theorem.”

    Unfortunately, resampling shows your concern to be misplaced. A good place to start is Shen’s paper.

  31. Al Tekhasski said

    Steven Mosher wrote: “The lacunae in logic here are astounding. Nothing in the theory predicts or prevents local variation from the overall trend.”

    Really? How is that? Could you name any mechanism related to a global atmospheric increase of CO2 that would cause year-averaged temperatures to trend up for 100 years in a row in one spot, while an adjacent spot just 50 km away has a 100-year downward trend?

  32. Al Tekhasski said

    Mosher wrote: “For example, I may do a simulation of an airplane at high AOA, and I will NEVER get the nose vortex modelled ‘realistically’. But I can get a pretty good idea of the regime in which it forms.”

    Well, apparently you did not get the idea of “recurring” perturbations. Your vortex (or whatever) forms and goes. It goes outside the domain of calculation; that is why it does not matter if it is slightly off. It is not so in problems like buoyancy-driven convection in finite domains.

  33. Al Tekhasski said

    Steven Mosher wrote: “Unfortunately, resampling shows your concern to be misplaced. A good place to start is Shen’s paper.”

    Resampling? Of what? Could you kindly point me to a set of 100-year-long records on a regular grid of, say, 12×12 km, covering some reasonable area, say 500×500 km?

  34. BillyBob said

    I think the sine wave is the true global temperature signal. Anything that implies otherwise is most likely UHI artifacts or adjustment failures.

  35. Brian H said
    September 5, 2010 at 2:19 am

    The desert temperature swings indicate swift radiative loss of heat, which doesn’t happen when water is present in the atmosphere, much less cloud cover, however thin. Thus CO2 does a lousy job of “back-radiating”.

    Verity Jones and Tony Brown (Tonyb),
    Hey, this is it right here… If we want to find the CO2 back radiation then we should look at the desert data from around the globe… do it, really do it!
    To get the answer, subtract the 60-year wave from the desert data; that would leave the CO2 footprint. But beware: it might be very small, like 0.05 C per 100 years. It is there – everyone knows this – but what is it really?

    Jeff, do it – you have the system in place to run the numbers:
    take the desert data and add it to the inverted 60-year cycle.

    Where is Layman Lurker when I need him!

    Tim L
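    What Tim L proposes might be sketched like this (synthetic data; the 60-year period and the tiny residual trend are assumptions taken from the discussion above, not results from real desert records):

```python
# Sketch: fit a linear trend plus a ~60-year sinusoid to annual anomalies
# by least squares, then read off the residual linear trend.
import numpy as np

def fit_trend_plus_cycle(years, temps, period=60.0):
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    w = 2 * np.pi / period
    # Design matrix: constant, linear trend, sine and cosine of the cycle.
    X = np.column_stack([np.ones_like(years), years - years.mean(),
                         np.sin(w * years), np.cos(w * years)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    trend_per_century = coef[1] * 100
    cycle_amplitude = np.hypot(coef[2], coef[3])
    return trend_per_century, cycle_amplitude

# Synthetic "desert" series: 0.05 C/century trend + 0.3 C 60-year cycle.
rng = np.random.default_rng(2)
yrs = np.arange(1900, 2010)
t = 0.0005 * (yrs - 1950) + 0.3 * np.sin(2 * np.pi * yrs / 60) \
    + rng.normal(0, 0.2, yrs.size)
print(fit_trend_plus_cycle(yrs, t))  # roughly (0.05, 0.3), within the noise
```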
