What would it take?

Half of global warming has happened since 1990, yet less than half of the temperature stations are available since 1990?

Why?

A fully open, well-funded, quality analysis of temperature stations IS without doubt warranted, yet where are the money-hungry researchers?

If we are to spend trillions of dollars on fixing warming, don’t you think we should be able to work out how to read ONLY 7000 thermometers regularly?!

One of my beliefs is that the IPCC is a money-hungry political organization with no interest in correcting any scientific details, like glaciers that aren’t melting, which might get in the way of its growth and funding. So if I’m right, where are the demands for funding better and more complete thermometer data?

So I set out to make a rough estimate of what it would take to document, quality control, keep current, and make public all of the temperature station records of the last 150 years. The goal was to find out whether it is so impossibly difficult that only computers are up to the task.

Let’s say 7000 stations, one station per eight-hour day. Nope, it’s govt, so let’s say one station per seven eight-hour days: seven work days to stare at a single temperature dataset. That’s a total of 49,000 person-days to document all 7000 stations, examine the data closely, and make it public. Let’s say 150 government work days per year, allowing for vacation, sick days, mental disability pay, whatever. That’s 326 person-years to look through each and every temperature station in detail, record its individual nuances, and manually recommend and document temperature step or trend corrections.

So let’s say that every govt. employee on this difficult job has a menial 80,000 USD salary plus 60,000 USD in benefits. You know, a mid-level government employee. So assume 140,000 USD per year × 326 = 46 million USD. Now these people will need management, and since it’s govt, management should at least double the cost of the workers: 100 million dollars.

So for 100 million dollars, less than 1 percent of the 30 billion committed to Copenhagen each year by the US, we can pay US government employees, each working through about 21 stations per year, to catalog and make public all of the global temperature data in a single database for everyone to see. Add 50% for equipment, data storage and other crap.

Say I miscalculated and didn’t factor in inflation, earthquakes or the extra sweat induced by global climate change. Let’s say TWO HUNDRED FIFTY MILLION for a fully open accounting of temperature stations in one year.
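
For anyone who wants to check the arithmetic, here is a minimal Python sketch that reproduces the estimate above. Every input is one of the rough assumptions stated in the post (7000 stations, seven workdays each, 150 productive days a year, 140,000 USD per head), not an official figure, and the post then rounds the results up for margin.

    # Back-of-the-envelope reproduction of the estimate above. All inputs are
    # the rough assumptions stated in the post, not official figures.
    stations = 7000                   # stations worth documenting
    days_per_station = 7              # seven 8-hour work days per station
    workdays_per_year = 150           # assumed productive govt work days per year
    salary_plus_benefits = 140_000    # USD per employee per year (80k salary + 60k benefits)

    person_days = stations * days_per_station              # 49,000 person-days
    person_years = person_days / workdays_per_year          # ~326.7 person-years
    stations_per_person_year = stations / person_years      # ~21.4 stations per employee per year
    labor_cost = person_years * salary_plus_benefits         # ~45.7 million USD
    with_management = labor_cost * 2                         # double for management overhead
    with_equipment = with_management * 1.5                   # add 50% for equipment and data storage

    print(f"person-years:              {person_years:,.1f}")
    print(f"stations per person-year:  {stations_per_person_year:.1f}")
    print(f"labor cost:                ${labor_cost / 1e6:.0f}M")
    print(f"with management:           ${with_management / 1e6:.0f}M")
    print(f"with equipment:            ${with_equipment / 1e6:.0f}M")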

I finally found a new tax I would be happy to pay. A one time expenditure which is:

Less than 0.8 percent of the money that the UN distributes every year for global warming research.

Less than 0.3 percent of what the United States will spend for economic recovery in 2010 — on climate research alone.

Less than 0.03 percent of the total 2009 Obama economic recovery bill.

Yes, the US Obama “stimulus” will spend 4-5 times NASA’s budget on climate research in 2010. One tenth of our wartime military budget on climate – to invigorate the economy. Potentially a disingenuous statement from our good Democratic leaders?

A consensus of datasets by government-funded scientists is not an acceptable answer, and it’s damned well time the scientists started making sense on this. Complete, open data, open explanations, open code, open results. Then, if everything falls in line, we can begin negotiations on a cause, a consensus and an appropriate response.

If we are in such huge danger as a planet, this should be an easy sell. In fact, a blogger shouldn’t have to write it.

I suppose they already have the curve they want though, don’t they?

43 thoughts on “What would it take?”

  1. I’m all for a proper effort to maintain a collective, open, up-to-date temperature dataset. And I’m glad you’ve found a tax you like. But I keep seeing these statements like “half of the temperature stations are available since 1990”.

    Not true. You mean “are available through GHCN”. But GHCN never claimed to be the resource you are looking for. It was a 1990’s project to compile a historical climatology. That’s a valuable effort, but not intended to meet the expectations you are outlining here.

    As I quoted in another thread, from Peterson in 1997 re the future of the database:

    Updates.
    Thirty-one different sources contributed temperature data to GHCN. Many of these were acquired through second-hand contacts and some were digitized by special projects that have now ended. Therefore, not all GHCN stations will be able to be updated on a regular basis. Of the 31 sources, we are able to perform regular monthly updates with only three of them. These are
    1) the U.S. HCN, 1221 high quality, long-term, mostly rural stations in the United States;
    2) a 371-station subset of the U.S. First Order station network (mostly airport stations in the United States and U.S. territories such as the Marshall and Caroline Islands in the western Pacific); and
    3) 1502 Monthly Climatic Data for the World stations (subset of those stations around the world that report CLIMAT monthly over the Global Telecommunications System and/or mail reports to NCDC). Other stations will be updated or added to GHCN when additional data become available, but this will be on a highly irregular basis.

  2. #1, Nick,

    This is why……

    Almost all the data we have in the CRU archive is exactly the same as in the Global Historical Climatology Network (GHCN) archive used by the NOAA National Climatic Data Center [see here and here]. The original raw data are not lost. I could reconstruct what we had from U.S. Department of Energy reports we published in the mid-1980s. I would start with the GHCN data. I know that the effort would be a complete waste of time, though. I may get around to it some time. The documentation of what we’ve done is all in the literature.

  3. #2
    Jeff, that’s a complete non sequitur. Jones is talking about “lost” data from the 1980’s and before. And yes, of course GHCN is a primary source for that.

  4. The important bit is in the first sentence where Phil says CRU is almost entirely GHCN.

    “Almost all the data we have in the CRU archive is exactly the same as in the Global Historical Climatology Network (GHCN)”

    However, even if it weren’t, the fact that some other places have some data does not negate the point that this data needs to be collected and visually QC’d. It is insanity to take a position against that when we’re talking about trillions of dollars.

  5. #1
    Nick,
    “It was a 1990’s project to compile a historical climatology. That’s a valuable effort, but not intended to meet the expectations you are outlining here.”

    If that were the only issue it would perhaps not be so bad, but what do you think creates the frequent pronouncements we hear from GISS about ‘warmest month/year on record’ etc? That is the GHCN database being used there (the only substantial update has been the US portion with the inclusion of USHCN).

    So even if it was only a project to create a climate history, it is being used as much more now – and if there is so much data missing, it is not fit for that purpose. The answers coming out of it should not be used for scaremongering with half answers, and certainly not as a basis for policy.

  6. Jeff #4
    “The important bit is in the first sentence where Phil says CRU is almost entirely GHCN.”
    No, you can’t take that out of context. He says “the data we have in the CRU archive”. And he’s answering a criticism about the inability to locate some “lost” data (pre-1990), which he could reconstruct from a mid-1980s report. That’s clearly “the data” that he is referring to in the quote.

    It’s certainly not true that CRU post 1990 is almost entirely GHCN. As Peterson says above, GHCN, at least post 1997, is almost entirely US. If that’s all CRU had, I think someone would have noticed.

  7. Nick,

    I understand what you’re trying to say but I think you’re wrong about this. Phil’s point is that the archive IS the data. Phil is making the point that the data isn’t lost because most of it is exactly the same as the GHCN data.

    If we have lost any data it is the following:
    1. Station series for sites that in the 1980s we deemed then to be affected by either urban biases or by numerous site moves, that were either not correctable or not worth doing as there were other series in the region.
    2. The original data for sites for which we made appropriate adjustments in the temperature data in the 1980s. We still have our adjusted data, of course, and these along with all other sites that didn’t need adjusting.
    3. Since the 1980s as colleagues and National Meteorological Services (NMSs) have produced adjusted series for regions and/or countries, then we replaced the data we had with the better series.
    In the papers, I’ve always said that homogeneity adjustments are best produced by NMSs. A good example of this is the work by Lucie Vincent in Canada. Here we just replaced what data we had for the 200+ sites she sorted out.

    Regarding this bit:

    As Peterson says above, GHCN, at least post 1997, is almost entirely US. If that’s all CRU had, I think someone would have noticed.

    Nobody claimed that ALL of the GHCN data is in CRU, only that most of CRU’s data is from GHCN.

  8. #5 Vjones
    “That is the GHCN database being used there (the only substantial update has been the US portion with the inclusion of USHCN).”
    No, it isn’t. It’s the GISS database (which actually pre-dated GHCN). Of course GISS will use GHCN data where they can. But as I said in #6, GHCN since 1997 is almost entirely US. GISS has much more than that.

  9. To me, the whole idea of building a “global” database that meets a set standard and is reliable based on land stations was, to put it nicely, naïve in the extreme, and it goes down from there. There are obvious problems with this idea, such as:

    1. The network is based on trust of each country’s reporting. Now who here believes that a country like North Korea or Iran will honestly take the readings instead of having someone sitting in a room filling out CLIMAT reports with made-up numbers? The leaders of those two countries would do it for GP if nothing else.

    2. The network is based on political stability. It’s not really a problem with places like Italy or Australia, but let’s take an honest look at Africa and the Middle East. Africa for the longest time changed governments so frequently that there was no one responsible for the program in those countries. Then there are warfare, genocide and famine to consider in Africa. In the Middle East in the ’80s the loss of data can be tied to the start of the Iran/Iraq war, and you can follow the spread of missing data through time to Desert Shield/Storm, Operation Iraqi Freedom and the War on Terror. When people are trying to kill you and/or you don’t know where your next meal is coming from, reading a thermometer is not a top priority.

    That is why, if you are going to attempt to make a “global” dataset, satellite is the way to go. You don’t have to worry that some nutcase in charge of a country is feeding you bad data, if he is sending out any at all. Nor do you have to worry about the person in charge of country X’s data getting stood up against a wall and executed in a civil war, or dying in a ditch from starvation. Those are just the man-made reasons for bad or missing data; natural disasters are also a problem. Massive earthquakes like in Haiti or tsunamis such as seen in Indonesia will cause data loss.

  10. 1-that the data exists but isn’t being used is WORSE than if the data had actually stopped being collected.

    It is even easier than Jeff suggested to get this data. Yet nobody is bothering to make the effort to get the data into the analyses of the major GMST providers.

    You can find tons of stations which just “stop” reporting according to the official GMST sources. When you dig for the original data, it is readily available.

    Steve once referred to the “GHCN Daily Parallel Universe”.

    http://climateaudit.org/2008/03/05/ushcn-raw-a-small-puzzle/

    This is on top of the fact that John Christy has been able to locate something like 10 stations for every one that is used in East Africa, Central California, and Northern Alabama – that’s not recently, that’s stations used at all. Nobody does the grunt work to get them digitized, so they get ignored.

    I won’t bother to note what kind of biases this always seems to create. You can read his papers for that.

  11. #8
    Nick, you just proved you don’t know what you are talking about. GISS does not have their own dataset; they use three datasets provided by other agencies, plus one long temperature record from a single station. Maybe you ought to go read the GISTemp page first:

    Current Analysis Method

    The current analysis uses surface air temperature measurements from the following data sets: the unadjusted data of the Global Historical Climatology Network (Peterson and Vose, 1997 and 1998), United States Historical Climatology Network (USHCN) data, and SCAR (Scientific Committee on Antarctic Research) data from Antarctic stations. The basic analysis method is described by Hansen et al. (1999), with several modifications described by Hansen et al. (2001) also included.

    http://data.giss.nasa.gov/gistemp/


    GISS Temperature Analysis
    =========================
    Sources
    ——-

    GHCN = Global Historical Climate Network (NOAA)
    USHCN = US Historical Climate Network (NOAA)
    SCAR = Scientific Committee on Antarctic Research

    Basic data set: GHCN – ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
    v2.mean.Z (data file)
    v2.temperature.inv.Z (station information file)

    For US: USHCN – ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly
    9641C_200907_F52.avg.gz
    ushcn-v2-stations.txt

    For Antarctica: SCAR – http://www.antarctica.ac.uk/met/READER/surface/stationpt.html
    http://www.antarctica.ac.uk/met/READER/temperature.html
    http://www.antarctica.ac.uk/met/READER/aws/awspt.html

    For Hohenpeissenberg – http://members.lycos.nl/ErrenWijlens/co2/t_hohenpeissenberg_200306.txt
    complete record for this rural station (thanks to Hans Erren who reported it to GISS on July 16, 2003)

    http://data.giss.nasa.gov/gistemp/sources/gistemp.html

    From that they combine it with Hadley SST to get a land/ocean anomaly output.

    Step 4 : Reformat sea surface temperature anomalies
    —————————————————
    Sources: http://www.hadobs.org HadISST1: 1870-present
    http://ftp.emc.ncep.noaa.gov cmb/sst/oimonth_v2 Reynolds 11/1981-present

    http://data.giss.nasa.gov/gistemp/sources/gistemp.html

    Now that we’ve established that there is no such thing as the “GISS dataset” and shown that the basic dataset for GISTemp (as well as for HadCRUT3) is GHCN, any problems in GHCN will affect not only GISTemp but HadCRUT3 as well.

  12. #10
    “1-that the data exists but isn’t being used is WORSE than if the data had actually stopped being collected.
    It is even easier than Jeff suggested to get this data. Yet nobody is bothering to make the effort to get the data into the analyses of the major GMST providers.”

    Just not true. The data is being collected. It’s just not being assembled by GHCN. It’s no longer their job. The “major GMST providers” themselves bother to get it into their analyses. What was all the CRU data that people were complaining about?

  13. #12

    Nick, maybe you ought to visit the NCDC site, where they tell you it is their job:

    About NCDC

    NCDC is the world’s largest active archive of weather data. NCDC produces numerous climate publications and responds to data requests from all over the world. NCDC operates the World Data Center for Meteorology which is co-located at NCDC in Asheville, North Carolina, and the World Data Center for Paleoclimatology which is located in Boulder, Colorado.

    NCDC supports a three tier national climate services support program – the partners include: NCDC, Regional Climate Centers, and State Climatologists.

    http://www.ncdc.noaa.gov/oa/about/about.html

    Take note of the line that says “World Data Center for Meteorology”. That is where the world’s records, such as GHCN, are kept:

    World Data Center(WDC) for Meteorology, Asheville is one component of a global network of discipline subcenters that facilitate international exchange of scientific data. Originally established during the International Geophysical Year (IGY) of 1957, the World Data Center System now functions under the guidance of the International Council of Scientific Unions ( ICSU).

    The WDC for Meteorology, Asheville is maintained by the U.S. Department of Commerce, National Oceanic and Atmospheric Administration (NOAA) and is collocated and operated by the National Climatic Data Center (NCDC).

    In accordance with the principles set forth by ICSU, WDC for Meteorology, Asheville acquires, catalogues, and archives data and makes them available to requesters in the international scientific community. Data are exchanged with counterparts, WDC for Meteorology, Obninsk and WDC for Meteorology, Beijing as necessary to improve access. Special research data sets prepared under international programs such as the IGY, World Climate Program (WCP), Global Atmospheric Research Program (GARP), etc., are archived and made available to the research community. All data and special data sets contributed to the WDC are available to scientific investigators without restriction.

    http://www.ncdc.noaa.gov/oa/wdc/index.php

    Global Historical Climate Network (GHCN) dataset. GHCN is a comprehensive global baseline climate data set comprised of land surface station observations of temperature, precipitation, and pressure. All GHCN data are on a monthly basis with the earliest record dating from 1697.

    http://www.ncdc.noaa.gov/oa/wdc/index.php

    As shown from the NCDC site it is NCDC’s job to compile the temperature dataset and archive it.

  14. 12-I chose my words carefully but you completely changed what I said. GHCN IS STILL GETTING THE DATA. CRU et al apparently aren’t using it, AS IS OBVIOUS FROM THEIR OWN GRAPHS OF STATION COUNT!

    The data is out there. You said it yourself. BUT IT ISN’T BEING USED!

  15. Specifically, on GHCN’s end, the problem is that they no longer take the data WHICH THEY STILL COLLECT (the daily data is still around) and create monthly data from it. Read Steve’s post. The Daily GHCN data for stations which neither GHCN nor GISS nor anybody says have monthly values ARE STILL REPORTING AND ARE IN GHCN’s POSSESSION.

  16. It would be nice if NCDC removed the pay wall for their data. Maybe now is a good time for all you bloggers to apply some public relations pressure on that.

  17. #12 Boballab
    Bob, yes, I have looked into the detail of the GHCN raw file and you are right. I was going by Peterson’s statement in the 1997 paper and my knowledge of the GHCN adjusted file, which hasn’t been updated much outside the US. But the raw GHCN file has, and does seem to be what GISS primarily uses. So my apologies to you, TTCA and indeed Jeff. The number of raw stations is more significant than I thought.

  18. If you look at the first chart, the number of stations, then ask yourself how this would play out in some other area of science. Let’s say it’s medical: we are tracking outbreaks of some disease, cholera for instance, using particular reporting locations. So we put forward a historical account which shows an upward trend of a certain magnitude, and argue that this will continue.

    Meanwhile, the number of reporting stations we use started in the single digits, went up to near 6000, and has fallen back to close to 1000. The geographical distribution has also changed, populations have changed, hygiene has changed.

    Don’t you think we would start to get just a bit worried about sampling accuracy and the extent to which changes in reporting station location and quantity affected the consistency of the record, and thus our ability to plot trends reliably?

  19. The whole process of gathering, processing and reporting of these temperatures needs to be completely re-evaluated and redone. There’s just too much doubt. The first thing I would do is discard all readings that are located at urban growth sites. These are definitely “poisoned” and should be invalidated for the purpose of generating an average world temperature. If I were to take the temperature of a sick patient, I would not be placing the thermometer on the cigarette that patient happened to be smoking at the time.

  20. #17, I just read your comment; apologies are not required to me. It’s just data.

    It just makes sense that as much of it as possible would be collected and maintained in a central database with QC info, similar to surfacestations, but with manual inspection of time series for steps. We might find exactly the same results as CRU, but my guess is that we could identify exactly how much of a problem urban warming is, pin down the actual warming trend to finally reconcile with the satellites, and most importantly better establish temperatures in the 1920s to see the net change over time.

  21. I don’t see the point. The data we have is what we have. I want to see the raw data, as it was recorded since 1701. The raw data is data. “Corrected” or “adjusted” or “smoothed” or “glitch removed” or anything is not real data, it’s opinion. Someone’s opinion that I may not agree with. A call for “quality control” is a call to modify the data or discard some of it.
    Anyone with practical scientific or engineering experience knows that the data ALWAYS contain errors. Proper procedure is to take repeated readings and average them. The more readings taken, the more the average will approach the correct answer.
    The drastic reduction of the number of stations in GHCN after 1980 is suspicious. I have found nothing on the internet explaining how the number of stations was reduced from 100,000 to 14,000. The data would be better if we had 100,000 stations reporting today.
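
    As a toy illustration of the averaging point above, here is a short Python sketch under the assumption of independent, unbiased reading errors; the temperature and error values are invented for the example. The mean does tighten around the true value as the count grows, but note that systematic biases (siting, instrument drift) do not average out this way.

        # Toy illustration: the mean of independent, unbiased readings
        # approaches the true value as the number of readings grows.
        # Systematic biases do NOT average out like this.
        import random

        random.seed(0)
        true_temp = 15.0   # hypothetical "true" temperature, deg C
        noise_sd = 0.5     # hypothetical random reading error, deg C

        for n in (10, 100, 1000, 10000):
            readings = [random.gauss(true_temp, noise_sd) for _ in range(n)]
            mean = sum(readings) / n
            print(f"n={n:6d}  mean={mean:7.3f}  error={mean - true_temp:+.3f}")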

  22. David,

    I fully agree that the true, untouched raw data HAS to be part of the record. However, regarding QC, certainly you would agree that station moves, which we do have records of, will cause steps and should be documented and handled.

    Also, some temp records have visually obvious steps in them. If they were manually compared to a nearby station the steps could be corrected (if done transparently) or removed. Another issue is having photos of nearby structures to grade stations with different levels of thermal contamination.

    With reasonable procedures for the comparison and quantification of microclimate warming (urban), good station data can be separated from bad. After that, appropriate methods to blend 7000-15000 spatially distributed anomalies can be determined and a true temperature trend can be produced (a rough sketch of one such blending approach follows at the end of this comment).

    Right now we have corrections by guesstimate and taxes by truckloads. It’s dumb as hell.

    The stations which dropped out in 1992 still exist, are still measuring yet haven’t added to the temp data. Since people claim warming has only happened since 1978, that is a very suspicious problem in my opinion.
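
    Here is the rough blending sketch referred to above. One generic approach is to average station anomalies within latitude/longitude grid cells, then weight the cells by the cosine of latitude. This is only an illustration of the idea, not the method any particular group actually uses, and the (lat, lon, anomaly) layout is hypothetical.

        # Generic illustration of area-weighted blending of station anomalies.
        # Inputs are hypothetical (lat, lon, anomaly) triples; not anyone's actual method.
        import math
        from collections import defaultdict

        def gridded_mean(stations, cell_deg=5.0):
            """stations: iterable of (lat, lon, anomaly). Returns an area-weighted mean."""
            cells = defaultdict(list)
            for lat, lon, anom in stations:
                key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
                cells[key].append((lat, anom))
            num = den = 0.0
            for values in cells.values():
                cell_lat = sum(la for la, _ in values) / len(values)
                weight = math.cos(math.radians(cell_lat))   # cell area shrinks toward the poles
                cell_mean = sum(a for _, a in values) / len(values)
                num += weight * cell_mean
                den += weight
            return num / den if den else float("nan")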

  23. Raw data is OK, but we really need to know the surroundings. Do we have a pristine station, grass, flowers and birds, or do we have an air conditioner, barbeque and an upside down boat? The surfacestations project attempts to document that, and results are not exactly encouraging. Without knowing when the ‘added features’ appeared at the measuring site, we can’t make a valid evaluation of the data.

    And this doesn’t include instrumentation and recording errors. People looking at thermometers make mistakes now and then. Thermistors need calibration. I’m not sure it averages out. We know the ‘adjustments’ mostly go in whatever direction the researcher needs for the next grant application, which doesn’t generate a lot of confidence in the results.

    So, I’m not sure we would get much benefit.

  24. Let us not get sidetracked on the issue of station numbers and what is a proper number of stations, given the uncertainty when x stations are used to represent a gridded area X. My guess is that a proper analysis of this source of uncertainty has not been made to date. Another issue here is the uncertainty I noted above and how it plays out when we want regional temperature trends. Global trends will be less sensitive to the sparseness of stations globally, but not immune from those uncertainties.

    We can document, I would think, the portion of GHCN stations that are used by GISS and CRU. From what I have read, I think that most of what GISS and CRU use comes from GHCN. The point is that the number of stations in all the temperature data sets is declining, and particularly so in recent times.

    Jeff ID, I see some problems with the reliability of the older data from stations and what that reliability means in terms of longer-term temperature trends. The Watts team evaluations of USHCN station CRN ratings would appear to indicate potential major problems with non-climate influences on temperature readings and, if changing over time, on temperature trends. The influence of that poor reliability will not be overcome by maintaining more stations in the present and future. More statistical analyses of the Watts team findings would be more instructive in this matter.

    We have 2 sources of satellite data from 1979 and going forward. Why not ensure that these measurements are reliable and forget about the “old-fashioned” station data?

    If I were to rely on station data going forward I would not make it a project run by government employees. I would run it with interested parties who would be periodically checked for quality control by some independent body.

  25. Bruce, I’ll grant you that NS has been a PITA on some prior topics, but there is nothing wrong with the way the debate has gone on this thread. The way Jeff runs things, NS has the right to misstep and look silly, as many of us have at one time or another. Learning and debating in public isn’t always pretty, but in this instance the conversation was instructive for me and, I suspect, some other lurkers. NS challenged Jeff to prove he was right, and by elaborating on statements, Jeff and others did so effectively. Pretty good Socratic effort, and more evidence that this medium is going to change scientific discourse, as ongoing argument is simultaneously reviewed by peers, laymen and nudniks who occasionally find an acorn, blind-squirrel style.

  26. In the alternative, throw out the historical records for stations that are no longer tracked, create a baseline from the average of the surviving 1500 stations, and calculate anomalies for each of these 1500 against their own, apples to apples, history.

    It’s bad science and horrible math to say this “average” is higher (or lower) than THAT average if the sample sets aren’t comparable. Either we have to collect all the data the same way, or we have to omit old data from stations that don’t get collected at present.

  27. Pouncer, I was also thinking along those lines. If you find the averages come out pretty close to the same when compared to the full set, then I think that would give a point to GHCN (whether they got lucky or not). If there are differences between the two, then someone has some ’splainin’ to do.

  28. #27
    My understanding is that station anomalies are calculated relative to their own mean over the base period – not relative to some aggregate of stations. The GHCN file v2.mean, for example, has individual station anomalies, and I don’t see that they could have been calculated any other way. So what you suggest wouldn’t change those 1500 anomalies.
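
    A minimal Python sketch of the per-station baseline idea described above: each station’s anomalies are computed against its own 1961-1990 monthly means, so adding or dropping other stations leaves them unchanged. The record layout here is invented for illustration and is not the actual GHCN v2.mean format.

        # Per-station anomalies relative to the station's OWN base-period monthly means.
        # Hypothetical record layout: list of (year, month, temp_c) for ONE station.
        from collections import defaultdict

        def monthly_anomalies(records, base=(1961, 1990)):
            base_sums = defaultdict(lambda: [0.0, 0])   # month -> [sum, count] over base period
            for year, month, temp in records:
                if base[0] <= year <= base[1]:
                    base_sums[month][0] += temp
                    base_sums[month][1] += 1
            climatology = {m: s / n for m, (s, n) in base_sums.items() if n > 0}
            return [(year, month, temp - climatology[month])
                    for year, month, temp in records if month in climatology]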

  29. Kenneth, I believe you are very mistaken about how much effect the loss of stations has globally. The obvious example is when you look at the difference in Africa: by 2008 there was basically zero data for about 75% of the entire continent. You can see the effect just by changing the infill parameter from 1200 km to 250 km.

    Here is a link to an anomaly map from NOAA for Jan to Dec 2008 for GHCN with zero infill:

    Now compare to the GISS Anomaly map for the same time period with 1200km infill:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=12&sat=4&sst=0&type=anoms&mean_gen=0112&year1=2008&year2=2008&base1=1961&base2=1990&radius=1200&pol=reg

    Notice that Canada is almost entirely infill, as are Africa and the center section of South America. Russia has huge gaps in it. Greenland and the Arctic Ocean are 100% an infill artifact. Notice the average is 0.44 in the upper right-hand corner. Now watch what happens when I go from 1200 km infill to 250 km infill:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=12&sat=4&sst=0&type=anoms&mean_gen=0112&year1=2008&year2=2008&base1=1961&base2=1990&radius=250&pol=reg

    Notice the great big gaps in the data, and you still have some infill. Also notice the average in the corner is now 0.51, up from 0.44. What would it be with no infill?

    Next is a polar view. (Keep in mind that they are infilling the entire Arctic Ocean from surrounding land areas, and as shown Canada is very lacking in data in the north; you get the same problem in Russia.)
    1200km:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=12&sat=4&sst=0&type=anoms&mean_gen=0112&year1=2008&year2=2008&base1=1961&base2=1990&radius=1200&pol=pol

    250Km:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=12&sat=4&sst=0&type=anoms&mean_gen=0112&year1=2008&year2=2008&base1=1961&base2=1990&radius=250&pol=pol

    As shown by the maps, the anomalies for both polar regions are not made with a lot of solid data. Phil Jones himself said in one of the CRU emails that infilling can induce false readings; that was how GISS got 2005 so warm. This is not just a little infilling here and there; it is used to make entire anomalies for large swaths of the globe.
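
    For anyone who wants to repeat the comparison above, here is a small Python helper that rebuilds the quoted map URLs for different infill radii. The parameter names are copied straight from the links in the comment, not taken from any GISS documentation.

        # Rebuild the GISS map URLs quoted above, varying only the infill radius.
        from urllib.parse import urlencode

        BASE = "http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py"

        def giss_map_url(radius_km, projection="reg", year=2008):
            params = {
                "year_last": 2009, "month_last": 12, "sat": 4, "sst": 0,
                "type": "anoms", "mean_gen": "0112",
                "year1": year, "year2": year,
                "base1": 1961, "base2": 1990,
                "radius": radius_km, "pol": projection,
            }
            return BASE + "?" + urlencode(params)

        for radius in (1200, 250):
            print(giss_map_url(radius))           # global map
            print(giss_map_url(radius, "pol"))    # polar view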

  30. I don’t mind the reduction of the numbers from 100,000 to 14,000, or whatever it is. The Argo project has around 3000 floats in the oceans and seas, which cover a much larger area than all the land. It’s the position of the thermometers they are still using that bothers me. As I said, they should let go all those in urban areas, such as airports, buildings, car parks, and the like. It’s common sense to do so. But then that’s the problem, there’s not much common sense when it comes to climate science.

  31. #29
    Nick Stokes,
    The GHCN file v2.mean is the unadjusted station temperature data, not anomaly data.

    #30
    Peter of Sydney
    Unfortunately, if you take away the airports: http://82.42.138.62/GISSMaps/stationairportcodes.asp
    then take away the urban areas: http://82.42.138.62/GISSMaps/stationpopulationcodes.asp
    you lose a lot of world coverage (note most airports are classed as rural)
    (These interactive maps are a bit slow to load – hit NO if it asks whether you want to abort the code.)

  32. I am grateful for your work. Thank you!!

    I am spending all my spare time on this exposure of climate science fraud. I have stored archives and found ways to contact people. But today I spent time reading climateaudit (link below), and most of the commenters were bemoaning the fact that the UK Parliamentary investigation won’t be effective. So I have been emailing this link to anyone I can find who is in a position to publish.

    Samizdata (see bottom link) is an awesome site full of intelligent people. On January 10, one of their contributors wrote about Climategate being similar to the Cold War. But the best part was the comments section. The contributors there had some realistic suggestions for getting past the news blackout on this issue. They suggest we concentrate on Pachauri, as it will allow politicians to save face. And that may be one of the single biggest hurdles we have to overcome.

    As we speak:

    NASA is under scrutiny:
    http://climateaudit.org/2010/01/23/nasa-hide-this-after-jim-checks-it/#comment-217428

    But what politician is going to attack NASA, NOAA, etc.? If you look at NASA’s site today, they have a brand-new explanation (1-21-10) for everyone to read: http://climate.nasa.gov//news/index.cfm?FuseAction=ShowNews&NewsID=248 Gavin Schmidt has smoothed over the arguments raised by KUSI TV.

    The CRU is under scrutiny:
    http://blogs.telegraph.co.uk/news/jamesdelingpole/100023449/wow-uk-parliamentary-investigation-into-climategate-may-not-be-a-whitewash/ But the news media and the politicians will do their best to divert attention to the religious, or political, or social divergences among skeptics.

    This is an article I want people to see. There are good strategies here:
    http://www.samizdata.net/blog/archives/2010/01/cold_wars_1.html

  33. boballab @ Post #30:

    Kenneth, I believe you are very mistaken about how much effect the loss of stations has globally. The obvious example is when you look at the difference in Africa: by 2008 there was basically zero data for about 75% of the entire continent. You can see the effect just by changing the infill parameter from 1200 km to 250 km.

    I do not disagree with what you say in this post, but you misread my point in mine. My point is that if station numbers are cut back precipitously in a region or area of the globe, with everything else being equal, the uncertainty for that area will increase more than for the entire globe – as that area makes up only a small fraction of the globe. I would rather discuss the bigger issues here than get sidetracked, as we so frequently do, into other issues. Of course, if stations all over the globe are cut back then that creates more global uncertainty.

    My point was: why not analyze and determine the validity of the satellite measurements and use them, instead of depending on many differently manned stations whose numbers are shrinking? The upper Arctic and Antarctic regions might need to be measured by stations, as the satellites do not cover those areas of the globe. I have little faith in historical temperature records even though the station numbers were greater. And since we are looking for longer-term trends, I would think we would need to determine, with some detective work and statistical analyses, what changed with these stations back in time.

  34. Kenneth, that explains it better. I got the impression from the first post that you thought it was relegated to regional use only. Also, it seems we are in agreement that surface stations are not the way to go, due to various man-made and natural problems with such a network. To me surface stations are more suited to seeing local and regional trends, as long as you have decent coverage.

    To me surface stations are more suited to seeing local and regional trends, as long as you have decent coverage.

    Boballab, I am glad we could resolve the points being made. I was going to add that station data still has a place in the realm of local weather information. Even locally, microclimate differences (attributable to climate and not non-climate conditions) can produce significantly different temperatures and other climate measures. In Chicagoland, the “official” weather reporting location was Midway Airport, closer to the lake and conditioned by Lake Michigan, and was then moved to O’Hare Airport, which is away from the lake effects. Most of the more conscientious meteorologists point to this difference when reporting record temperatures and other climate events.

    Weather reporting for temperatures in our area gives a large number of reports from all the suburban and rural areas within 60 miles of Chicago (which are obviously not official reporting stations), and it would appear that we do not lack for local sources of weather data, or the capability to compare one to another for known climate differences and for mutual confirmation. I have noted that a new reporting station at the Aurora airport has “set” new records for winter low temperatures. I suspect that this is a true microclimate effect, as I am not aware of a practical non-climatic way of producing colder air temperatures. I am open to suggestions, though.

  36. Jeff Id wrote:

    >I fully agree that the true, untouched raw data HAS to be part of the record. However, regarding QC, certainly you would agree that station moves, which we do have records of, will cause steps and should be documented and handled.

    Actually, I disagree. A station move may make a change in station readings; temperature is different in different places. But who is to say which measurements are better, the old location or the new location? What makes one location better than the other? I don’t believe you can say. A reading is a reading. I see no logical reason for preferring the old readings, the new readings or some mathematical function of old, new and adjacent readings. I see no good reason for step smoothing. The data is the data; modifying it does not improve it.
    I just average all the data together. I figure station moves that increase the readings will be averaged out by other station moves that decrease the readings.

    >Also, some temp records have visually obvious steps in them. If they were manually compared to a nearby station the steps could be corrected (if done transparently) or removed. Another issue is having photos of nearby structures to grade stations with different levels of thermal contamination.

    Doubtful. From my house to center of town is less than 5 miles. Usually center of town is 5 degrees F hotter in summer and 5 to 20 degrees F colder in winter. This happens ’cause my house is high up in the White Mountains and center of town is deep in a valley. Doing some kind of data switchy-swap from stations like that isn’t going to smooth anything.

    >With reasonable procedures for the comparison and quantification of microclimate warming (urban), good station data can be separated from bad. After that, appropriate methods to blend 7000-15000 spatially distributed anomalies can be determined and a true temperature trend can be produced.

    You are talking about editing the data. I’m not sure what you mean by “true”. We have temperature measurements. From them we can compute various mathematically interesting things, such as the mean. That’s the best we can do, and all we can know. The various mathematically interesting things may (or may not) have various amounts of physical significance. But they are not “true”. They are just measurements.

    >Right now we have corrections by guesstimate and taxes by truckloads. It’s dumb as hell.

    >The stations which dropped out in 1992 still exist, are still measuring yet haven’t added to the temp data. Since people claim warming has only happened since 1978, that is a very suspicious problem in my opinion.

    I’m just an amateur, working in my spare time, with nothing but the Internet to supply my data. Sites omitted from GHCN v2.mean are not available to me. They may still be reading their thermometers, but that doesn’t do me any good.

  37. #37, there is a tendency amongst some people to simply recognize that the data is crap but then deny the fact that there are actually reasonable methods to retrieve the all important trend information from it.

    For instance, if a station is moved there will likely be a step in the data of some magnitude at that time. If a nearby station wasn’t moved and you happen to know when the move occurred, you can split the data into two series and re-knit them accurately using the second station’s information by offsetting the data after the step (a rough sketch of this follows after this comment). Using a method like this, you can retain the maximum trend information, but it absolutely must be done manually. The alternative is to chuck the data. You might say split the data into two sections and anomalize, but the result is that you’ve got an “assumed offset” between the two halves which has no information whatsoever WRT temperature.

    These offsets will add up over time, creating spurious trends.

    I am talking about processing the data, and processing is a valid thing to do with data. The problem is that it must be done in an open fashion with all data, methods and information open to the public – including all the unprocessed RAW data as well.

    From my perspective, these points are not matters of opinion where people can reasonably disagree, but rather just matters of proper data handling, quality control and disclosure.
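
    Here is the rough sketch of the re-knitting idea mentioned above, written in Python and assuming both records are on the same monthly grid and the move date is documented. It estimates the step from the difference to an unmoved nearby station and removes it from the later segment; this is an illustration of the concept only, not anyone’s production code.

        # Estimate and remove a documented station-move step using an unmoved reference station.
        import numpy as np

        def reknit(moved, reference, move_index, window=60):
            """moved, reference: equal-length arrays of monthly values;
            move_index: index of the first month after the documented move."""
            diff = np.asarray(moved, float) - np.asarray(reference, float)
            before = diff[max(0, move_index - window):move_index]
            after = diff[move_index:move_index + window]
            step = after.mean() - before.mean()     # estimated jump caused by the move
            corrected = np.array(moved, float)
            corrected[move_index:] -= step          # remove the step, keep the underlying trend
            return corrected, step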

  38. Nick Stokes #1,

    “1) the U.S. HCN, 1221 high quality, long-term, mostly rural stations in the United States;”

    HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    I grew up rural and THOSE STATIONS AREN’T RURAL!!!!

  39. Jeff Id,

    “#37, there is a tendency amongst some people to simply recognize that the data is crap but then deny the fact that there are actually reasonable methods to retrieve the all important trend information from it.”

    How do you recover rural trend data from urban temp series??

  40. Also, what you want is the mean temperature for that region, not the temperature that would have been there without the city.

    If you measure in a green area in a city, that would be sufficient.
