the Air Vent

Because the world needs another opinion

Gridded Global Temperature

Posted by Jeff Id on July 14, 2010

This is a repost of work by Zeke and Mosh from WUWT, and linked by Lucia.  It compiles many hours of work by a number of bloggers on recreating gridded global temperature.  My own interest in global temperature came from wondering whether the processing code was adding any unusual trends through its method.  It sounds a bit strange, but paleoclimatology is very good at doing just that.  In this case, however, everyone got about the same answer, and there are a number of open source solutions to gridded global temperature here and on the web.  Now we just wonder about the data. –Jeff

——————————————————–

Calculating global temperature

I’m happy to present this essay created from both sides of the aisle, courtesy of the two gentlemen below. Be sure to see the conclusion. I present their essay below with only a few small edits for spelling, format, and readability. Plus an image, a snapshot of global temperatures.  – Anthony

https://i1.wp.com/veimages.gsfc.nasa.gov/16467/temperature_airs_200304.jpg
Image: NASA. The Atmospheric Infrared Sounder (AIRS) instrument aboard NASA’s Aqua satellite senses temperature using infrared wavelengths. This image shows the temperature of the Earth’s surface, or of the clouds covering it, for the month of April 2003.

By Zeke Hausfather and Steven Mosher

There are a variety of questions that people have about the calculation of a global temperature index. Questions that range from the selection of data and the adjustments made to data, to the actual calculation of the average. For some there is even a question about whether the measure makes any sense or not. It’s not possible to address all these questions in one short piece, but some of them can be addressed and reasonably settled. In particular we are in a position to answer the question about potential biases in the selection of data and biases in how that data is averaged.

To move the discussion on to the important matters of adjustments to data or, for example, UHI issues in the source data, it is important to first settle some answerable questions. Namely: do the averaging methods used by GISS, CRU and NCDC bias the result? There are a variety of methods for averaging spatial data; do the methods selected and implemented by the big three bias the result?

There has been a trend of late among climate bloggers on both sides of the divide to develop their own global temperature reconstructions. These have ranged from simple land reconstructions using GHCN data (either v2.mean unadjusted data or v2.mean_adj adjusted data) to full land/ocean reconstructions and experiments with alternative datasets (GSOD, WMSSC, ISH).

Bloggers and researchers who have developed reconstructions so far this year include:

Roy Spencer

Jeff Id

Steven Mosher

Zeke Hausfather

Tamino

Chad

Nick Stokes

Residual Analysis

And, just recently, the Muir Russell report

What is interesting is that the results from all these reconstructions are quite similar, despite differences in methodologies and source data. All are also quite comparable to the “big three” published global land temperature indices: NCDC, GISTemp, and CRUTEM.

[Fig 1]

The task of calculating global land temperatures is actually relatively simple, and the differences between reconstructions can be distilled down to a small number of choices:

1. Choose a land temperature series.

Ones analyzed so far include GHCN (raw and adjusted), WMSSC, GISS Step 0, ISH, GSOD, and USHCN (raw, time-of-observation adjusted, and F52 fully adjusted). Most reconstructions to date have chosen to focus on raw datasets, and all give similar results.

[Fig 2]

It’s worth noting that most of these datasets have some overlap. GHCN and WMSSC both include many (but not all) of the same stations. GISS Step 0 includes all GHCN stations in addition to USHCN stations and a selection of stations from Antarctica. ISH and GSOD have quite a bit of overlap, and include hourly/daily data from a number of GHCN stations (though they have many, many more station records than GHCN in the last 30 years).

2. Choosing a station combination method and a normalization method.

GHCN in particular contains a number of duplicate records (dups) and multiple station records (imods) associated with a single wmo_id. Records can be combined at a single location and/or grid cell and converted into anomalies through the Reference Station Method (RSM), the Common Anomalies Method (CAM), the First Differences Method (FDM), or the Least Squares Method (LSM) developed by Tamino and Roman M. Depending on the method chosen, you may be able to use more stations with short records, or end up discarding station records that do not have coverage in a chosen baseline period. Different reconstructions have mainly made use of CAM (Zeke, Mosher, NCDC) or LSM (Chad, Jeff Id/Roman M, Nick Stokes, Tamino). The choice between the two does not appear to have a significant effect on results, though more work could be done using the same model and varying only the combination method.
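To make the CAM step concrete, here is a minimal sketch of the idea, assuming stations are held as NumPy arrays of monthly values; the function names and data layout are illustrative, not anyone's published code. Each station's monthly values become anomalies against that station's own baseline-period monthly means, and duplicate records at a location are then averaged.

```python
import numpy as np

def station_anomalies(monthly, years, base=(1961, 1990)):
    """monthly: (n_years, 12) array of one station's monthly means (NaN = missing);
    years: length n_years vector. Returns anomalies relative to the baseline period."""
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    clim = np.nanmean(monthly[in_base], axis=0)   # 12 baseline monthly climatologies
    return monthly - clim                         # all-NaN if the station has no baseline data

def combine_duplicates(anomaly_arrays):
    """Average the duplicate/imod anomaly records for one location, ignoring gaps."""
    return np.nanmean(np.stack(anomaly_arrays), axis=0)
```

A station with no data in the baseline window produces all-NaN climatologies and so contributes nothing, which is exactly the record loss that CAM trades off against its simpler mathematics.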

[Fig 3]

3. Choosing an anomaly period.

The choice of the anomaly period is particularly important for reconstructions using CAM, as it determines the number of usable records. Too short an anomaly period can also produce odd behavior in the anomalies, but in general the choice makes little difference to the results. In the figure that follows, Mosher shows the difference between picking an anomaly period like CRU does, 1961-1990, and picking the anomaly period that maximizes the number of monthly reports in a 30 year period, which turns out to be 1953-1982 (Mosher). No other 30 year period in GHCN has more station reports. This refinement, however, has no appreciable impact.
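As an illustration of how such a period can be found, here is a small sketch; the `reports` structure of per-station presence/absence flags is a hypothetical stand-in for the GHCN inventory, not an existing interface.

```python
import numpy as np

def best_anomaly_period(reports, years, window=30):
    """reports: dict mapping station id -> (n_years, 12) boolean array of 'value present';
    years: length n_years vector aligned with the rows of each array."""
    years = np.asarray(years)
    per_year = sum(flags.sum(axis=1) for flags in reports.values())  # monthly reports per year
    best = (None, None, -1)
    for i in range(len(years) - window + 1):
        total = int(per_year[i:i + window].sum())
        if total > best[2]:
            best = (int(years[i]), int(years[i] + window - 1), total)
    return best  # (start year, end year, monthly report count)
```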

[Fig 4]

4. Gridding methods.

Most global reconstructions use 5×5 degree grid cells to ensure good spatial coverage of the globe. GISTemp uses a rather different method of equal-area grid cells. However, the choice between the two does not seem to make a large difference, as GISTemp’s land record can be reasonably well replicated using 5×5 grid cells. Smaller grid cells can improve regional anomalies, but will often introduce spatial bias in the results, as there will be large missing areas during periods or in locations where station coverage is limited. For the most part, the choice is not that important unless you choose extremely large or small grid cells. In the figure that follows, Mosher shows that selecting a smaller grid does not affect the global average or the trend over time. In his implementation there is no averaging or extrapolation over missing grid cells: all the stations within a grid cell are averaged, and then the entire globe is averaged. Missing cells are not imputed with any values.
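A rough sketch of that gridding scheme, assuming station anomalies are already computed and weighting each cell by the cosine of its centre latitude; the names and details are illustrative only.

```python
import numpy as np

def gridded_global_mean(lats, lons, anoms, cell=5.0):
    """Average station anomalies into cell x cell boxes, then take the
    cos(latitude)-weighted mean over the boxes that actually contain data."""
    lats, lons, anoms = map(np.asarray, (lats, lons, anoms))
    nlat, nlon = int(180 / cell), int(360 / cell)
    i = np.clip(((lats + 90.0) / cell).astype(int), 0, nlat - 1)
    j = np.clip(((lons + 180.0) / cell).astype(int), 0, nlon - 1)
    total = np.zeros((nlat, nlon))
    count = np.zeros((nlat, nlon))
    for ii, jj, a in zip(i, j, anoms):
        if not np.isnan(a):
            total[ii, jj] += a
            count[ii, jj] += 1
    cell_mean = np.where(count > 0, total / np.maximum(count, 1), np.nan)
    lat_centres = -90.0 + cell * (np.arange(nlat) + 0.5)
    weight = np.cos(np.radians(lat_centres))[:, None] * np.ones((1, nlon))
    have = count > 0                      # empty cells are simply left out, no infilling
    return float(np.sum(cell_mean[have] * weight[have]) / np.sum(weight[have]))
```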

[Fig 5]

5. Using a land mask.

Some reconstructions (Chad, Mosh, Zeke, NCDC) use a land mask to weight each grid cell by its respective land area. The land mask determines how much of a given cell (say, 5×5 degrees) is actually land; a cell on a coast, for example, may be only partly land, and the land mask corrects for this. The percent of land in a cell is constructed from a 1 km by 1 km dataset. The net effect of land masking is to increase the trend, especially in the last decade. This factor is the main reason why recent reconstructions by Jeff Id/Roman M and Nick Stokes are a bit lower than those by Chad, Mosh, and Zeke.
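Continuing the sketch above, land masking just multiplies each cell's area weight by the fraction of the cell that is land before averaging; `land_frac` here is a hypothetical array standing in for the 1 km land/water dataset mentioned in the text.

```python
import numpy as np

def land_masked_mean(cell_mean, land_frac, cell=5.0):
    """cell_mean: (nlat, nlon) gridded anomalies (NaN = empty cell);
    land_frac: (nlat, nlon) fraction of each cell that is land, in [0, 1]."""
    nlat, nlon = cell_mean.shape
    lat_centres = -90.0 + cell * (np.arange(nlat) + 0.5)
    area = np.cos(np.radians(lat_centres))[:, None] * np.ones((1, nlon))
    w = area * land_frac                  # coastal cells get proportionally less weight
    have = ~np.isnan(cell_mean) & (w > 0)
    return float(np.sum(cell_mean[have] * w[have]) / np.sum(w[have]))
```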

[Fig 6]

6. Zonal weighting.

Some reconstructions (GISTemp, CRUTEM) do not simply calculate the land anomaly as the size-weighted average of all grid cells covered. Rather, they calculate anomalies for different regions of the globe (each hemisphere for CRUTEM, 90°N to 23.6°N, 23.6°N to 23.6°S and 23.6°S to 90°S for GISTemp) and create a global land temp as the weighted average of each zone (weightings 0.3, 0.4 and 0.3, respectively for GISTemp, 0.68 × NH + 0.32 × SH for CRUTEM). In both cases, this zonal weighting results in a lower land temp record, as it gives a larger weight to the slower warming Southern Hemisphere.
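The zonal weighting step itself reduces to a fixed weighted sum of zone means; a trivial sketch using the weights quoted above, with each zone mean assumed to be an area-weighted average of that zone's own grid cells.

```python
def gistemp_style_global(north, tropics, south):
    """Zone means for 90N-23.6N, 23.6N-23.6S and 23.6S-90S."""
    return 0.3 * north + 0.4 * tropics + 0.3 * south

def crutem_style_global(nh, sh):
    """Hemispheric means combined with the CRUTEM weights quoted above."""
    return 0.68 * nh + 0.32 * sh
```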

[Fig 7]

These steps will get you a reasonably good global land record. For more technical details, look at any of the many different models that have been publicly released:

https://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
http://residualanalysis.blogspot.com/2010/03/ghcn-processor-11.html
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
http://drop.io/treesfortheforest
http://moyhu.blogspot.com/2010/04/v14-with-maps-conjugate-gradients.html

7. Adding in ocean temperatures.

The major decisions involved in turning a land reconstruction into a land/ocean reconstruction are choosing an SST series (HadSST2, HadISST/Reynolds, and ERSST have been explored so far: http://rankexploits.com/musings/2010/replication/), gridding and anomalizing the series chosen, and creating a combined land/ocean temp record as a weighted combination of the two. This is generally done by: global temp = 0.708 × ocean temp + 0.292 × land temp.
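As a sketch, the blend itself is a one-liner, with the gridded land and ocean anomaly series assumed to be already in hand:

```python
def land_ocean_blend(ocean_anom, land_anom):
    """Fixed ocean/land area weighting; works elementwise on NumPy arrays too."""
    return 0.708 * ocean_anom + 0.292 * land_anom
```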

[Fig 8]

8. Interpolation.

Most reconstructions only cover 5×5 grid cells with one or more stations for any given month. This means that any areas without station coverage in a given month are implicitly assumed to have the global mean anomaly. This is arguably problematic, as high-latitude regions tend to have the poorest coverage and are generally warming faster than the global average.

GISTemp takes a somewhat different approach, assigning a temperature anomaly to all missing grid boxes located within 1200 km of one or more stations that do have defined temperature anomalies. They rationalize this based on the fact that “temperature anomaly patterns tend to be large scale, especially at middle and high latitudes.” Because GISTemp excludes SST readings from areas with sea ice cover, this leads to the extrapolation of land anomalies to ocean areas, particularly in the Arctic. The net effect of interpolation on the resulting GISTemp record is small but not insignificant, particularly in recent years. Indeed, the effect of interpolation is the main reason why GISTemp shows somewhat different trends from HadCRUT and NCDC over the past decade.
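To make the 1200 km idea concrete, here is a toy sketch of distance-weighted infilling; the linear taper to zero at 1200 km is an assumption for illustration, not a replication of the actual GISTemp weighting or code.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance from one point to an array of points."""
    p1, p2 = np.radians(lat1), np.radians(np.asarray(lat2))
    dlon = np.radians(np.asarray(lon2) - lon1)
    cosang = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return EARTH_RADIUS_KM * np.arccos(np.clip(cosang, -1.0, 1.0))

def infill_cell(cell_lat, cell_lon, stn_lats, stn_lons, stn_anoms, radius=1200.0):
    """Distance-weighted anomaly for an empty cell from stations within `radius` km;
    the weight tapers linearly to zero at the radius. Returns NaN if nothing is in range."""
    d = great_circle_km(cell_lat, cell_lon, stn_lats, stn_lons)
    w = np.clip(1.0 - d / radius, 0.0, None)
    if w.sum() == 0:
        return float("nan")
    return float(np.sum(w * np.asarray(stn_anoms)) / w.sum())
```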

[Fig 9]

9. Conclusion

As noted above, there are many questions about the calculation of a global temperature index. However, some of those questions can be answered fairly, and have been answered by a variety of experienced citizen researchers from all sides of the debate. The approaches used by GISS, CRU and NCDC do not bias the result in any way that would erase the warming we have seen since 1880. To be sure, there are minor differences that depend upon the exact choices one makes: choices of ocean data sets, land data sets, rules for including stations, rules for gridding, and area weighting approaches. But all of these differences are minor when compared to the warming we see.

That suggests a turn in the discussion to the matters which have not been as thoroughly investigated by independent citizen researchers on all sides:

A turn to the question of data adjustments and a turn to the question of metadata accuracy and finally a turn to the question about UHI. Now, however, the community on all sides of the debate has a set of tools to address these questions.


99 Responses to “Gridded Global Temperature”

  1. Thank you for posting credible, quantitative information on global temperatures.

    The discussion on threats to global temperatures should have started here, instead of the promotional campaigns.

    More post-1960 data would be helpful.

    With kind regards,
    Oliver K. Manuel

  2. Oliver, are you referring to ‘more analysis’ post-1960 in the post above, or more stations post-1960? Because if it is the station count you are looking for, check my reformatted GSOD data. There are more stations than this; these are the ones that survived the reformat into a GHCN-style file. So look for ‘GSOD’ in the above charts to see more post-1960 stations.

  3. Jeff Id said

    Oliver,

    To me this just means the unknown lies outside of the data-compiling code. It’s nice, though, that for the first time those of us who would question the climate science authorities can confirm that they didn’t mess with the code for a preferred result.

    Now the questions are down to UHI (siting), data density (a small effect, I’m sure) and instrumentation.

  4. kim said

    Heh, liars, not crooks?
    ==========

  5. kim said

    Shepherd Mann and his Crook.
    ============

  6. Jeff, I have almost 0 knowledge of the methodologies of paleo-climate. But a very brief look at Mann 08 seemed to indicate that some spatial analysis was being included. Do you have any interesting links regarding the spatial component of proxy reconstructions?

  7. Jimmy Haigh said

    Looking at that first image we can immediately see that we are in BIG trouble. We are on fire. Apart from the poles that is.

  8. mrpkw said

    WOW !!!!!!!!!!!
    That was a boatload of work !!!

    Now the data just needs to be agreed on !!!!

    Great work all.

  9. Kenneth Fritsch said

    I have little doubt that, using the same data, those replicating the global or regional temperature series would indeed replicate them. What really bothers me, and more so from the more skeptical who tend to harp on it in other areas of climate study, is that efforts are not being directed towards estimating the uncertainty in these time series. An average time series means little in the bigger picture of things without the CIs.

    I have suspected all along that a valid determination of uncertainties, or even a good attempt at one, would be difficult and complicated. I have looked at some 5 x 5 degree grids with adjusted GHCN data where the grids have a reasonably large density of stations. I see problems with stationarity that can significantly change the year-to-year and decade-to-decade relationship of one station to another, and even for those in close proximity to one another.

    I have been attempting to do some serious background research and testing of the estimation of confidence intervals for temperature anomalies and trends over the historical instrumental record. My most recent literature searches have focused on the three papers linked below. RomanM has been very helpful in getting me to understand what the authors intended in these papers. I see lots of assumptions and approximations in these papers, and even a tendency towards circular reasoning if some of those assumptions were considered to be inappropriate. The articles often reference climate models as a source of the “true” temperature data density and then in the same article show where the models and observed data disagree.

    I have done some PCA of the more densely populated (with stations) grids and found that the first principal component captures much of the variation over the time period 1950-1990 and the first 3 capture almost all of the variation. I have also found that elevation and proximity to coastal areas have a (lessening) effect on the correlation of stations’ temperature anomalies.

    My next step will be looking further at the more populated grids and also using the satellite temperature data from RSS and UAH to analyze what a more complete spatial coverage leads to with regards to uncertainty in temperature anomalies from 1979 to present. I have not seen a good comparison of temperature anomaly uncertainty calculated from observed satellite data versus ground based station data.

    The papers referenced above are below:

    (1) Estimating Sampling Errors in Large-Scale Temperature Averages

    P. D. JONES, T. J. OSBORN, AND K. R. BRIFFA

    http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%281997%29010%3C2548%3AESEILS%3E2.0.CO%3B2

    (2) An Estimate of the Sampling Error Variance of the Gridded GHCN Monthly Surface
    Air Temperature Data

    S. S. P. SHEN AND H. YIN

    http://journals.ametsoc.org/doi/pdf/10.1175/JCLI4121.1

    (3) A Theory for Estimating Uncertainties in the Assessment of Global and Regional Surface Air Temperature Changes —The SOA Theory

    Samuel Shen, San Diego State Univ,Jerry North, Texas A&M University,Tom Smith, NOAA/STAR/SCSB

    Other collaborators:

    Chris Folland, Phil Jones, Nick Rayner, Tom Karl, Dave Easterling, Dick Reynolds, Francis Zwiers, Christine Lee, Art Dempster, Chet Ropelewski, and Bob Livezey

    http://www.math.sdsu.edu/AMS_SIAM08Shen/Shen.pdf

  10. Jeff Id said

    #9 uncertainty is interesting but I’m more interested in how the siting issues affect the results. That’s probably because I expect a significant change in trend when that’s taken into account. Even a 30% reduction would leave the models out to dry and I don’t think that’s an unreasonable amount when we’re talking tenths of a degree.

  11. Layman Lurker said

    #9 Kenneth Fritsch

    Wow, pretty ambitious stuff there Kenneth. Maybe there is some code that Jeff/Nic/Ryan have kicking around from the Antarctic work that could help you a bit. Best of luck with your work.

  12. Kenneth Fritsch said

    #9 uncertainty is interesting but I’m more interested in how the siting issues affect the results. That’s probably because I expect a significant change in trend when that’s taken into account. Even a 30% reduction would leave the models out to dry and I don’t think that’s an unreasonable amount when we’re talking tenths of a degree.

    Interesting, Jeff ID, that +/-30% is approximately the CIs I see for temperature anomalies as reported by the published accounts of these papers I linked – and with all the assumptions and use of climate models that are required. Now the authors would apparently translate that uncertainty into something considerably less (10%) for long term trends, but if one wants to check the validity of climate models with observed data one would look at the anomaly fit first and the trend secondarily.

    What I puzzle about in this whole matter is how do you take into account the uncertainty of the statistics used to estimate uncertainty and further how would one estimate the uncertainty of the assumptions used to estimate uncertainties. Also please understand that the uncertainty I am talking about here is the error due to lack of complete coverage of the regional and global
    area with temperature measurements. In order to estimate it (by infilling), one has to have estimates of the spatial and temporal correlations of stations, which requires knowing or estimating the correlations of the “missing” areas using either the climate models as the “true” values or the observed data of which you are trying to find the uncertainty.

    I also puzzle why more use is not made of the more complete satellite coverage instead of climate model data. Maybe the satellite data is not as complete as I suspect it must be. Most of these uncertainty estimates use temperature data from a rather limited time period (and as I have been doing) because the 1950-1980 and 1990 data is the most complete – and even that data is far from complete.

    Jeff, I am wondering how you would propose to validate the siting as an issue here. What Watts and his team did in the CRN evaluations for the USHCN stations could go a long way towards demonstrating this, but I have yet to see a finished publication on this matter. The paper (by Menne, I believe) that borrowed the Watts team data and did their own analysis was totally unacceptable in my view and aimed at minimizing any differences.

  13. mrpkw said

    # 10
    isn’t there also an issue with extrapolating when there is no data?
    I recall an issue with Bolivia some time ago.

  14. Jeff Id said

    #12, Anthony’s work, if it is correctly received, can only be the first step: the wakeup call that we need something better. I had a discussion with him at the ICCC where he revealed that he’s quite pleased with his early results. Validation that there is a problem first, followed by very thorough QC of all stations; that’s all my engineer head can think of.

    I’ve looked at sat data here, it’s very complete but it is from a different altitude and contains a couple of small steps that tweak the trends a bit.

  15. Kenneth Fritsch said

    Layman Lurker at Post #11:

    I was able to do PCA on the GHCN data, but as Ryan has pointed out here one needs to be very careful in interpreting principal components that have no direct physical basis. I think at this point that I can show that the first principal component accounts for the variations due to station proximity to coastal areas (and other variations such as distance) but not the variation due to altitude. I can show it, but quite frankly at this point it does not make a lot of sense to me.

  16. Kenneth Fritsch said

    isn’t there also an issue with extrapolating when there is no data?
    I recall an issue with Bolivar sometime ago.

    My problem with extrapolation is how one includes the uncertainty of the extrapolation process. It can be readily shown that the extrapolation using spatial correlation produces a very scattered plot when the correlation is plotted against distance. Other factors are altitude and proximity to coasts.

  17. Zeke said

    Kenneth Fritsch,

    What was your main issue with Menne’s approach?

    I did a basic replication of it (calculating anomalies, spatial gridding) for CRN12 and CRN345 stations awhile back: http://rankexploits.com/musings/2010/a-detailed-look-at-ushcn-mimmax-temps/

    Granted, the CRN list he used is incomplete, and you could do more fine grained analysis (e.g. CRN1 vs. CRN5). But the method of comparison itself is what I’m more interested in establishing. The only major issue I see is potential correlation between CRN rating, urbanity, and sensor type (MMTS vs CRS) that might complicate things.

  18. Zeke said

    Also: now that we have 20,000 stations available from GSOD post-1970, we can do some analysis to look at how extrapolated temps compare to actual ones in those regions.

  19. Re: Kenneth Fritsch (Jul 14 15:11),
    I did a study here of the “Bolivia effect”, specifically on effects of coast and altitude.

    Extrapolation isn’t quite the right concept when you’re calculating a global average. You’re summing the same station data – the gap in Bolivia just changes the weighting.

  20. Kenneth Fritsch said


    What was your main issue with Menne’s approach?


    Two major issues Zeke:

    1) The period that they used was from the 1980s to present, or near present, as I recall. I did some preliminary analyses of the Watts team data and used the period 1920-2006. Why did Menne use that short period? I suspect he would say that it corresponded to the availability of the other land based station data that he had. I would strongly suspect that many of the station-specific local changes that occurred and were documented in the Watts team CRN ratings could have occurred before the 1980s, e.g. air conditioning, blacktopping and paving. Obviously then a trend considered after that period would not see it.

    2) The number of stations with CRN 1 and 2 ratings is small, and given the station to station differences we see in this time series, that number of stations makes finding a difference due to CRN rating an uncertain proposition, i.e. it would have to be large. In my preliminary analysis (actually RomanM did the analysis) we used separate station data, where one can see a trend in the trend with increasing CRN rating, but not necessarily at a statistically significant difference. We then compared CRN123 versus CRN45, and the numbers there allowed us to see significant differences between trends from CRN123 and CRN45. So why did Menne choose to use CRN12 versus CRN345? Do you think that he looked at the data using both of these combinations? Do you think he used a longer time period to look for CRN versus trend differences? Anyone interested in sensitivity testing for a published paper would have done the extra work – and reported the results.

    You make a good point about using the more complete coverage of the GSOD station data. Was this the time series data that Menne used in the article under discussion? As I recall he had to adjust the “other” data he used in order to make an apples to apples comparison with the USHCN data.

  21. Kenneth Fritsch said

    Nick Stokes @ Post #19:

    A quick look at your approach appears to show that you simply filtered out the higher altitudes and coastal areas and then compared results. I am not sure what this does when comparing such sparse data. I have looked at altitude and proximity to coastal areas as they affect the distance correlation between stations. The altitude effect appears to fit a simple model well in the grids that I have checked, whereas the proximity to coastal areas was not so straightforward. The coastal thing, I think, gets down to how one can objectively classify a station as coastal and differentiate levels of “coastalness” and, for that matter, “proximityness”. Altitude measures are, on the other hand, almost a continuous function and objectively determined.

  22. rcrejects said

    Seems to me that there is no surprise that different workers using the same input data generally come up with similar outcomes.

    The REAL questions however relate to:

    1. The quality of the temperature stations and resulting records. The Anthony Watts work.

    2. Analysis of raw versus “adjusted” data. Adjustments often being very significant, but mostly unexplained. For a great example of this see: http://www.quadrant.org.au/blogs/doomed-planet/2010/05/crisis-in-new-zealand-climatology.

    3. Dismissal of delta UHI effects that affect temperature records.

    4. Analysis of the selection/rejection of temperature stations that make up the averages. Closure of Siberian stations. Rejection of many rural stations with flat records. Inclusion of metropolitan station data without adjusting for delta UHI.

    I note that these sorts of questions are those not responded to by the climate scientists.

    I realise that this is what you are referring to when you conclude your post with: “A turn to the question of data adjustments and a turn to the question of metadata accuracy and finally a turn to the question about UHI.”

  23. Brian H said

    Two questions/observations:
    1. Specific heat matters. Warming water takes far more energy than warming rock. Thus averaging sea and land by weighting for area is insufficient; the energy itself is what will determine any “trend”.
    2. The assignment of global average to unfilled boxes is illegitimate; the GIST approach is far superior. The average temperature of the missing polar cells (e.g., the majority of the Siberian stations, and 99 out of 100 northern Canadian ones) is generally far below global average, so dropping them had the immediate effect of replacing their grid numbers with higher ones. Thus the final graph shows the truth: there is no warming trend whatsoever; only station selection has created the artifact of one.

  24. mrpkw said

    This is what I was looking for:

    http://chiefio.wordpress.com/2010/01/08/ghcn-gistemp-interactions-the-bolivia-effect/

  25. Re: Kenneth Fritsch (Jul 14 17:44),
    Kenneth,
    I believe coastal means within 30 km.

    I’m working on another analysis using GSOD, which has a lot of stations in Bolivia reporting currently.

  26. Re: Brian H (Jul 14 20:08),
    On 2, I agree, and I’m trying an improvement. However, remember that when you assign no value to an empty cell, you are effectively imputing an average anomaly, not temperature, so the effects are lessened.

  27. Al Tekhasski said

    I second the opinion that there is nothing special in the similarity of results that are derived from essentially the same set of data. I am of a firm opinion that the original data are fundamentally flawed. The indications are very simple. If you examine, say, the GISS database for Texas, you can find many pairs of stations that exhibit opposite temperature trends over a 100-year period. It could be warming, it could be cooling, or it could be some more complicated 100-year-long pattern.

    For example, start here
    http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=33.17&lon=-99.75&datatype=gistemp&data_set=1
    and check Haskel vs Albany or Abilene, Crossbyton v. Lubbock, Ada vs. Pauls Valley, etc, etc.
    All these pairs are about 30-50 miles apart, yet their long-term trend is diametrically opposite.

    So, we have two opposite trends from two points that are 50 km apart. The first concern is that this is inconsistent with CO2-induced forcing – these stations see essentially the same sky and hence the same backradiation. The other concern is that climatology does not have information about the next seven neighbors to these pairs (there are actually 23 relevant neighbors). From the precedent, these neighbors could have recorded anything, up or down, so the average trend for the area could be anything. In technical terms it means that the spatial sampling frequency of the temperature field is insufficient: the necessary requirement of the Nyquist-Shannon-Kotelnikov sampling theorem is not satisfied.

    Some proponents of AGW climatology claim that these individual trends are “statistically insignificant”, but it is easy to show that this claim is not applicable to station data. The reason is that each annual average is in fact accurate to about 0.02C. It is also obvious that if someone magically went back in time and started recording from the very beginning, the record would be exactly the same (excluding missing datapoints).

    The other AGW argument is that the stations are located randomly, and therefore must represent fair statistics. There is no proof that this is true. More, it can be argued that EVERY STATION is located in a degenerate place. The fact is that climatologists did not select these locations from randomness or other equal-coverage criteria. These stations were placed by meteorologists in places where people live (and therefore expand and use the surrounding land). Therefore, almost EVERY STATION IN GLOBAL DATABASE is subject to strong effect of LAND USE. UHI or else, the environment does change where people live – airports are constructed, water retaining dams are erected, agriculture spreads out, whatever. It is not only the closest 100 feet that affect temperature readings. The conclusion is simple – the underlying data are garbage, and the conclusion is proportional.

  28. Layman Lurker said

    … as Ryan has pointed out here one needs to be very careful in interpreting principal components that have no direct physical basis.

    I think Ryan did some model tests using synthetic data with known factors as a check for spurious correlations. Anything similar for your work?

  29. Layman Lurker said

    Sorry, question in #28 is for Kenneth.

  30. Steven Mosher said

    WRT data.

    The most important work is Ron’s work on metadata.

    Adjustments are not the issue (uncertainty with them is).

    Just sayin

  31. Brian H said

    When the “uncertainty” extends to their rationale and even sign, they become an issue. A wide black line should divide adjusting and fudging.

  32. Re: Kenneth Fritsch (Jul 14 17:44),
    I’ve put up a new post about the “Bolivia effect” using GSOD data. From 1990 on there were about 30 stations reporting in the GSOD database, so that’s a check on how well GHCN was able to cope for the region without that data. It seems it did pretty well.

  33. Steve Fitzpatrick said

    Al Tekhasski #27,

    Fair enough, there are issues to examine for the land record, and it is reasonable to expect both land use and UHI effects to contaminate the station data. But these issues certainly do not apply to ocean data, and there is also a substantial upward trend in ocean surface temperature data. The ocean temperature rise is about 50%-60% as much as the rise over land since ~1880. This would seem to put an absolute lower bound on the warming that has actually taken place.

    In addition, satellite measurements of the lower troposphere temperature since 1979 show a clear trend which tracks short term changes in the combined land station/ocean reconstructions remarkably well, albeit with a somewhat lower overall rate of increase than the reconstructions. Since satellite data is for certain not subject to UHI and local land use effects, the satellite trend appears to me to represent a more reliable lower bound for the ‘uncontaminated’ temperature trend. The difference between satellite data and the reconstructions over land could be considered an upper bound for land use and UHI effects. (I think Roy Spencer has already looked at this in some detail.)

    In any case, there does not seem to me to be any reason to doubt there has been warming, although there is reason to examine the influence of siting and land use on the land station data.

  34. Kevoka said

    #12

    “I also puzzle why more use is not made of the more complete satellite coverage instead of climate model data. Maybe the satellite data is not as complete as I suspect it must be.”

    Two things:

    1) I have been trying to find an explanation of why the near-surface channel (ch04) for the AMSU-A shows the temperature to be 255K to 259K, yet the sea surface channel shows a more expected 294K – 295K.

    http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps

    2) Roy Spencer has said that there are emissivity issues with the surface readings over land. But I do not know how much that explains question 1.

  35. stan said

    I have a big problem affording any credibility to “scientists” who never gave any thought to checking their instruments until the question was raised from the outside. Anyone who gets paid to do science understands that the quality of the data is important and accurate instruments are a basic fundamental for getting quality data. We now have convoluted efforts at ass-covering trying to argue that data quality doesn’t matter. This only lowers their credibility even further.

    Competent, honest scientists with the moral maturity to understand that billions of people will be affected by their work would immediately recognize the importance of addressing the thermometer siting. Incompetent, reckless, morally retarded hacks would pretend that the failure to meet basic scientific standards makes no difference.

    Much can be learned about competence and moral fiber by the responses to problems and the adherence to standards. Character is about doing the right thing. People without character ought not be trusted with matters which affect the public wellbeing.

  36. Kenneth Fritsch said

    Steve Fitzpatrick @ Post # 33:

    While the ocean data does not have UHI effects (I personally think that changing micro-climate effects at stations will place a larger uncertainty on station data/trends than changing UHI), it does have its own issue of buckets versus ship intake measurements. It also has issues shared by land measurement of areas that are not sampled well. The satellite measurements started in 1979, so they hardly qualify for testing any long term trends. That the land based measurements get it nearly correct from 1980 forward does not say anything about the pre-1980 years.

    Your point here also brings forth the issue of uncertainty of the global and regional temperature anomaly time series. Climate models deal with both global and regional temperature trends and land and sea and their capability to hindcast these temperatures is used to validate the models. While global temperatures may have smaller estimated uncertainties, the partitioning into regions can have larger uncertainties. These uncertainties have to be considered in the model validation process. And a model that gets the land or NA right but not the ocean or SH is not a good model. Or the same for getting the globe right but not its regional components.

  37. Kenneth Fritsch said

    I think Ryan did some model tests using synthetic data with known factors as a check for spurious correlations. Anything similar for your work?

    LL, no I have not, and in fact I am at a point where I have my PCA results and now have to figure out what they mean – if anything. As it turns out, doing the PCA was the easy part. I’ll have to revisit what Ryan did.

  38. Steve Fitzpatrick said

    Kenneth Fritsch #36,

    I was really trying to address the statement by Al Tekhasski #27: “Therefore, almost EVERY STATION IN GLOBAL DATABASE is subject to strong effect of LAND USE. UHI or else, the environment does change where people live – airports are constructed, water retaining dams are erected, agriculture spreads out, whatever.”

    If every station in the global data base produces only garbage data, how then do reconstructions based on those data match the satellite data pretty well post-1979?

    From your comment, it sounds like we can at least agree that the 1979 to present reconstructions are reasonably accurate. Is that right? If the satellite data can be believed, then the warming since 1979 has to be about 0.42 – 0.5C, which is a little less than the temperature reconstructions say. It seems you are then arguing that all the reconstructions are about right for the last 31 years, but that they all fail miserably for periods earlier than 31 years ago. This is possible, of course, but I think you then have to show specifically why this is the case (incorrect adjustments applied pre-1979, siting issues were very different pre-1979, UHI and local land use effects very different pre-1979 than post 1979, etc.). Occam’s razor may be useful here. I am not saying that the reconstructions are perfect (most certainly they are not), but it strikes me as a bit of a stretch to suggest that they are completely worthless pre-1979 but quite good post-1979.

    With regard to buckets (insulated and uninsulated) versus engine intakes on ships: Sure, there is uncertainty, but even in the worst case (about a 0.3C artificial reduction in pre-1941 temperatures), there is still substantial warming (about 0.4C – 0.5C) in the ocean data. This again argues that a substantial fraction of the ocean warming is most likely real. One final point about ocean temperatures: the correlation between ocean and land temperatures is quite reasonable over the whole instrument record, with the land temperature consistently changing a little less than twice as much as the ocean, both up and down. If there were large errors in the ocean temperature data (say due to changing sampling methods) and in the land temperature data as well, then we would expect very poor correlation, or no correlation at all.

  39. Just some random thoughts on land use changes.

    I’ve read that well over 50% of the land surface has been modified by people. “One study showed that 20 percent of the continental U.S. land mass is within 500 meters of a paved road – an area equal to about five and a half football fields.” We are now consuming about 20% of total land based biomass production. Trying to eliminate stations influenced by the activities of homo sapiens would create a data set that does not reflect the real world. Human activity is sufficiently ubiquitous that it might not make much sense to discuss a land temperature absent human influence (except as a modelling and attribution study). Welcome to the Anthropocene.

  40. Brian H said

    #36, Kenneth;
    I’m sure you know this already, but …

    Models cannot be “validated” by “hindcasting”, of course. All that can do is suggest tweaks. Validation is achieved ONLY by fore-casting, with the model, data set used, and weighting etc. all FROZEN until the pre-selected forecast period expires. Given the hypersensitivity of computer models, even the smallest tweak means starting over and then waiting.

    And only a failure is directly informative: it proves the model wrong. Even waiting 30 years for results to come in, and finding a tolerable match, gives little confidence about a 50 or 100-year subsequent forecast.

    This unseemly rush of the CAGWists to short-cut and ram through utterly unvalidated conclusions on the grounds that the system might tip into a catastrophic state is an attempt to bypass all accepted validation standards.

    Up against the wall with ’em, say I!

  41. Al Tekhasski said

    Steve Fitzpatrick (#38) wrote: “If the satellite data can be believed, then…”

    As a hardcore skeptic/denier, I find it really difficult to believe, given all these inverse-calculated weighting functions, their dependence on the angle of observation, contamination from surface emissivity variations, corrections for temperature drifts, non-corrected aging of electronics due to thermal cycling, electromigration and radiation, non-corrections for deterioration of instrument surfaces, non-simultaneous sampling of data over the globe, etc. The device cannot be calibrated for real global temperature. Given so small a change in the global emission over so long a time, I remain skeptical.

    The same goes for sea measurements – recent discoveries of substantial eddies raise the same kind of question about sampling density.

  42. Kenneth Fritsch said

    I would think that a model would have to, at the least, be able to reproduce the historically observed data (hindcasting) before it would even be considered for validation with forecasting.

    I also think that what these discussions, and climate papers for that matter, tend to overlook is the proper estimation of uncertainty in all the data used, including temperature. We talk easily about temperature anomaly trends without ever mentioning the corresponding CIs.

    An interesting approach is to look for statistically significant differences between temperature data sets, such as amongst the surface measurements and between the surface and satellite measurements. Pick a time period and region of the globe and I know there are significant differences – which, of course, means one or both of the series is wrong, at least for that time period and region.

  43. Steve Fitzpatrick said

    Al Tekhasski #41,

    So you are saying (if I understand correctly) that during the period 1979 to 2010

    1) the ground station data has been badly corrupted by a series of factors (including micro-climate, land use, siting, and UHI effects), and
    2) the ocean data has been badly corrupted by a different set of factors,
    3) the satellite data has been corrupted by another completely different set of factors (aging electronics, orbital decays, etc), yet
    4) all these unrelated factors have somehow combined in such a way that the post 1979 satellite data and the post 1979 temperature reconstructions track each other month by month and year by year almost perfectly, save for a modest difference in overall upward trend (~25% lower for the satellite data).

    Is this what you think? If so… then all I can say is ‘wow!’

  44. Brian H said

    Steve;
    The rationale for asserting that climate can be forecast even though weather can’t (beyond a few iffy days) is that all the errors and fudges and uncertainties cancel out over the long run. Which is pure-quill drivel, of course.

    The true result of all that cancelling is the Null Hypothesis: ‘no change’ is the best forecast.

  45. … even the smallest tweak means starting over and then waiting.

    Yeah. That’s been a huge problem with cosmological models.😉

  46. Kenneth Fritsch said

    I have no reason to think we have not had warming over the past few decades. My problem is that I am not at all comfortable with the CI limits that some climate scientists have attempted to assign to that global and regional temperature trend. For all I know, better estimated CIs might suggest we could have more warming at the upper limits than we now commonly quote.

    Steve Fitzpatrick, your four items need to be considered separately, and the question should be whether we understand the uncertainty that these items contain. You know you can get a “correct” answer with a faulty method (and with one that is more correct – and which one would one want to use in the future?). Certainly understanding the long term temperature trend and its CIs, as it would be affected by GHGs, needs to go back further than 1979, and certainly the bucket issue affects the pre-satellite era. It is also important to note that SH and NH trends can vary, as well as those for land and ocean, and that we need to address these regions separately to better understand the magnitude of the effects of GHGs on temperature.

  47. Al Tekhasski said

    Steve Fitzpatrick (#43) wrote ” … all these unrelated factors have somehow combined in such a way that.. ”

    Being in an extremely skeptical mood, I would say that you are missing one very important factor. These factors are not unrelated and did not just “somehow combine”; they were specifically combined by a certain group of people with vested interests.

  48. Steve Fitzpatrick said

    Al Tekahasski #47,

    Conspiracy?

    I don’t think so. Ask Roy Spencer if he is part of any climate science conspiracy.

  49. The conspiracy goes back further than you might imagine.
    You should look into the history of the Smithsonian Institute some time.

  50. JR said

    Re: Steven Fitzpatrick #43
    4) all these unrelated factors have somehow combined in such a way that the post 1979 satellite data and the post 1979 temperature reconstructions track each other month by month and year by year almost perfectly, save for a modest difference in overall upward trend (~25% lower for the satellite data). (emphasis mine)

    You’re not really serious, are you?!

    http://www.woodfortrees.org/plot/gistemp/from:1998/to:2010/plot/rss/from:1998/to:2010/plot/gistemp/from:1998/to:2010/trend/plot/rss/from:1998/to:2010/trend

  51. Or this?
    http://www.woodfortrees.org/plot/gistemp/from:1988/to:2010/plot/rss/from:1988/to:2010/plot/gistemp/from:1988/to:2010/trend/plot/rss/from:1988/to:2010/trend

  52. JR said

    Well Ron, that’s a nice little tit-for-tat, but I did not say that the satellite and land station observations never agree at all post 1979. I objected to Steve’s characterization that month by month and year by year the satellite and land records follow each other almost perfectly, which I have shown they do not.

  53. Steve Fitzpatrick said

    JR,

    What the records show is a clear concordance. Sure they vary some, but if one moves up in a particular month, most of the time the other does as well. Look at the shapes of the two traces and ask yourself if there is not a very similar pattern present. Were either or both records mainly driven by spurious factors (as has been claimed on this thread) how could there be any correlation on a month by month basis?

    There is uncertainty in the temperature records, of course, but IMO it is silly to suggest that the measured warming is spurious. Could it in fact be a little more or a little less than the records indicate? Sure, but it is simply not credible to suggest the records show no warming.

  54. What you did show, and I did not know, is that GISTEMP and RSS anomalies converge during a strong El Nino. If you leave off the ‘2010’ in your ‘to’ endpoint, WFT will plot through the latest data (just FYI, the ‘from’ endpoint is inclusive, the ‘to’ endpoint is exclusive). Then you can see the same phenomenon during the latest El Nino.

    The same thing occurs with UAH.
    http://www.woodfortrees.org/plot/gistemp/from:1988/plot/uah/from:1988/plot/gistemp/from:1988/trend/plot/uah/from:1988/trend

  55. Kenneth Fritsch said

    Steve Fitzpatrick, I do not disagree with what you say in this thread, but I think it is critical to always keep the uncertainty limits in mind when considering and comparing temperature datasets. The Santer and Douglass debates on the ratio of tropical surface to troposphere temperature trends brought my above caution to the fore rather nicely. A couple of points:

    1. The longer term trends (used by McIntyre and McKitrick in an unpublished paper, and longer than Santer chose to use) from UAH, but not RSS, showed statistically significant differences between climate models and observed values.

    2. Also in that Santer paper, the authors went to great lengths in an attempt to show no significant difference between observed and modeled results by showing the very dramatic differences between models and between temperature data sets, and particularly for the radiosonde measurements, i.e. they presented an eye opener for some on the uncertainty of temperature data sets.

    The great debate on the effects of GHGs on temperature trends focuses on the part that feedback plays in that expected effect. It takes rather small absolute changes in those temperature trends and the accompanying CI envelope to change our estimates of the feedback effects. I think most would agree that AGW without any feedback would not be a major issue going forward, particularly when one considers the uncertainty of the detrimental/beneficial effects of warming at even the higher levels of warming predicted with feedback.

  56. @Ron #54

    Spurious observation based on arbitrary baseline definition.

  57. JR said

    Re: Steve F

    My bad for reading more into your words than what you actually said. I agree that the land and satellite records are in concordance in regards to the ups and downs.

    Re: Ron

    That’s an interesting observation about El Ninos. Also, I did not know the difference between specifying the endpoint and not specifying. That’s good to know.

  58. Al Tekhasski said

    Steve F. wrote: “Conspiracy? I don’t think so. Ask Roy Spencer if he is part of any climate science conspiracy.”

    This is not a conspiracy in the nominal criminal sense. It is the normal behavior of a professional group of people engaged in a profitable business. Climate change is a multi-billion dollar industry that employs many thousands of people. Most of these people are vitally dependent on the success of these “products” and the marketing (of impending disasters). It is impossible for them to behave differently, all without any explicit conspiring. It is just a normal “optimization” of individual businesses in this industry. It is no different from the “Beauty Care” industry, or the tobacco industry.

  59. Brian H said

    #58;
    Yeah, sort of. But science has explicit standards which value “disproof” more highly than “proof”, since the latter is only the failure of many determined efforts to find the former. There is a distinct apparent reluctance to look for the former in “Climate Science”, which renders its conclusions and projections and recommendations suspect, not to say worthless.

    And the venality of the motivation is particularly egregious when it is combined with explicit power-hunger (control of all industrial and agricultural and silvicultural activity on the planet).

  60. Al Tekhasski said

    Brian H wrote: “But science has explicit standards which value “disproof” more highly than “proof””

    True for science, but “climate change” is applied climatology, and is not a science. Examples of brutal disregard for scientific standards (such as the sampling of experimental data, calculating averages of algebraic products of fluctuating fields as the product of their averages, “hiding the decline”, etc.) are the proof. When research enterprises advertise their musings and data as “products” (as an apparent offer for sale), this is not science any longer.

  61. Al Tekhasski said

    Steve Fitzpatrick wrote (#53): “Were either or both records mainly driven by spurious factors (as has been claimed on this thread) how could there be any correlation on a month by month basis?”

    The problem here is that monthly and yearly changes are high-amplitude changes. It is easily possible to have high correlation for relatively short term, high amplitude fluctuations while long-term trends (measurement of which requires much higher instrumental effort in terms of instrument drifts, etc.) are entirely opposite. For example, I got a high correlation, 0.7658, between two stations, Ada and Pauls Valley, over the 1907-2009 time span, while they have opposite trends and are only 63 km apart.

  62. Brian H said

    Heh. Just got a spam (virus?) email (12 copies!) targeted at wannabe climatologists. From “United Nations London“, titled “Your United Nations Grant Payment Notification“.

    I didn’t open it, of course. But if you got one and did, let us know what happens! 😉

  63. Steve Fitzpatrick said

    Al Tekahasski #61,

    The overall trend is relatively clear if you look at a larger number of individual trends. Sure, there will be pairs of stations where local effects can swamp the overall trend (we are only talking about 0.8C in overall trend, after all!): the noise level in any given station or pair of stations is too high to see a trend of 0.8C over the instrument record. But lots of averaged data reduces the noise level (by the classic 1/N^0.5 formula), so you really can see relatively small trends in noisy data if you look at enough data. The satellite data is the extreme case of this, since there are many thousands of individual readings used in calculating a single daily average.

    The point is that whatever uncertainty there is in the instrument temperature reconstructions, it is simply not credible that each temperature trend, from whatever source, is both garbage, due to overwhelming corrupting factors, and still in reasonable agreement with the others. IMO it is tilting at windmills to try to convince people that the temperature records tell us nothing about temperature trends.

    A far more credible issue is why a large radiative forcing (not counting any feed-backs), currently near 3 watts per square meter, has not caused the kind of dramatic temperature increases that would be expected (>2C on average) if the climate sensitivity really were high. All the appeals to aerosol effects and long (multi-decade and more) delays in warming due to ocean heat accumulation are highly uncertain, since they are weakly or not at all supported by data. It is easy to show that model projections of high climate sensitivity and catastrophic warming are ALL based on these assumed compensating factors, and so can be most charitably characterized as “highly speculative”. Heck, different models use vastly different levels of assumed aerosol effects in order to match the historical record…. nearly all of them therefore MUST be simply quite wrong. And if most must be wrong, and if they are all based on the same “well known physical principles” as the modelers like to say, then it is reasonable to argue the real possibility that NONE of them are close to correct.

    I sure wish people would concentrate on the truly weak part of the CAGW projections, and stop spending so much time arguing about the accuracy of temperature reconstructions. There is virtually nothing to be gained.

  64. Brian H said

    Temperatures cannot be averaged. Only energy levels, after deducting the work performed by said energy (for which there are no plausible data sources nor formulae nor computational resources nor coherent models.)

  65. Steve Fitzpatrick said

    Brian H #64,
    “Temperatures cannot be averaged.”

    I honestly have no idea what you mean by that. Certainly it is simple to form an average, so I imagine you are trying to say something completely different from what the words themselves appear to say; what, I do not know.

  66. Brian H said

    My bad. I should have said “meaningfully” averaged. The average temperature of the water surface of a pot of boiling water and an equal expanse of skin on your body is not a useful number. Try combining them and see.

    Temperature is a derivative of so many energy transactions that it is a useless exercise to “average” two spots and treat each as though they shared that “average” number. Yet this is what is being done across the planet. The objection that G&T and others have to the huge size of the grid cells in the data sample versus the actual operative scale of the processes causing temperature is a rejection of this inappropriate and meaningless exercise.

  67. Brian H said

    E.g. since black-body radiation scales as the 4th power of temperature, the average radiant output of two patches, one at 270°K and the other at 370°K, goes as (270^4 + 370^4)/2 = (5,314,410,000 + 18,741,610,000)/2 = 12,028,010,000. The output of a patch of the same area at (270° + 370°)/2 goes as 320^4 = 10,485,760,000, which is to say
    ~87.18% of the actual radiance of the two patches considered separately.

  68. Brian H said

    P.S. Since 270 K and 370 K are unrealistic contrasts for the planet’s surface, using 220 K and 320 K, which do occur, produces a ratio of ~(5.3×10^9 / 6.4×10^9), or ~82.85%.
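For what it is worth, the arithmetic in #67 and #68 checks out; a few lines of Python reproduce both ratios (this verifies only the numbers, not whether averaging temperatures versus radiances is the right way to model the surface).

def ratio(t1, t2):
    """Radiance of a patch at the mean temperature, as a fraction of the
    mean radiance of two patches at t1 and t2 (Stefan-Boltzmann T^4 scaling)."""
    mean_radiance = (t1**4 + t2**4) / 2
    radiance_of_mean = ((t1 + t2) / 2) ** 4
    return radiance_of_mean / mean_radiance

print(ratio(270, 370))   # ~0.8718, the ~87.18% figure
print(ratio(220, 320))   # ~0.8285, the ~82.85% figure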

  69. Nullius in Verba said

    “But lots of averaged data reduces the noise level (by the classic 1/N^0.5 formula)”

    That’s the formula for independent errors, isn’t it? If they’re correlated, you can get a different answer.

    Say the flow of heat in or out of the Earth’s atmosphere/oceans is controlled by day-to-day random weather, then the accumulated heat, and hence the temperature, is related to the integral of this random series. Ultimately, there has to be some feedback to keep it within bounds, but there’s no reason that should show up in the short-term. So if the temperature is the integral of a random series, is there any reason why the short-term trends should converge?

    There’s a statistician’s story about the Emperor of China’s nose. Maybe you have already heard it?
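The point in #69 about correlated errors can be illustrated with a toy simulation (hypothetical numbers, not a climate model): fit a trend to many realizations of plain white noise and to the running sum of that same noise, then compare the spread of the fitted trends. For the integrated series the spread of apparent trends is far larger, so short-term trends say much less than the independent-error intuition suggests.

import numpy as np

rng = np.random.default_rng(1)
n_years, n_runs = 100, 2000
t = np.arange(n_years)

white = rng.normal(0, 1, (n_runs, n_years))   # independent year-to-year noise
walk = np.cumsum(white, axis=1)               # integrated ("random walk") noise

for name, series in (("white noise", white), ("integrated noise", walk)):
    slopes = np.polyfit(t, series.T, 1)[0]    # fitted trend of each realization
    print(f"{name:17s}: spread of fitted trends (sd) = {slopes.std():.4f} per year")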

  70. Brian H said

    #69;
    A drunkard’s walk by a giant with VERY long legs?!😀

  71. Steve Fitzpatrick said

    Brian H,

    Wow. Averaging the trend in temperature of a pot of boiling water is not a meaningful representation of anything… even if you include in the average the temperature of your skin. Discussion of the 4th power law of radiative losses is not related to heat loss from the Earth’s surface in any simple way, because this rate of loss is controlled mainly by convective transport, not radiative transport.

    This issue of average temperature is not so complicated. If the Earth is in rough radiative balance (what is gained from the sun is equal to what is lost to space), and this balance is controlled mainly by convection to the upper troposphere, combined with radiative loss to space from the upper troposphere, then there should be some average temperature of the surface of the Earth. Will this average vary a bit over time? Sure, the Earth has seasons, and the Earth’s atmosphere is chaotic (AKA weather); gains and losses will change over time, even in the very short term. But the average temperature should respond to changes in energy gain and loss… if the sun got 20% brighter, we would for sure expect the average temperature of Earth’s surface to rise. It is simply (IMO) a bit crazy to suggest that no meaningful measurement of the average temperature can be formed.

    What is your background Brian? Are you a scientist or engineer, or is your training in some other field? I am honestly a bit puzzled by your analysis of this subject.

  72. Steve Fitzpatrick said

    Nullius in Verba,

    Presumably, the temperature in Bangkok is not closely correlated with the temperature in Oslo. If individual stations are subject to great local variation (which is what has been repeatedly discussed here), then the 1/N^0.5 drop in noise with increasing number of stations should not be a bad guess.

    The temperature of the Earth’s surface is a combination of 1) an integral of the radiative history (combined with the efficiency of transport of heat into and out of the oceans surface, which has a range of time constants), and 2) a very short term (essentially instantaneous) gain/loss of heat. The short term flow of heat is of course influenced by weather, but the long term trend should still reflect the overall heat balance for the system.

    The climate is just an energy balance. If the supplied energy increases, then the temperature pretty much has to rise to maintain a long term balance. Do you really doubt this?

  73. Al Tekhasski said

    Steve Fitzpatrick wrote: “But the average temperature should respond to changes in energy gain and loss… if the sun got 20% brighter, we would for sure expect the average temperature of Earth’s surface to rise.”

    This is one of the classic climatological blunders, and it is not generally true. An average temperature index is not a proxy for radiative imbalance. That’s why physics says “you can’t average temperatures”. One can of course do anything, but it will not make much sense. Let me illustrate again.

    Let a planet have only two climate zones: 50% equatorial with a uniform temperature T1, and 50% polar with T2. Consider the following “individual” cases of temperature distribution:

    (A) T1=295K, T2=172.8K
    (B) T1=280K, T2=219.4K
    (C) T1=270K, T2=236.9K
    (D) T1=260K, T2=249.8K

    These temperatures are within the planet’s reasonable range. The “global average temperature” in these cases varies from 234 K (case A) to 255 K (case D), a swing of 21 K. Yet all of the above combinations give you the same OLR of 240 W/m2, a perfect stationary state and global energy balance.

    Obviously, with small modifications of this example one can get a small warming trend in “global index of temperatures” while the system could be actually losing heat, and a “cooling trend” while the system is gaining heat.

    The above example illustrates why the “global temperature” is unphysical, and therefore why applying basic physics to this “index” may give a misleading impression, just like the conclusion about imminent planetary imbalance due to alleged radiative forcing from increasing CO2.
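Taking the two-zone example in #73 at face value, the claimed constancy of the OLR is easy to check numerically (a sketch assuming the Stefan-Boltzmann constant 5.67e-8 W/m2/K^4, unit emissivity, and a simple 50/50 area weighting).

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

cases = {"A": (295.0, 172.8), "B": (280.0, 219.4),
         "C": (270.0, 236.9), "D": (260.0, 249.8)}

for name, (t1, t2) in cases.items():
    olr = SIGMA * (t1**4 + t2**4) / 2     # area-weighted emission over two equal zones
    t_mean = (t1 + t2) / 2                # the simple "global average temperature"
    print(f"case {name}: mean T = {t_mean:5.1f} K, OLR = {olr:5.1f} W/m2")

All four cases come out within a few tenths of a W/m2 of 240, while the simple mean temperature ranges from roughly 234 K to 255 K, as stated in the comment.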

  74. Al Tekhasski said

    Steve Fitzpatrick (#63), you wrote several points that are questionable. You seem to be under the heavy influence of standard climatological “simplifications”.

    (a) “The overall trend is relatively clear if you look at a larger number of individual trends. … the noise level in any given station or pair of stations is too high to see a trend… But lots of averaged data reduces the noise level (by the classic 1/N^0.5 formula), so you really can see relatively small trends in noisy data if you look at enough data.”

    No. Individual trends are not “noise”; as I argued in my initial post, they are relatively accurately measured individual trajectories of the local dynamics. Each location has had this history of temperatures, and these temperatures contributed correspondingly to the integral of the planet’s energy balance and IR emission.

    Moreover, my example of neighboring stations having opposite 100-yr trends shows that you can NEVER have enough data with the current number of stations and their fixed locations. To have even a crude (but correct) estimate of the global trend, you need to start with a set of met stations on a regular 25×25 km grid, and have a 12×12 km grid to prove that your results converge. This would amount to 800,000 to 3.2M stations with 100-year-long records. It might appear that a 50×50 km (or even bigger) grid would suffice, but you must have at least certain areas with 25×25 and 12×12 km coverage to be reasonably sure.

    (b) “The satellite data is the extreme case of this, since there are many thousands of individual readings used in calculating a single daily average.”

    Not really. As I argued in my other post, you can average daily or monthly data, but you cannot trust a 100-year trend, because there is no way to calibrate the equipment over that long a period.

    (c) “A far more credible issue is why a large radiative forcing (not counting any feed-backs), currently near 3 watts per square meter, has not caused the kind of dramatic temperature increases that would be expected (>2C on average) if the climate sensitivity really were high.”

    I agree that climate sensitivity is a completely open question. However, there is also considerable doubt that the “radiative forcing” is really that high.
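The station counts in point (a) above follow from trivial arithmetic; a sketch assuming the Earth's total surface area is about 5.1e8 km2 (land alone would of course be roughly 29% of that).

EARTH_SURFACE_KM2 = 5.1e8   # approximate total surface area of the Earth

for spacing_km in (50, 25, 12):
    n_stations = EARTH_SURFACE_KM2 / spacing_km**2   # one station per grid cell
    print(f"{spacing_km:2d} x {spacing_km:2d} km grid: ~{n_stations:,.0f} stations")

This reproduces the ~800,000 figure for a 25 km grid and puts a 12 km grid in the low millions, the same ballpark as the 3.2M quoted (the exact number depends on whether one counts land only and on the precise spacing).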

  75. Brian H said

    Steve;
    That word Al used, “unphysical”, is scientist-talk for “unscientific, in violation of the laws of physics”.

    Just so you know.

  76. Brian H said

    A point Al makes about the required minimum grid: that it is regular. That is, the station positions are placed by a preset numeric rule, not by ANY whim, choice, or judgment of the Team (or anyone). That is a) the only way to assure statistical impartiality, and b) pragmatically impossible, for obvious reasons (some would be inside barns, some in the middle of rivers, others halfway down cliff-faces, etc., etc.).

    That is to say, the data-acquisition requirements of even a minimally adequate model of the globe and atmosphere are intractable. It is not adequate to pretend the errors average out. They do not.

  77. Brian H said

    Here is the conclusion of a paper by another of those nasty punctilious Germans, from 1998:

    The Climate Catastrophe
    – A Spectroscopic Artifact?

    by Dr. Heinz Hug

    Crucial is the relative increment of the greenhouse effect. This is equal to the difference between the sum of slope integrals for 714 and 357 ppm, related to the total integral for 357 ppm. Considering the n3 band alone (as IPCC does) we get

    (9.79×10^-4 cm^-1 – 1.11×10^-4 cm^-1) / 0.5171 cm^-1 = 0.17%

    Conclusions

    It is hardly to be expected that for CO2 doubling an increment of IR absorption at the 15 µm edges by 0.17% can cause any significant global warming or even a climate catastrophe.

    The radiative forcing for doubling can be calculated by using this figure. If we allocate an absorption of 32 W/m2 [14] over 180º steradiant to the total integral (area) of the n3 band as observed from satellite measurements (Hanel et al., 1971) and applied to a standard atmosphere, and take an increment of 0.17%, the absorption is 0.054 W/m2 – and not 4.3 W/m2.

    This is roughly 80 times less than IPCC’s radiative forcing.

    If we allocate 7.2 degC as greenhouse effect for the present CO2 (as asserted by Kondratjew and Moskalenko in J.T. Houghton’s book The Global Climate [14]), the doubling effect should be 0.17% which is 0.012 degC only. If we take 1/80 of the 1.2 degC that result from Stefan-Boltzmann’s law with a radiative forcing of 4.3 W/m2, we get a similar value of 0.015 degC.

    Kondratjew and Moskalenko are referring to their own work [15] – but when we checked their Russian book on that page, it turned out that this was nothing but an index of terms and nowhere else a deduction of this broadly referred 7.2 K figure [16] could be found. It should be mentioned that the radiative forcing for the present CO2 concentration varies considerably among references. K.P. Shine [17] specifies a value of 12 K whereas according to R. Lindzen CO2 only accounts for about 5% of the natural 33 degC greenhouse effect. This 1.65 degC is less than a quarter of the value used by IPCC and leads to a doubling sensitivity of 0.3 to 0.5 degC only [18].

    What is really true? Is there anybody to present a scientific derivation or a reference where this figure is not copied or just stated from assumptions, but properly calculated?

    (Here’s hoping I got the tags right! No preview here, like modern comment systems have …😉 )
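Purely as an arithmetic check of the figures quoted from Hug above (this verifies only the quoted ratios; it says nothing about whether the underlying spectroscopy or the 32 W/m2 band allocation is right):

# Numbers as quoted in the excerpt above
slope_714ppm = 9.79e-4      # cm^-1
slope_357ppm = 1.11e-4      # cm^-1
total_357ppm = 0.5171       # cm^-1

increment = (slope_714ppm - slope_357ppm) / total_357ppm
print(f"relative increment: {increment:.2%}")                     # ~0.17%
print(f"0.17% of the 32 W/m2 band: {32 * increment:.3f} W/m2")    # ~0.054
print(f"4.3 W/m2 is ~{4.3 / (32 * increment):.0f}x larger")       # ~80
print(f"0.17% of 7.2 C: {7.2 * increment:.3f} C")                 # ~0.012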

  78. Brian H said

    All OK, except there should be a linefeed after “Heinz Hug”. 🙂

  79. Brian H said

    In some of the voluminous discussions (linked to a zipped file in the original location) that followed the above paper’s appearance, Jack Bateman made the following (IMO very important) comments about arbitrarily assumed equilibria:

    With regard to the cooling of the atmosphere at high altitude there is no doubt that it is by radiative emission from water and carbon dioxide molecules (and the other greenhouse gases to a very slight extent) where their excited states are produced by collisional processes. The Nimbus satellite data show that the overall contribution of carbon dioxide to the warming process is about 17% and its contribution to cooling is 7%. In a system in proper thermal and radiative equilibrium and adhering to the principle of microscopic reversibility these figures would be identical. That they are not is simply because the system is not in a proper state of equilibrium.

    The warming and cooling mechanisms operate at all parts of the Earth’s surface at all times and lead to a quasi-thermal equilibrium of the atmosphere over a long time-period in that the total quantity of energy reaching the atmosphere/surface system from the Sun is, within error limits of plus or minus 4%, equal to that which is lost to space. There is an annual variation in the total amount of radiation received from the Sun (and that lost to space) of around 7% because of the ellipticity of the Earth’s orbit. In July when the Earth is farthest away from the Sun the daily dose of radiation is reduced to 96.5% of the mean annual dose. This coincides with the Summer in the Northern Hemisphere in which the warming is enhanced because of the tilt of the Earth’s axis with respect to its orbit. In the Southern Summer when these geometrical effects are reversed the daily dose of radiation goes up to 103.5% of the annual daily mean. Nevertheless, the global temperature in the Southern summertime is lower than that in the Southern wintertime. This is because the surface of Southern Hemisphere is mainly water which takes a longer time to warm up than the solid surface which constitutes more of the Northern Hemisphere. Superimposed on these annual changes are those attributable to the greater concentrations of carbon dioxide which the IPCC calculates to be two degrees of warming overall for a doubling of the pressure of the offending gas and arising from an increase of received radiation of some 1.7% annually. My objections to the calculations are well known and documented in several places. The predictions do not coincide with any observations. Any warming this century, as indicated by the flawed terrestrial record, occurred before 1940 and since that date the terrestrial and satellite records have shown trends that are insignificant from zero.

    The recent papers in Nature (to which I have referred to in a previous message, Vol 398, 11 March, 1999, page 121) and Science (Vol 283, 12 March, 1999, page 1712) show that the pressure of carbon dioxide has been rising for the last 8000 years, that the pressure of the gas rose significantly after the last three periods of glaciation had ended and that even in periods of decreasing temperature the gas pressure remained high. These data are simply not consistent with science as it is interpreted by the IPCC. Anti-science seems to be attractive to the media. For instance, the Today programme last week reported that because of global warming more carbon dioxide was dissolving in the oceans which were then becoming more acidic and causing death of corals. Discuss!

    The same fall-back hand-waving is still going on 12 years later: Death of Corals! That in many cases evolved with CO2 levels 10X higher! (But, of course, the warmer waters were dumping their CO2 because it doesn’t dissolve well in warm water so actually warmer seas would be more alkaline?) Confusion deliberately twice (or more) confounded! 😀

  80. Steve Fitzpatrick said

    Al Tekhasski #73,

    I think you have this all terribly confused. The total loss of energy by radiation to space has to be quite close to what is received from the sun (were this not true, the Earth’s temperature would have to change quite rapidly). Since the sun’s intensity is reasonably constant, the “power averaged radiating temperature”, has to also be reasonably constant, and near 255K. The surface temperature is not the temperature of the radiating level, and changes in surface temperature are driven by factors other than changes in the temperature at the average radiating level. If you look at the actual measured rate of heat loss from the Earth (for example http://wattsupwiththat.files.wordpress.com/2009/08/fitzpatrick_image1.png ) you can see that over most of the surface the rate of infrared loss ranges from ~200 to ~300 watts per square meter, corresponding to effective emission temperatures of ~240K to ~270K. Of course, in winter at the poles, the effective emission temperature will fall to less than 200K, but this represents only a very small fraction of the total surface area. It falls this much because there is no sunlight warming the surface at the poles during winter, so that all heat loss from the wintertime poles comes from transport of heat by the oceans and atmosphere.

    The point is that the average surface temperature can change independently of the average emission temperature. Stefan-Boltzmann is not what directly controls the surface temperature; it is the rate of transport of heat from the surface to the effective emitting level of the atmosphere that matters, along with the rate of transport of heat across the globe by the atmosphere and ocean currents.

    “You seem to be under hard influence of standard climatological “simplifications””

    I have no idea what that is supposed to mean. I am a scientist, but not a climatologist. The physics of radiative transfer is something I learned 30 years before global warming became a political issue; I don’t see how climatology could have influenced my understanding of this subject, or anything else I learned many years ago.

  81. Steve Fitzpatrick said

    Brian H,
    “That word Al used, “unphysical”, is scientist-talk for “unscientific, in violation of the laws of physics”.

    Just so you know.”

    I could be wrong of course, but based on several of your comments, I suspect you have enough understanding of the technical issues involved for a continued dialog to be very productive… for either of us.

    But I do wish you well.

  82. Steve Fitzpatrick said

    Sorry, that was supposed to be “do not have”. Trying to type too fast…

  83. RomanM said

    Jeff, I have written up a post looking at some of the “raw” GHCN data here.

    It raises some interesting questions about the meaning of the word “duplicates”…

  84. Al Tekhasski said

    Steve (#80), Let’s do some analysis of our seemingly conflicting points.

    You wrote: “The total loss of energy by radiation to space has to be quite close to what is received from the sun …. Since the sun’s intensity is reasonably constant, the “power averaged radiating temperature”, has to also be reasonably constant, and near 255K.”

    The averaged emission power in my example is 240 W/m2 ±0.1. Is this reasonably constant? I think yes. Whether it is “255K” or whatever temperature is artificially assigned to this average emission is not an issue. So far my example is not in disagreement with your construction.

    You continue: ” The surface temperature is not the temperature of the radiating level, and changes in surface temperature are driven by factors other than changes in the temperature at the average radiating level.”

    Yes, I intentionally omitted the greenhouse effect, for simplicity. However, if you invoke the standard GH model (the lapse-rate temperature projected down to the surface from the effective emission height), you should have a nearly proportional ground effect, on zonal average: tropics warmer, poles colder. Therefore this clarification of yours about the difference between the surface and the emission layer is inconsequential for the purpose of my illustration.

    You continue: “The point is that the average surface temperature can change independent of the average emission temperature.”

    Yes, this is exactly the point I am making, and it contradicts your other statement that an uptrend in the global temperature index is an [unconditional] indication of warming. At the level of my illustrative example it does not matter what else controls surface temperatures. My point was, climatologically speaking, that a variance in the global temperature index is not formally constrained by variance in the planetary radiative imbalance without invoking additional unstated and untested assumptions.

    So, which ones are the terrible confusions of mine?

  85. Steve Fitzpatrick said

    Al #84,

    You suggest, it seems, that there is a fixed connection between the surface temperature and the temperature of the emitting level. This is not really so.

    The greenhouse effect that you chose not to consider (for simplicity) is the real issue.

    The surface temperature can change (upward or downward) while maintaining the overall radiative balance of the earth (save for whatever heat must be accumulated in or lost from the ocean as a result of the change) because of changes in infrared absorbing gases, weather, and other factors. That is the whole point. If infrared absorbing gases increase in the atmosphere, then all else being equal, the surface temperature has to on average increase by some (unknown) amount so that radiative balance at the emitting level of the atmosphere is maintained. Averaging the temperature of the emitting level (on an emitting power basis) is not terribly informative, because you will find that it basically can’t change very much if considered over any extended period…. the sun doesn’t vary much in intensity.

    What can change is the average of the surface temperature. It can change due to weather (AKA noise), short (ENSO) and long term (PDO, AMO) natural cycles, and of course, changes in GHG’s.

  86. Brian H said

    Radiative balance applies (if anywhere) only to a system in equilibrium, which the atmosphere, except by presumptive fiat, is not.

    Work is energy deducted, encapsulated, withdrawn for a time (possibly a very long time) from radiative circulation. I.e., it need not be “radiatively balanced” anytime soon. The lag between fern photosynthesis, tectonic burial, reduction of carbohydrates to oil, drilling and release and combustion may be a multi-million year side trip.

  87. Al Tekhasski said

    Steve Fitzpatrick wrote: “The greenhouse effect that you choose do not consider (for simplicity) is the real issue.”

    We obviously look at things from opposite ends. With my example I have demonstrated that the global surface temperature index can change in any direction without disturbing the overall radiative balance. Therefore variations in temperature trends need not involve any more complicated factors, changes in the magnitude of the GH effect, or anything else. The system can walk randomly without violating any law of physics.

    “The surface temperature can change (upward or downward) while maintaining the overall radiative balance of the earth”

    Is that not what I said, twice?

    ” … because of changes in infrared absorbing gases, weather, and other factors. That is the whole point.”

    No, I have demonstrated that the global surface temperature can change without all that stuff and without any greenhouse effects while maintaining the overall radiative balance. My point was to illustrate that a change in the global temperature index tells nothing about the direction of the radiative imbalance. From the global index you cannot tell whether a GH gas change causes radiative forcing or not, and in which direction. In other words, “global temperature” is physical nonsense, as people are trying to tell you here. Do you finally agree with this?

    “If infrared absorbing gases increase in the atmosphere, then all else being equal, the surface temperature has to on average increase by some (unknown) amount so that radiative balance at the emitting level of the atmosphere is maintained.”

    I wonder what kind of assumptions you use to arrive at this construction. Are you considering that gases have a very “uneven”, comb-like spectrum of absorption? Are you considering that some layers of the atmosphere have no lapse rate (the tropopause), and some have a negative lapse rate (the stratosphere, where higher = warmer)? So, you seem to believe that “higher is always colder”, right?

    “Averaging the temperature of the emitting level (on a emitting power basis) is not terribly informative, because you will find that it basically can’t change very much”

    This is odd. Above you just subscribed to the concept of radiative forcing due to a GH increase, which _requires_ a change in the emitting level. This is the entire concept behind radiative forcing and the entire AGW theory. Now I am really confused about your position on the whole climate change issue.

    And again, you are boldly repeating standard climatological dogmas. Weather is not noise; temperature does not change because of a change in weather. Temperature is _defined_ by weather; it is a part of it.

  88. Steve Fitzpatrick said

    Al,

    “Above you just subscribed to the concept of radiative forcing due to GH increase, which _require_ a change in emitting level.”

    Yes, there has to be a change in the emitting level (altitude), but not in the emitting temperature. To maintain energy balance, the power-weighted emission temperature must be reasonably constant in the long term. The effective emitting level does change with latitude and season, of course, and can approach the surface (e.g. Antarctica in winter); adding infrared absorbing gas is expected to increase the average altitude of the effective emitting level a little, but should not change the power-weighted average emission temperature in the long term.

    “you are boldly repeating standard climatological dogmas.”

    Well, I am not a climatologist, and I am very skeptical of most of climatology, especially the large projected temperature increases (high climate sensitivity) claimed by most climatologists; I find these claims are a) not formally testable, and b) not well supported by the preponderance of the data. I am also very skeptical of ocean circulation models, which seem to me designed mainly to be consistent with a supposed high climate sensitivity, not to be consistent with ocean heat and CO2 absorption data. But I am a physical scientist, and I try to rationally evaluate how the world works. Certain “dogmas” of climatology, like the influence of infrared absorbing gases on heat transport to space, seem to me perfectly consistent with physical reality as I understand it. Sometimes “dogmas” (like E= MC^2, Heisenberg uncertainty principle, thermodynamics, etc.) are very good representations of reality, sometimes they are not (eugenics).

    I am by no means alone in my assessment; if you investigate, you will find that most physical scientists of all stripes (not just climatologists) accept that adding infrared absorbing gas to the atmosphere ought to increase surface temperature by some amount; how much is the important question. Even well known skeptical climatologists like Richard Lindzen agree with this. Heck, our gracious host Jeff Id agrees with this, and he is surely a bona fide climate skeptic! An old undergraduate professor I had (now long retired,) who taught physical chemistry and instrumental analysis, agrees, even though he says most of climatology is “rubbish”. This broad agreement on radiative forcing has nothing to do with “dogma”, conspiracy, politics, or group-think, it has to do with rational evaluation of the physical processes involved.

    “Weather is not noise, temperature does not change because of change in weather, temperature is _defined_ by weather, it is a part of it.”

    This is a word game that does not contribute to understanding the system.

  90. Al Tekhasski said

    Steve, you wrote: “Yes, there has to be a change in emitting level (altitude), but not in emitting temperature.”

    The entire central idea of “radiative forcing” is that an increase in the altitude of overall (average) IR opacity leads to reduced OLR, because the lapse rate is considered fixed, and the surface boundary condition (SST) is also fixed due to huge thermal inertia (mostly of the oceans). The OLR is supposed to be reduced because “higher = colder” in the standard AGW theory of averages. Therefore, within the standard AGW theory your statement is incorrect, unless you have something deeper in mind but are not telling us.

    There is no doubt that doubling CO2 will affect the radiative properties of the atmosphere, especially in the 14-16 µm range, which constitutes 9% of the energy-containing IR interval. You seem to be saying that increasing IR opacity along all CO2 lines does not change the OLR (“emitting temperature”), so no actual imbalance occurs, and therefore a CO2 change does not affect GW. Incidentally, I share this idea, and even presented an example here on why it could be so.

    Do you have a different theory in mind?

  91. Steve Fitzpatrick said

    Al #90

    “and the surface boundary condition (SST) is also fixed due to huge thermal inertia (mostly of oceans).”

    The surface temperature is most certainly not fixed; it varies daily and seasonally over substantial ranges, even over the ocean. While the ocean certainly presents a range of response times, most data I have seen suggests the majority of the response is relatively quick. What makes the average emission level move higher is mainly absorption/re-radiation in the 14-16 micron region (but there is some band broadening on both sides, depending on the altitude). The presence of any infrared absorbing gas (not just CO2) slows loss of heat to space; CO2 represents only about 65% of current radiative effect from IR absorbing gases.

    All else being equal, a higher emitting level would be on average cooler, but cooler means less radiation lost to space. The temperature profile of the entire atmosphere (surface to tropopause) must increase slightly so that the energy lost equals the solar energy gained. Note that IR absorbing gases make no difference below the effective emitting level…. radiative transfer at lower levels is minor, and physical transfer (convection) dominates.
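The "effective emitting level" picture both sides keep referring to reduces to very simple arithmetic; a back-of-envelope sketch assuming ~240 W/m2 of absorbed sunlight, a mean surface temperature near 288 K, and a 6.5 K/km lapse rate (all round, assumed numbers, not a claim about the actual forcing).

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m^2/K^4
ABSORBED = 240.0       # W/m^2, approximate absorbed solar flux
T_SURFACE = 288.0      # K, rough global mean surface temperature
LAPSE_RATE = 6.5       # K/km, typical tropospheric lapse rate

# Effective emission temperature needed to balance the absorbed sunlight
t_emit = (ABSORBED / SIGMA) ** 0.25
# Altitude at which the assumed mean lapse rate gives that temperature
z_emit = (T_SURFACE - t_emit) / LAPSE_RATE

print(f"effective emission temperature ~ {t_emit:.0f} K")   # ~255 K
print(f"effective emission height      ~ {z_emit:.1f} km")  # ~5 km
# Under the fixed-lapse-rate assumption, lifting the emission level by ~150 m
# while keeping t_emit fixed implies roughly 0.15 km * 6.5 K/km ~ 1 K at the surface.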

  92. Al Tekhasski said

    Steve, “The surface temperature is most certainly not fixed; it varies daily and seasonally”

    Why should it be so difficult? Obviously I was talking about the general climatological “average”, which is the standard term of the standard AGW model of “radiative forcing”. Of course surface temperatures vary, locally and temporarily. But, how long does it take to heat all oceans by 1K?

  93. Steve Fitzpatrick said

    Al #92,

    “Why should it be so difficult?”

    Mostly because it is a very complicated system. Assumed simplifications can make the analysis “easier”, but they run the risk of generating incorrect conclusions unless the simplifications are carefully examined. In discussions of global warming, it seems to me altogether too many simplifications are accepted without critical analysis, and lead to crazy conclusions.

    “But, how long does it take to heat all oceans by 1K?”

    I am not certain if you mean the ocean surface (mixed layer) or if you mean warming the entire ocean by 1K.

    Warming the entire ocean by 1K would take 1000+ years (if it ever happened at all!), so that doesn’t seem terribly relevant in evaluating the response to forcings that change over 1-50 years. Most all fossil reserves will be exhausted within 100 or 150 years if consumption trends do not change a lot, and atmospheric CO2 levels will then HAVE to fall, so calculation of the “ultimate response” to a constant high level of CO2, 500+ years out, is both silly and terribly misleading. Yet this “ultimate response” sensitivity is what CAGW climatologists always talk about. The “immediate” (50 year response) is far lower, even according to the GCM’s. The “ultimate response” is just a scare story that can’t happen.

    What matters is how quickly the ocean accumulates or releases heat at a significant rate, and this is poorly known. Ocean circulation models seem to me to be in clear disagreement with ARGO data. Much of the top 100 meters in the sub-tropics and mid latitudes experiences far more than 1K change on a seasonal basis, so the seasonal heat flux into and out of the ocean is huge compared to the rate of net accumulation that might be expected from radiative forcing. Much of the tropical ocean changes less than 1K seasonally.

    To me it looks like the ocean response to an instantaneous change in forcing (Pinatubo for example) mostly takes place within 5 years. Longer ocean lags, and certainly these are also present, look like relatively small contributors; most of the response to a change in radiative forcing should take place pretty quickly. Steve Schwartz at Brookhaven National Laboratory drew a huge amount of criticism from CAGW climatologists when he suggested the effective ocean lag is only ~8.5 years, corresponding to best estimate “ultimate response” to doubling of CO2 of only 2/3 the IPCC “best estimate”. He has not changed his mind.
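The "1000+ years" figure in #93 is at least the right order of magnitude; a crude sketch assuming an ocean mass of about 1.4e21 kg, a specific heat near 4000 J/kg/K, and a sustained imbalance of a few tenths of a watt per square meter over the whole globe (all assumed round numbers).

OCEAN_MASS = 1.4e21        # kg, approximate mass of the world ocean
HEAT_CAPACITY = 4000.0     # J/(kg K), rough specific heat of seawater
EARTH_AREA = 5.1e14        # m^2, total surface area of the Earth

energy_needed = OCEAN_MASS * HEAT_CAPACITY * 1.0   # joules to warm the whole ocean by 1 K

for imbalance in (0.25, 0.5, 1.0):                 # W/m^2, assumed sustained imbalance
    seconds = energy_needed / (imbalance * EARTH_AREA)
    print(f"{imbalance:4.2f} W/m2 sustained: ~{seconds / 3.15e7:,.0f} years to warm the ocean 1 K")

At a few tenths of a W/m2 this comes out in the range of several centuries to over a thousand years, which is why the "ultimate response" time scale is so long compared with the forcing changes under discussion.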

  94. Brian H said

    There’s also the wee issue of the 85% of the planet’s volcanoes pumping away under the oceans, putting huge amounts of CO2-saturated water into play. That input, counter-balanced by the conversion of CO2 into various forms of subsea rock by microbiota, actually determines the CO2 content of the atmosphere. All the mega-flora and -fauna like the trees and humanity are minuscule bit players by comparison. 😀

  95. Steve Fitzpatrick said

    Brian H #94,

    Some supporting data or references to studies on the estimated volumes of CO2 released by undersea volcanoes would be helpful. Since samples of deep water have been collected and the concentration of CO2 in them measured (corresponding roughly to the equilibrium concentration expected for very cold water in contact with ~280 PPM of CO2 in the air), I am puzzled why these samples of deep water “saturated” with CO2 have not received wide discussion.

    By the way, at the pressures and temperatures in the deep ocean (300+ atmospheres, ~2-4C temperature) “saturated” CO2 corresponds to a very high weight fraction CO2. It has not received a lot of study, but it appears to be somewhere near 9% by weight CO2 in the saturated water/CO2 solution (see http://pubs.acs.org/doi/abs/10.1021/ja01861a033). Normal concentrations for CO2 dissolved in the ocean are many orders of magnitude lower than “saturated”. Can you point to some study which shows that the deep ocean (anywhere) contains water “saturated with CO2”?

    Ditto on the rate of “conversion of CO2 into various forms of subsea rock by microbiota”.

    It is clear that green plants (ocean and land together) dominate the composition of the atmosphere, since the O2 concentration in the atmosphere is orders of magnitude higher than that of atmospheric CO2. Other sinks and sources for CO2 (usually called “slower” parts of the carbon cycle) are most often reported as having significant influence on much longer time scales (thousands of years to geologic time scales). I have never seen studies that suggest other biological processes (“conversion of CO2 into various forms of subsea rock by microbiota”) absorb CO2 at a rate that dominates the combined absorption of atmospheric CO2 by green plants and the net absorption by the oceans.

  96. Brian H said

    There’s much information coming out on megaplumes and subsea flood basalts, etc., such as http://www.ajsonline.org/cgi/content/abstract/309/9/788, but I see that you are busily building strawmen to attack. Saturated plumes at depth are of course dispersed, and the CO2 follows a longish transition to the surface. And the “net absorption by the oceans” of course is meaningless; the ultimate sinks are the calcite and other formations which ultimately make up much of the seafloor, resulting mostly from bioactivity within the oceans. Much of that ends up in the mantle through subduction, and much later is released.

    As for lumping oceanic “green plants” in with land plants, that’s like combining a moose and a mouse in one package. It is now being found that 1 ml of seawater is likely to contain about 1M bacteria and 10M viruses, with more species than all others on the planet combined spread throughout the oceans, not to mention the archaea which also inhabit the crust, possibly down to tens of kilometers.

    In any case, there are instances where massive influx of CO2 from traps failed to cause substantial extinction or disruption ( http://www.semp.us/publications/biot_reader.php?BiotID=681 ). Such exceptions disprove the rule, not prove it.

  97. Steve Fitzpatrick said

    Brian H,

    From your reference: “In drill cuttings that contain epidote, prehnite, quartz and calcite, using measured epidote compositions between the reference temperatures of 275°C and 310°C, calculated values of PCO2 for the geothermal fluids range from ~0.6 to ~6.2 bars. When only epidote, prehnite and quartz are observed in the drill cuttings, the calculated range of PCO2 is from ~1.3 to ~6.8 bars, which provides the maximum value of PCO2 at which calcite will not be present. The present day PCO2 values of geothermal fluids from the Reykjanes system were derived from analytical data on liquid and vapor samples collected at the surface from wet-steam well discharges using both the WATCH and SOLVEQ speciation programs. The geothermal fluids at reference temperature between 275°C and 310°C have PCO2 concentrations ranging from 1.3 bars to 4.0 bars.”

    So the partial pressure of CO2 INSIDE a hydrothermal vent is calculated by these authors to be between 0.6 and 6.8 atmospheres. The combination of CO2 pressure and temperature in your reference suggests somewhere under 0.1 g of CO2 per liter of water (exactly how far under 0.1 g per liter, I do not know). This is really not a very high concentration of CO2. And what has this article to do with your statement:
    “That input, counter-balanced by the conversion of CO2 into various forms of subsea rock by microbiota, actually determines the CO2 content of the atmosphere. All the mega-flora and -fauna like the trees and humanity are minuscule bit players by comparison”?

    To actually show that “mega-flora and -fauna like the trees and humanity are minuscule bit players”, you would have to show the rates of the additions and sinks that you point to compared to the best estimates for trees and animals. But why would you not include mini and micro-flora/fauna (bacteria, phytoplankton, etc.) as an important part of relatively short term biological sources and sinks for CO2? Why limit this comparison to “mega” size species? I have not a clue.

    I suggest that you consider using references with data which actually supports your arguments instead of pointing to some article that happens to contain words you imagine are relevant.

    As I said once before, exchanging comments with you is clearly not a productive use of time for either of us; I should have stuck to my plan to avoid further exchanges. Scientists and lawyers do not appear to think about the technical issues involved in global warming in even remotely similar ways.
