the Air Vent

Because the world needs another opinion

Archive for July, 2011

Polar Bear Science Mysteries Exposed?

Posted by Jeff Id on July 29, 2011

There is an interesting dynamic in the full transcript of this interview, which I only read this morning. Carrick's comment below got me interested enough to read the whole thing in detail, and while I don't believe the data warrants the conclusions discussed, it leaves the impression of a beaten-up scientist trying to survive inside a political system.

7_28_11_Monnett-IG_interview_transcript[1]

CHARLES MONNETT: They basically blew everybody out of here that showed any, uh, desire to be a conscientious scientist. Jeff Gleason was one of those. Did he tell you his story?
ERIC MAY: Not pertaining to this.
CHARLES MONNETT: Yeah, well, we got blasted, you know, really, uh, hard, you know, by this agency when, when this finding came out, and if you've been digging in my emails, and I don't know if you've dug in my emails, or it's just Jeff Loman selecting it, but you'll see a lot of emails there, uh, from management to me telling me that I can't function as a biologist. I'm not allowed to talk about this paper or our findings. I'm not allowed to talk to the media. I can't, you know, I can't do these things.
And, and they really dumped on us, um, uh, when this things came out, and then that, um – uh, and, and some other, um, manipulation and restrictions that Jeff got hit with caused him to bail out of here, and he, he took a cut from a GS-13 to a GS-11 position to go to the Fish and Wildlife Service. He's back with MMS now, um, in the Gulf, um, but he, he, um – and he's back I think at a level below what he was at here. But his motivation there was he has a, a, a woman down there that he, you know, has linked up with, and he was looking for any job and, you know, because I think he felt that, uh, the Alaska Region is kind of special in the way it treats its scientists.

I still find it difficult to feel sorry for people with such astoundingly cushy, high-paying jobs, but there is apparently a lot of pressure to toe the line on the boss's political preferences. Not a good place to do science of any kind.

Check this out at WUWT, AKA the center of the internet.

Inspector general’s transcript of drowned polar bear researcher being grilled

Posted in Uncategorized | 44 Comments »

A reply to Dr. Jim Bouldin

Posted by Jeff Id on July 27, 2011

I've been gone for a while working on other things. MikeN called my attention to a criticism by Jim Bouldin of my 'probing' of the hockey stick CPS methods. Since the Air Vent wouldn't even be a climate blog if it weren't for Mann 08, it seems important to address Jim's criticisms. As it is my blog, the cool thing is that I can shove comments right into the middle of his criticisms to point out the issues of disagreement.

——

Oh boy. BIG problem with Jeff ID’s point that you quoted above.

To summarize: He is arguing that a hockey stick emerges as an artifact of the method used for screening proxies to include in a reconstruction (with specific reference of course, to *Mike Mann’s* reconstructions). The cause of this artifact production is supposed by him to be due to the fact that: “…The series are scaled and/or eliminated according to their best match to measured temperature which has an upslope. The result of this sorting is a preferential selection of noise in the calibration range that matches the upslope, whereas the pre-calibration time has both temperature and unsorted noise.”

This statement is entirely *false*, and it is so on several levels (including use of poor terminology such as “sorted” to mean screened). Not only is it false, it shows a phenomenal lack of attention to the most basic of facts, as presented by Mann et al in their 2008 paper in PNAS, both in the main paper and in the supplemental material. To wit:

There were 1209 proxies (from some larger candidate set) that met three initial screening criteria, (such as minimum length of record and stated minimum correlation among the individual members at a given site). From these 1209 records, a nominal screening cutoff of p < .10 with either of the two closest instrumental temperature grid points, was established. (After accounting for temporal autocorrelation, this p value rises slightly to p < .128). If one assumes a positive relationship between ring measure and temperature (i.e. one tailed test), the expected number of sites meeting this criterion is: 1209 * .128 = 155. (If one assumes that either a positive or negative relationship might occur, which they do not, the number is half that, about 78.)

The actual number of sites that passed this screening: 484, or over 3 times the number expected based on chance alone, (i.e. assuming no relationship between rings and temperature, and using a one tailed test).

See, Jim has several misunderstandings. The point he repeats from M08 is that Mannian correlation passed so many proxies that it couldn't be by accident, so they must truly be temperature!! This is the same point proven wrong here so often; Jim clearly doesn't read here. The quick red-noise sketch below shows the selection effect directly.
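This is a minimal sketch (my toy code, nobody's published method): generate pure AR(1) noise containing no climate signal at all, screen it against a rising calibration record at the p < 0.128 one-tailed cutoff, and average the survivors.

set.seed(42)
nProxies <- 1000
nYears   <- 600
calib    <- 501:600                                # last 100 "years" = calibration
temp     <- seq(0, 1, length.out = length(calib))  # upsloping instrumental record
proxies  <- replicate(nProxies, as.numeric(arima.sim(list(ar = 0.3), n = nYears)))

# keep only series correlating with the upslope at p < 0.128, one-tailed
pass  <- apply(proxies, 2, function(x)
  cor.test(x[calib], temp, alternative = "greater")$p.value < 0.128)
recon <- rowMeans(proxies[, pass])
plot(recon, type = "l", xlab = "year", ylab = "composite of passing series")
# the composite is flat noise before year 500 and slopes up through the
# calibration window: a hockey stick from series containing no signal

The passing fraction runs near the nominal rate, yet the composite still grows a blade; that is the sorting artifact in one picture.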

First, remember Tiljander, which was simply flipped to inflate the count of correlated series; from memory, that accounts for 3? of the 484 series, but it is worse than that. Luterbacher, which included 71 series of ACTUAL physical temperature data, also skewed the results, so subtract another 71 bs proxies from the 484: the portion of the "proxies" Mann checked for correlation to temperature was actual temperature.

Don’t worry, I’m not done yet!

Read the rest of this entry »

Posted in Uncategorized | 70 Comments »

Roman M’s anomaly combination incorporated into R

Posted by Jeff Id on July 23, 2011

As long-time readers know, I'm a fan of Roman's temperature combination method, which doesn't require a base-period window to offset individual station anomalies in global temperature averaging. Steve Mosher has incorporated Roman's work into his own R code.

Roman’s Method – Steve Mosher
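For readers new to the idea, here is a toy sketch of the underlying least-squares approach as I understand it (Roman's and Mosher's real implementations differ in detail): solve jointly for a common time series and per-station offsets, so no common base period is ever chosen.

set.seed(1)
nStations <- 5
nMonths   <- 240
signal <- cumsum(rnorm(nMonths, sd = 0.1))      # shared climate signal
offs   <- rnorm(nStations, sd = 3)              # fixed station offsets
x <- outer(signal, rep(1, nStations)) + outer(rep(1, nMonths), offs) +
  rnorm(nMonths * nStations, sd = 0.2)
x[sample(length(x), 300)] <- NA                 # ragged station coverage

df <- data.frame(temp    = as.vector(x),
                 time    = factor(rep(1:nMonths, nStations)),
                 station = factor(rep(1:nStations, each = nMonths)))
fit <- lm(temp ~ 0 + time + station, data = df)
# coef(fit)[1:nMonths] recovers the common series (up to a constant),
# even though the stations never share a complete base period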

Posted in Uncategorized | 8 Comments »

Introduction to RghcnV3 – Steve Mosher

Posted by Jeff Id on July 18, 2011

Steve has put together a high-quality series of algorithms for combining surface station data in R. He has combined methods from a wide variety of sources, with the following example incorporating ideas from my favorite blog, by Tamino. This is the kind of science I like: open source, nothing hidden, no motivations, just basic numbers combined in reasonable fashion. I do hope these packages get recognized for their value, as they have quite a bit of power for general climatology use.  — Jeff

Today I'm going to show you some of the basics of the RghcnV3 R package. Version 1.2 is staged for release on CRAN (http://cran.r-project.org/) and should be going live any day now. If you can't wait, just ping me and I'll ship you the sources, which will work on any OS.

First, a few preliminaries. The package is designed as a set of common functions that a programmer can use to quickly write his own studies. In this example I've gone very light on the graphics code; I could spend hours crafting graphics functions dedicated to these problems, but in the end it's very hard to parameterize or abstract that part of the analysis. In short, for now, you have to make your own pretty graphs. What I'll show here are just basic sanity checks.

The data this package focuses on is GHCN v3, which has just come out of beta. We start by downloading the data.

files <- downloadV3()

For every type of file you want to download in RghcnV3 there is a corresponding download function, cleverly named download*. Each of these functions has defaults:

downloadV3(url = V3.MEAN.ADJ.URL, directory = getwd(), overwrite = TRUE, remove = FALSE)

The functions download the data and unpack it using the R.utils package. Hands free! The download functions are all designed to return the file name(s) of the uncompressed file(s).

> files
$DataFilename

[1] "C:/Users/steve/Documents/GHCNPackage2/ghcnm.v3.0.0.20110714/ghcnm.tavg.v3.0.0.20110714.qca.dat"

$InventoryFile

[1] "C:/Users/steve/Documents/GHCNPackage2/ghcnm.v3.0.0.20110714/ghcnm.tavg.v3.0.0.20110714.qca.inv"

$Date

[1] "2011-07-14"

For every file type there is also a corresponding function, cleverly named read*:

v3Mean <- readV3Data(filename = files$DataFilename)

v3Inv <- readInventory(filename = files$InventoryFile)
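With the data read in, a quick look at the objects never hurts (my habit, not a package requirement; the exact column layout is described just below):

str(v3Mean)   # expect the 14-column V2-style layout: id, year, 12 monthly values
head(v3Inv)   # station metadata: ids, names, coordinates, etc.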

GHCN v3 has a new format with 51 columns of data; they've added QC data to every temperature file. To maintain compatibility with old code that works with the Version 2 format, readV3Data() outputs a data structure that looks just like V2: 14 columns (id, year, and 12 months of data). Next, we want to trim, or window, the temperature data, because the record starts in 1801:

startYear <- 1900

endYear <- 2010

v3Mean <- windowV3(v3Mean, start = startYear, end = endYear)

If you are familiar with the window() command from working with time series, you'll understand what this does: windowV3() "windows" a v3 object.

For this study I'm going to use Tamino's regional station combining algorithm, so I have to select a region of the world and get the stations for that region. That means working with the inventory data, which contains the location information. I have a couple of ways to do that. The first is a gross spatial "crop" of the inventory using the cropInv() command, which takes a geographical extent: extent(lonMin, lonMax, …). But I also have a much slicker way to skin this cat: using the "maps" package, I'll select those stations deep in the heart of Texas.
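To make both approaches concrete, here is a hedged sketch; the cropInv() argument name and the inventory's Lon/Lat column names are my assumptions, so check the package documentation for the real signatures:

# gross crop by bounding box, using raster's extent()
library(raster)
texasInv <- cropInv(v3Inv, extent = extent(-107, -93, 25.5, 36.5))

# or keep only stations inside the Texas polygon from the "maps" package
library(maps)
library(sp)
tx   <- map("state", "texas", plot = FALSE)
poly <- na.omit(cbind(tx$x, tx$y))   # crude: drops polygon separators
inTx <- point.in.polygon(v3Inv$Lon, v3Inv$Lat, poly[, 1], poly[, 2]) > 0
texasStations <- v3Inv[inTx, ]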

Read the rest of this entry »

Posted in Uncategorized | 32 Comments »

Subsampled Confidence Intervals – Zeke

Posted by Jeff Id on July 15, 2011

Sub-sampling is hard to argue with. Zeke has done a post on global temperatures at the Blackboard, drawing 500 random subsets of stations and looking at the spread of the resulting averages. It presents a set of very tight error bars reflecting weather variance, sampling error, and any other random events which affect measurements. The error bars don't incorporate any systematic bias, but there is an amazing amount of detail in the result.

Global temperature extremes using 5 percent of available stations selected at random 500 times

Method:

To test if this is in fact true, we can randomly select subsets of stations to see how a global reconstruction using only those records compares to other randomly selected stations. Specifically, I ran 500 iterations of a process that selects a random 10% of all stations available in GHCN v3 (which ends up giving me 524 total stations to work with, though potentially much fewer for any given month), created a global temperature record with the stations via spatial gridding, and examined the mean and 5th/95th percentiles for each resulting month.
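The mechanics are easy to sketch. This toy version (mine, not Zeke's code) skips the spatial gridding entirely, so it only illustrates the subsample-and-take-percentiles step:

set.seed(7)
nStations <- 5000
nMonths   <- 120
anoms <- matrix(rnorm(nStations * nMonths, sd = 2), nMonths, nStations) +
  seq(0, 0.5, length.out = nMonths)    # small common trend buried in noise
draws <- replicate(500, rowMeans(anoms[, sample(nStations, nStations %/% 20)]))
bands <- apply(draws, 1, quantile, probs = c(0.05, 0.5, 0.95))
matplot(t(bands), type = "l", lty = c(2, 1, 2), col = 1,
        xlab = "month", ylab = "anomaly")
# the dashed lines bound what random 5% subsamples agree on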

The plot above uses only 5% of the data, but the point of this exercise is that Zeke has proven beyond any shadow of doubt that we do have enough station data to determine temperatures to an effectively consistent level. Whether the data is clean enough to yield a high-quality trend is another question, as systematic bias is also perfectly recorded in the above record. Note how the uncertainty expands both in the past and in recent years, due to the lack of station data in both periods.

Back when I still worked with numbers, I performed a similar analysis on the Ljungqvist proxy data. The method is just too simple and direct to argue with.

 

Read the rest of this entry »

Posted in Uncategorized | 47 Comments »

Chugging Along

Posted by Jeff Id on July 13, 2011

Steve Mosher has continued his work on his GHCN station global temperature reconstruction package in R. The most recent version is documented on his blog here. He has developed an elegant set of commands for computing global temperatures in R and is working toward releasing an R library package for that purpose. Recently he incorporated a method from Berkeley which Tamino posted. Steve has decided to work entirely on the global temperature datasets at his blog, and originally I wasn't sure whether he would keep going or what it would turn into. I check on what has been happening from time to time and got tired of having to google him to find it, so I placed the links on the right. He appears to be doggedly developing a very complete package of commands which will process the raw data by a variety of reasonable (and documented) methods.

Posted in Uncategorized | 10 Comments »

Decisions we don’t need your help with

Posted by Jeff Id on July 10, 2011

One of the several ignorant things that President Bush did was sign a law banning incandescent lights. Yes, it came from a liberal House and Senate, but he did sign it. It looks like the House will send a repeal of it up for a vote shortly, but it won't likely get past our anti-economic-growth president. In the past, I've made the point that incandescent bulbs are wonderfully efficient distributed heaters. Lubos Motl also made the point, humorously referring to them as heatballs which happen to produce some light to indicate they are working. Of course, plenty of others have made this point as well.

As an engineer, I find these concepts very familiar. The problem is that the 'all of the above' greens saw an incandescent lamp's efficiency of under 5% energy conversion to visible light and decided that we could save the world incrementally, with one step being the banning of the incandescent light.

First, even if you accept IPCC CO2 global warming sensitivity values, I flatly deny that the microscopic improvement in CO2 savings from the ban is worth consideration. Even replacing all the incandescent lights in the world with CFLs tomorrow won't make a detectable difference in atmospheric CO2. Shutting off all the lights might, but even most hardcore greens would not support that.

Read the rest of this entry »

Posted in Uncategorized | 140 Comments »

Uncertainty in Air Temperatures

Posted by Jeff Id on July 9, 2011

Pat Frank has a new article recently published in E&E on the statistical uncertainty of surface temperatures. He has requested an open venue here for discussion of his work. This is an opportunity for readers to critically assess the methods and understand whether the argument and conclusion are sound. – Jeff

—————-

Uncertainty in Surface Air Temperature – by Pat Frank

All you fellow evil climate-change deniers out there may recall that last December, Energy and Environment published my paper [1] on the unapt statistics used by the scientists at CRU/UEA and the UK Met Office [2] to represent uncertainty in the global average surface air temperature (GASAT).

The paper uncovered two mistakes in the published CRU error model:

1) The statistics of random error were uncritically and wrongly applied.

2) Systematic instrumental error was completely neglected.

The GISS methodology is as poorly realized, apparently, given the very small error bars that consistently accompany their published anomaly trends (described below).

When these mistakes were rectified, the estimated lower limit of uncertainty in any given global annual surface air temperature anomaly proved to be (+/-)0.46 C. This uncertainty is sufficient to make the surface air temperature anomaly trend statistically indistinguishable from 0 C, at the 1-sigma level, over the entire 20th century.

Figure 3 from that paper:

For those interested, the reprint is freely available, courtesy of Bill Hughes at Multi-Science Publishing (pdf download). A summary discussion also appeared last January here on tAV and on WUWT.

That paper represented only half the study, however. The second half is now just out, again in E&E, (2011) 22(4). It's open access, and a reprint is available here (pdf download).

The new paper uncovers and explores the consequences of two further mistakes in the CRU (UK Met, GISS) GASAT statistical uncertainty model. The CRU model:

  1. Implicitly asserts that all daily temperatures are constant across any given month.
  2. Completely neglects magnitude uncertainty [designated as (+/-)s], which is the uncertainty in a mean that must be applied when averaging a measured set of observables that have inherently variable magnitudes.

In Ref. [1], the CRU statistical model was shown to strictly follow signal-averaging "Case 1," wherein the 1/sqrt(N) error reduction is applied to the measurement mean and magnitude uncertainty, (+/-)s, is absent.

Given these statistical conditions, the necessary physical corollary to Case 1 is that all the measurement error is random and that all the observables are of inherently constant magnitude.

What does it mean in terms of air temperature, when “all the observables are of inherently constant magnitude”? It means that the CRU statistical uncertainty model has an unstated and implicit physical assumption: one of constant air temperature.
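A toy calculation makes the distinction concrete (my gloss on the argument, not code from the paper):

set.seed(3)
N     <- 30
trueT <- 20 + 8 * sin(2 * pi * (1:N) / N)   # daily temps whose magnitudes vary
sigma <- 0.2                                # random measurement error, C
obs   <- trueT + rnorm(N, sd = sigma)

case1 <- sigma / sqrt(N)   # ~0.04 C: valid only if trueT were constant
magS  <- sd(obs)           # ~5.7 C: the magnitude variability, (+/-)s
# quoting case1 alone as the monthly-mean uncertainty implicitly assumes
# the temperature was the same every day of the month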

Read the rest of this entry »

Posted in Uncategorized | 312 Comments »

Climate Sensitivity – NicL

Posted by Jeff Id on July 8, 2011

Nic Lewis asked me to post a link to Judith Curry's blog carrying a letter to Gabi Hegerl regarding the significant problems in the AR4 Working Group 1 forcings. As I understand it, this was the only non-modeled forcing analysis presented in AR4; in other words, the only truly observed data had a substantial change made to it prior to presentation. I strongly recommend that readers continue to follow this situation closely, as it gets to the heart of what appears to be a significant problem in AR4. Click the title below for a link to the article at Dr. Curry's blog.  — Jeff

———————————————————-

Climate sensitivity follow-up

by Nicholas Lewis

JC note:  Pursuant to Nic’s post on “The IPCC’s alteration of Forster & Gregory’s model-independent climate sensitivity results,” he has sent a letter to Gabi Hegerl, who was coordinating lead author on chapter 9 of the IPCC AR4.

Posted in Uncategorized | 3 Comments »

The IPCC’s alteration of Forster & Gregory’s model-independent climate sensitivity results

Posted by Jeff Id on July 5, 2011

Nic Lewis has made an interesting discovery deep within AR4; he sent some information on it to me last month. It appears that the likelihood of high climate sensitivity was tweaked a little with respect to the cited literature. He's asked me to carry the link, as he posted the piece on Judith Curry's site. It is fairly technical, so I've asked to repost it below.

In the meantime, this is not a small deal. It certainly leaves one wondering how such a thing happens by accident in such a key portion of the report.

Check it out here.

Posted in Uncategorized | 29 Comments »

Updated Spencer Ocean Model

Posted by Jeff Id on July 2, 2011

Roy Spencer pointed out that his simple ocean model had an error of a factor of ten in the heat capacity of water.  It is a simple spreadsheet linked below that anyone can work with.

simple-forcing-feedback-ocean-heat-diffusion-model-v1.0
FOLLOWUP NOTE: The above spreadsheet has an error in the equations, which does not change the conclusions, but affects the physical consistency of the calculations. The heat capacity used for water is 10 times too low, and the diffusion coefficients are also 10x too low. Those errors cancel out. I will post a new spreadsheet when I get back to the office, as I am on travel now.

In the missing heat thread, Paul K also noted the error and claimed to have found an energy conservation problem as well:

Jeff et al,
I pulled down Dr Spencer’s spreadsheet with a view to testing higher order integration, and discovered that there are two major errors in the spreadsheet, which probably make further conversation on his findings a bit useless, at least until he has had a chance to review and correct the errors, and modify his conclusions accordingly.
1) Dr Spencer noted (in an update) an error of a factor of about 10 in the heat capacity term, but argued that this was compensated for by a change in the heat diffusion term of the same order. In fact, the argument for compensatory errors is only valid for the calculations below the first layer, where the calculation involves only terms from interlayer heat-flow. The argument is not valid for the calculation of the temperature of the first layer. This critically includes the integration (over time) of the total heat flux due to radiative imbalance – expressed in the model in the form of : F(t) – lamda * DeltaT [F is the forcing, lamda is the feedback parameter and T is temperature] There is no compensatory mechanism for the error in heat capacity, and this introduces a substantial error in the first-layer temperature calculation.
2) The heat capacity term in the model for each layer is given by 50*418000/86400. It is not clear where these values come from, but it is easily confirmed that the final value from this expression is too small by a factor of about 10. I calculated it should be 2555 on the back of an envelope. However, the heat flow term out of layer 1 into layer 2 includes a factor of 41,800 for the layer 1 calculation and a factor of 418,000 for the layer 2 calculation of heat flow from layer 1 into layer 2, which causes the model to bust conservation of energy.

These two errors are sufficient for me to throw the towel in. Pity. I’m going to bed.

I fell asleep early and woke up at 2 am, so with the house quiet I began reading the equations carefully. It turns out that they are both right. The spreadsheet cells at the air/water boundary contained the following equation:

D12+C13*$AJ$14*(86400*$AJ$10/($AJ$11*418000))-(D12-E12)*D$6*(86400*$AJ$10/($AJ$11*41800))-D12*$AJ$12*1*(86400*$AJ$10/($AJ$11*418000))
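For anyone who doesn't want to squint at cell references, here is my reading of that formula as R. The variable-to-cell mapping is my guess, so treat it as a sketch: T1 is D12 (layer-1 temperature), T2 is E12, forc is C13 (forcing), Fscale is AJ14, dtDays is AJ10, thick is AJ11, lambda is AJ12, and kdiff is D6.

step_layer1 <- function(T1, T2, forc, dtDays, thick, lambda, Fscale, kdiff) {
  capF <- 86400 * dtDays / (thick * 418000)   # forcing/feedback factor
  capD <- 86400 * dtDays / (thick * 41800)    # interlayer-flow factor, 10x capF
  T1 + forc * Fscale * capF -                 # forcing into layer 1
    (T1 - T2) * kdiff * capD -                # diffusion from layer 1 to layer 2
    T1 * lambda * capF                        # feedback response
}
# note the 418000 vs 41800 factors: these are the terms Paul K questioned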

Read the rest of this entry »

Posted in Uncategorized | 72 Comments »

A Travesty for Colose, Show the Data

Posted by Jeff Id on July 1, 2011

The main reason I tried to quit blogging is that I don't have time to work the math. The family, business, and life in general are far more important, as so many of you were very adamant (and correct) in explaining to me. It is nice, though, to talk with a bunch of very smart people online about whatever topic, but my favorite part of blogging was the hours of reading papers and messing with data. I simply can't do it these days. Today my data consists of corporate performance, planning, CAD, and corporate efficiency more than anything else. The rest of the data is food stains, learning to fly kites, upgrading electric toy trucks, etc.

A little while ago I correctly bashed Chris Colose, a young and budding yet self-assuredly world-wise climatoknowledgist, for his comparison of the Venusian atmosphere to Earth's and the evil CO2 which caused the incredible hell-hole temperatures on the planet's surface. My point was that Venus is a common scare tactic employed by climatoknowledgists who know there is a segment of the public that can't tell you whether an electron is bigger than an atom.

Chris Colose is apparently still mad about the Venus callout and has critiqued "the Skeptics" (that's me, and by proxy you; sorry, folks) for uncritically accepting Roy Spencer's recent disclosure that deep ocean temps dramatically lag even the weakest-warming climate models.

See, the problem is that if CO2 is really trapping/retarding/back-radiating heat, we would be able to see it in ocean temps more accurately than in air temps. That's because so much of the earth's surface heat capacity is in the water.
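A back-of-envelope check with textbook numbers (my figures, not from Roy's post) shows why:

m_atm <- 5.1e18    # atmosphere mass, kg
cp_air <- 1004     # specific heat of air, J/(kg K)
m_ocn <- 1.4e21    # ocean mass, kg
cp_water <- 4186   # specific heat of water, J/(kg K)
(m_ocn * cp_water) / (m_atm * cp_air)   # ~1100: ocean heat capacity dominates
# equivalently, the top few meters of ocean match the whole atmosphere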

All the energy is in the water; whatever happens to the air is moot. And the plot worth a thousand words, from Roy Spencer's post, shows this:

Read the rest of this entry »

Posted in Uncategorized | 13 Comments »

 