the Air Vent

Because the world needs another opinion

Hurricanes

Posted by Jeff Id on December 6, 2009

This is an interesting exchange. I was going to excerpt parts but everyone’s points need to be read. Dr. Christy was pointing out that hurricanes have not grown detectably worse or stronger – according to the evil data. He got a bit of a reaction. He also ended up pointing out that the gatekeepers were preventing non-conforming science from being included in the IPCC process.

From: Ben Santer <santer1@llnl.gov>
To: “Thomas R. Karl” <Thomas.R.Karl@noaa.gov>
Subject: Re: [Fwd: Re: [Fwd: concerns about the Southeast chapter]]
Date: Thu, 30 Jul 2009 18:41:44 -0700
Reply-to: santer1@llnl.gov
Cc: Virginia Burkett <virginia_burkett@usgs.gov>, Thomas C Peterson <Thomas.C.Peterson@noaa.gov>, Michael Wehner <mfwehner@lbl.gov>, Karl Taylor <taylor13@llnl.gov>, peter gleckler <gleckler1@llnl.gov>, “Thorne, Peter” <peter.thorne@metoffice.gov.uk>, Leopold Haimberger <leopold.haimberger@univie.ac.at>, Tom Wigley <wigley@cgd.ucar.edu>, John Lanzante <John.Lanzante@noaa.gov>, Susan Solomon <ssolomon@frii.com>, “‘Philip D. Jones'” <p.jones@uea.ac.uk>, carl mears <mears@remss.com>, Gavin Schmidt <gschmidt@giss.nasa.gov>, Steven Sherwood <Steven.Sherwood@yale.edu>, Frank Wentz <frank.wentz@remss.com>

<x-flowed>
Dear Tom,

Thanks for forwarding the message from John Christy. Excuse me for being
so blunt, but John’s message is just a load of utter garbage.

I got a laugh out of John’s claim that Santer et al. (2008) was “poorly
done”. This was kind of ironic coming from a co-author of the Douglass
et al. (2007) paper, which used a fundamentally flawed statistical test
to compare modeled and observed tropospheric temperature trends. To my
knowledge, John has NEVER acknowledged that Douglass et al. used a
flawed statistical test to reach incorrect conclusions – despite
unequivocal evidence from the “synthetic data” experiments in Santer et
al. (2008) that the Douglass et al. “robust consistency” test was simply
wrong. Unbelievably, Christy continues to assert that the results of
Douglass et al. (2007) “still stand”. I can only shake my head in
amazement at such intellectual dishonesty. I guess the best form of
defense is a “robust” attack.

So how does John support his contention that Santer et al. (2008) was
“poorly done”? He begins by stating that:

“Santer et al. 2008 used ERSST data which I understand has now been
changed in a way that discredits the conclusion there”.

Maybe you or Tom Peterson or Dick Reynolds can enlighten me on this one.
How exactly have NOAA ERSST surface data changed? Recall that Santer et
al. (2008) actually used two different versions of the ERSST data
(version 2 and version 3). We also used HadISST sea-surface temperature
data, and combined SSTs and land 2m temperature data from HadCRUT3v. In
other words, we used four different observational estimates of surface
temperature changes. Our bottom-line conclusion (no significant
discrepancy between modeled and observed lower-tropospheric lapse-rate
trends) was not sensitive to our choice of observed surface temperature
dataset.

John next asserts that:

“Haimberger’s v1.2-1.4 (of the radiosonde data) are clearly spurious due
to the error in ECMWF as published many places”.

I’ll let Leo Haimberger respond to that one. And if v1.2 of Leo’s data
is “clearly spurious”, why did John Christy agree to be a co-author on
the Douglass et al. paper which uses upper-air data from v1.2?

Santer et al. (2008) comprehensively examined structural uncertainties
in the observed upper-air datasets. They looked at two different
satellite and seven different radiosonde-based estimates of tropospheric
temperature change. As in the case of the surface temperature data,
getting the statistical test right was much more important (in terms of
the bottom-line conclusions) than the choice of observational upper-air
dataset.

Christy’s next criticism of our IJoC paper is even more absurd. He
states that:

“Santer et al. 2008 asked a very different question…than we did. Our
question was “Does the IPCC BEST ESTIMATE agree with the Best Data
(including RSS)?” Answer – No. Santer et al. asked, “Does ANY IPCC
model agree with ANY data set?” … I think you can see the difference.

Actually, we asked and answered BOTH of these questions. “Tests with
individual model realizations” are described in Section 4.1 of Santer et
al. (2008), while Section 4.2 covers “Tests with multi-model
ensemble-mean trend”. As should be obvious – even to John Christy – we
did NOT just compare observations with results from individual models.

For both types of test (“individual model” and “multi-model average”),
we found that, if one applied appropriate statistical tests (which
Douglass et al. failed to do), there was no longer a serious discrepancy
between modeled and observed trends in tropical lapse rates or in
tropical tropospheric temperatures.

Again, I find myself shaking my head in amazement. How can John make
such patently false claims about our paper? The kindest interpretation
is that he is a complete idiot, and has not even bothered to read Santer
et al. (2008) before making erroneous criticisms of it. The less kind
interpretation is that he is deliberately lying.

A good scientist is willing to acknowledge the errors he or she commits
(such as applying an inappropriate statistical test). John Christy is
not a good scientist. I’m not a religious man, but I’m sure willing to
thank some higher authority that Dr. John Christy is not the
“gatekeeper” of what constitutes sound science.

I hope you don’t mind, Tom, but I’m copying this email to some of the
other co-authors of the Santer et al. (2008) IJoC paper. They deserve to
know about the kind of disinformation Christy is spreading.

With best regards,

Ben

Thomas R. Karl wrote:
> FYI
>
> ——– Original Message ——–
> Subject: Re: [Fwd: concerns about the Southeast chapter]
> Date: Mon, 27 Jul 2009 09:54:22 -0500
> From: John Christy <john.christy@nsstc.uah.edu>
> To: Thomas C Peterson <Thomas.C.Peterson@noaa.gov>
> CC: Thomas R Karl <Thomas.R.Karl@noaa.gov>
> References: <4A534CF9.9080700@noaa.gov>
>
>
>
> Tom:
>
> I’ve been on a heavy travel schedule and just now getting to emails I’ve
> delayed. I was in Asheville briefly Thursday for a taping for the CDMP
> project at the Biltmore estates (don’t know why that was the backdrop)
> while traveling between meetings in Chapel Hill, Atlanta and here.
>
> We disagree on the use of available climate information regarding the
> many things related to climate/climate change as I see by your responses
> below – that is not unexpected as climate is an ugly, ambiguous, and
> complex system studied by a bunch of prima donnas (me included) and
> which defies authoritative declarations. I base my views on hard-core,
> published literature (some of it mine, but most of it not), so saying
> otherwise is not helpful or true. The simple fact is that the opinions
> expressed in the CCSP report do not represent the real range of
> scientific literature (the IPCC fell into the same trap – so running to
> the IPCC’s corner doesn’t move things forward).
>
> I think I can boil my objections to the CCSP Impacts report to this one
> idea for the SE (and US): The changes in weather variables (measured in
> a systematic setting) of the past 30 years are within the range of
> natural variability. That’s the statement that should have been front
> and center of this whole document because it is
> mathematically/scientifically defensible. And, it carries more weight
> with planners so you can say to them, “If it happened before, it will
> happen again – so get ready now.” By the way, my State Climatologist
> response to the CCSP was well-received by legislators and stakeholders
> (including many in the federal government) and still gets hits at
> http://vortex.nsstc.uah.edu/aosc/.
>
> There also was a page or so on the tropical troposphere-surface issue
> that I didn’t talk about on my response. It was wrong because it did
> not include all the latest research (i.e. since 2006) on the continuing
> and significant difference between the two trends. Someone was acting
> as a fierce gatekeeper on that one – citing only things that agreed with
> the opinion shown even if poorly done (e.g. Santer et al. 2008 used
> ERSST data which I understand has now been changed in a way that
> discredits the conclusion there, and Haimberger’s v1.2-1.4 are clearly
> spurious due to the error in ECMWF as published many places, but
> analyzed in detail in Sakamoto and Christy 2009). The results of
> Douglass et al. 2007 (not cited by CCSP) still stand since Santer et al.
> 2008 asked a very different question (and used bad data to boot) than we
> did. Our question was “Does the IPCC BEST ESTIMATE agree with the Best
> Data (including RSS)?” Answer – No. Santer et al. asked, “Does ANY IPCC
> model agree with ANY data set?” … I think you can see the difference.
> The fact my 2007 tropical paper (the follow-on papers in 2009 were
> probably too late, but they substantiate the 2007 paper) was not cited
> indicates how biased this section was. Christy et al. 2007 assessed the
> accuracy of the datasets (Santer et al. did not – they assumed all
> datasets were equal without looking at the published problems) and we
> came up with a result that defied the “consensus” of the CCSP report –
> so, it was doomed to not be mentioned since it would disrupt the
> storyline. (And, as soon as RSS fixes their spurious jump in 1992, our
> MSU datasets will be almost indistinguishable.)
>
> This gets to the issue that the “consensus” reports now are just the
> consensus of those who agree with the consensus. The
> government-selected authors have become gatekeepers rather than honest
> brokers of information. That is a real tragedy, because when someone
> becomes a gatekeeper, they don’t know they’ve become a gatekeeper – and
> begin to (sincerely) think the non-consensus scientists are just nuts
> (… it’s more comfortable that way rather than giving them credit for
> being skeptical in the face of a paradigm).
>
> Take care.
>
> John C.
>
> p.s. a few quick notes are interspersed below.
>
>
> Thomas C Peterson wrote:
>> Hi, John,
>> I didn’t want this to catch you by surprise.
>> Tom
>>
>> ——– Original Message ——–
>> Subject: concerns about the Southeast chapter
>> Date: Tue, 07 Jul 2009 09:25:45 -0400
>> From: Thomas C Peterson <thomas.c.peterson@noaa.gov>
>> To: jim.obrien@coaps.fsu.edu
>> CC: Tom Karl <Thomas.R.Karl@noaa.gov>
>>
>>
>>
>> Dear Jim,
>>
>>
>> First off and most importantly, congratulations on your recent
>> marriage. Anthony said it was the most touching wedding he has ever
>> been to. I wish you and your bride all the best.
>>
>> Thank you for your comments and for passing on John Christy’s detailed
>> concerns about the Southeast chapter of our report, /Global Climate
>> Change Impacts in the United States/. Please let me respond to the key
>> points he raised.
>>
>> In Dr. John Christy’s June 23, 2009 document “Alabama climatologist
>> responds to U.S. government report on regional impacts of global
>> climate change”, he primarily focused on 4 prime concerns:
>>
>> 1. Assessing changes since 1970.
>>
>> 2. Statements on hurricanes.
>>
>> 3. Electrical grid disturbances (from the Energy section).
>>
>> 4. Using models to assess the future.
>>
>>
>>
>> /1. Assessing changes since 1970./
>>
>> The Southeast section has 5 figures and one table. One figure is on
>> changes in precipitation patterns from 1901-2007. The next figure is
>> on patterns of days per year over 90F with two maps, one 1961-1979,
>> the other 2080-2099. One figure is on the change in freezing days per
>> year, 1976-2007. The next figure is on changes to a barrier island
>> land from 2002 to 2005. And the last figure was on Sea Surface
>> Temperature from 1900 to the present. The table indicates trends in
>> temperature and precipitation over two periods, 1901-2008 and
>> 1970-2008. As Dr. Christy indicates in his paper, the full period and
>> the period since 1970 are behaving differently. To help explain this,
>> the table shows them both. Of the 5 figures, only one shows the
>> changes over this shorter period.
>>
>> Since, as the IPCC has indicated, the human impact on climate isn’t
>> distinguishable from natural variability until about 1950, describing
>> the changes experienced in the majority of the time since 1950 would
>> be a more logical link to future anthropogenic climate change. In
>> most of the report, maps have shown the changes over the last 50
>> years. Because of the distinct behavior of time series of
>> precipitation and temperature in the Southeast, discussing the period
>> since 1970 seemed more appropriate. Though as the figures and table
> indicate, this shorter period is not the sole or even the major focus.
>
> See crux of the matter in email above – looking at the whole time series
> is demanded by science. Any 30 or 50-year period will give changes –
> blaming the most recent on humans ignores the similar (or even more
> rapid) changes that occurred before industrialization (e.g. western
> drought in 12th century). The period since 1970 WAS the major focus in
> the SE section (mentioned 6 times in two pages). And, OF COURSE any
> 30-year sub-period will have different characteristics than the 100-year
> population from which it is extracted … that doesn’t prove anything.
>>
>>
>>
>> /2. Statements on hurricanes./
>>
>> Dr. Christy takes issue with the report’s statements about hurricanes
>> and quotes a line from the report and quotes an individual hurricane
>> expert who says that he disagrees with the conclusions. The line in
>> the report that Dr. Christy quotes comes almost word for word out of
>> CCSP SAP 3.3. While individual scientists may disagree with the
>> report’s conclusions, this conclusion came directly out of the
>> peer-reviewed literature and assessments. Dr. Christy also complains
>> that “the report did not include a plot of the actual hurricane
>> landfalls”. However, the section in the Southeast chapter discussing
>> landfalling hurricanes states “see /National Climate Change/ section
>> for a discussion of past trends and future projections” and sure
>> enough on page 35 there is a figure showing landfalling hurricanes
>> along with a more in-depth discussion of hurricanes.
>>
> You didn’t read my State Climatologist response carefully – I mentioned
> page 35 and noted again it talked about the most recent decades (and
> even then, the graph still didn’t go back to 1850). This hurricane
> storyline was hit hard by many scientists – hence is further evidence
> the report was generated by a gatekeeper mentality.
>>
>>
>> /3. Electrical grid disturbances (from the Energy section)./
>>
>> Moving out of the Southeast, Dr. Christy complains about one figure in
>> the Energy Chapter. Citing a climate skeptic’s blog which cites an
>> individual described as the keeper of the data for the Energy
>> Information Administration (EIA), John writes that the rise in weather
>> related outages is largely a function of better reporting. Yet the
>> insert of weather versus non-weather-related outages shows a much
>> greater increase in weather-related outages than non-weather-related
>> outages. If all the increases were solely due to better reporting,
>> the differences between weather- and non-weather-related outages would
>> indicate a dramatic decrease over this time period in non-weather
>> related problems such as transmission equipment failures, earthquakes,
>> faults in line, faults at substations, relaying malfunctions, and
>> vandalism.
>>
>> Thanks to the efforts of EIA, after they took over the responsibility
>> of running the Department of Energy (DOE) data-collection process
>> around 1997, data collection became more effective. Efforts were made
>> in subsequent years to increase the response rate and upgrade the
>> reporting form. It was not until EIA’s improvement of the data
>> collection that the important decoupling of weather- and
>> non-weather-related events (and a corresponding increase in the
>> proportion of all events due to weather extremes) became visible.
>>
>> To adjust for potential response-rate biases, we have separated
>> weather- and non-weather-related trends into indices and found an
>> upward trend only in the weather-related time series.
>>
>> As confirmed by EIA, *if there were a systematic bias one would expect
>> it to be reflected in both data series (especially since any given
>> reporting site would report both types of events).*
>>
>> As an additional precaution, we focused on trends in the number of
>> events (rather than customers affected) to avoid fortuitous
>> differences caused by the population density where events occur. This,
>> however, has the effect of understating the weather impacts because of
>> EIA definitions (see survey methodology notes below).
>>
>> More details are available at:
>> http://eetd.lbl.gov/emills/pubs/grid-disruptions.html
>
> The data were not systematically taken and should not have been shown
> ... basic rule of climate.
>>
>>
>>
>> /4. Using models to assess the future./
>>
>> Can anyone say anything about the future of the Southeast’s climate?
>> Evidently according to John Christy, the answer is no. The basic
>> physics of the greenhouse effect and why increasing greenhouse gases
>> are warming and should be expected to continue to warm the planet are
>> well known and explained in the /Global Climate Change/ section of the
>> report. Climate models are used around the world to both diagnose the
>> observed changes in climate and to provide projections for the
>> future. There is a huge body of peer-reviewed literature, including a
>> large number of peer-reviewed climate change assessments, supporting
>> this use. But in Dr. Christy’s “view,” models should not be used for
>> projections of the future, especially for the Southeast. The report
>> based, and indeed must base, its results on the huge body of
>> peer-reviewed scientific literature rather than the view of one
>> individual scientist.
>
> No one has proven models are capable of long-range forecasting.
> Modelers write and review their own literature – there are millions of
> dollars going into these enterprises, so what would you expect?
> Publication volume shouldn’t impress anyone. The simple fact is we
> demonstrated in a straightforward and reproducible way that the actual
> trends over the past 30, 20, and 10 years are outside of the envelope of
> model predictions … no one has disputed that finding with an
> alternative analysis – even when presented before congressional hearings
> where the opportunity for disagreement was openly available.
>>
>> I hope this helps relieve some of your concerns.
>>
>> Regards,
>>
>> Tom Peterson
>>
>>
>>
>
>
> —
> ************************************************************
> John R. Christy
> Director, Earth System Science Center voice: 256-961-7763
> Professor, Atmospheric Science fax: 256-961-7751
> Alabama State Climatologist
> University of Alabama in Huntsville
> http://www.nsstc.uah.edu/atmos/christy.html
>
> Mail: ESSC-Cramer Hall/University of Alabama in Huntsville, Huntsville AL 35899
> Express: Cramer Hall/ESSC, 320 Sparkman Dr., Huntsville AL 35805
>
>
>
> —
>
> *Thomas R. Karl, L.H.D.*
>
> Director, NOAA’s National Climatic Data Center
>
> Lead, NOAA Climate Services
>
> Veach-Baley Federal Building
>
> 151 Patton Avenue
>
> Asheville, NC 28801-5001
>
> Tel: (828) 271-4476
>
> Fax: (828) 271-4246
>
> Thomas.R.Karl@noaa.gov <mailto:Thomas.R.Karl@noaa.gov>
>
>
>


—————————————————————————-
Benjamin D. Santer
Program for Climate Model Diagnosis and Intercomparison
Lawrence Livermore National Laboratory
P.O. Box 808, Mail Stop L-103
Livermore, CA 94550, U.S.A.
Tel: (925) 422-3840
FAX: (925) 422-7675
email: santer1@llnl.gov
—————————————————————————-

</x-flowed>
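The statistical dispute at the center of this email is easy to illustrate. Douglass et al. tested the observed trend against roughly two standard errors of the multi-model mean, an interval that shrinks as more models are added; Santer et al. argued the comparison should use the inter-model spread (plus the mean's own sampling uncertainty). A toy Monte Carlo, with made-up numbers and no connection to either paper's actual data, shows why the first style of test rejects far too often even when the observation is drawn from the same population as the models:

```python
import random
import statistics

random.seed(0)
N_MODELS = 19       # number of model trend estimates (illustrative)
TRIALS = 20000

reject_sem = 0      # rejections by the SE-of-the-mean style test
reject_spread = 0   # rejections by a test against the inter-model spread

for _ in range(TRIALS):
    # Draw model trends and one "observed" trend from the SAME population,
    # so a well-calibrated 2-sigma test should reject about 5% of the time.
    models = [random.gauss(0.2, 0.1) for _ in range(N_MODELS)]
    obs = random.gauss(0.2, 0.1)
    mean = statistics.mean(models)
    sd = statistics.stdev(models)

    # Test 1: compare obs against +/- 2 standard errors of the ensemble MEAN.
    # The SE shrinks as models are added, so rejection becomes near-certain.
    if abs(obs - mean) > 2 * sd / N_MODELS ** 0.5:
        reject_sem += 1

    # Test 2: compare obs against the inter-model spread (plus the mean's
    # own sampling uncertainty).
    if abs(obs - mean) > 2 * sd * (1 + 1 / N_MODELS) ** 0.5:
        reject_spread += 1

print(f"SE-of-mean test false-rejection rate:   {reject_sem / TRIALS:.1%}")
print(f"spread-based test false-rejection rate: {reject_spread / TRIALS:.1%}")
```

The first rate comes out far above the nominal 5%, the second close to it. Whether this framing settles the Douglass/Christy question is a separate matter (see the comments below); the sketch only shows what "using the standard error of the mean" does to a consistency test.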


11 Responses to “Hurricanes”

  1. dearieme said

    The wind bloweth where it listeth.

  2. dearieme said

    “…and thou hearest the sound thereof, but canst not tell whence it cometh, and whither it goeth…”

    It comes to something when the Bible is more use on a scientific question than “Science” is.

  3. Ryan O said

    I will point out that the following excerpt highlights something that was very important in McI and Pielke’s response to Santer 08:

    So how does John support his contention that Santer et al. (2008) was
    “poorly done”? He begins by stating that:

    “Santer et al. 2008 used ERSST data which I understand has now been
    changed in a way that discredits the conclusion there”.

    Maybe you or Tom Peterson or Dick Reynolds can enlighten me on this one.
    How exactly have NOAA ERSST surface data changed? Recall that Santer et
    al. (2008) actually used two different versions of the ERSST data
    (version 2 and version 3). We also used HadISST sea-surface temperature
    data, and combined SSTs and land 2m temperature data from HadCRUT3v. In
    other words, we used four different observational estimates of surface
    temperature changes. Our bottom-line conclusion (no significant
    discrepancy between modeled and observed lower-tropospheric lapse-rate
    trends) was not sensitive to our choice of observed surface temperature
    dataset.

    Santer used a version of ERSST that included satellite data (version 3.a). This version displayed a significantly lower trend than the one used by Douglass et al. Because it displayed a lower trend, the difference in trend between the surface and troposphere was enhanced. However, this version of ERSST is no longer used. It was replaced almost immediately by version 3.b, which no longer includes satellite data and has a trend comparable to the one used by Douglass. In other words, had Santer used the version of ERSST upon which all the major temperature indices are based, the modelled difference in surface vs. tropospheric temperatures would be outside the confidence intervals of the data. His point about the choice of data not affecting the conclusions is only valid because he doesn’t use up-to-date data.

    Santer – like Mann – is completely blind to his own mistakes. I don’t believe he’s just making this stuff up knowingly; I believe he is simply too arrogant to realize he’s made a fundamental mistake.

    I’m undecided on whether the Douglass/Christy method is correct, but Santer completely misses the point:

    Christy’s next criticism of our IJoC paper is even more absurd. He
    states that:

    “Santer et al. 2008 asked a very different question…than we did. Our
    question was “Does the IPCC BEST ESTIMATE agree with the Best Data
    (including RSS)?” Answer – No. Santer et al. asked, “Does ANY IPCC
    model agree with ANY data set?” … I think you can see the difference.

    Actually, we asked and answered BOTH of these questions. “Tests with
    individual model realizations” are described in Section 4.1 of Santer et
    al. (2008), while Section 4.2 covers “Tests with multi-model
    ensemble-mean trend”. As should be obvious – even to John Christy – we
    did NOT just compare observations with results from individual models.

    Santer doesn’t seem to understand that Douglass is comparing the BEST ESTIMATE (which is a single set of numbers) and does not have confidence intervals associated with it. It is simply the BEST ESTIMATE.

    That is a fundamentally different question than comparing the multi-model mean or individual model runs with associated confidence intervals to the observations. Santer’s question involves whether the confidence intervals for the model runs and data overlap.

    They are fundamentally different questions.

    What Santer should have done is argue that Douglass’ question is not valid as posed (and I am undecided on whether it is or not). The fact that he misunderstood the question is why he is so frustrated with Christy’s response.

  4. Jeff Id said

    Santer should have called Ryan. It was bugging me too that there could even be a question.

    So which set is used in the surface data, ERSST 3.a or 3.b?

  5. Ryan O said

    Right now? 3.b. 3.a only lasted a couple of months.

  6. stan said

    Santer is an amateur. Of course he should have called Ryan or someone like him. They need software pros and they need stats pros. But the problem with using pros is that they don’t get answers they like as much.

  7. Layman Lurker said

    #3 Ryan O

    Ryan, thanks for the background. This is a good case example of behind-the-scenes bias at work. We don’t see any evidence (except for Wigley) of Team members taking any criticism seriously, let alone studying it carefully. Just knee-jerk dismissal of “contrarians”, “charlatans”, “complete idiots”, and “deliberate liars”. Yet these players are central to the IPCC and peer review. I hope the investigators at the UN, UEA, Penn State, etc., can see through this.

  8. John K said

    I know that this is out of context but “getting the statistical test right was much more important (in terms of
    the bottom-line conclusions) than the choice of observational upper-air
    dataset.” sure sounds like “If we do the right math, it doesn’t matter what the data is.”

    Comments similar to that seem to permeate the e-mails. Yes, one does want to use the correct statistical procedure. But it just feels like the team defines the correct procedure as whatever gives the desired results.

  9. Steve Fitzpatrick said

    I just read Santer et al. (2008) again (I liked it so much the first time). Santer does indeed miss Christy’s point: his own analysis shows that the best observational estimate of tropospheric warming is clearly below the ensemble model estimate. The fact that the 95% confidence intervals overlap somewhat just means there is too much noise in the data (and too much variability in the models) to be certain the models overstate the tropospheric warming. But that is beside the point: the best estimate is that the models do in fact substantially overstate the tropospheric warming, and so the most likely reality is that the models do not accurately represent reality. That Santer refuses to see that is not at all surprising; he is like an old-time stripper hiding behind his last feather fan when he clings to the inability to reject at 95% confidence.

    Worse yet, Santer limited his analysis to very old data (all pre-2000). As Chad (Trees for the Forrest) has shown, when Santer’s analysis is applied to satellite data through 2008, the model ensemble fails at >95% confidence. The models are just wrong, in that they substantially overstate tropospheric amplification.

    Why the models are wrong is an interesting question. It seems the models fail to properly account for increases in total evaporation and precipitation when the ocean surface temperature increases, and this leads to a substantial underestimate of moist adiabat driven convection between the surface and the upper troposphere, as shown by Wentz, et al (http://www.remss.com/papers/wentz_science_2007_paper+som.pdf).

  10. Jim Bender said

    An interesting feature of the mini-ice age was the late-season, north-latitude hurricanes. Several examples come to mind. The greatest of these was “The Great Storm” (1703). Others include the Armada storm in 1588, the storm in the Shetlands in early August 1652, and the storm off the Texel in early November 1653. These examples make me think that hurricanes were more extreme during periods of global cooling than they are during periods of global warming. Of course, my historical interest in the subject mostly covers the period of 1550 to 1720, so perhaps there are counter-examples from other periods.

  11. hempster said

    Maybe we should try to rename the “ClimateGate Letters” to “Letters from Gatekeepers”?
    That would be much more informative and focused on what is, imho, the main feature of the leaked letters…
    There are actually some letters where they talk about this gatekeeping.

    > This gets to the issue that the “consensus” reports now are just the
    > consensus of those who agree with the consensus. The
    > government-selected authors have become gatekeepers rather than honest
    > brokers of information. That is a real tragedy, because when someone
    > becomes a gatekeeper, they don’t know they’ve become a gatekeeper – and
    > begin to (sincerely) think the non-consensus scientists are just nuts
    > (… it’s more comfortable that way rather than giving them credit for
    > being skeptical in the face of a paradigm).
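
The “two different questions” point in comments 3 and 9 reduces to a few lines of arithmetic: on the same data, the best estimates can plainly disagree while the 95% intervals still overlap, so the two tests return opposite verdicts. The numbers below are entirely made up, purely to illustrate the logic:

```python
# Entirely hypothetical trend numbers, chosen only to illustrate the logic.
model_mean, model_half_width = 0.27, 0.12   # multi-model trend, 95% half-width
obs_trend, obs_half_width = 0.12, 0.10      # observed trend, 95% half-width

# Question 1 (Douglass/Christy style): do the BEST ESTIMATES agree?
best_estimates_agree = abs(model_mean - obs_trend) < 0.01

# Question 2 (Santer style): do the 95% intervals overlap, i.e. can we
# NOT reject consistency?
intervals_overlap = (model_mean - model_half_width) <= (obs_trend + obs_half_width)

print("best estimates agree:", best_estimates_agree)    # False: estimates differ
print("95% intervals overlap:", intervals_overlap)      # True: cannot reject
```

With these numbers question 1 answers “no” and question 2 answers “cannot reject”, which is exactly the shape of the disagreement in the emails above: both sides can be doing their arithmetic correctly while answering different questions.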
