the Air Vent

Because the world needs another opinion

Proxy Hammer

Posted by Jeff Id on March 16, 2014

This idea has been in my head for some time. I think RomanM deserves the credit for the concept; I have simply applied it to tree latewood density data. The idea is to use the actual "thermal hammer" least squares functions to provide a minimized sum-of-squared-errors fit to tree data. For readers who don't recall the method, I have several posts on it:

http://statpad.wordpress.com/2010/02/19/combining-stations-plan-b/

http://statpad.wordpress.com/2010/02/25/comparing-single-and-monthly-offsets/

http://statpad.wordpress.com/2010/03/04/weighted-seasonal-offsets/

http://statpad.wordpress.com/2010/03/18/anomaly-regression-%E2%80%93-do-it-right/

http://noconsensus.wordpress.com/2010/03/21/demonstration-of-improved-anomaly-calculation/

http://noconsensus.wordpress.com/2010/03/24/thermal-hammer/

http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/

Since it is unlikely that many will read those links, the idea behind them was to handle the anomaly calculation and the offsets between temperature stations at different altitudes and in different regions with a single-step least squares calculation. So when computing a global average, a colder mountain-based station can be combined with a valley station in a highly optimized manner.

Well, some years ago, Roman left a comment about applying the methods to proxy data. I found the comment interesting enough that it hasn't disappeared from my thoughts over all of that time. One of the main concerns in dendroclimatology is finding the optimal method for combining series of tree data such that long term signals are not suppressed.

Example:

Temperature is rising from 0-1 C over 100 years.

You have two thermometer trees (trees supposedly responding linearly to temperature with their growth), measuring such that the first tree grows from year 0-55 and the second tree from year 45-100, so the two trees share an overlap period from years 45-55.

Results of different methods:

First, the actual temperature which occurred and the two perfect trees which recorded it. Between years 45 and 55 the black line is covered by the red line.

[Figure: temp that occurred]

Now the typical assumption is that each nearby tree may respond a little differently to temperature due to factors such as soil condition or local competition.  This makes the calibration of each tree a separate calculation.  In my previous post, I used a mean of the tree ring size and got a very flat looking reconstruction with no substantial deviations from an average growth rate.  The calculations subtracted the mean of each tree and scaled them by standard deviation before averaging.   Subtracting the mean is a standard practice in the field of dendrochronology prior to a variety of ad-hoc methods to restore the long term variance.   If we don’t work to restore long-term variance, the following two plots show what happens.

First there are two overlapping thermometer trees with means subtracted.

[Figure: temp that was recorded]

This is what the mean looks like after centering:

[Figure: temp from simple reconstruction of means]

Of note is that while the series is now centered, the original temperature experienced by these thermometers was 0 to 1 degrees, and this curve runs from -0.27 to 0.27. We can see that the long term variance (signal) is suppressed by the method.
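For concreteness, here is a minimal R sketch of the two-treemometer example (synthetic data of my own construction, not the post's dataset), showing how centering each series on its own mean squashes the long term signal:

temp = seq(0, 1, length.out = 100)   # temperature ramps 0 to 1 C over 100 years
tree1 = tree2 = rep(NA_real_, 100)
tree1[1:55]   = temp[1:55]           # first tree records years 1-55
tree2[45:100] = temp[45:100]         # second tree records years 45-100

# center each series on its own mean (the standard dendro step described above)
c1 = tree1 - mean(tree1, na.rm = TRUE)
c2 = tree2 - mean(tree2, na.rm = TRUE)

# simple mean reconstruction of the centered series
recon = rowMeans(cbind(c1, c2), na.rm = TRUE)
range(recon)   # about -0.27 to 0.27 -- the 1 C range is suppressed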

Using the least-squares offset calculation functions introduced in the thermal hammer method, we can take the two offset treemometer series and calculate the ideal offset for each series.   The result is shown in the next graph:

[Figure: temp from least squares reconstruction]

The data runs from -0.5 to 0.5, a range of exactly 1, and has no steps in the center where the data from the two series overlaps. The great bit about this multivariate least squares approach is that you can literally "stuff" as many series as you want into it and find the minimum least squares error for every single one.
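A hedged sketch of the offset fit itself, continuing the synthetic example above with RomanM's psx.inv/calcx.offset functions listed at the bottom of this post (equal weights for both series):

tdat = cbind(c1, c2)                   # the two centered series
a = calcx.offset(tdat, wts = c(1, 1))  # offsets, weighted sum constrained to zero
fixed = t(t(tdat) - as.vector(a))      # remove each series' offset
recon.ls = rowMeans(fixed, na.rm = TRUE)
range(recon.ls)   # about -0.5 to 0.5 -- the full 1 C range is recovered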

—–

Growth curve:

The meat of the post in this case is very long and takes some explanation. I'm going to brush by a few concepts without a lot of explanation; those familiar with dendro issues won't have trouble with this, but others may.

First, I piled the polar.mxd latewood density data into a matrix by tree age and calculated a growth curve as the mean value for each year of age. This curve has annual-level noise which is clearly not related to the age of the tree. I calculated the curve several different ways, but since it is an ad-hoc correction for something we know exists, my favorite correction is the simple approach of just low-pass filtering the signal. Spline or other fits make little difference. For this data, we did demonstrate previously that there is a signal in the growth curve that should be corrected for; still, some may not like the correction I used, as it can cause unwanted effects, so I performed the following calculations both with and without age-based corrections.

Method:

I provide software below so readers can check whether my methods are what I claim.

Trees (123 core samples) are arranged into a 400 x 123 matrix of time series, with each tree aligned to start at year zero of its growth.

The age curve is calculated by averaging the tree density values for each year of age.

The filtered age curve is subtracted from each tree to form the no-growth-signal matrix – ngsalltrees.

Trees are compiled into a larger matrix with the start year being the actual year in which the tree grew; other values are NA.

No-offset simple reconstructions are done by taking row means.

Least squares reconstructions are then done to minimize the least squares error of the overlapping data across the entire matrix.

Results:

First,  we have the simple mean reconstructions without and with growth curve corrections.   Note the flatness of the long term signal.

[Figure: simple mean reconstruction, no growth correction]

The next graph includes the age based growth curve correction.

[Figure: simple mean reconstruction, filtered growth curve correction]

Then we start to get to the important part of this post: the first least squares reconstruction from tree data that I have seen anywhere. To be clear, this is the same data which produced the previous two graphs.

[Figure: least squares offset reconstruction, no growth correction]

This next plot has the growth signal removed from each tree and is offset by least squares.

[Figure: least squares offset reconstruction, filtered growth correction]

Conclusions:

I think there is plenty to discuss about the shape of this curve and whether it has anything to do with temperature at all. From our previous posts, we know there is a good quality high frequency match to regional temperature data. I will need to take some time to see if trends match better. While I am still highly skeptical that this is a representation of long term temperature, this method is far more objective than the ad-hoc iterative approach used in the Briffa 2013 "no-signal" dendrochronology paper.

While I've noticed the propensity for people to look at these graphs and assume they are temperature, I must warn that the limitations of this method for recreating long term trends are significant. Each series knits to the other series with residual error, so that as we move further in time from the calibration period a "walk" in the series occurs. Since calibration is in recent years, when thermometers exist, growth curve accuracy is lost going back in history. I think some sensitivity analysis of the method to added noise and different growth curve strategies is in order. If other changes don't generate significant differences though, and I don't think they will, the last curve is simply the data, and either we accept that it is temperature only, or it is something else.

Each method used in proxy history seems to have its own disadvantages; the no-signal dendrochronology of Briffa 2013 suppresses the effect by removing the average signal as the growth curve and then employing an average. While the no-signal method will regularly produce a less dramatic looking chronology, it cannot capture long term variance from tree growth in significantly different altitude or soil conditions.

Although this is the first time I have ever seen this method used in proxy based reconstructions, I cannot take credit for it as RomanM of Climate Audit fame came up with the foundational concept.

I will post the code later tonight!


#ROMAN M LEAST SQUARES OFFSET FUNCTIONS
psx.inv = function(mat,tol = NULL)
{
    if (NCOL(mat)==1) return( mat /sum(mat^2))
    msvd = svd(mat)
    dind = msvd$d
    if (is.null(tol))
    {
        tol = max(NROW(mat),NCOL(mat))*max(dind)*.Machine$double.eps
    }
    dind[dind<tol] = 0
    dind[dind>0] = 1/dind[dind>0]
    inv = msvd$v %*% diag(dind, length(dind)) %*% t(msvd$u)
    inv
}
### subfunction to do offsets
calcx.offset = function(tdat,wts)
{
    ## new version
    nr = length(wts)
    delt.mat = !is.na(tdat)
    delt.vec = rowSums(delt.mat)
    row.miss= (delt.vec ==0)
    delt2 = delt.mat/(delt.vec+row.miss)
    co.mat = diag(colSums(delt.mat)) - (t(delt.mat)%*% delt2)
    co.vec = colSums(delt.mat*tdat,na.rm=T) - colSums(rowSums(delt.mat*tdat,na.rm=T)*delt2)
    co.mat[nr,] = wts
    co.vec[nr]=0
    psx.inv(co.mat)%*%co.vec
}

#### load external functions filtering used
source("http://www.climateaudit.info/scripts/utilities.txt")  #Steve McIntyre

### Gaussian filter
ff=function(x)
{
	filter.combine.pad(x,truncated.gauss.weights(51) )[,2]#31
}

### load briffa data
loc="c:/agw/briffa 2013/data/raw/polar/polar.mxd"
wd=c(12,6,6,6,6,6,6,6,6,6,6)
dat=read.fwf(loc,widths=wd)

treename=substr(dat[,1],1,8)
treeid=levels(factor(treename))
year = as.numeric(substr(dat[,1],9,12))
allyear=levels(factor(year))

alltrees=array(NA,dim=c(400,length(treeid)))
treemin=rep(NA,length(treeid))

### align trees by age rather than year
for (i in 1:length(treeid))
{
	mask= treename==treeid[i]
	da=dat[mask,]
	yr=year[mask]
	treemin[i]=min(yr)
	for(j in 1:length(yr))
	{
		ageindex=yr[j]-treemin[i]+1
		alltrees[ageindex:(ageindex+9),i]=as.numeric(da[j,2:11])
	}
}

mask= alltrees== -9999 | alltrees== -9990
alltrees[mask]=NA
alltrees=ts(alltrees,start=1)

#plot(alltrees[,1:10])

### center and normalize all trees by standard deviation
alltrees=t(t(alltrees)-colMeans(alltrees,na.rm=TRUE)) #center each core on its own mean
alltrees=t(t(alltrees)/apply(alltrees,2,sd,na.rm=TRUE)) #normalize each core by its sdev (sd() no longer accepts a matrix)
alltrees=ts(alltrees,start=1)

## calculate growth curve
## this curve is an age based mean value of each tree's growth
## The curve is low-pass filtered to remove high frequency non-age related noise

par(bg="gray90")
growthcurve=ff(ts(rowMeans(alltrees,na.rm=TRUE),start=1))
plot(growthcurve,main="Growth Signal from Briffa 2013 Polar MXD Data",xlab="Age (years)",ylab="Unitless",ylim=c(-1,1))

##no growth signal version of alltrees
ngsalltrees=alltrees-growthcurve

## create year-based matrices starting at year 870
## tsdat holds the raw trees; ngtsdat holds the growth-corrected (no growth signal) trees
tsdat= ts(matrix(nrow=1800, ncol=length(treeid)),start=870)
ngtsdat=ts(matrix(nrow=1800, ncol=length(treeid)),start=870)

for(i in 1:dim(ngsalltrees)[2])
{
	yearindex=treemin[i]-870
	print(yearindex)
	tsdat[yearindex:(yearindex+399),i]=alltrees[,i]       # no growth curve correction
	ngtsdat[yearindex:(yearindex+399),i]=ngsalltrees[,i]  # growth curve removed
}

#################
# simple mean reconstructions, with and without growth curve correction
ngaveragereconstruction=window(ts(rowMeans(ngtsdat,na.rm=TRUE),start=870),end=2010)
averagereconstruction=window(ts(rowMeans(tsdat,na.rm=TRUE),start=870),end=2010)
plot(ngaveragereconstruction,main="Simple Mean Reconstruction\nFiltered Growth Curve Correction",xlab="Year",ylab="Unitless Growth")
lines(ff(ngaveragereconstruction),col="red",lwd=3)

plot(averagereconstruction,main="Simple Mean Reconstruction\nNo Growth Curve Correction",xlab="Year",ylab="Unitless Growth")
lines(ff(averagereconstruction),col="red",lwd=3)

#plot(ngaveragereconstruction)  # redundant unlabeled duplicates of the plots above
#plot(averagereconstruction)

# Least squares offset reconstruction
a=calcx.offset(tsdat,rep(1,dim(alltrees)[2]))  #.1711838
tsdatoffset=t(t(tsdat)-(as.vector(a)))
lsreconstruction=window(ts(rowMeans(tsdatoffset,na.rm=TRUE),start=870),end=2010)
plot(lsreconstruction,main="Least Squares Offset Reconstruction\nNo Growth Curve Correction",xlab="Year",ylab="Unitless Growth")
lines(ff(lsreconstruction),col="red",lwd=3)

# Least squares offset reconstruction
a=calcx.offset(ngtsdat,rep(1,dim(alltrees)[2]))  #.1711838
ngtsdatoffset=t(t(ngtsdat)-(as.vector(a)))
nglsreconstruction=window(ts(rowMeans(ngtsdatoffset,na.rm=TRUE),start=870),end=2010)
plot(nglsreconstruction,main="Least Squares Offset Reconstruction\nFiltered Growth Curve Correction",xlab="Year",ylab="Unitless Growth")
lines(ff(nglsreconstruction),col="red",lwd=3)

Posted in Uncategorized | 14 Comments »

Climate Sensitivity

Posted by Jeff Id on March 6, 2014

I want to urge everyone interested in climate change science to take the time to read this paper by Nic Lewis and Marcel Crok. Nic was a blog-famous coauthor of the O'Donnell Antarctic correction to Steig 09. His role in that article was reviewing the mathematics and software developed to do the corrected reconstruction. From that time, his publications, and some email conversations since, I happen to know that Nic is probably the most underrated scientist/mathematician working in the climate field. He has time and patience beyond most, and his work is vetted at a level we currently don't see anywhere in climate science, in any article, skeptic or otherwise.

Oversensitive – Final

A Sensitive Matter – Final

Now Nic follows data, so the fact that he is probably considered by Real Climate types to be a skeptic is only the fault of the data.   If the data isn’t going your way, you can be sure that Nic will go that way as well.

Judith Curry has a post here

Anthony Watts has a post here

If you are a climate scientist reading these articles, open your mind and look deeply.  Ask questions of the authors.  Find the flaw.

Posted in Uncategorized | 55 Comments »

If the Square Peg Doesn’t Fit – Get a Hammer!

Posted by Jeff Id on February 28, 2014

It seems like I just got done writing a post which incorporated the point that Real Climate leaves much to be mocked, and lo and behold, Gavin Schmidt deals us a whopper. A fantastic new paper written as a comment for Nature called "Reconciling Warming Trends" purports to explain the lack of observed warming which directly contradicts the bulk of the climate models. The first thing the media should take note of is that these scientists have finally noticed what we evil skeptics have been telling you for several years – the predicted level of warming didn't happen! It is warming, but not enough to be a problem, and that IS a big problem for the multi-billion dollar climate industry.

As recently as February 2013, Real Climate had their heads in the sand on models with this quote:


The conclusion is the same as in each of the past few years; the models are on the low side of some changes, and on the high side of others, but despite short-term ups and downs, global warming continues much as predicted.

In the meantime, more than this one paper was being published claiming the opposite. And recently Roy Spencer made a cute plot for which the only rebuttal I've heard is that he chose an inconvenient starting year. Not that it changes the result much:
[Figure: 90 CMIP5 climate models vs observed global surface temperature through 2013 – Roy Spencer]
So for the media who don't read things like "papers" or data: the blue and green dotted lines have lower slopes than the climate models, therefore the models predicted more warming than was observed. Just like the Koch-funded unfunded skeptics told you.

But this new paper by Gavin A. Schmidt, Drew T. Shindell and Kostas Tsigaridis (Schmidt 14) is a true gem. The crew looked at several observed factors in climate since their last runs and found different values for the years 1990-2012. They looked at human aerosols, solar irradiance changes, volcanic aerosols and a "very slightly" modified level of greenhouse gas forcing.
[Figure: updated forcings from Schmidt et al. 2014]
The resulting change in model forcing brought the models in line with observation — almost. Well, they are still higher than any actual observation, but adjusting moisture feedback (a large and uncertain factor) is not a sanctified IPCC consideration.

[Figure: model vs observation comparison from Schmidt et al. 2014]

Of course they only show the years since 1990, which is hilarious considering that they are addressing a massive failure of the centennial-scale models to predict even a decade into the future. Note that despite the efforts to "find" an explanation, moisture feedback, the greatest unknown in climate modeling, was not even mentioned.

Still, there is one tiny elephant in the Real Climate corner: a claim as specious as Michael Mann's claim of having been exonerated of wrongdoing by the fake Muir Russell climategate report, yet one very often made by the Real Climate crowd.

Climate Models are Not Tuned to Observation

For the heck of it, I searched Real Climate for phrases like – ‘not tuned’.

From RC Frequently asked questions:  Are climate models just a fit to the trend in the global temperature data?

No. Much of the confusion concerning this point comes from a misunderstanding stemming from the point above. Model development actually does not use the trend data in tuning (see below).

Gavin comment response: [Response: If you read our papers (and my comments) we are completely up front about what we tune to - the climatology (average annual values), the seasonal cycle, the diurnal cycle and the energy in patterns like the standing wave fields etc. We do not tune to the trends or the sensitivity. - gavin]

Gavin comment response: I’ve said this before, and I’ll say it again, models are not tuned to match long-term time-series data of any sort. – gavin]

Gavin comment response: [Response: The AR4 runs were done in 2004, using observed concentration data that went up to 2000 (or 2002/3 for a couple of groups). None of them were tuned to the HadCRUT3 temperature data and no model simulations were rejected because they didn't fit. - gavin]

Comment and Gavin response:

It seems clear that each model is tuned to match past temperature trends through individual adjustments to external forcings, feedbacks and internal variability. Then the results from these tuned model are re-presented (via Figure 2 above) as giving strong evidence that nearly all observed warming is anthropogenic as predicted. How could it be anything else ?

[Response: You premise is not true, and so your conclusions do not follow. Despite endless repetition of these claims, models are *not* tuned on the trends over the 20th Century. They just aren’t. And in this calculation it wouldn’t even be relevant in any case, because the fingerprinting is done without reference to the scale of the response – just the pattern. That the scaling is close to 1 for the models is actually an independent validation of the model sensitivity. – gavin]

What is clear to most of us "skeptics", and should be very clear to any semi-technical type, is that in modeling, with hundreds of tweakable parameters, if the output doesn't match the observations, you go back and tweak the input until it does. Gavin's insistence that models aren't tuned is simply his own bias forgetting those hundreds of times when he put CO2 forcing in upside down, or with a ridiculous weighting, by accident or by test, and the result didn't look at all like he expected, so he adjusted things. He and many others rightfully find it easy to justify the adjustments post hoc, e.g. the paper they just published. It's not wrong to adjust the models, since they should match the data, but they universally, definitely and regularly are adjusted until the output matches some observation.

In this case, the models were so far out of whack that they quietly admitted the skeptics were right, and adjusted their favorite inputs only. Other inputs were quite thoroughly left out. What's more, most of the inputs had little effect, but by "re-analysis" they made massive corrections to volcanic forcings, only in the recent time window, to correct recent trends.

Oddly enough, I think this sentence from their paper's conclusion represents my own thoughts best:
Nevertheless, attributing climate trends over relatively short periods, such as 10 to 15 years, will always be problematic, and it is inherently unsatisfying to find model–data agreement only with the benefit of hindsight.
For my own conclusion, I am highly skeptical that they would get any model-data agreement if the process were hindcast. I'm also completely unimpressed with the kind of numeric mashing used to claim that the models are still somehow "on the right track", but this next sentence in their conclusion is completely unjustified/unsupported/unimagined by any aspect of this paper:
We see no indication, however, that transient climate response is systematically overestimated in the CMIP5 climate models as has been speculated8, or that decadal variability across the ensemble of models is systematically underestimated, although at least some individual models probably fall short in this respect.

There is no analysis in the article of the expected short term variance which could possibly explain the models' failure. It simply doesn't exist. This primary aspect of Gavin's conclusion is much more like a prayer to Gaia than an article of science.

As is often the case, the Real Climate train wreck provided us some solid entertainment. I wonder how many more decades will pass before they figure out that the modeled climate feedback sensitivity looks a little high?

Posted in Uncategorized | 40 Comments »

Confirmation of Phi’s Reconstruction

Posted by Jeff Id on February 26, 2014

In December last year, reader Phi brought the Briffa MXD data from his 2013 paper to my attention.   He showed the following graph of MXD data vs 3 different temperature series.   Needless to say, it shows an impressive correlation between trees and temperature:

[Figure: polar MXD data vs three temperature series (from Phi)]

While Phi made the claim that the trees bear out UAH lower troposphere data over ground temps, I don't see a single instance of a better fit of tree data to one temperature series over another as particularly solid evidence. This is particularly true considering that there are known divergent datasets. Still, it seemed reasonable that Phi had picked out an excellent dataset from the literature to look at. I took my time and downloaded UAH and RSS satellite data, the tree data from Briffa 2013, and gridded data from CRUTEM4 using the Google world map application from this RealClimate™ post. I actually went over to that blog to see if there was anything humorous to tease them about and found a very workable application – so shame on me! Of course, shame on them for having so much to mock, but that is for another post.

It took a bit of fiddling with the calibration and filtering but I was able to reproduce a reasonably similar result to Phi.  All temperatures presented below are from summer (June to August) averages of the 67.5N 67.5E gridcell closest to the tree data.
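For those who want to replicate the fiddling, a hypothetical sketch of a variance-matching calibration of the general sort described (proxy and tempr are stand-in ts objects and the 1880-2006 overlap is an assumption; this is not necessarily the exact scaling used for the plots below):

po = window(proxy, start = 1880, end = 2006)  # proxy over the instrumental overlap
to = window(tempr, start = 1880, end = 2006)  # temperature over the overlap
cal.proxy = (proxy - mean(po, na.rm = TRUE)) *
            (sd(to, na.rm = TRUE) / sd(po, na.rm = TRUE)) + mean(to, na.rm = TRUE)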

[Figure: calibrated Yamal CRU vs MXD, 1880-2010, 5 year filter]

What I find amazing is how good a fit this data actually is to historic temps in the recorded period.   First, recall that I made this red series above by simply aligning and averaging the data.  I did this simple process with the understanding that some of the variance we see in these MXD series is from a statistically significant age related signal, so this series average is not as good a representation of annual tree MXD as it could be.   Still, the age correction won’t make much difference and even the oldest portion of the data doesn’t diverge terribly from the black observed temperature curve.  One of the main contentions I have with treemometers, besides massive non-linearity, is that the high frequency components and low frequency components aren’t necessarily governed by the same relationship and that those relationships with environmental conditions will change over time.   e.g. how does the same tree respond to temperatures in low water vs high water conditions?

Anyway, I looked at lower frequency response in the following plot:

[Figure: calibrated Yamal CRU vs MXD, 1880-2010, 25 year filter]

The data from this set is truly fantastic compared to some we have looked at, but you can see a large divergence of temperature above tree latewood density in recent years and a similar problem in 1880-1990. We could shift the graph up and down to try for a better fit, but it seems pretty obvious that the trees are reacting to environmental conditions other than temperature as the years go by. The visual correlation is still amazing though.

While it may be tempting for climate scientists to take this kind of data and paste temperature onto it, calling it a development or something, they may not find the whole reconstruction that exciting. The rest of the data is fairly interesting to those of us skeptical of the general exaggerations pervading the science of global warming doom. Below is the full MXD reconstruction with temperatures to 2006 overlaid.

[Figure: full MXD reconstruction with temperatures to 2006 overlaid, 25 year filter]

Like this year's Red Wings, it seems to be a hockey stick without a blade. I'm still amazed at the quality of the fit to CRUTEM though, and have decided to continue this study and take the next step of correcting by the average growth curve, and perhaps the pith offset as well.

For completeness, and so the alarmist climate community doesn't have a heart attack: if we extend CRUTEM and UAH past the end of the reconstruction to 2012, the graph looks like this:

[Figure: calibrated Yamal CRU vs MXD extended to 2012, 25 year filter]

As always, I intend to make the code available.   Unless someone is interested, I will clean it up and post it with my future calibrated reconstruction post.

Posted in Uncategorized | 75 Comments »

Another Opinion Post

Posted by Jeff Id on February 23, 2014

I heard a radio host in Kalamazoo comment on a letter written in by a listener on emotion and opinions. The idea was that emotions and opinions cannot be wrong, and at least the emotions-can't-be-wrong claim is a common thread in human culture and one of my apparently numerous pet peeves. First, emotions are chemical and electrical reactions to life's inputs. We have little control over their onset, but we can control them. For example, it is quite possible to be angry about something we shouldn't be. Say there is an individual who vociferously describes an opinion on a scientific matter that goes against basic scientific observation; sometimes that makes me angry. Is that anger right or is it wrong? One could step back, observe that the individual making the false claim is not at fault for lacking the mental faculties to parse the nature of the issues, and let go of the emotion. Either way, anger really doesn't make sense in those sorts of cases, yet I do get grumpy about incorrect statements. It seems like the emotion 'anger' is wrong in this situation, although it is a mild example.

What if someone gets so emotional about some non-threatening issue that they murder or commit extreme violence? I think those situations are extreme examples of 'wrong' emotion. Or what if someone is diagnosed with some form of psychosis and emotions seemingly come at random? Those are clear examples of emotion being 'wrong'. Yet somewhere along the way, the hubris of mankind has produced a popular culture definition of our emotions as something that cannot be wrong. Somehow, unless we are full-on psychotic, our emotions are infallible. When put that way, I get angry at those who spout such obvious untruths regarding emotion. :D

Joking aside, this radio host was hollering amens and thank-yous after reading a letter which, in paraphrase, claimed that not only emotion but also opinion cannot be wrong. In my "opinion" the host and the writer are both wrong, so someone in our group of opinion holders must by definition be wrong. To me this whole concept of any form of belief-based infallibility is another symptom of our mentally self-pleasuring progressive culture. The excited host presented the issue so strongly that it left me thinking that infallibility of opinion is the next frontier for the only slightly less ignorant concept that emotion cannot be wrong. I hope our progressive cancer isn't going in that direction, though it already seems impossible to deny reality beyond the level we do in society today, so I suppose we shouldn't be surprised.

Posted in Uncategorized | 22 Comments »

The Problem with Ocean Heat Uptake

Posted by Jeff Id on February 12, 2014

A recent article on the global warming hiatus garnered a bit of attention in blogland and the substantially less technical mainstream media. It was published in Nature Climate Change: Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus. Of course the media ate up the work as though it were a perfect explanation for the utter failure of climate models, inaccurately assuming that it means business as usual for them.

The abstract is reproduced below:

Despite ongoing increases in atmospheric greenhouse gases, the Earth’s global average surface air temperature has remained more or less steady since 2001. A variety of mechanisms have been proposed to account for this slowdown in surface warming. A key component of the global hiatus that has been identified is cool eastern Pacific sea surface temperature, but it is unclear how the ocean has remained relatively cool there in spite of ongoing increases in radiative forcing. Here we show that a pronounced strengthening in Pacific trade winds over the past two decades—unprecedented in observations/reanalysis data and not captured by climate models—is sufficient to account for the cooling of the tropical Pacific and a substantial slowdown in surface warming through increased subsurface ocean heat uptake. The extra uptake has come about through increased subduction in the Pacific shallow overturning cells, enhancing heat convergence in the equatorial thermocline. At the same time, the accelerated trade winds have increased equatorial upwelling in the central and eastern Pacific, lowering sea surface temperature there, which drives further cooling in other regions. The net effect of these anomalous winds is a cooling in the 2012 global average surface air temperature of 0.1–0.2 °C, which can account for much of the hiatus in surface warming observed since 2001. This hiatus could persist for much of the present decade if the trade wind trends continue, however rapid warming is expected to resume once the anomalous wind trends abate.

There are several issues with the work that I find interesting.

An "unprecedented" trade wind in the past two decades leaves a skeptical mind questioning how this was determined and documented. We are all too familiar with flatly false claims of "unprecedented" in climate science, à la Michael Mann's hockey stick. The moment the word is used with weather, I am already on edge. But it leads me to wonder just what the cause of this unprecedented wind is. Could this wind be driven in whole or in part by warmer than average air? An unexpected negative feedback?

Of course a wind mixing the ocean would create cooler air. There is a massive amount of heat capacity in the ocean, which has been discussed at this blog and at many others. If the ocean is mixed, the cold water is exposed and more heat transfer ensues. I'm much more concerned about cold air from a mixed ocean than I am about any form of warming. Bob Tisdale did a WUWT post on the matter of ocean temps in models vs observations a couple of years ago. He showed that the East Pacific in particular was falling way behind model projections.


Bob Tisdale – East Pacific Surface Temp Models vs Observation

As people are just now becoming aware, almost one hundred percent of the government funded climate models have a global mean surface temperature trend (not just ocean) which is higher than observation. This is very bad news for the models, but according to the graph above, the trend in the East Pacific is a whopping 6X less than the IPCC A1B model (likely from AR4). The situation is so bad that scientists who have staked their careers on massive warming are digging deep for explanations for the problems.

What is interesting to me about this paper is what it means if the scientists are actually right. What sort of implications does it have if a wind came by and knocked global temps down by 0.1-0.2 degrees Celsius? The graph below tells the temperature side of the story, but immediate temperature change isn't the only implication.


Dr. Roy Spencer Models VS Observations

Since subtracting from the spaghetti plot of models is difficult, if I visually add 0.1 C to either the red or blue observation line, that would mean about 80% of the climate models were running too hot. If I add 0.2 C, the maximum correction from the paper, HadCRUT4 still falls short of the mean, so this paper alone does not explain the differences between models and observations. From the half dozen other articles and blog posts, even at 1.5C, many of the observations would still be outside the CIs of the over-hot climate models.

Dr. Spencer, not so tongue in cheek for climate science, writes in the graph above that the observations must be wrong. This ocean heat paper actually doesn't explain the entire model problem, but unless you are looking at the data, you would think it explained everything.

Another implication is that the heat from the air has been trapped in an ocean heat sink, resulting in a water temperature rise of probably tenths of a thousandth of a degree. Basically nothing. Basic thermodynamics tells us that the temperature change isn't sufficient for the rate of energy transfer from the now microscopically warmer ocean back to the air to measurably increase. As far as our Gaian prognosticating scientists are concerned, the heat is functionally lost to their modeled plans.
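A back-of-envelope R sketch of that claim, using my own rounded numbers for the masses and heat capacities (assumptions, not figures from the paper): the full ocean has roughly a thousand times the heat capacity of the atmosphere, so heat that would have warmed the air by the paper's 0.1-0.2 C warms the whole ocean by almost nothing.

cp.air   = 1004      # J/kg/K, specific heat of air
m.air    = 5.1e18    # kg, approximate mass of the atmosphere
cp.sea   = 3990      # J/kg/K, specific heat of seawater
m.ocean  = 1.4e21    # kg, approximate mass of the ocean
dT.air   = 0.15      # C, mid range of the paper's 0.1-0.2 C
heat     = cp.air * m.air * dT.air    # ~7.7e20 J moved into the ocean
dT.ocean = heat / (cp.sea * m.ocean)  # ~1.4e-4 C -- tenths of a thousandth of a degree

The observations cannot simply jump back into alignment with the models, although the trend could possibly resume, as the next hilarious quote shows: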

This hiatus could persist for much of the present decade if the trade wind trends continue, however rapid warming is expected to resume once the anomalous wind trends abate.

OK, so let's translate the whole mess so that it is understandable.

- The scientists didn't predict the trade wind and so don't know the cause.

- Basic common sense tells us that this wind, or a similar wind event, very likely happened before, and the "unprecedented" claim is likely unrealistic.

- They modeled the wind mixing the ocean and managed to say that the heat went into the water, with the same kinds of SWAGs that led to missing the apparently huge Pacific "wind" factor.

- They then claim that perhaps in a decade, when the winds stop, the original predictions of warming rate will come true.

The Guardian and many other unbiased sources of knowledge for the thinking public reported the paper as though it were certain knowledge. It even contained the typical refrain of all signs pointing to accelerating warming, which is a flatly fraudulent statement considering that they are simultaneously composing an article explaining why warming isn't happening. No questioning of logic, no notice of the inconsistency with their own or other articles they are publishing about the amazing quality of government science. And it is all placed right next to the articles bashing "skeptics" like me and you, who just happen to be able to read a graph.

Eventually the data will decide the argument but it is very very strange that the data is so heavily on our side and we’re the ones who are marginalized by those who hold themselves out as the intellectuals of our time.

Posted in Uncategorized | 131 Comments »

A Big Project

Posted by Jeff Id on February 11, 2014

Brandon Shollenberger is collecting a list of Michael Mann’s screwups.   I’ve thrown in a short thousand words myself.  I could have kept going but I’m showing more restraint in my old age….:D   If you have your own info to add, I’m sure he would be happy to collect it.

The list is intended as support for Mann’s defamation lawsuit – that I suspect may not go as well as he hopes.

A List of Mann’s Screw Ups

I think Brandon may need to pay for more drive-space before this list is over.

Posted in Uncategorized | 10 Comments »

The Future of Weather – Uncommon Sense

Posted by Jeff Id on February 2, 2014

USA Today published yet another climate rant on the state of global warming. This time they tied it to the polar vortex that is still freezing our ___ (insert anatomy here) off. The article is filled with the 'local is not global' and 'weather is not climate' (until they say it is) mantras that have been so common in recent years. The problem that AGW scientists and left-wing mouthpieces have is identical: while things have warmed a tiny bit, the stupid thermometers are falling well behind the not-so-clairvoyant modeled projections of planet-wide doom.

What to do

…what to do.

Well, the general media seems to have congealed around a temporary strategy at least. The collection of like-minded opinion is not a conspiracy between writers, but rather an obvious shelter during the cold. Strategically, it is basically a placeholder until something happens that looks politically better for global warming activists. The formula is to keep repeating that cold is still consistent with models, ignore the fact that a decade ago they were claiming we wouldn't have nearly as much snow today, tell people that warm is still coming tomorrow, and scare them that in the future we won't have snow. They could almost run the same articles from years ago and just insert new pro-AGW weather events between the claims that weather is not climate.

How many weathers does equal a climate?

I'm sure that all of us agree that a single weather event does not define climate, but even the ever-left USA Today needs to recognize that eventually the summation of weather events does equal climate. I'm sure the progressives™ would be hard to pin down on this particular question, but when the summation of weathers doesn't exhibit the predictions of climate, a little reality check is in order.

The no “good” data quicksand.

This particular pro-global warming article, which was born during cold weather, seemed to take a defensive tone.  With little helpful weather to work from this winter,  they refer to the summary for policy makers of the IPCC AR5 with a “fairly loose” “somewhat speculative” prevaricatory caricature of the IPCC,  in lieu of an actual quotation (my bold):

But climate scientists are 95% to 100% sure that human activity — emission of greenhouse gases — is the dominant cause of dramatic warming.

It makes me giggle.  What can I say.

The dramatic warming of Earth to date is a minimally detectable 0.85C since the beginning of the 1900s (IPCC AR5); it shows no sign of accelerating, and it falls under half the rate which the average climate model predicted. The models require not only more warming, but an accelerating rate of warming, for the IPCC doom scenarios to become remotely plausible. Many of us science-minded observers reasonably question the validity of any of the "doom" scenarios themselves, as they are based on what can only generously be gifted with the term speculation. What's more, the speculation is being published as though it were science. Science traditionally requires data, so our hapless author teams are oft pressured into statistical falsification of results, aka "scientific speculation". See the warming attribution sections of various butterfly, sheep, glacier or fish-shrinking studies for endless examples. This tendency to fabricate the supporting data of a study is to be expected when the speculation in question supports and improves the funding which in turn supports the studies.

It seems to me that the USA Today article found itself in the same boat as our palm-reading climate scientists. Since the AR5 summary which USA Today linked to is full of big words that don't say what the editors wanted, a little caricature of reality was required to support their progressive™ intent. I wonder how many of the thousands of readers will check the AR5 link for accuracy?

Exxon – send checks ASAP!

USA Today ends its prayer to the climate model gods (which are apparently different gods from the climate gods) with a plea to bring change to the evil right-wing legislators.

The damage will only be compounded if it becomes an excuse for yet another year of denial and delay in addressing climate dangers.

It seems that USA Today is in denial again. As it stands, the EPA and Obama have been proceeding full speed ahead on their insane concept of building windmills, bankrupting coal power, generally limiting combustion energy wherever possible (even beach fires) and increasing usage costs through regulation and governance at a truly unprecedented rate. It is so bad that despite massive energy cost increases, brownouts are becoming something we deal with more often in the US now. The authoritarian leftists couldn't move forward with limitation policy any faster if they tried, but for USA Today it still isn't fast enough. Meanwhile, the truly dim-witted liberal politicians' attempts to take the lead from global progressive™ reporters, and to change global weather by adding costs to combustion energy in only one country on the globe, have scientifically zero chance of success.

The nothing-new-news is that these writers are very, very ignorant people with strong opinions and big pens. We are inundated today with so many anti-progress media voices across the planet hollering the same message that society unwittingly bends to their will. Eventually, because CO2 emission won't actually be stopped or slowed appreciably by government, the data will prove out that warming isn't actually a bad thing at all. Unfortunately for us, the law and policy, which are likely the true damage of global warming, are being implemented and tightened today. Equally unfortunate for us citizens, government policy worldwide has proven much more intractable than the CO2 in our atmosphere could ever be.


I have a blog!

There is little we the oppressed can do to fight the global ignorance epidemic, so I blog. Whether it changes opinions or not, it at least puts a little rebuttal to the near-omnipotent global media in public view.

Predictions.

Since, according to climate scientists, there has been literally zero detected increase in hurricanes, tornadoes, rain, snow, earthquakes, locusts, drought, flood, etc., and since the polar ice cap didn't choose to melt... again... it seems that we need a more pragmatic and more scientific list of global warming effects than can be produced in aggregate by warming-centric government funded scientists.

To that end, I have compiled a new list of weather trend predictions for the future. My list is statistically and scientifically falsifiable and, even more appropriately, is one that the common person can really get their head around. Think of it as common sense. The list is unabridged: it contains every weather event that will statistically change in frequency and strength, with an asterisk by those that you will experience, or scientists will measure, as attributable to man made temperature change in the next 40 years.

Jeff’s list:

Read the rest of this entry »

Posted in Uncategorized | 19 Comments »

Isn’t it time we gave a little recognition to our friend – Anthropogenic Global Warming

Posted by Jeff Id on January 26, 2014

[Photo]

Raise the price of energy, we need to save the planet

…My hands are cold

…My fingers are cold

…I can’t see the drive

…I think my nose is actually froze

Turn down the heat, we must conserve

…My neck is cold

…My ears are cold

…My back is hurt

…My cheeks are froze

No incandescent bulbs, we must be efficient

…My eyes are cold

…My legs are cold

…My ears hurt

…My toes are probably solidly froze

We need emissions standards for small engines

…My car is cold

…Our school is closed

…My trees are ice and our yard is snow

…Our squirrels have likely froze

We must stop global warming before the planet burns

…Ice cold hair

…Ice cold nose

…Ice cold feet, especially toes

…Fortunately for me, my brain has still not froze.

——

Even at minus 3 Fahrenheit (-19C) I tell myself that we must thank fossil fuels for providing a fraction of the extra almost 1 degree of warmth we have luxuriated in this winter. Without it, we would have been a numbing minus 4 degrees F and that would have been a disaster!

Posted in Uncategorized | 20 Comments »

Briffa 2013 Satellite Temperature Download

Posted by Jeff Id on January 25, 2014

To look at the local temp data in comparison to the Briffa 2013 polar.mxd data, I downloaded both the RSS and UAH gridded lower troposphere satellite data.   Unlike ground temperature data, the lower troposphere data is a layer of air which extends miles above the surface of the earth (Blue curve in Figure 1).


Figure 1 – From World Climate Report

UAH and RSS regional trends are shown in Figures 2 and 3.

Figure 2 – RSS trend, degrees C per decade

Figure 3 – UAH trend, degrees C per decade

From past posts here, we have discussed that UAH estimates the polar regions whereas RSS leaves them out.   For fun, I plotted the difference of the two series globally and placed a green marker over the Yamal area where the data for Briffa 2013 was taken.

Figure 4 – UAH minus RSS trend, degrees C per decade, with the Yamal area marked in green

Data from that single gridcell for each series is shown in Figures 5 and 6.

Figure 5 – UAH temperature anomaly and trend, Yamal gridcell

Figure 6 – RSS temperature anomaly and trend, Yamal gridcell

The two series are mostly comprised of the same data. UAH made the decision to change to station-keeping satellites in recent years, which eliminates some of the corrections necessary for satellites with orbits that decay over the years. Still, the series are very highly correlated, with an r^2 of 0.93. Next, I will get the ground data series for the region, and then we can see how well things match up with the tree rings.
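As a hedged one-liner (uah.yamal and rss.yamal are hypothetical stand-ins for the gridcell series extracted in the source code below), the r^2 quoted above is just the squared Pearson correlation of the two anomaly series:

r2 = cor(uah.yamal, rss.yamal, use = "pairwise.complete.obs")^2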

I almost forgot: the R source code is below and should be turnkey.

Read the rest of this entry »

Posted in Uncategorized | 1 Comment »

MXD Age Based Variance

Posted by Jeff Id on January 22, 2014

I woke up the other day wondering if age of a tree would cause a different latewood density response to environmental factors.   I don’t remember reading anything about it so it seemed worthwhile to check.

I am working with 124 tree samples from the Polar.mxd file here. This is the same data as in my previous post. I took each MXD curve, which we showed previously has a long term age related signal, and filtered it with a 51 year low-pass Gaussian filter. The top of Figure 1 shows the original curve (black) and the filtered curve (red). Subtracting the red curve from the black curve gives the HF signal in the second pane of Figure 1, immediately below: basically a flat squiggly line.
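A minimal sketch of the high frequency extraction, assuming the Gaussian filter function ff() from the Proxy Hammer post above and a single tree's MXD series x (both are assumptions, not code from this post):

lowf  = ff(x)     # 51 year low-pass Gaussian filter
highf = x - lowf  # the flat, squiggly high frequency residual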

[Figure 1: signal removal process]

I did the same thing for all 124 trees. Then I took the average for each year and calculated the confidence interval from the flattened high frequency curves. The two sigma CI expands in a trumpet bell shape (dark gray), like the previous post, so we know that differences in long term trend weren't causing the whole shape. The second pane below is the simple standard deviation (blue line). Visually it has no appreciable trend over time, meaning that as the trees age and gain mass, their response to 50 year and shorter signals isn't a function of age. The trumpet bell shape of the dark gray region is therefore due to fewer samples in the older years, and we don't need to correct MXD variance for tree age. It's kind of a lame result, but at least we know the MXD answer.
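For reference, a sketch of the two sigma envelope just described, assuming hf is the age x tree matrix of flattened high frequency residuals (my notation):

n     = rowSums(!is.na(hf))           # sample count at each age
mu    = rowMeans(hf, na.rm = TRUE)    # mean HF signal by age
s     = apply(hf, 1, sd, na.rm = TRUE)
upper = mu + 2 * s / sqrt(n)          # the trumpet bell widens as n falls
lower = mu - 2 * s / sqrt(n)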

[Figure 2: signal removal process – standard deviation]

Posted in Uncategorized | 9 Comments »

Briffa MXD 2013 #1

Posted by Jeff Id on January 20, 2014

I've spent a little time compiling MXD proxies from Briffa 2013. I don't have a lot of results yet, but I thought I would put them up. First, I created a simplified reconstruction by centering the proxies, normalizing by standard deviation, and averaging. That is what Phi claimed to have done, and my data does seem to match his pretty well. The graphs below have a 21 year Gaussian filter.
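A minimal sketch of that simplified reconstruction, assuming prox is a years x trees matrix with NA outside each tree's span (my notation, not the code used for the plots):

prox.c = t(t(prox) - colMeans(prox, na.rm = TRUE))          # center each tree
prox.n = t(t(prox.c) / apply(prox.c, 2, sd, na.rm = TRUE))  # scale to unit sdev
recon  = rowMeans(prox.n, na.rm = TRUE)                     # average by year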

[Figure: basic reconstruction, Briffa 2013]

Phi stated, in paraphrase, that no age correction is needed for MXD data. I took a look at that claim below. By aligning the trees so that their first ring densities started in year 1 and averaging, I created the plot below. The two sigma dark gray region assumes a normal distribution of data, but you can see the red curve exceeds the confidence interval expected if MXD carried no age-based signal.

[Figure: growth curve, polar MXD]

Taking the examination a step further, I was concerned that younger trees, or trees from different periods, would show a different general trend with age. Somewhat arbitrarily, I split the trees into two groups: those whose birthday was before 1500 AD and those after. Both sets of trees exhibited the same initial hump in the first 150 years of growth. It is a bit concerning that they have such strong divergence in their older years. If we normalized everything with a spline fit to Figure 2, we might be able to create quite a blade on the end of our reconstruction.

From this, I think it is quite reasonable to make some kind of RCS-based correction to the growth curves. Ignoring the correction would lead to spurious trends, as the data does have a strong tree age related signal. Finding a reasonable correction, though, is going to be a bit of work.

[Figure: growth curves, polar MXD, sorted by age]

The source code is below; it is pretty messy, so I will clean it up for future posts.

Read the rest of this entry »

Posted in Uncategorized | 26 Comments »

Flexible Thoughts

Posted by Jeff Id on January 19, 2014

I've spent a little time over my years studying the math behind neural networks. I find the self-organization of neural networks to be mathematically interesting in the same way that evolution is. While we humans can't claim to understand the whole of the math of even a single neuron, the basic concepts lead toward some interesting conclusions. I think of a simplified neuron as a device that triggers a binary output based on analog-style weighting of its inputs. The binary threshold, whatever its nature, creates a mathematical stability when presented with a wide variety of simultaneous inputs. That trigger threshold is known to self-adjust as neurons learn, and the combination of inputs from different stimuli on the same neuron means that a single neuron is capable of participating in unrelated decisions at different times. The result is a certain flexibility of thought over time which is beneath the level of our own perception.
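A toy R sketch of the simplified neuron described above (my construction, not a model from the literature): analog inputs, analog weights, and a binary output that fires when the weighted sum crosses the trigger threshold.

neuron = function(x, w, threshold) as.numeric(sum(w * x) >= threshold)
neuron(c(0.2, 0.9, 0.4), w = c(0.5, 1.0, -0.3), threshold = 0.8)  # returns 1: fires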

Neurons are known to continually change connections (mathematical weights) and have a random physical structure which, through feedback, grows in a fashion that is mathematically weighted toward the non-random (learning). This means that the comprehension, memory, and decision making environment in your mind is ever-changing. Sure, some experiences will guide you, but how you perceive things will be guided by a different set of neural connections whose weighting has taken over from your current ones, even if the results are the same. In addition, the math behind the learning process is fully able to accept all kinds of interrelated information and, with the right kind of input, to interpret even the most nonsensical information as truth.

It is particularly important to understand that emotion plays a role in learning, as emotions can direct the physical learning process to absorb or reject information. If a mind wants to believe something is true, it will find extraordinary ways to rationalize how that information comports with reality and be satisfied for the effort. Perhaps emotion encourages or discourages connections of reason, or maybe it just participates in the assembly of incomplete information; the actual function isn't as critical for this post as knowing of its existence. Even more than emotion, learning is especially influenced by exposure to certain facts in repetition. It isn't an unreasonable statement that after years of dedication to the study of a particular phenomenon, confirmation bias is one of the greatest hurdles that a scientist faces.

For these reasons, I believe that the human mind has more flexibility of perception than people often realize.  Rather than being more complex and massive, I believe our personalities are actually smaller and more fluid than is often believed and that highly malleable personalities actually drive us.  You literally won’t be the same person in 10 years that you are today.  I don’t have any physiological evidence to support my statement, so perhaps it is what I want to believe, but it does seem to support my general observations.

Since it is a common contention that we humans are all susceptible to this kind of bias, I have a few examples of outlandish conclusions people have reached, based on my understanding of these principles.

- The oft-stated belief by certain climate scientists that skeptics are oil funded. I like this one because it is relatively benign, but it seems to ignore the possibility that oil companies don't mind the supply restrictions being caused in the energy industry by production-reducing regulations. I've often wondered why people like Michael Mann and Stephan Lewandowsky believe that oil companies don't want supply limitations which cause people to pay more for oil. Paying more per production gallon means more profit. No, I'm not saying that some money from oil companies isn't spent in all directions, but it doesn't take a big leap of understanding to see why the oil companies fund mainstream climate science and promote expensive "green" energy on their signs.

- Belief that industry or capitalism is making people poor. This is a very common conclusion today, and it is no wonder considering the vast repetition of the concept being fed to society, but people are wealthier in capitalist, pro-industry societies than at any time in our history. The bifurcation of logic is extraordinary, yet this message has come to be believed by more people than not. The popular thought today is that it is the fault of business that people are poor, not the fault of the poor individuals who on immediate inspection are not producing anything of value.

- Lewandowsky and Mann “According to the World Health Organization, climate change is already claiming more than 150,000 lives annually (Patz, Campbell-Lendrum, Holloway, & Foley, 2005), and estimates of future migrations triggered by unmitigated global warming run as high as 187 million refugees “   Despite there being literally zero evidence supporting this statement, this is something these authors have come to fully believe.

- Belief that global warming will not only destroy the world but will destroy it soon. This belief is common among those who are immersed in the field of climate science. For example, it isn't hard to visualize James Hansen literally screaming wildly at his keyboard over this old quote, from an article which literally compared coal to the "death trains" of humanity's darkest history:

“The climate is nearing tipping points. Changes are beginning to appear and there is a potential for explosive changes, effects that would be irreversible, if we do not rapidly slow fossil-fuel emissions over the next few decades. As Arctic sea ice melts, the darker ocean absorbs more sunlight and speeds melting. As the tundra melts, methane, a strong greenhouse gas, is released, causing more warming. As species are exterminated by shifting climate zones, ecosystems can collapse, destroying more species.

The public, buffeted by weather fluctuations and economic turmoil, has little time to analyze decadal changes. How can people be expected to evaluate and filter out advice emanating from those pushing special interests? How can people distinguish between top-notch science and pseudo-science?

Those who lead us have no excuse – they are elected to guide, to protect the public and its best interests. They have at their disposal the best scientific organisations in the world, such as the Royal Society and the US National Academy of Sciences. Only in the past few years did the science crystallize, revealing the urgency. Our planet is in peril. If we do not change course, we’ll hand our children a situation that is out of their control. One ecological collapse will lead to another, in amplifying feedbacks.”

Clearly, while there are climate models predicting warming, there is no evidence of the kind of destruction tormenting Dr. Hansen’s mind.  He genuinely believes what he is saying, yet those of us who haven’t been regularly coddled by the “pro-green” message cannot see any of these possibilities as rational.  It is my impression that nothing frustrates Dr. Hansen more than humanity’s refusal to jump off the cliff with him.

Rationalization:

Now clearly, all of the above anti-reason conclusions can be rationalized by an individual.  I can say that easily because so many people find the above concepts certain and true despite what appear to be such nonsensical roots.  You don’t even have to agree that ALL of the statements are nonsensical; just pick one that is, and we can agree that rational irrationality is part of our human nature.  The human ability to rationalize anything is so powerful that entire governments and societies are based on the unprovable certainty that their deity is superior to another’s.  Still, the truly brutal impact of those governments on their populations does disturbingly little to perturb the popular beliefs.

We can conclude from this that, unfortunately, it isn’t the quality of the information which determines the number or percentage of humans who subscribe to a belief, nor even the difficulty of rationalization, but rather the emotional investment and the degree of repetition that make the final determination of what you believe.  Today, many IPCC climate scientists fret over the fact that so many of us cannot see their version of reality, while simultaneously recommending extremely dangerous economic measures like massive curtailment of meat and energy production, or the use of food to produce transportation fuel.  These extreme anti-industrial measures clearly contribute to the impoverishment and starvation of the poorest societies of the world, the very societies the scientists profess to protect.  Some find the solution in giving these poor money, which just enslaves them to the payments. All of this rationalization to address a planetary destruction which at this point isn’t even remotely in evidence.  The scientists are completely immersed in the field, with like-minded peers reinforcing their political and scientific certainty.  For some, it seems that every piece of their self-image has become tied up in this certainty of knowledge of the future climate, as well as in their authoritarian-style global government solutions.

The whole situation is crazy at this point, but the global political news machine is still repeating the doom message at top volume, reinforcing every necessary aspect of global climate doom and promoting the wrong-think it supports.   Plenty of people have declared the end of the world over the years, and so far they are batting a perfect zero.

So, since climate models are running hotter than measurements by 2X or more, and since not a single destructive change definitively related to climate change has been recorded in the entirety of human history, how about we just settle down for, say, 20 or 30 years and see where the climate is then?    Doesn’t that seem a touch more reasonable than making regulations which preemptively shut down energy production or drive food and fuel prices through the roof?

Posted in Uncategorized | 18 Comments »

Adaptation to Global Warming

Posted by Jeff Id on December 15, 2013

A look out the back door in mid-December.

[Photo: the view out the back door in mid-December]

Posted in Uncategorized | 111 Comments »

Taxes and Society

Posted by Jeff Id on December 14, 2013

There is a lot of dogma in politics regarding tax rates and general government performance. I have personally found that most people make statements about politics with literally zero data.  On Thanksgiving, I spent about 4 hours on the government data website (http://www.bea.gov/iTable/iTable.cfm) looking at the various numbers reported there.   A reader, who shall remain unnamed, even stopped by recently claiming that taxes are lower than in 1950, that conservatives make decisions only with emotion (a claim more aptly applied to today’s Marxist, authoritarian-style liberals), and even that we should tax 80% of GDP to maximize government revenue.

Why maximization of government revenue (and therefore influence) is assumed to be a positive goal is something you should ask one of those authoritarians, because it runs counter to everything a government should be attempting to do.   Still, I found some interesting facts about government tax-taking and spending.

Are we paying more taxes than before?   Taxes are taken from us in so many ways that it is very difficult to add them up.   It seems, to those of us experiencing it, that taxes are continually on the rise and rarely pull back even a little.  Taxes are a financial load; where they are taken from does have an effect, but how much is paid in total is what matters when we consider the cost to society. First I looked at the per capita tax paid into government across all sources.

[Graph: per capita tax paid into government from all sources, in 2012 dollars]

So, in 2012 dollars, the 1950 population paid on average $4,000 per person into the US government.  By 1998 that had increased to $14,000 per person!   We are definitely paying more tax than in 1950.  It also means that if you have a family of 4 and are paying less than 4 x $14,000 = $56,000 into the government, you are paying less than your share of those taxes, and the difference is being taken on your behalf from someone else.
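For readers who want to check the math themselves, here is a minimal sketch of the per-capita calculation in Python.  Every number below is an illustrative placeholder, chosen only to roughly reproduce the $4,000 and $14,000 figures above; the real inputs would come from the BEA tables linked earlier.

# Per-capita tax in constant 2012 dollars; all values are illustrative placeholders.
receipts = {1950: 65e9, 1998: 2.8e12}    # total government receipts, current dollars (hypothetical)
population = {1950: 152e6, 1998: 276e6}  # approximate US population
deflator = {1950: 0.105, 1998: 0.70}     # price level relative to 2012 (hypothetical)

for year in (1950, 1998):
    per_capita_2012 = receipts[year] / population[year] / deflator[year]
    print(f"{year}: ${per_capita_2012:,.0f} per person in 2012 dollars")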

Most people don’t realize that even if they don’t write $14,000 in checks, they still end up paying that tax in the form of lower salary, higher prices, etc., and that means they have lost influence over that cash.  Loss of income (taxation) in any form is a strong limitation on behavior and a general decrease in personal freedom.  Said another way, when money is taken from your employer, that money was something you helped create, yet high corporate taxes mean you no longer have any control over directing that cash to your betterment.  You are less powerful and less free as an individual.

Many would say, “Jeff, that’s not fair.  You need to recognize that people make more money than they did in 1950.  The GDP-corrected numbers are a fairer comparison.”   This logic fails to recognize that we are still asking the government to provide 3 times more service per person than we needed only 60 years ago, during our best years.   This increased service represents a further loss of control through added regulation, and compliance with those regulations is a double hit on the economy from the same tax dollars.  Paying our already massive government to create hurdles for the economy is very expensive, and we see the results all over the country. Do you realize how many businesses have left California, and why? This also applies to the argument about whether we should ever try to maximize government “revenue” as a means to promote the general welfare.  Still, the GDP graph doesn’t tell that great a story either:

[Graph: total government tax receipts as a percent of GDP]

I’ve shown this graph before.   Tax receipts as a percent of GDP haven’t changed much since 1960.   They peaked in 1999 under Bill Clinton, and the percentage dropped off precipitously in his last two years in office.    Combine the first and second graphs with the fact that additional per capita government spending results in a double hit on the economy, as businesses and individuals invest more money and time toward compliance with the EPA, Education, the IRS, traffic laws, employment law, and many other well-known increases in regulation.   It should then be obvious, even to the left, that the total governmental financial load per capita follows the first “total dollars paid” graph, NOT the percent-of-GDP graph.  This is a very important point that is lost in the discussion of tax rates, and the subtlety is often missed with intent.
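To see why the distinction matters, here is a toy example (all numbers made up for illustration) showing how a flat tax share of GDP still translates into a steeply rising dollar load per person as real GDP per capita grows:

# Illustrative only: a constant share of GDP applied to a growing per-capita GDP.
gdp_per_capita = {1950: 15_000, 2012: 50_000}  # hypothetical real GDP per person, 2012 dollars
tax_share = 0.28                               # hypothetical share of GDP, held constant

for year, gdp in gdp_per_capita.items():
    print(f"{year}: ${gdp * tax_share:,.0f} per person at a flat {tax_share:.0%} of GDP")
# The share never moves, yet the per-person dollar load more than triples.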

We are paying much higher dues for our government than we ever have in history.  With each new group’s pet peeve, excessive regulation has invaded every aspect of American life.  The land of the free can’t even make its own decisions on the size of soda to buy.   Obviously, compliance dollars are much harder to quantify than governmental budget dollars, so we will move on to some other interesting plots.

[Graph: total government spending over time]

I’ve also shown a version of this plot before; the 4 years since Obama took office are incredibly stark considering that we are not at war.   Where this money is going will be a bit of a surprise to some here.  The next plot is the one I found worth writing this blog post about.

[Graph: non-pension transfer payments as a percentage of government expenditures]

This graph shows what percentage of government expenditures is handed out as checks other than pensions or tax refunds.   If you receive Social Security, Social Security disability, Medicaid, Medicare, food stamps, unemployment checks, etc., this graph shows the total percentage of cash being handed to people in the form of checks.

Until 1970, 20 percent of government revenue was used to help those in need.   During the 1970s we reached a full 30%.   That held flat all the way until Bush Jr. took office, when he managed to jump socialist-style payments to 40% of tax revenue.  I haven’t studied which policies did what, but the website I linked has additional numerical detail that could allow us to figure it out.   What is again clear is that when Obama took office, he jumped total payments to “needy” citizens by an overwhelming 15 percentage points.  We are now paying 55 percent of total tax revenue as checks to those deemed by government to be in need, and it started in the first year he took office!

Of the $14,000 taken from every American in some form or other, $7,700 is being handed out to the less successful.   I can imagine no greater danger to economic health than this situation.   Obama told us during the election that this was his intent. Remember the exchange with ‘Joe the Plumber’ where Obama said, “when you spread the wealth around, it’s good for everybody.”   Now you can see the result of the new policy in dollars in the graph above; what many aren’t noticing is what these payments not to work are doing to society.
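As a quick check on that arithmetic, using only the two figures reported in this post:

# Quick arithmetic check using the post's own numbers.
per_capita_tax = 14_000   # per-capita tax in 2012 dollars, from the first graph
transfer_share = 0.55     # share of revenue paid out as checks, from the last graph
print(f"${per_capita_tax * transfer_share:,.0f} per person")  # prints $7,700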

A large chunk of this money is a massive incentive for single moms to avoid college and have babies.  In the US you can make a solid five figures, with health care, by avoiding education and steady work while producing babies, and it isn’t hard to do.   We have had dozens of employees who have chosen this route.   They work for a bit, and when the benefits disappear, they go home and get the benefits back.   They don’t make much money and complain that they are underpaid, but the money which used to go to salary 30 years ago is being taken out the back doors of businesses to support these massive social bribe programs and ever-greater regulatory costs.

These people are being effectively enslaved; when they pass childbearing age, they will have no skills and little personal property, and they will have no ability to get out of the situation this government created.   Socialism is universally ineffective at solving the problems it purports to address.   These people, who are disproportionately minorities, are being locked into low-wage jobs and a long-term mediocre economy.

The same programs have reduced the barriers to entry for Social Security disability, allowing functional people to find loopholes to permanent paychecks.  Back pain, mental issues, and other problems which people once worked through have become passes to a soft, easy life.

Worse, I don’t see many people discussing these real and critical issues in a rational fashion.   The media, which used to be the immune system against political corruption, largely believes in the endless flow of government cash and currently takes no time to call out pro-government politicians of any party.   Republicans and Democrats both continue to march in the same direction, as evidenced by the first graph in this post.  Bit by bit, and none too slowly, we are reaping the growing problems caused by these policies.   It is blindingly obvious now that we should reverse course on much of the social spending and enact common-sense pro-business, pro-industry reforms.  Our quality of life in the US is being corroded by a bloated, authoritarian central government and is rapidly being replaced with something much, much bleaker.

Posted in Uncategorized | 44 Comments »
