## SNR Estimates of M08 Temperature Proxy Data

Posted by Jeff Id on August 19, 2010

Occasionally, when working on one thing long enough, you discover something unexpected that allows you to take a step forward in understanding. At the ICCC conference, I met Steve McIntyre and took time to ask him why Mann07, “Robustness of proxy-based climate field reconstruction methods,” didn’t show any variance loss in the historic signal. The paper makes the claim that CPS is a functional method for signal extraction, which I’ve long and vociferously contested 😉 . Neither of us had a good answer, but I had to know. In Mann07 – Part II at the Air Vent, the mystery was solved. The M07 paper uses model data as a ‘known’ temperature signal and adds various levels of noise to it. While the work oddly uses white noise in most operational tests, it does present an example using ARMA (1,0,0) ρ = 0.32 noise models, and it showed very little variance loss. Replicating M07 using CPS wasn’t difficult and the results were confirmed – no historic variance loss, so no artificially flat handle for the Mann hockey stick.

With white noise or low-autocorrelation noise, there will be none of the variance loss (HS handle) reported in von Storch and Zorita 04, Christiansen 2010, McIntyre and McKitrick 05, or numerous other studies. This is because low-AR noise doesn’t create signal-obscuring trends on a long enough timescale to make a difference. However, if red noise having autocorrelation which matches observed values in proxies is used, we get a whole different result, overturning the conclusions of Mann07. But this isn’t the topic of this post.

In demonstrating the effect in the recent Mann07 post at the Air Vent, we used model temperature data and added AR1 noise which matched the parameters from the 1209 proxies in Mann08. From that work we discovered that screening with CPS at a correlation threshold of r = 0.1 results in a very high acceptance rate: 87% of the pseudoproxies passed screening, even though they had a signal-to-noise amplitude of only 0.25. This is significant because even SNR = 0.4 was considered a conservatively noisy proxy in M07.

From the M07 paper:

Experiments were performed for five different values of SNR: 0.25, 0.4, 0.5, 1.0 and ∞ (i.e., no added noise).

and:

We adopted as our ‘‘standard’’ case SNR = 0.4 (86% noise, r = 0.37) which represents a signal-to-noise ratio that is either roughly equal to or lower than that estimated for actual proxy networks (e.g., the MXD or MBH98 proxy networks; see Auxiliary Material, section 5), making it an appropriately conservative standard for evaluating real-world proxy reconstructions.

Remember that in Mann08, so thoroughly deconstructed at Climate Audit, they managed to retain only 40% of the proxy data despite testing against the two closest gridcells.

Below is the third time I’ve presented this plot of AR1 coefficients for the 1209 proxies; however, in the recent MW10 paper, they calculated the same thing, providing verification of this result. In my case, R threw an error after 1043 proxies when fitting the ARMA (1,0,0) model. It was difficult to figure out how to catch the error, so being an engineer, I assumed that this was a good enough sample of the 1209 and used 1043 autocorrelation rhos for this work. As confirmation of the assumption, my histogram was verified by McShane and Wyner’s recent paper, so widely discussed in recent weeks.
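For anyone who hits the same wall, here is a minimal sketch (the function name is mine, not part of the script) of wrapping the fit in `tryCatch` so a non-converging series returns NA instead of stopping the whole loop:

```r
# Sketch: catch the arima() failure so one bad proxy doesn't kill the loop.
fit_ar1 <- function(x) {
  tryCatch(arima(x, order = c(1, 0, 0))$coef[["ar1"]],
           error = function(e) NA_real_)   # return NA on any fitting error
}

set.seed(42)
ok  <- fit_ar1(as.numeric(arima.sim(n = 500, list(ar = 0.4))))  # valid AR coefficient
bad <- fit_ar1(numeric(0))                                      # arima() errors -> NA
```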

The lightbulb went off when I realized we can use this approach to make an estimate of the signal to noise level in actual M08 temperature proxies. It’s done by adjusting the signal to noise in the pseudoproxies (model plus AR1 noise) until they match the 40% retention rate of Mann08. The code was already written to create 10,000 pseudoproxies with an autocorrelation histogram matching the above plots so it was a simple matter to finish the job. Sometimes it pays to work hard.
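The tuning idea can be sketched in a few lines. This is a toy version (the names and the stand-in “model temperature” series are my own; the real code below uses the model data and the fitted M08 rho histogram): dial the signal amplitude up or down until the fraction passing the r > 0.1 screen matches M08’s 40% retention.

```r
# Toy sketch of the SNR calibration step, not the post's full procedure.
set.seed(1)
signal <- as.numeric(scale(cumsum(rnorm(150))))  # stand-in calibration-era signal

pass_rate <- function(snr, n = 500, rho = 0.32) {
  mean(replicate(n, {
    noise <- as.numeric(arima.sim(n = length(signal), list(ar = rho)))
    proxy <- snr * signal + noise                # pseudoproxy = scaled signal + red noise
    cor(proxy, signal) > 0.1                     # the CPS screening test
  }))
}
# sweep snr and read off where pass_rate crosses the observed retention rate
```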

By adjusting the SNR of the pseudoproxies and rerunning the code, **I came to a result of an SNR of 0.085, or 8.5 percent amplitude signal, to match the 40 percent proxy retention of M08 at a correlation of 0.1.** Put another way, this is about 12 to 1 noise to signal, as compared with Mann07’s allegedly ‘conservative’ estimate of 40 percent signal to noise, or 2.5 to 1 noise to signal. Not really very close.

Many could make the case that AR1 is a poor model for proxies and that other models would be better. This may be true, but autoregression is a persistence of signal, nothing more. By persistence, I mean each datapoint has some similarity to the last. Changing the mathematical nature of the persistence will cause subtle differences in result but it will *not* reverse a result this strong. The only thing that I believe could reverse a result like this is a big error on my part in the code.
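To make the persistence point concrete, here is a quick illustration (parameters are arbitrary, chosen only for the demo): in an AR1 series each point is correlated with the previous one, and a higher rho means slower, more trend-like wander.

```r
# Demonstration: lag-1 autocorrelation of simulated AR1 series recovers rho.
set.seed(4)
x_low  <- as.numeric(arima.sim(n = 5000, list(ar = 0.1)))  # weak persistence
x_high <- as.numeric(arima.sim(n = 5000, list(ar = 0.9)))  # strong persistence

lag1 <- function(x) cor(x[-1], x[-length(x)])  # sample lag-1 autocorrelation
# lag1(x_low) comes out near 0.1, lag1(x_high) near 0.9
```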

So I’m forced to clean it up and present it.

But before we have that fun, let’s look at what a CPS reconstruction with a 0.1 correlation threshold looks like, using model data that has AR1 noise matched to Mann08.

The black line is the reconstruction; note the variance loss compared to the actual target model signal in green. The standard deviation of the reconstruction (black) in the pre-calibration period is 32 percent of that of the original target (green) signal. At first, I looked at this and thought something didn’t look right: there is too little variance in the historic signal. To examine that, I visually compared it to M08 below. Orange is the comparable line.

There is more curvature in the Mann08 graph, but on a two-hundred-year scale the most I see is -0.2 to -0.8, while from 1500 to 1900 it varies between 0.3 and 0.6. My curve demonstrates less century-scale variance in some areas (maybe not), but perhaps the curvature and differences are due to the AR1 noise versus an actual non-temperature signal in proxy data not captured in models. In other words, the tameness of the model temperature signal may not match a systematic non-temperature signal contained in the proxies.

Either way, the *signal* is substantially repressed compared to **actual**, *and this result will be difficult to reverse by minor adjustments in assumption.*

As a further confirmation of the methods used here, here is a quote from the M08 SI presented at Climate Audit, because that’s where I happened to be reading when I ran across this point. More on voodoo correlations.

Although 484 (~40%) pass the temperature screening process over the full (1850–1995) calibration interval, one would expect that no more than ~150 (13%) of the proxy series would pass the screening procedure described above by chance alone.

This is also the first time we’ve been able to test the zero-signal screening 13% expectation using proxy data as a verification. If my result were substantially different, it would demonstrate a problem either in the pseudoproxies used for this result or in the statement itself. However, using AR1 noise over the rho spectrum of M08, I found reasonably good agreement: 15 percent of pure red-noise series having the same histogram as M08 passed screening.

To grasp the sensitivity of the screening percentage, consider that 8.5 percent signal corresponds to 40% passing screening and 0% signal corresponds to 15 percent passing. I think fifteen percent is an excellent, and after two years of paleoclimate literature a little surprising, confirmation of Mann’s assertion that 13 percent will pass by random chance. The corroboration here is supportive of the quality of the pseudoproxies and of the result that only 8.5 percent of the proxy signal has a temperature component.
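The zero-signal baseline is easy to check independently. A minimal sketch (assumed setup, mirroring the test described above with a single rho rather than the full M08 histogram) of how often pure AR1 red noise, with no signal at all, clears the r > 0.1 screen against a fixed target series:

```r
# Baseline check: pass rate of signal-free red noise under the CPS screen.
set.seed(2)
target <- as.numeric(scale(cumsum(rnorm(150))))  # stand-in target series

chance_pass <- mean(replicate(2000, {
  noise <- as.numeric(arima.sim(n = 150, list(ar = 0.32)))  # pure red noise
  cor(noise, target) > 0.1                                  # screening test
}))
# chance_pass is well above zero: noise alone passes at a nontrivial rate
```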

Conclusions:

By using pseudoproxies, we have calculated three new estimates related to Mann08:

1 – Signal level estimate of M08 proxy data is about 8.5 percent.

2 – Historic temperature variance in M08 was estimated at 40% of actual signal.

3 – The pass rate of AR1-matched proxies with no temperature signal, screened against model temperatures, is about 15%.

Caveats and differences:

Mann08 used the two closest temperature gridcells for threshold testing; this result used only one. Comparison to the two closest curves would bias the answer in #1 higher than actual, meaning 8.5 percent is a high estimate using this method and data. In #2, the variance loss would be even more significant if a pick-two had been used. In #3, the passing of pure random noise would be higher if we took two chances to match. Therefore, in all cases, the results presented here would worsen had a pick-two method been used.
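The pick-two effect on pure noise is simple to illustrate. A hypothetical toy example (not from the post's code): a series passes if it clears the threshold against *either* of two candidate target series, so the pass rate of signal-free noise rises.

```r
# Illustration: "pick two" screening inflates the pass rate of pure noise.
set.seed(3)
t1 <- as.numeric(scale(cumsum(rnorm(150))))  # stand-in target gridcell 1
t2 <- as.numeric(scale(cumsum(rnorm(150))))  # stand-in target gridcell 2

one_rate <- mean(replicate(2000, cor(rnorm(150), t1) > 0.1))
two_rate <- mean(replicate(2000, {
  x <- rnorm(150)                            # white noise "proxy"
  cor(x, t1) > 0.1 || cor(x, t2) > 0.1       # two chances to pass
}))
# two_rate exceeds one_rate: an extra chance to pass the screen
```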

Mann08 also used infilled proxies in its study. Since we were matching the reject rate of Mann08 with these pseudoproxies, this infilled data could have a substantial effect on the signal estimate in #1. The true amount of signal, for a single-pass screening rate of 20 percent, could drop from 8.5 to as low as 2 to 4 percent, and at this point there is a possibility that perhaps there isn’t any discernible temperature signal at all in the proxies.

Of course, one could always be wrong, but considering that of the 40% which pass screening, the Luterbacher series (6% of the 1209) contained actual temperature data and Briffa’s MXD data (8.6%) were infilled for 40 years, it’s no longer outside the realm of reason, in my opinion, to consider that we might have very little or no signal at all.

———

McIntyre, S. and McKitrick, R. (2005a). Hockey sticks, principal components, and spurious significance. Geophysical Research Letters 32.

Mann, M. E., Rutherford, S., Wahl, E., and Ammann, C. (2007). Robustness of proxy-based climate field reconstruction methods. Journal of Geophysical Research 112.

Mann, M. E., Zhang, Z., Hughes, M. K., Bradley, R. S., Miller, S. K., Rutherford, S., and Ni, F. (2008). Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia. Proceedings of the National Academy of Sciences 105, 36.

von Storch, H. E., Zorita, E., Jones, J. M., Dimitriev, Y., Gonzalez-Rouco, F., and Tett, S. (2004). Reconstructing past climate from noisy data. Science 306, 679–682.

——–

Added per request by Richard Telford.

——–

Nothing left to do but a bit of documentation.

The following code is designed to be turnkey; however, several libraries may be required. I’m not sure, because I’ve already installed them. Fortunately, chances are that if you are a regular CA reader, you have too. 😀

```r
library(gplots)    # for smartlegend
library(R.matlab)  # for readMat, used below (missing from the original listing)

## TEMPERATURE CORRELATION
# this calculates statlist with 6 items in list

###########################################################################
##################### TURNKEY DATA DOWNLOADER #############################
###########################################################################
# Mann Proxy Data and Info
method = "online"
if (method == "offline") {
  load("d:/climate/data/mann08/mann.tab")
  load("d:/climate/data/mann08/mannorig.tab")
  load("d:/climate/data/mann08/idorig.tab")
  names(mannorig) = idorig$idorig
  load("d:/climate/data/mann08/details.tab")
  source("d:/climate/scripts/mann.2008/instrumental.txt")
} else {
  url = "http://www.climateaudit.info/data/mann.2008"
  download.file(file.path(url, "mann.tab"), "temp.dat", mode = "wb"); load("temp.dat")
  download.file(file.path(url, "mannorig.tab"), "temp.dat", mode = "wb"); load("temp.dat") # was file.pth, a typo
  download.file(file.path(url, "details.tab"), "temp.dat", mode = "wb"); load("temp.dat")
  download.file(file.path(url, "idorig.tab"), "temp.dat", mode = "wb"); load("temp.dat")
  names(mannorig) = idorig$idorig
  source("http://www.climateaudit.org/scripts/mann.2008/instrumental.txt")
}
mm = readMat("C:/agw/pseudoproxy paper/proxynorm A.mat") # load model data with no noise

############################################################################
################ FUNCTIONS #################################################
############################################################################

######## CREATE TIME SERIES ##########
gg = function(X) {
  ts(X[, 2], start = X[1, 1])
}

######## GAUSSIAN FILTER FUNCTION TESTED AT CA #########
######## set to 11 years ###############################
source("http://www.climateaudit.info/scripts/utilities.txt")
ff = function(x) {
  filter.combine.pad(x, truncated.gauss.weights(11))[, 2]
}

##########################
##### END FUNCTIONS ######
##########################

### calculate ARMA (1,0,0) for mann 08 proxies
use0 = "pairwise.complete.obs"
arm = array(NA, dim = c(1209, 3))            # INITIALIZE ARMA ARRAY
for (j in 1:1209) {                          # LOOP THROUGH ALL PROXIES
  X = gg(mann[[j]])                          # GET INDIVIDUAL PROXY, convert to time series
  X = X - mean(X)                            # REMOVE MEAN - CENTER DATA
  X = X / sd(X, na.rm = TRUE)                # NORMALIZE DATA BY STANDARD DEVIATION
  ar = tryCatch(arima(X, order = c(1, 0, 0)),# FIT ARIMA MODEL; tryCatch added here -
                error = function(e) NULL)    # the original loop stopped with an error at proxy 1044
  if (is.null(ar)) next
  print(ar[[1]][1])                          # PRINT AR COEFFICIENT
  arm[j, 1] = ar[[1]][1]                     # STORE AUTOREGRESSIVE COMPONENT
  arm[j, 2] = ar[[1]][[2]]                   # STORE INTERCEPT (an ARMA (1,0,0) fit has no MA term)
  arm[j, 3] = ar$sigma2                      # STORE FIT QUALITY
}
hist(arm[1:1043, 1], breaks = 20, main = "ARMA (1,0,0) AR Coefficient", xlab = "Rho") # first 1043 fits, as in the post
abline(v = .32, col = "red")
#savePlot("c:/agw/pseudoproxy paper/rho histogram.jpg", type = "jpg")

##### create simulated noise with autoregression histogram equal to M08
sim = array(0, dim = c(1131, 10000))         # SET SIMULATED DATA STORAGE ARRAY
# CREATE 10000 SIMULATED PROXIES WITH AR (1,0,0) MATCHED TO M08
# THIS USES NORMALLY DISTRIBUTED RANDOM DATA AND THE PARAMETERS CALCULATED
# ABOVE TO CREATE M08-LIKE PROXIES
for (j in 1:100) {
  prox = mm$proxynorm[, j]
  ss = sample(1:1043, 100, replace = F)      # randomly select 100 of the 1043 fitted mann proxies
  for (i in 1:100) {
    val = ss[i]                              # sd moved out of the model list, where arima.sim ignores it
    sim[, i + 100*(j-1)] = arima.sim(n = 1131, list(ar = arm[val, 1], ma = 0), sd = 1) + prox * .085
  }
  print(j)                                   # track progress
}

###################################################################
## IN THIS SECTION WE LOOK FOR A SIGNAL IN THE MODEL-SIGNAL-PLUS-RANDOM DATA USING CPS
sim = ts(sim, end = 1990)
val = window(sim, start = 1850)              # TRUNCATE TO YEARS WHERE FAKE CALIBRATION TEMP FUNCTION EXISTS
# CPS USES ONLY CALIBRATION RANGE TO SCALE AND OFFSET PROXY
calsig = window(ts(mm$proxynorm[, 1:100], end = 1990), start = 1850)
msig = colMeans(calsig, na.rm = T)           # MEAN OF CALIBRATION TEMP FUNCTION
ssig = apply(calsig[, 1:100], 2, sd, na.rm = TRUE) # SD OF CALIBRATION TEMP FUNCTION (sd() on a matrix is deprecated)
cpsval = array(NA, dim = c(1131, 10000))     # SET SIMULATED DATA STORAGE ARRAY
for (j in 1:100) {
  for (k in 1:100) {
    i = k + 100*(j-1)
    cc = cor(calsig[, j], val[, i])
    m0 = mean(val[, i], na.rm = T)           # MEAN OF PROXY SERIES IN FAKE CALIBRATION TEMP YEAR RANGE
    sd0 = sd(val[, i], na.rm = T)            # SD OF PROXY SERIES IN FAKE CALIBRATION TEMP YEAR RANGE
    ################# CPS METHOD IS APPLIED HERE #######################
    y = sim[, i]                             # ASSIGN SIMULATED SERIES TO Y FOR CONSISTENCY WITH PREVIOUS DEMO
    yscale = y - m0                          # CENTER: SUBTRACT MEAN OF PROXY IN CALIBRATION RANGE
    yscale = yscale / sd0                    # SCALE: DIVIDE BY SD OF PROXY IN CALIBRATION RANGE
    yscale = yscale * ssig[j]                # SCALE: MULTIPLY BY SD OF CAL TEMP - matches proxy wiggles to cal temp
    yscale = yscale + msig[j]                # CENTER: ADD OFFSET TO MATCH CAL TEMP
    if (cc > .1) {                           # sort proxies to this correlation level
      cpsval[, i] = yscale
    }
    print(cc)
  }
}
output = rowMeans(cpsval, na.rm = TRUE)      # ANNUAL MEAN OF OUTPUT
output = ts(output, end = 1990)              # MAKE OUTPUT A TIME SERIES FOR PLOTTING
m = ts(rowMeans(calsig), end = 1990)         # M IS THE MODEL TEMPERATURE DATA WE'RE CORRELATING TO
act = ts(rowMeans(mm$proxynorm[, 1:100]), end = 1990)
output = output * sd(m) / sd(window(output, start = 1850)) # rescale CPS to fit
output = output + mean(m) - mean(window(output, start = 1850))
sum(!is.na(cpsval[1, ]))                     # count of retained series
plot(ff(output), type = "l", main = "Mann07 Pseudoproxy Using Mann08 Autocorrelation",
     ylim = c(-1.25, 1), xlab = "Year", ylab = "Temp")
lines(ff(m), col = "red", lwd = 3)
lines(ff(act), col = "green", lwd = 1)
grid()
smartlegend(x = "left", y = "top", inset = 0, fill = 1:3,
            legend = c("CPS Result", "Artificial Calibration Temperature", "Actual Temperature"), cex = .7)
#savePlot("c:/agw/pseudoproxy paper/cps on pseudoproxy 40pct retained.jpg", type = "jpg")
```


## John F. Pittman said

Nice point on infilling and Luterbacher. And we haven’t even examined the linearization assumption in tree-rings, perhaps other proxies as well. I think they need to go back to first botanical principles and start over. Without good physical reasoning, I don’t think reconstructions are going to pass.

## Jeff Id said

#2 Thanks John,

It’s never easy to tell what people will want to comment on. This was one of my favorite posts lately but it has about 200 views already and only one comment. I thought a math demonstration that there is basically zero signal in proxies was pretty exciting. haha.

## stan said

I agree with John (#2),

“I think they need to go back to first botanical principles and start over”

Amen.

## RomanM said

#3 Jeff

Don’t worry about the lack of early comments. People need to download some of your code and run the script first before they can have something cogent to say.

## Brian H said

An interesting study of how to turn signal to noise, and vice versa. 😉

## Jeff Id adds to Manns miseries « Co2fan's Blog said

[…] William M Briggs blog for a review See Jeff Ids latest treatise on the temp signal in the Mann proxies: SNR is way too […]

## Mark T said

Um, I hate to point out the obvious but an SNR of 1 is not “no noise,” an SNR of 1 is equal parts noise and signal. If there is no noise the SNR is infinite (x/0 == infinity for 0 < x < infinity).

Did he really write this? If he used the SNR # anywhere his calculations are incorrect.

Mark

## Jeff Id said

#8 It looks like the infinity symbol didn’t copy right; it’s kind of weird that it turned into a 1.

## CoRev said

Jeff, thanks for the mathematical proof for what many of us assumed from the beginning. How can tree rings be used as a proxy for warmth? Tree rings are representative of many local conditions so filtering for one is impossible.

So don’t despair re: the comment numbers. We agree! It just didn’t take the math to confirm our beliefs.

## Jeff Id said

#10, It is the first time I’ve quantified the SNR in proxies, it was pretty cool in my view, but horribly unpopular for comments on two blogs. As I’ve continued messing with the code last night, it really is sinking in that the temp signal may in fact be demonstrably not there. What if you can prove that there is no discernible temperature signal in tree proxies? There should be some response to temp but maybe it’s so little as to be undetectable in tree ring widths.

## Mark F said

Didn’t a gent by the name of Viterbi advance the science of extracting useful signal from apparent noise? Just a *very* uneducated thought, as I flee from matrices and DEs…

## Mark T said

There’s more than just Viterbi: literally hundreds, if not thousands, of methods that are useful for extracting signals from noise.

Viterbi is famous for uncovering an iterative way to determine the maximum likelihood estimation of a signal in noise (essentially). More particularly, the Viterbi algorithm was originally designed to decode convolutional codes in noisy communication links. From what I understand, he was not immediately aware that his algorithm boiled down to MLE until someone else pointed that fact out. He went on to found Linkabit and Qualcomm and is now, rightfully, probably a billionaire.

Mark

## Mark T said

Note that as with any other signal extraction algorithm, definitions of “signal” and “noise” are typically required a priori. Otherwise, there is no way to know what you’ve actually extracted. Just because you have the ML (or MMSE or MAP) estimate from a set of signals does not mean it has any relation to a physical phenomenon.

To my knowledge, nobody has ever clearly defined either w.r.t. tree-rings.

Mark

## Mark T said

I doubt it would be provable in a true sense, though possibly something that could be empirically demonstrable. My guess has always been that the divergence problem is precisely why you cannot trust such a “signal” to be there arbitrarily over time.

Mark

## ldlas said

http://climate.arm.ac.uk/publications/tree_rings.pdf

REPLY: I didn’t miss your link, and will try to read it tomorrow. Papers typically take me a long time to absorb so if you want to make a point a few words are usually helpful.

## Kenneth Fritsch said

Jeff ID, I think you might get more responses if you make clear the transition of data and methods from Mann 2008 and 2007 and then clearly show what Mann used for proxy selection and what you used and any differences. You have replicated the no variance attenuation that Mann found in Mann 2007 using his autocorrelation and SN values. It should be made clear where you disagree with Mann or where you are merely making calculations that Mann had ignored or not made. It is easy for me to get lost in these discussions unless I go back and comprehend the papers as well as the person writing about them.

It appears that Mann 2007 somehow estimated a signal to noise ratio of 0.4 as somehow typical of the proxies in the reconstruction and I assume that is all red noise. For the proxies used in Mann 2008 you obtain a much different SN ratio using a proxy selection criteria that is the same or different than Mann used?

The upshot of your analysis would appear to me to be twofold: a low signal-to-noise ratio, when that noise is red noise, will definitely attenuate the reconstruction variance, and Mann’s estimate of the SN ratio in Mann 2007 is critically different from what you found for the proxies in Mann 2008.

## Jeff Id said

#17, Thanks, I really saw this as a good step forward in understanding paleo reconstructions, which is why I offered it to CA. I think you must be right that some of the explanation is missing. It really was an exciting result to me, but for some reason it wasn’t communicated well enough. Perhaps a re-explanation is in order with basic points followed by the more complicated conclusions.

I can’t yet find any errors in this post, and have tried several times because it is an extreme result. Richard Telford is a paleo guy who wasn’t convinced, didn’t like my R (rightfully so), yet didn’t identify any error either.

Just as a tease to you, if I wrote it with a bit of emotion, I bet a hundred people would comment before the end of tomorrow :D.

It doesn’t matter for here, but I wish it would have worked better for CA. There are a lot of math guys there who could have figured out what was going on, but probably didn’t spend the time.

## Layman Lurker said

#18

Good job Jeff. If it stands up, I don’t see why this wouldn’t be publishable. It wouldn’t have to be positioned as a rebuttal of Mann 07 or 08 – there are other angles.

## Mark F said

#13:

Duhhh – the instant I hit “send”, I realized that in information theory and data recovery, the signal being sought has a known structure, while the temp or ANY climate signal is most certainly NOT known. I was taking a random walk in the dark and I bumped into a tree with rings.

## Kenneth Fritsch said

I find this comment in your thread introduction could be confusing to some of us readers.

Read as “With white noise or low autocorrelation noise, there will be no variance loss (HS handle) as reported in VonStorch and Zorita 04, Christiansen2010, McIntyre Mckitrick o5 or numerous other studies.” means something entirely different than “With white noise or low autocorrelation noise, there will be none of the variance loss (HS handle) as was reported in VonStorch and Zorita 04, Christiansen2010, McIntyre Mckitrick o5 or numerous other studies.” In the first case you are agreeing with the other authors and in the second you would be disagreeing with them.

Also I thought that red noise *was* the topic of this post.

## RuhRoh said

Jeff;

This reminds me of times when I have a breakthrough to a big insight, and then expect folks to get it by walking them down my pathway to it.

They are inevitably underwhelmed, if not baffled.

Your headline did capture the message, but then you eschewed the traditional operatic overture, teasing the audience with a nifty medley of the upcoming themes.

Instead, you cut directly to the arcane train of thought, that led to the conclusion, and even that is scarcely reiterated. I know, I know, repetition is insulting, but certain things are worth repeating.

I think that many really good engineers have this capability, of exploring and unearthing new insights. (perhaps you meant that a flashbulb ‘went off’, rather than implying that you discovered the ‘dark bulb’ …)

When I get a lukewarm response to a great breakthrough, my algorithm is now to give a lecture to an empty room, and keep track of the things I say that explain my slides. Then I trash the text on the slides and use my verbal explanations (of relevance) as the captions to the figures. Sometimes backwardize the slide order…

Subsequently, I make a slightly more detailed version for people who are intrigued, but unconvinced. and so on…

I read a compelling book recently; ‘Look Me In The Eye’, about a guy who figures out that asperger’s is the syndrome he embodies…

Some of the stuff of his life rang true for me, but then I’d think “Well, at least I never did anything like THAT!” a bit further along in the text…

You’ve done the important synthesis work; now it is just a question of conveying the work product and explaining it in a ‘standalone’ approach.

Your comments about the R code libraries are a reprise of this theme;

You’ve lost the ability to approach the breakthrough comprehension *de novo*, because you’ve already pre-loaded a lot of key stuff. So, go play with the kids, and take a break from the mental resonance.

This is fine work. I anticipate comprehending it more fully when you are able to present it without assuming that the reader has equal grasp of the various Mann concepts as you now have.

I’m fairly caught up in the things that only I understand, and am trying to convey them to the world. So, maybe I’m projecting my situation onto you…

Best,

RR

## DeWitt Payne said

Jeff,

If I use a FARIMA model with ar=0.44, d=0.25 and ma=0, I get about 40% of the random series accepted at r=0.1, not 15%. If I add a real signal, the acceptance goes up, but the sine-wave part of the signal gets seriously squished at low signal to noise. For this graph, the acceptance was about 45%.

## Jeff Id said

#23, That’s very interesting. It looks like I’ll need to spend some time with different forms of signal persistence. Can you explain why there is so much difference?

## DeWitt Payne said

Re: Jeff Id (Aug 31 21:01),

I think it’s because a fractional difference model is even redder than a simple auto-regression model. There’s more power at the lowest frequencies. About half the proxies that passed the screen have an AR coefficient less than 0.1 and a difference factor more or less evenly distributed between 0.1 to 0.5. The fractional difference coefficient is related to the Hurst coefficient by d = H-0.5 . I need to do an ARMA (1,0) fit on the synthetic series to see how it compares. The next step is to generate a spectrum of synthetic series to match the range of the proxies.

## DeWitt Payne said

I’ve been doing some noise model fitting to the 1209 infilled proxies. An ARIMA (1,0,0) model isn’t really a very good general model. Adding an MA component, i.e. (1,0,1), produces a lot of significant, large negative MA coefficients (between -0.1 and -1.0). IIRC, that’s a sign of improper model specification. An ARFIMA (1,d,1) model, where d is in the range 0 – 0.5, shifts most of the MA coefficients to positive territory. Unfortunately, the armaFit function doesn’t do significance statistics on the coefficients. That would seem to make Mann’s test using AR(1) with rho = 0.32 not particularly enlightening.

## DeWitt Payne said

After a few false starts with the loop and array indexes, I created 1209 synthetic random series with the same parameters as the Mann infilled data with an ARFIMA (1,d,1) model. There were 4 series that returned AR coefficients greater than 1 and one with an AR coefficient less than -1. I set those coefficients to 0.999 and -0.999 so I wouldn’t see warnings about non-stationary series. I still get acceptance of 31% of the no signal series, i.e. a Hockey Stick. That’s after scaling all the series to a mean of zero and an sd of 1. I’ll have to try not scaling, but if I don’t scale, the average over all series is not zero. But then maybe it shouldn’t be. Adding the signal 1:1 with the noise, the acceptance was 80%. At a ratio of 5x noise to signal, the acceptance was about 40% with the usual behavior of squashing the ‘reconstructed’ sine wave signal relative to the calibration period.

## Reply to a Believer « the Air Vent said

[…] all contaminated the data to the point of non-usability. Here I discovered that the data has an incredibly low signal to noise ratio. Of course this has little meaning to most of the public, however, it is key to understanding why […]
