## Running correlations – running into problems

A running correlation is a useful technique for exploring how the correlation between two variables varies in time or space. The correlation is calculated in a window of the first n observations; the window is then moved along by one position and the correlation recalculated, and this is repeated until correlations have been calculated along the whole series. The procedure is analogous to the calculation of a running mean (also known as a moving average).
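The window mechanics can be sketched by hand (a minimal illustration; the gtools function used below does the same bookkeeping):

```
# running correlation of x and y in overlapping windows of `width` observations
run_cor <- function(x, y, width = 11) {
  n <- length(x)
  sapply(1:(n - width + 1), function(i) {
    j <- i:(i + width - 1)  # indices of the current window
    cor(x[j], y[j])
  })
}
```

A series of n observations yields n − width + 1 windows, so a 100-point series with an eleven-point window gives 90 correlations.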

A running correlation can be calculated in R with the function running() in the gtools package. Here I look at the running correlation, with an eleven-year window, between 100 years of sunspot data and a random variable (the relevance of this will become apparent later).

```
library(gtools)
x <- ts(sunspot.year[time(sunspot.year) > 1888], start = 1889) # 100 years of sunspot data, 1889-1988
y <- rnorm(100) # 100 random values from a normal distribution

x11(height = 5, width = 7)
par(mar = c(3, 3, 1, 3), mgp = c(1.5, .5, 0), mfrow = c(1, 2))
plot(x, type = "l", ylab = "Sunspot number", xlab = "Year CE")
par(new = TRUE) # plot the next graph over the current one without erasing it
plot(1889:1988, y, col = 2, ann = FALSE, type = "l", axes = FALSE)
axis(4)
mtext(side = 4, text = "Random variable", line = 1.5)

rc <- running(x, y, fun = cor, width = 11) # calculate the running correlation
plot(1894:1983, rc, ylab = "Correlation", type = "l", xlab = "Year CE")
```

The left-hand figure shows the sunspot number in black and the random variable in red. The right-hand plot shows the running correlation between the two. As expected, there is no consistent correlation between the sunspot number and the random variable, but in some windows the correlation is strong, exceeding 0.7.

It gets more complicated if significance levels are required. For a single correlation, the statistical significance of a Pearson correlation can be calculated using the t-distribution with n − 2 degrees of freedom. This is a standard procedure, done in R with the function cor.test().

```
x <- rnorm(100) # 100 random values from a normal distribution
y <- rnorm(100) # more random data
cor.test(x, y)
```

By default, cor.test() calculates the Pearson correlation, and the alternative hypothesis is two-sided (i.e. large correlations, either positive or negative, will be significant). The output looks like this:

```
	Pearson's product-moment correlation

data:  x and y
t = -0.8354, df = 98, p-value = 0.4055
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.2759484  0.1142166
sample estimates:
       cor
 -0.084088
```

Reading from the bottom, the output of cor.test() gives the estimate of the correlation between x and y, the 95% confidence interval on that correlation, the alternative hypothesis (here a two-sided test), and the p-value of the correlation. As the data here are random, the null hypothesis is true, and there is a 5% chance that it will be incorrectly rejected – a Type I error.
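These numbers can be reproduced by hand: the t-statistic is r√((n − 2)/(1 − r²)), and the p-value follows from the t-distribution. A sketch, using the same x and y as above:

```
r <- cor(x, y)
n <- length(x)
t_stat <- r * sqrt((n - 2) / (1 - r^2)) # t-statistic with n - 2 degrees of freedom
2 * pt(-abs(t_stat), df = n - 2)        # two-sided p-value, matching cor.test(x, y)
```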

Unfortunately, this procedure will not give the correct significance levels for a running correlation because of the multiple comparison problem. Testing at the p = 0.05 level means accepting a 5% chance of rejecting the null hypothesis even when it is true. If we run this test many times, the probability that at least one test returns a significant result gets very high very quickly, so the overall Type I error rate is much higher than the nominal 5%.
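The scale of the problem for k independent tests is easy to calculate (the overlapping windows of a running correlation are not independent, so this is only a rough guide):

```
alpha <- 0.05
k <- c(1, 10, 50, 90) # number of tests
1 - (1 - alpha)^k     # P(at least one Type I error): 0.05, then roughly 0.40, 0.92, 0.99
```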

Before getting worked up about this, we need to determine how big the problem is – how often is the null hypothesis rejected? This can be tested by calculating running correlations for many datasets in which the null hypothesis is true, and counting the proportion of trials in which the naive significance levels are exceeded.

```
c2 <- function(x, y) {cor.test(x, y)$p.value}

res <- replicate(1000, { # run the code in the {} brackets 1000 times
  y <- rnorm(100)
  z <- running(x, y, fun = c2, width = 11) # p-value of the correlation in each window
  sum(z < 0.05) # count the windows with p-values less than 0.05
})
mean(res > 0) # proportion of trials where at least one window appeared significant
```

This finds that over 80% of trials appear to have a statistically significant correlation in at least one window, even though the null hypothesis was true. This is clearly much larger than the expected 5% Type I error rate. The problem would be even larger if a longer data series were used, as more correlations would be calculated.

There are procedures for controlling for the effects of multiple comparisons, such as the Bonferroni correction, where the critical p value is divided by the number of tests. The Bonferroni correction is usually conservative, and would be especially so for a running correlation, as overlapping windows are not independent.
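A sketch of the Bonferroni approach (with placeholder p-values for illustration; in practice these would come from cor.test() in each window, as above):

```
k <- 90    # number of eleven-point windows in a 100-point series
0.05 / k   # Bonferroni-adjusted threshold, roughly 0.00056
# equivalently, inflate the p-values and compare them to 0.05
p <- runif(k) # placeholder p-values, for illustration only
any(p.adjust(p, method = "bonferroni") < 0.05)
```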

The simplest solution is probably to run a Monte Carlo simulation (or perhaps a permutation test) to determine the null distribution: calculate a running correlation for many different random variables (or permutations of one of the observed variables), and find the highest correlation in each trial.

```
res <- replicate(1000, {
  y <- rnorm(100)
  max(running(x, y, fun = cor, width = 11)) # maximum correlation in this trial
})
quantile(res, 0.95) # critical value of the maximum correlation for a one-sided test at p = 0.05
```

With 100 data points and a window width of eleven, this simulation gives a critical value of 0.83. Only if the maximum correlation in a running correlation is greater than this value is it statistically significant. Obviously this critical value will depend on the length of the data series (longer = higher) and the width of the window (wider = lower).
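This dependence on window width can be checked directly by re-running the simulation for a few widths (a sketch along the lines of the simulation above; with 1000 replicates per width it takes a little while):

```
library(gtools)
# 95th percentile of the maximum running correlation under the null, by window width
sapply(c(11, 21, 31), function(w) {
  res <- replicate(1000, max(running(rnorm(100), rnorm(100), fun = cor, width = w)))
  quantile(res, 0.95)
})
```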

If a two sided test is needed, we could find the maximum absolute correlation, replacing max(…) with max(abs(…)) in the code above. This will increase the critical value slightly.
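In full, the two-sided version looks like this (with x as random data for illustration):

```
library(gtools)
x <- rnorm(100)
res <- replicate(1000, {
  y <- rnorm(100)
  max(abs(running(x, y, fun = cor, width = 11))) # largest absolute correlation
})
quantile(res, 0.95) # two-sided critical value at p = 0.05
```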

This exploration of the properties of running correlations was stimulated by a paper by Kokfelt and Muscheler that recently came online in The Holocene and was linked to excitedly by WUWT. Kokfelt and Muscheler calculate

> An 11 yr running correlation between summer precipitation and sunspot numbers [that] reveals an overall positive correlation after 1960 that, however, is statistically significant at the 95% level only from 1975 to 1977 and from 1997 to 1998.

However, they appear to use the standard significance levels applicable to a single correlation rather than those for a running correlation. In view of the much higher critical values shown by the simulations above, for conditions that approximate those of Kokfelt and Muscheler, their conclusion of a solar influence on the instrumental record of summer precipitation at Abisko cannot be sustained. This does not affect the conclusions they draw from their palaeodata.


### 13 Responses to Running correlations – running into problems

1. brobar says:

Very interesting — thanks!

2. I am very interested to know what you think about the methods used in Polyakova et al., 2006. Can you comment?
Changing relationship between the North Atlantic Oscillation and key North Atlantic climate parameters. doi:10.1029/2005GL024573

• The methods look fine to me. They are using a running correlation to show that the correlation between the climatic variables is not constant. This is a good use of running correlation.

The papers I have been criticising have been using running correlations to find periods where the correlation between two variables appears to be significant but have not corrected for multiple testing.

3. beizhan yan says:

This is a great introduction. I am using this method for my publication. However, one thing I got lost on is that Figure 4 from Kokfelt and Muscheler appears to have more than 100 data points, but their significance level (95%) appears to be ~0.5, rather than the 0.82 calculated by you and me. Do you have any ideas about the reason? Thanks.

• I think the significance levels on figure 4 are based on n = 11, the number of observations in each segment of the running correlation.

4. beizhan yan says: