Hold the front page, Doug Keenan has a confession from the Met Office that “statistically significant temperature rise can’t be supported”.
In a long post, more concerned with the details of which minister would not answer which parliamentary question than with the statistics, Keenan gloats over extracting from the Met Office the admission that an ARIMA(3,1,0) model explains global temperature change in the instrumental record better than a linear trend with autocorrelated (AR(1)) residuals, and on this basis he declares that the rise in temperatures is not significant.
Keenan assures the reader that “unfamiliarity with the model does not matter here”. I would demur: model choice on purely statistical criteria is an empty pursuit of meaningless models. If we do not understand what the models are doing, we cannot evaluate whether they are sensible.
The linear trend model fits a regression to the data, but rather than assuming that the residuals from this regression are independent, it allows them to be autocorrelated. That is, neighbouring residuals are expected to be more similar than residuals selected at random.
Keenan would have us replace this model with ARIMA(3,1,0), an autoregressive integrated moving average model. The first number, 3, is the order of the autoregressive component, not dissimilar to the first model, which expected the residuals to follow a first-order autoregressive process. The third number, 0, is the order of the moving-average component, absent here. More interesting is the second number, 1, which indicates the number of times the data must be differenced (subtracting the temperature of the previous year from the temperature of the current year) to make them stationary, i.e. trendless.
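To see what that middle number does, note that first differencing turns a linear trend into a constant. A minimal sketch in R, with made-up numbers rather than the Met Office data:

```r
# a noiseless linear trend of 0.01 degrees/year for 150 "years" (made-up numbers)
trend <- 0.01 * (1:150)
# year-on-year differences: the trend collapses to a constant equal to the slope
d <- diff(trend)
all(abs(d - 0.01) < 1e-12)
```

The differenced series is flat, which is exactly why fitting a model with a differencing step to trending data says nothing about whether a trend exists.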
Yes, Keenan argues there is no trend in the data by using a method that removes trends from data. By demonstrating that differencing is needed, Keenan demonstrates that there is a trend in the data. Whether this trend is accounted for by a linear trend or by a differencing operation is a choice. Neither carries much physical meaning.
So let's have a look at applying his ARIMA(3,1,0) model to the Met Office data. First we import and plot the data and the differenced data.
# set up a two-panel plotting window
x11(6, 4); par(mgp = c(1.5, 0.5, 0), mar = c(3, 3, 1, 1), mfrow = c(1, 2))
Global temperatures (left) and differenced global temperatures (right)
The strong trend in the left-hand plot is obvious. The right-hand plot shows no trend, as the differencing operation has removed it, and the data lack strong autocorrelation. An ACF plot of the differenced data confirms this: there is weak negative autocorrelation for two or three lags.
ACF of differenced global temperature data.
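For anyone reproducing this, the call is just `acf()` on the differenced series. A self-contained sketch, with a simulated stand-in series since the Met Office data are not bundled here (the drift and noise values are invented, not fitted):

```r
set.seed(1)
# stand-in for a 163-year annual anomaly record: random walk with small drift (made-up values)
temp <- ts(cumsum(rnorm(163, mean = 0.005, sd = 0.1)), start = 1850)
# autocorrelation function of the year-on-year differences
a <- acf(diff(temp), main = "ACF of differenced series")
```

The lag-0 autocorrelation is always exactly 1; it is the later lags that tell us whether the differenced series retains any structure.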
We can then fit the ARIMA(3,1,0) model to the raw data, or equivalently, an ARIMA(3,0,0) model to the differenced data.
The three AR coefficients are all small and negative (-0.38, -0.37, and -0.27), and their physical meaning is not obvious.
We can test whether Keenan's model is a plausible representation of climate by simulating a Holocene-length time series. The Holocene is known to have had a rather stable climate; can this model simulate that?
# five Holocene-length (10,000-year) realisations of the fitted model
sims <- replicate(5, arima.sim(list(order = c(3, 1, 0), ar = coef(mod1)), n = 10000, sd = sqrt(mod1$sigma2)))
matplot(sims, xlab = "year", ylab = "temperature anomaly °C", type = "l", col = 1:5, lty = 1)
Five Holocene-length realisations of an ARIMA(3,1,0) process
That, I think, is a NO! In these simulations there is up to 15°C of global temperature change, rather larger than the actual ~1°C change in the Holocene. All the realisations show more temperature change than the glacial-interglacial temperature difference. But perhaps the climate is usually stationary, generated by something like an ARIMA(3,0,0) process, and the non-stationarity that needs differencing only occurs during the last 150 years. What could possibly have caused this non-stationarity, this trend, during the last 150 years? It couldn't be greenhouse gases, could it?
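The size of the excursions can be checked directly rather than read off the plot. A sketch using the AR coefficients reported above, with an assumed innovation sd of 0.1 °C standing in for sqrt(mod1$sigma2) (roughly the scale of the annual differences; the exact fitted value would come from the model object):

```r
set.seed(1)
# five 10,000-year realisations of ARIMA(3,1,0) with the reported coefficients;
# sd = 0.1 is an assumed stand-in for sqrt(mod1$sigma2)
sims <- replicate(5, arima.sim(list(order = c(3, 1, 0), ar = c(-0.38, -0.37, -0.27)),
                               n = 10000, sd = 0.1))
# total temperature range of each realisation, in degrees C
apply(sims, 2, function(z) diff(range(z)))
```

Because the integrated process is a random walk at long timescales, its excursions grow with the square root of the length of the series, so multi-degree drift over ten millennia is the rule, not a fluke of one simulation.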