Fake climate sceptics love the hiatus: the period since the strong 1998 El Niño during which, on their simplistic notion of global warming, global mean temperature has not increased. The longer the “hiatus”, the easier it is to deny that climate change will be a problem this century. This creates an incentive to develop methods that report the longest possible hiatus, ideally without obviously cherry-picking the start date.

Professor Ross McKitrick has a new paper in the ever so prestigious Open Journal of Statistics in which he reports that the hiatus in the HadCRUT4 global temperature record started in 1995, to the delight of several climate-sceptic blogs.

McKitrick uses a regression technique that is supposed to be robust to heteroscedasticity (unequal variance) and autocorrelation to estimate the trend in the temperature time series. He starts with the last five years of data and tests whether the trend is statistically different from zero, i.e. whether the 95% confidence interval around the trend estimate includes zero. He then repeats the analysis with the last six years of data, and so on, until the 95% confidence interval no longer includes zero. The start of the longest window over which the trend is not significant is declared the start of the hiatus.

But McKitrick has missed an obvious trick. If he had used the 99% confidence interval, he would have obtained a much longer hiatus and impressed the credulous even more. And if he had used the 99.9% confidence interval … This is beginning to show the problems with the method.

Typically, when testing hypotheses, we are interested in rejecting the null hypothesis that there is no effect. McKitrick is interested in the converse: in accepting the null hypothesis for as long as he can, to make the hiatus as long as possible. So whereas normally we need to be certain that the statistical methods we are using don’t report false positives (Type I errors) more often than they should (i.e. 5% of the time at p = 0.05), McKitrick needs to be certain that his test has sufficient power to reject the null hypothesis when the null hypothesis is false. He doesn’t report a power test. Instead, he assumes that because his method is robust to heteroscedasticity and autocorrelation it will give good answers.

The easiest way to run a power test is to generate simulated data with realistic properties in which we know there is an effect – in this case, a constant trend. McKitrick has provided code on his website. The code is written strangely, as if he is not familiar with the language (hint: `matplot`), but it is well commented and easy to run.

I’ve simulated data with the same trend, autocorrelation (an AR(2) model) and residual variance as the HadCRUT4 data since 1970, and applied McKitrick’s method to them. I did this 100 times. Ninety-five percent of these trials show an apparent hiatus lasting at least five years, even though the trend is constant. In over 70% of trials, the hiatus lasts over 10 years. In 10% of trials, the apparent hiatus started in or before 1995 – the year McKitrick reports. With this method, a hiatus lasting since 1995 is not exceptional even if the true trend in the data is constant. McKitrick’s method is not a tool for measuring the length of a hiatus; it is a recipe for making one.

Note that my simulations do not include heteroscedasticity, as I’m not sure how to estimate or simulate it in an autocorrelated variable. I suspect heteroscedasticity would tend to make the apparent hiatus seem longer.