An honest view of sea level change?

Writing for the National Parks Traveller, ecologist Daniel Botkin claims:

the sea level has been rising since the end of the last Ice Age, starting about 14,000 years ago as the continental and mountain glaciers have melted and sea water has expanded with the overall warming. The average rate has been about a foot or two a century (about 23-46 cm per century). Data suggest that the rate was much greater until about 8,000 years ago.

Time to look at the evidence, conveniently published by Lambeck et al a couple of weeks ago.

Solution for the ice-volume Equivalent Sea Level (esl) function and change in ice volume. (A) Individual esl estimates (blue) and the objective estimate of the denoised time series (red line). Inset gives an expanded scale for the last 9,000 y. (B) The same esl estimate and its 95% probability limiting values. Also shown are the major climate events in the interval [the Last Glacial Maximum (LGM), Heinrich events H1 to H3, the Bølling-Allerød warm period (B-A), and the Younger Dryas cold period (Y-D)] as well as the timing of MWP-1A, 1B, and the 8.2 ka BP cooling event. (C) The 95% probability estimates of the esl estimates. (D) Estimates of sea-level rate of change.

At no time during the last 14 kBP has the rate of sea-level rise been 23-46 cm/century. Before 7 kBP, sea level rose much faster than this; since 7 kBP, the rate has been much lower. In the last 3 kBP, the period most relevant for comparison with the modern rise, there was only minor (<20 cm) variability in sea level. Botkin’s numbers are without foundation, designed to distract from the anthropogenic contribution to sea-level rise.

Botkin goes on to cite a Ross McKitrick paper approvingly:

An important scientific paper published September 1 this year states that Earth’s surface temperature has not changed for the past 19 years, and 16-26 years for the lower atmosphere.

Well, at least he gets the publication date correct.


Climate implications of Holocene sea-level changes

A few years ago Andrew Kemp and coauthors published a sea-level reconstruction for the last two millennia from two salt marshes in North Carolina. Reconstructed sea level was relatively stable until the start of the 20th century, when it started to rise rapidly, consistent with the instrumental record. Kemp et al compare their reconstruction with sea levels estimated from a 1500-year-long global temperature reconstruction using a semi-empirical model and find that the two reconstructions are consistent. This agreement suggests that the recent warm period is anomalous in the last two millennia.

Climate sceptics didn’t much like the paper, perhaps because it showed that recent sea-level rise is anomalous, and perhaps because it confirmed that the 1500-year-long temperature reconstruction of Mann et al (2008) is substantially correct. But Kemp et al is a single study: perhaps local factors somehow made the site insensitive to sea-level changes, or chronological uncertainties created artefacts in the reconstruction.

A new paper by Lambeck et al reconstructs sea level over the last 35,000 years from sea-level indicators from across the tropics. The sites used are remote from the complicating influences of ice sheets, which simplifies the corrections required for land uplift or subsidence.

Solution for the ice-volume Equivalent Sea Level (esl) function and change in ice volume. (A) Individual esl estimates (blue) and the objective estimate of the denoised time series (red line). Inset gives an expanded scale for the last 9,000 y. (B) The same esl estimate and its 95% probability limiting values. Also shown are the major climate events in the interval [the Last Glacial Maximum (LGM), Heinrich events H1 to H3, the Bølling-Allerød warm period (B-A), and the Younger Dryas cold period (Y-D)] as well as the timing of MWP-1A, 1B, and the 8.2 ka BP cooling event. (C) The 95% probability estimates of the esl estimates. (D) Estimates of sea-level rate of change.

I want to focus on the mid to late Holocene part of the sea-level curve, about which Lambeck et al write:

A progressive decrease in rate of rise from 6.7 ka to recent time. This interval comprises nearly 60% of the database. The total global rise for the past 6.7 ka was ∼4 m (∼1.2 × 106 km3 of grounded ice), of which ∼3 m occurred in the interval 6.7–4.2 ka BP with a further rise of ≤1 m up to the time of onset of recent sea-level rise ∼100–150 y ago (91, 92). In this interval of 4.2 ka to ∼0.15 ka, there is no evidence for oscillations in global-mean sea level of amplitudes exceeding 15–20 cm on time scales of ∼200 y (about equal to the accuracy of radiocarbon ages for this period, taking into consideration reservoir uncertainties; also, bins of 200 y contain an average of ∼15 observations/bin). This absence of oscillations in sea level for this period is consistent with the most complete record of microatoll data from Kiritimati (23). The record for the past 1,000 y is sparse compared with that from 1 to 6.7 ka BP, but there is no evidence in this data set to indicate that regional climate fluctuations, such as the Medieval warm period followed by the Little Ice Age, are associated with significant global sea-level oscillations.

This new sea-level reconstruction is in substantial agreement with the reconstruction of Kemp et al, and hence confirms that the temperature reconstruction of Mann et al (2008) is substantially correct. It is not possible to have large multi-century-scale changes in mean global temperature, as these would cause sea-level changes that are inconsistent with the reconstructions.

Rahmstorf’s (2007) semi-empirical model suggests that sea level should rise after a global temperature increase at a rate of 3.4 mm/yr/°C. This rate will eventually decrease to zero as sea level reaches equilibrium with the new global temperature, but the response can be assumed to be linear for a few centuries. A 0.3°C temperature anomaly sustained over 200 years would cause a sea-level change larger than the reconstructed variability. Hence, any temperature anomalies must either be of short duration or of small magnitude.
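A quick back-of-the-envelope check of that last claim, using Rahmstorf’s rate (my arithmetic, not from either paper):

rate <- 3.4                # mm/yr/°C, Rahmstorf (2007)
anomaly <- 0.3             # °C, a hypothetical multi-century temperature anomaly
years <- 200
rate * anomaly * years     # 204 mm, i.e. ~20 cm of sea-level change

That is at or beyond the 15–20 cm limit Lambeck et al place on sea-level oscillations over the last 4.2 ka.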

The progressive rise in sea levels throughout the Holocene gives us some evidence towards resolving the Marcott et al vs Liu et al Holocene climate conundrum. The rise is consistent with Liu et al’s model-based estimates of increasing Holocene temperatures driven by increasing greenhouse forcing (at least in terms of trend; I haven’t checked the magnitude). The progressive rise in sea levels is more difficult, but not impossible, to reconcile with Marcott et al. Sea level could rise despite global cooling if a) contributions from continued ice melt (probably in Antarctica) or b) slow adjustment of the deep ocean to Holocene temperatures after the cold glaciation overwhelm the effect of cooling surface water and the (relatively small) glacier and ice-sheet regrowth in the late Holocene. I don’t know the Antarctic melt history well enough to comment on the first possibility, and am doubtful about the second, as much of the ocean overturns relatively quickly and so should reach equilibrium within a few thousand years.


Predictably, this paper has caught the attention of WUWT. Just as predictably, they have little sensible to write about it.

First, Eric Worrall writes

The abstract notes that on longer timescales, SLR[sea level rise] up to at least 40mm / year has been observed – so in this context “a few” mm per year does not seem particularly alarming, and is well within the range of natural variation.

This is foolish. Only when the great ice sheets of the last glaciation were melting at their fastest did sea-level rise approach 40 mm/yr. The last few thousand years, without major ice melt, are a much more appropriate background for comparison. When sea level rose at 40 mm/yr, there were no towns, cities, agricultural land, nuclear power stations or railway lines that needed protecting. There are now.

Worrall’s second claim is not stupid

The fluctuation claim – the claim that sea level change in the last 150 years is faster than any change over the last 6000 years – is very much dependent on accurate dating of each of the proxy series. As we saw with the Hockey Stick controversies, any uncertainty about dating proxies tends to impose a strong hidden averaging effect on the data series, smoothing away peaks and troughs.

Chronological uncertainty will tend to artificially smooth the reconstruction, but this is probably a minor problem. First, the 200-yr bins used in the Lambeck et al calculation are large relative to the typical chronological error on a late Holocene date. Second, any real sea-level anomalies would still appear in the data and would propagate into the uncertainty. Woodroffe et al (2012) find no evidence of substantial sea-level fluctuations in the last 5000 years.

Not to be outdone, Anthony Watts adds

So, if we had sea levels of 16-31 feet higher than the present 100,000 years ago, well before the dawn of the industrial revolution, what caused that? Inquiring minds want to know.

Inquiring minds found out long ago: insolation. From IPCC AR5 section 5.3.4:

LIG WMGHG concentrations were similar to the pre-industrial Holocene values, orbital conditions were very different with larger latitudinal and seasonal insolation variations. Large eccentricity and the phasing of precession and obliquity (Figure 5.3a–c) during the LIG resulted in July 65°N insolation peaking at ~126 ka and staying above the Holocene maximum values from ~129 to 123 ka. The high obliquity (Figure 5.3b) contributed to small, but positive annual insolation anomalies at high latitudes in both hemispheres and negative anomalies at low latitudes


Bob Irvine is bringing engineers into disrepute

Bob Irvine is touting his new paper at WUWT. He seems very happy with himself. He shouldn’t be: the paper comparing the efficacy of solar and greenhouse forcing is awful.

The first indication that the paper is going to be bad (other than it being recommended on WUWT) is where it is published: WIT Transactions on Engineering Sciences is not the obvious journal for a climate science paper. Climate sceptics often publish in off-topic journals, where the editor and the usual set of reviewers have little background in climate science and so cannot properly evaluate the manuscript. Not a good sign.

Of course it is possible to publish a good paper in an inappropriate journal, just as it is possible to publish a bad paper in a relevant journal.

The next indication that the paper is bad is in the second sentence of the abstract.

Most Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models assume that the efficacy of a solar forcing is close to the efficacy of a similar sized greenhouse gas (GHG) forcing.

Irvine makes no attempt to substantiate the claim that the models make this assumption. He cannot, for this claim is simply wrong and demonstrates a profound misunderstanding of how coupled climate models work. Irvine could read the code of a climate model from end to end and he would not find the line that codes the relative efficacy of different forcings, nor of climate sensitivity, because they are not there. Climate sensitivity and the efficacy of different forcings are not inputs to the model; they are properties that emerge from the basic physics represented in the model.

Irvine argues that solar forcing has a much higher efficacy than greenhouse gas forcing, that is, one watt of solar forcing causes more heating than one watt of greenhouse gas forcing. Coupled climate models suggest that the different forcings have similar efficacies despite their different geographical and seasonal distributions and physical mechanisms.

Irvine disputes this. Instead he argues that forcing with short and long wave (IR) radiation has very different effects on climate. IR re-emitted from CO2 in the atmosphere is absorbed by the top millimetre of the ocean. Irvine claims that this energy is “returned almost immediately to the atmosphere and space as latent heat of evaporation.”

Short wave radiation direct from the Sun penetrates the ocean more deeply.

For energy of 418nm, light drops to one thousandth of its original intensity after travelling about 1570 meters in pure water.

Irvine goes on to make a “crude forcings model” that matches the observed instrumental climate record much better than a CMIP5 model, and performs a simple experiment aimed at showing that downwelling IR has little impact in slowing the cooling of water.

So what are the problems?

First, the depth at which the intensity of blue photons falls to 0.1% in pure water is irrelevant and misleading. What matters is where the bulk of the short-wave photons are absorbed and warm the ocean: the upper fifty metres in clear ocean, much less where the water is turbid. This is still much deeper than IR, but well over an order of magnitude shallower than the depth Irvine gave.
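The misdirection is easy to see with the Beer-Lambert law. A minimal sketch, back-calculating the attenuation coefficient from Irvine’s own 1570 m figure for 418 nm light in pure water:

k <- log(1000) / 1570           # attenuation coefficient implied by Irvine's figure, ~0.0044 per metre
depth <- c(1, 10, 50, 100)      # metres
round(1 - exp(-k * depth), 2)   # fraction absorbed: about 20% by 50 m

And this is the most weakly absorbed part of the spectrum in the clearest possible water; most solar wavelengths are absorbed far more strongly, so the bulk of the energy is deposited much shallower.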

The “crude forcings model” is discussed at length but, bizarrely, the model is never specified. The results are apparently just too exciting to bother with tedious details like methods. This is a failure of the peer review process (if any) at the journal. From what is written, the model appears to be a curve-fitting exercise, with the observed temperature a linear function of solar, greenhouse gas and aerosol forcing plus internal variability. The internal variability is included as the sum of the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation. This sum of two indices has little if any physical meaning, but it helps Irvine’s model match the instrumental data. This means that the comparison with the CMIP5 model is not remotely fair, as the CMIP5 model’s internal variability is unlikely to be in phase with the internal variability of the real climate.
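From the description, the fit might look something like the sketch below. This is my guess at the model’s form, with entirely synthetic data, since Irvine gives no specification to check against:

set.seed(42)
n <- 16                                     # decadal means
solar <- rnorm(n)                           # synthetic forcing series
ghg <- cumsum(rnorm(n, mean = 0.1))
aerosol <- rnorm(n)
amo <- rnorm(n)                             # internal variability indices
pdo <- rnorm(n)
temp <- 0.1 * ghg + rnorm(n, sd = 0.05)     # synthetic "observed" temperature
fit <- lm(temp ~ solar + ghg + aerosol + I(amo + pdo))
summary(fit)                                # a curve fit, not a physical model

The AMO + PDO term lets the regression absorb whatever wiggles the forcings cannot explain – a luxury a free-running CMIP5 model does not have.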

Further, Irvine’s model generates decadal mean results, which are compared with decadal mean instrumental data, so short-term variability such as El Niño is removed. The CMIP5 model results are presented at annual resolution, and of course the timing of El Niño events in the model and the real climate do not match. You would not expect them to, even in a perfect model.

I’m not sure that it is possible to meaningfully compare the results of a CMIP5 model with a curve fitting exercise, but if it is, Irvine has done a bad job of it.

Irvine finds that solar forcing has a climate sensitivity of 1.4°C/wm-2 [Irvine's unit notation] and that greenhouse gas forcing has a climate sensitivity of 0.35°C/wm-2. In appendix B, the sensitivity for greenhouse gas forcing is given as 10 times higher but the units are wrong. If I fit a linear model to the numbers in appendix B, only greenhouse gas forcing is a statistically significant predictor of temperature. There is insufficient information to work out what Irvine has done here.

Next, the experiment. Irvine takes two bowls of warm water, each beneath a shelter. One shelter is transparent to the IR emitted by the water, the other reflects it, while computer fans provide a draught. Initially the water in the bowls is free to evaporate, and the cooling rate is the same in both bowls. Then evaporation is stopped by placing cling film over the bowls, and the bowl under the IR reflector now cools more slowly. Irvine argues that this experiment shows that the energy from the reflected IR is immediately lost as latent heat by driving evaporation.

It is not a well-designed experiment, or at least not one that resembles reality. Blowing dry air over the water is bound to cause so much evaporation that the reflected IR will have minimal impact on the rate of cooling.

If Irvine were correct and incoming IR were immediately lost by the ocean, it is unclear how the natural greenhouse effect, which warms the Earth 33°C over what is expected for a blackbody this far from the Sun with the current albedo, could arise.
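That 33°C comes from the standard textbook blackbody calculation (not from Irvine’s paper):

S <- 1361                                # solar constant, W/m^2
albedo <- 0.3
sigma <- 5.67e-8                         # Stefan-Boltzmann constant, W/m^2/K^4
(S * (1 - albedo) / (4 * sigma))^0.25    # ~255 K, against an observed mean of ~288 K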

So what really happens to downwelling long wave radiation? It warms up the top millimetre of the ocean. By adding energy at the surface it slows down the rate of energy loss by emission of long wave radiation and evaporation. As the net rate of energy loss is reduced, the equilibrium temperature is warmer. Simple. Irvine’s notion that the ocean can recognise the energy added to the surface of the ocean by IR and treat this energy differently from other energy, immediately directing it into evaporation or long wave emission, is simply absurd. An engineer should know better.


Did the Sun tickle the diatoms of Disko Bugt?

Diatoms, transfer functions and claims of palaeoecological evidence of solar variability: how could I resist discussing Sha et al. (2014)?

Sha et al develop a diatom-sea ice transfer function and apply it to a diatom record from a core from Disko Bugt on the west coast of Greenland. They compare the resulting reconstruction with the reconstruction of total solar irradiance from Steinhilber et al. (2012).

I’m not going to discuss the sea-ice transfer function.

OK, well just a bit.

The paper reports the results of a constrained correspondence analysis (CCA, an ordination method suitable for ecological data):

The eigenvalues of CCA axes 1 and 2 are 0.441 and 0.165, respectively, indicating that axis 1 captures most of the variance in the data set and therefore is most important.

Since it is guaranteed that CCA axis 1 will be at least as large as CCA axis 2, reporting that CCA axis 1 is largest does not provide any information (at least not about the data).
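This is easy to demonstrate with the vegan package in R – a sketch using vegan’s built-in dune data rather than Sha et al’s diatoms:

library(vegan)
data(dune)
data(dune.env)
mod <- cca(dune ~ Moisture + Management, data = dune.env)
mod$CCA$eig    # constrained eigenvalues are always reported in decreasing order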

The initial CCA model contains mean sea-ice concentration of each month of the year as predictors. Months with a variance inflation factor greater than 20 were then deleted as they contain little unique information. This is a poor strategy for simplifying models: variables are deleted on the basis of their correlation with other variables rather than on their ecological importance. There is no guarantee that this procedure will find the most important ecological variables.
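Continuing the sketch above, vegan will happily report the variance inflation factors, but a high VIF only flags collinearity among the constraints; it says nothing about which months matter ecologically:

vif.cca(mod)   # deleting constraints with VIF > 20 is purely a collinearity screen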

Skipping over some other issues with the reconstruction, some of which are generic to sea-ice reconstructions, let’s have a look at the evidence for the sea ice–solar relationship.

Figure 1. The relationship between the reconstructed April sea-ice concentrations of marine sediment core DA06-139G, changes in warm-water diatom taxa and Atlantic foraminiferal assemblage, the reconstructed May sea-ice concentration based on diatom data from core MD99-2269 on the North Iceland shelf (Justwan and Koç Karpuz, 2008), as well as total solar irradiance variations constructed from the 10Be record in the Greenland and Antarctica ice cores and tree-ring records of 14C fluctuations (Steinhilber et al., 2012).

This graph is it.

Focus on the upper and lower curves in figure 1, the total solar irradiance curve and the April sea-ice reconstruction. Are you convinced the records are correlated? No statistical measure of the strength or significance of the correlation between these two records is given in the paper.

A correlation-by-eye of two autocorrelated proxy records is not evidence: it is too easy to see the wiggles that match in both records, and ignore the wiggles that don’t. There are methods to calculate the correlation between age-uncertain proxy records. Use them!

If you are reviewing a paper that relies on correlation-by-eye, don’t play Lord Polonius, sycophantically nodding to any suggested resemblance between shapes; challenge the authors to substantiate their claims. All clouds look like weasels if you have enough imagination.


Sha et al. (2014) A diatom-based sea-ice reconstruction for the Vaigat Strait (Disko Bugt, West Greenland) over the last 5000 yr. Palaeogeography, Palaeoclimatology, Palaeoecology 403, 66–79.


The Sun on the Nile: how many degrees of freedom?

In 1978, A. B. Pittock wrote a critical review of long-term Sun-weather relationships, complaining of the low quality of papers reporting solar effects on weather. One of the paper’s recommendations is that authors should

3. Critically examine the statistical significance of the result, making proper allowance for spatial coherence, autocorrelations and smoothing, and data selection

Statistical analyses of climate–sun relationships have, of course, improved greatly since 1978, and the statistical significance of the results will be critically examined. Unfortunately, not always by the authors, reviewers or editors. Today it is your turn.

Hennekam et al (2014) investigate the Holocene palaeoceanography of the Eastern Mediterranean and seek to explain the variability they find with solar forcing. Yes, this is another addition to my critical review of palaeoclimate evidence of solar-climate relationships.

The paper focuses on a high-resolution δ18O record from the planktonic foraminifera Globigerinoides ruber and the Δ14C record of solar variability from Stuiver et al (1998). It uses a running correlation and finds some strong and apparently significant correlations between solar activity and the proxy data.
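A running correlation is simple enough to compute – a sketch, not the authors’ code, assuming two series already resampled to a common 5-year spacing:

run_cor <- function(x, y, width) {
  # Pearson's r in each window, shifted one sample per step
  starts <- seq_len(length(x) - width + 1)
  sapply(starts, function(i) cor(x[i:(i + width - 1)], y[i:(i + width - 1)]))
}
# at 5-year resolution, a 1005-year window is 201 samples and a 5-year shift is one sample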

Figure 1. Comparison of detrended and filtered (0.256–3.333 kyr) time series. Top to bottom: Solar activity Δ14Cres [Stuiver et al., 1998], PS009PC δ18Oruber (this study), Gulf of Guinea G. ruber Ba/Ca [Weldeab et al., 2007], and Oman δ18Ospeleothem [Fleitmann et al., 2003]. Results of a running correlation are indicated in the same colour (window width = 1005 year, shift increment = 5 year) of the proxy time series to Δ14Cres. The 99% confidence threshold is indicated by black horizontal dashed lines (note that these are sensitive to the resampling). The running correlation of the Gulf of Guinea G. ruber Ba/Ca has a reversed y axis; for this record a negative correlation indicates a high coherence between increased solar activity and increased monsoon activity. The periods of simultaneous higher Ba/Al, higher V/Al, and negative G. ruber oxygen isotope values, during sapropel S1 formation in core PS009PC, are marked I–V.

The time series shown in the figure are not the raw data: the plots and the running correlation are of two heavily smoothed time series. What could possibly go wrong? Have the authors followed Pittock’s (1978) advice and critically examined the statistical significance of the result, making proper allowance for spatial coherence, autocorrelations and smoothing, and data selection?

I’m going to ask two questions. 

  • How many degrees of freedom were assumed when calculating the p=0.01 significance threshold of the running correlation in figure 1?
  • How many degrees of freedom should have been allowed?

The methods in the paper are generally well described, but the procedure for estimating the significance threshold is not described, nor is it obvious. The cryptic comment that “note that [the significance thresholds] are sensitive to the resampling” is not explained.

Fortunately, we can work out what has been done. The significance threshold is at r ≈ 0.2. Plugging numbers into a Pearson’s correlation significance calculator shows that for a two-sided test, if the number of observations is 201 (df = n − 2 = 199), then at p = 0.01, r = 0.18. Why 201? The Δ14C data have 5-year resolution during the Holocene, so there are 201 observations in the 1005-year window used in the running correlation.

Is this the correct number of degrees of freedom for the running correlation? It might be if the resolution of the foram δ18O record was 5 years. It isn’t. The forams are sampled every centimetre, which given the sedimentation rate of this core represents ~46 years. About 22 such samples fit into a 1005-year window. So rather than 201 − 2 = 199 degrees of freedom, we have 22 − 2 = 20. With this many degrees of freedom, the p = 0.01 significance threshold is just above r = 0.5. No problem: the running correlation between foram δ18O and Δ14C exceeds this new threshold.

The estimate of 20 degrees of freedom assumes that the observations are independent. If the observations are not independent – if the time series is autocorrelated – then the effective number of observations will be smaller and the significance threshold higher. The Δ14C record is strongly autocorrelated; I’m not sure about the foram δ18O record, but it doesn’t really matter. Both time series are low-pass filtered to remove frequencies above 1/256 yr. The filtered time series are very strongly autocorrelated; there are very few effective observations. I’m not sure how few – my guess is four per 1005-year window (i.e. 1005/256), but it might be a little more. Let’s be generous and assume there are eight effective observations. The p = 0.01 significance threshold is now over r = 0.8, and little if any of the running correlation exceeds this new threshold. If my guess of four effective observations is correct, the significance threshold is r = 0.99!
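All of these thresholds follow from the usual t-transformation of Pearson’s r:

r_crit <- function(n, p = 0.01) {
  # two-sided critical value of Pearson's r for n independent observations
  df <- n - 2
  tc <- qt(1 - p / 2, df)
  tc / sqrt(df + tc^2)
}
r_crit(201)   # 0.18 -- the threshold apparently used in the paper
r_crit(22)    # 0.54 -- one foram sample per ~46 years
r_crit(8)     # 0.83 -- a generous count of effective observations
r_crit(4)     # 0.99 -- ~1005/256 effective observations per window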

So rather than having fantastically strong correlations between solar variability and the proxy, we have little or no evidence of any relationship. And we have not yet discussed the problem of multiple testing in running correlations, which will raise the significance thresholds further. How many degrees of freedom will be left?

Somehow, I don’t think that Pittock’s recommendations were followed.


I find it rather sad that the authors feel they need to combine their high-quality palaeoclimate data with low-quality statistical analysis to generate a publishable story. It is a Van Gogh in a tawdry frame, sold on the value of the frame.


Diatoms, sea-ice and temperature

Diatoms make good proxies for palaeoenvironmental reconstructions: their exquisite silica cell walls can be identified to species level (mostly); they preserve well in sediments (usually) and they are sensitive to multiple environmental variables.

Being sensitive to multiple environmental variables is both a blessing and a curse for a proxy. The advantage is that the proxy can potentially be used to reconstruct different environmental variables, perhaps even from the same site. The disadvantage is that changes in environmental variables other than the one of interest might cause spurious changes in our reconstruction.

Transfer functions used to reconstruct past environmental conditions from fossil biotic assemblages using the modern relationship between species and the environment make a number of assumptions. One of them is that environmental variables other than the one of interest have negligible influence on the biotic assemblages used in the reconstruction (or that the joint distribution of environmental variables remains the same). If this assumption is violated, entirely spurious reconstructions can be generated – Steve Juggins’ sick science.

This assumption should make us cautious of generating and interpreting multiple reconstructions from a single proxy record. Conceptually, it is not impossible to generate multiple reliable records, but it is difficult enough to test whether one reconstruction is reliable; testing two will be so much harder. If there are multiple variables that could justifiably be reconstructed, there is a problem whether or not all the variables are actually reconstructed. Assumptions need to be checked.

Diatoms have been used in the Southern Ocean to reconstruct both summer sea surface temperature (SST) and summer and winter sea-ice cover by Oliver Esper and Rainer Gersonde at AWI. 

There is an obvious direct physical relationship between summer sea-ice and summer SST – ice melts in warm water. This relationship should be stable through time, although as the summer sea-ice and SST reconstructions are essentially non-linear transformations of each other, there is little extra information in having both reconstructions.

The relationship between summer SST and winter sea-ice is strong but less direct, depending on the thermal inertia of the ocean to stop ice forming where summers are warm. The relationship between SST and winter sea-ice could change, for example, if seasonal insolation changes, resulting in non-analogue conditions.

Fig 1. Relationship between sea-ice and summer SST in the Esper and Gersonde (2014) calibration set.

Some parts of the ice-SST phase space shown in figure 1 are clearly unlikely to exist under any plausible climate. High winter ice concentrations and high summer SST cannot coexist because of the thermal inertia of the deep Southern Ocean (in shallow brackish sites like the Baltic, this combination is possible).

Other parts of the ice-SST phase space might exist under other climates. For example, during the early Holocene, summer insolation was higher and winter insolation lower. Did this change the relationship between summer SST and winter sea-ice? The optimistic would argue that transfer functions can extrapolate into areas of the phase space not sampled by the modern environment, but the ability of transfer functions to extrapolate is limited.

We can explore how the sea ice-SST relationship might have changed in the past with the output of CMIP5 climate models and use this analysis to evaluate whether sea-ice and SST reconstructions are likely to be affected by non-analogue conditions.

I’ve used the CCSM4 model runs for the pre-industrial (PI), mid-Holocene (MH) and last glacial maximum (LGM). Conveniently, monthly climatologies of all the variables are available from the CMIP5 archive. I’ve inspected the data on its native grid rather than converting to an equal-area grid, and used all grid points south of 40°S.
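The extraction is straightforward with the ncdf4 package. A sketch – the file name is hypothetical, but “tos” and “sic” are the standard CMIP5 variable names for sea surface temperature and sea-ice concentration:

library(ncdf4)
nc <- nc_open("tos_Omon_CCSM4_piControl_r1i1p1_clim.nc")  # hypothetical file name
tos <- ncvar_get(nc, "tos")      # lon x lat x 12 monthly climatology, in K
lat <- ncvar_get(nc, "lat")      # 2-D on the native curvilinear grid
nc_close(nc)
south <- lat < -40               # all grid points south of 40°S
summer_sst <- tos[, , 2][south] - 273.15   # February = austral summer
# winter sea-ice ("sic", September) comes from the matching seaIce file the same way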

Winter sea-ice concentration against summer SST during the PI, MH and LGM in the CCSM4 CMIP5 runs.

The relationship between sea-ice and SST is a little different in the model PI and the instrumental record. In the latter, there is little winter sea-ice at locations with summer SST above 2°C; below this temperature, sea-ice concentrations rise rapidly. I’m not greatly surprised by the discrepancy between modelled and observed sea-ice: it is a difficult environmental variable to model. More important for my current purpose is the difference between the different time-slices.

The cold left side of the ice–SST relationship is fairly constant in the three time-slices. The warm right side varies, though, with the limit of sea-ice associated with 0.5°C warmer temperatures at the LGM. The very warm grid points at the LGM with some sea-ice are situated in shallow water between Argentina and the Falkland Islands. The LGM grid points tend to be more concentrated towards the left side of the relationship than in the other two time periods. I’m not sure what the physics behind these changes is, but they might be due to the changing latitude of the sea-ice edge.

Except for the grid points near the Falkland Islands, I don’t think the differences between the phase space in the different time periods are large enough to have a severe impact on the transfer functions. There are a few more models with data available; these need to be examined to check that the CCSM4 results are representative.

The CMIP5 models can be used to check for non-analogue climates relevant to other transfer functions. This test is probably most useful where the variables are highly correlated in the modern environment.

Even if the CCSM4 output does not indicate non-analogue problems in SST-sea ice phase space, I have other concerns about sea-ice transfer functions, both for diatoms and other proxies like dinoflagellate cysts, as I discussed at the sea-ice proxy workshop in Bremerhaven.

  • As the sea ice–SST relationship is so strong, there is little or no extra information in a sea-ice reconstruction above that in an SST reconstruction. And as SST, unlike sea-ice concentration, is not a bounded variable, it is easier to deal with.
  • The ecological link between sea-ice and the biota needs to be demonstrated. For diatoms this has been done, with some taxa, for example those producing IP25 in the Northern Hemisphere, known to be associated with sea-ice. With dinocysts, the link is less clear.
  • The season that is important is not clear. Winter sea-ice might not be important because of the low productivity under snow-covered ice with little or no sunlight. Spring or summer sea-ice is more likely to be directly linked to the biota.
  • It is very difficult to collect observations evenly along the sea-ice concentration gradient because this gradient is very steep. Most sea-ice calibration sets I know have many observations at low/no and high sea-ice, but few at intermediate sea-ice. This will bias performance estimates.
  • The ever present spatial autocorrelation has largely been ignored by papers developing sea-ice transfer functions.
  • The role of other environmental variables in driving assemblage composition – such as nutrient concentrations, which have strong relationships with SST in the Southern Ocean – has been little studied.

Perhaps someone will write a critical review of the methods used to reconstruct sea-ice.


Numerical methods are methods

The methods section of a paper should detail the methods used so that the results can be replicated and so that potential problems can be identified by reviewers and readers. Most papers do this, but some are good at describing the methods used to collect the data yet fail to describe the numerical methods used. This is not good practice.

For example, Chu et al have a biomarker record from a maar lake in north east China and use wavelet analysis to identify solar cycles in the data. The methods section describes the coring and the geochemical analyses, but not the wavelet analysis.

Wavelet analysis of the δ13C27–31 time series during the past 9.0 ka BP. High/low power is indicated by red/blue colours. The black line shows the 95% confidence level and the dark shaded area indicates the cone-of-influence.

Spectral methods, such as wavelet analysis, are prone to biases and artefacts. I’ve been exploring some of these problems this summer – these methods are tricksy; they need to be used with caution and described in detail.

There are several methodological details that Chu et al should have reported. I’m most interested in three:

  • How they decided which proxy record to subject to wavelet analysis
  • How they coped with their unevenly spaced proxy data
  • What was the null hypothesis

It is impossible to evaluate the report of significant solar frequencies without knowing how the analysis was done. The wavelet analysis is not an incidental part of the paper – its results are reflected in the second word of the title, “Holocene cyclic climatic variations and the role of the Pacific Ocean as recorded in varved sediments from northeastern China”. The reviewers failed. They should have returned the paper for revision.

We can make some inferences about what Chu et al did to their data and how this is likely to affect their results.

Comparison of lipid biomarker proxies. The wavelet analysis is based on curve (E), the weighted δ13C27–31 values of the n-C27–31 alkanes.

I can find no indication in Chu et al as to why the δ13C27–31 record is analysed rather than the other proxies. Whenever there are multiple proxies, it is tempting to analyse all of them and only report the most interesting results. Of course such a strategy greatly inflates the risk of a Type I error, reporting significant periodicities where none exist. 

The 9000-year compound-specific stable isotope record is unevenly sampled, with higher resolution in the last 2000 years. The mean sample spacing is probably about 50 years, but the wavelet plot includes periods below 16 years. The shortest period on the plot is about 10 years, so the data must have been interpolated to at least 5-year spacing. My guess is that the data have been interpolated to an annual spacing. The effects of interpolating unevenly spaced data on the power spectrum are well known; oversampling the interpolated data, as apparently done here, will greatly increase the autocorrelation in the data. The most obvious effect is for there to be very little wavelet power at high frequencies, shown by the dark blue in the wavelet plot.

It is easy to demonstrate the effect of interpolation with a simulation. Here I start with white noise and interpolate it. The white noise has an AR1 coefficient near zero; the interpolated white noise has an AR1 coefficient in excess of 0.97. The effect on the spectrum will be dramatic.

xt <- seq(10, 1000, 10)                     # ages with 10-year spacing
res <- replicate(100, {
  x <- rnorm(100)                           # white noise at the original spacing
  x2 <- approx(xt, x, xout = 10:1000)$y     # linearly interpolated to annual spacing
  c(ar(x, order.max = 1, aic = FALSE)$ar,   # AR1 coefficient of the raw series
    ar(x2, order.max = 1, aic = FALSE)$ar)  # AR1 coefficient after interpolation
})

quantile(res[1, ])  # white noise
#          0%         25%         50%         75%        100%
# -0.27134488 -0.07788977 -0.02006099  0.05662604  0.20494047
quantile(res[2, ])  # interpolated white noise
#        0%       25%       50%       75%      100%
# 0.9772944 0.9813582 0.9829961 0.9853674 0.9887854


The standard null hypothesis is that the proxy data come from a white or a red noise (AR1) process. It looks like Chu et al assumed a red noise background. This can be problematic if the data are not from an AR1 process (for example, if the data are a sum of two different AR1 processes, or AR1 plus white noise), but I’m going to ignore this potential problem for now. The key question is how the red noise background was estimated. I can see two options.

The easy option is to fit an AR1 model to the interpolated data. Because the interpolation hugely increases the autocorrelation, the estimated AR1 coefficient will be strongly biased, and the estimated background spectrum wrong. It is easy to demonstrate that this procedure will greatly inflate the risk of a Type I error.

The alternative would be to use a Monte Carlo procedure, generating many surrogate time series with the same autocorrelation as the observed record, interpolating them and using the mean of their wavelet power as the background. With this procedure, even if the interpolation has biased the wavelet plot, the significance levels will be approximately correct.
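A minimal sketch of that surrogate approach, reusing the setup from the simulation above. I use a periodogram for brevity where Chu et al used wavelets, and simply assume an AR1 coefficient that should really be estimated from the raw, uninterpolated data:

phi <- 0.3                                 # AR1 coefficient assumed for the raw data
xt <- seq(10, 1000, 10)
null_spec <- replicate(1000, {
  x <- arima.sim(list(ar = phi), n = length(xt))    # AR1 surrogate at the raw spacing
  x2 <- approx(xt, x, xout = 10:1000)$y             # interpolated exactly like the data
  spec.pgram(x2, plot = FALSE, taper = 0)$spec
})
background <- apply(null_spec, 1, quantile, 0.95)   # 95% point of the null spectrum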

We cannot tell how Chu et al estimated the red noise null, but my guess is that they fitted an AR1 model to the interpolated data. If they did, the results of their wavelet analysis cannot be trusted and their report of solar-driven variability would be unreliable.

Without the numerical methods described, Chu et al is yet another paper reporting solar variability in proxy data that cannot be trusted.


Chu et al. 2014. Holocene cyclic climatic variations and the role of the Pacific Ocean as recorded in varved sediments from northeastern China. Quaternary Science Reviews 102: 85–95
