A gentle introduction to sea-level fingerprints with an application to Meltwater Pulse 1A

Many people today worry about how sea-level rise and global climate change will affect our everyday lives. But here is an interesting fact: when an ice sheet melts, global sea level does not rise uniformly, the way water does in a bathtub. Instead, different regions around the globe experience different amounts of sea-level rise, and in some locations sea level actually falls.

Simply speaking, we should expect three things when an ice sheet melts:

1. The most obvious effect: a massive amount of freshwater is dumped into the ocean, which raises global mean sea level. Just as in a bathtub, the water surface rises, as shown in the figure below.

2. Unlike a bathtub, the solid Earth is not perfectly rigid. It behaves more like Silly Putty: when you remove a heavy object from it, the putty beneath rebounds a little.

3. Lastly, and perhaps most interesting, is gravity. Because an ice sheet is itself an enormous mass, it exerts a gravitational pull that draws nearby ocean water toward it. This makes the nearby relative sea level (the distance between the sea surface and the solid Earth) slightly higher than it would be without this gravitational pull. When the ice sheet melts, the pull weakens, so sea level near the ice sheet falls instead of rises! To compensate for this fall, sea-level rise in other regions must be higher than the global average.

In addition, this mass redistribution from ice sheet to ocean perturbs the Earth's rotational vector, which also changes the pattern of sea-level rise. A widely used numerical technique for understanding these processes is glacio-isostatic adjustment (GIA) modelling, which describes the deformation of the solid Earth as well as the changes in the global gravitational field that in turn control the sea-surface height.
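The net effect of these processes is often summarised in a single relation (a simplified statement of the classic sea-level equation of Farrell & Clark, 1976): the change in relative sea level at a location equals the change in the sea-surface (geoid) height minus the change in the solid-Earth surface height,

ΔRSL = ΔG − ΔR

where ΔG carries the gravitational and rotational effects on the sea surface and ΔR is the vertical deformation of the solid Earth. A GIA model computes both terms everywhere on the globe.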

Once we work out the physics behind these phenomena, we can predict the global pattern of sea-level change caused by melting of the West Antarctic Ice Sheet, and we refer to this pattern as the 'sea-level fingerprint' of the West Antarctic Ice Sheet. The technique was pioneered by Mitrovica et al.

Let’s take the West Antarctic Ice Sheet and the Greenland Ice Sheet as an example; they are predicted to contribute a major part of sea-level rise by 2100 due to the recent global atmospheric and oceanic warming.

The figure below is a typical sea-level fingerprint plot, showing how global sea level would change if one meter of sea-level-equivalent ice melted in West Antarctica and in Greenland.

Figure from Mitrovica et al., 2011

Based on this pattern, let’s assume the West Antarctic and Greenland ice sheets are the only contributors to global sea-level rise. We can then use the observed sea-level rise at different locations to infer how much ice melted from each ice sheet. From the figures above, 1 mm of ice melt from West Antarctica/Greenland causes 1.1/−0.2 mm of sea-level rise in Liverpool and 1.3/0.7 mm in New York. Now we can look at the local tide-gauge records. If sea level rose 10 mm in Liverpool over the last 5 years and 25 mm in New York, we can write down two simple equations:

10 = 1.1 ΔWAIS − 0.2 ΔGrIS

25 = 1.3 ΔWAIS + 0.7 ΔGrIS

Solving these two equations gives roughly 11.7 mm of sea-level-equivalent ice melt from West Antarctica and 14.1 mm from Greenland. We can of course make more measurements at different locations to make this estimate more and more accurate. What a convenient technique! In the real world, however, many other factors also change global sea level: dynamic sea-level change caused by ocean currents, thermosteric sea-level change largely caused by ocean warming, and water redistribution caused by groundwater extraction and artificial reservoirs, etc.
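Since these are just two linear equations in two unknowns, a few lines of numpy reproduce the calculation (the fingerprint coefficients and tide-gauge numbers are the illustrative values from above, not real observations):

import numpy as np

#fingerprint coefficients: rows = sites (Liverpool, New York),
#columns = sources (West Antarctica, Greenland)
A = np.array([[1.1, -0.2],
              [1.3,  0.7]])
obs = np.array([10.0, 25.0])  #observed sea-level rise (mm)

dWAIS, dGrIS = np.linalg.solve(A, obs)
print(dWAIS, dGrIS)  #~11.7 and ~14.1 mm sea-level-equivalent melt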

But there was a period of exceptionally rapid global sea-level rise, called Meltwater Pulse 1A, around 14,500 years ago, during which global mean sea level rose about 20 m within 500 years. During this special period, the signals mentioned above become small compared with the sea-level changes governed by sea-level fingerprints, so we can use this technique to find out how much each ice sheet contributed to the event, which is really important for understanding how paleoclimate works!

If you are interested, please check the video below, which is my presentation for the virtual 2020 AGU Fall Meeting:

Illustration for MWP-1A inversion poster at 2020 PALSEA virtual meeting

The video below shows a simplified process that uses Monte Carlo linear regression and sea-level fingerprinting to invert for the sources of Meltwater Pulse 1A (MWP-1A), using sea-level data from six sites: Tahiti, Barbados, the Sunda Shelf, Hydrographer’s Passage, Noggin Pass and Northwest Scotland.

This video is supplementary to my 2020 PALSEA virtual meeting poster; if you do not know how to access my poster, please do not hesitate to contact me for more details!

BGM: Computer Game – Yellow Magic Orchestra
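For readers who prefer code to video, here is a minimal sketch of the idea behind the inversion (the fingerprint matrix, observations and uncertainties below are made-up placeholders, not the values used in the actual study): perturb the sea-level observations within their uncertainties many times, solve the linear fingerprint system by least squares each time, and take ensemble statistics.

import numpy as np

#hypothetical fingerprint coefficients: rows = sites, columns = ice sources
#(placeholder values for illustration only)
F = np.array([[1.1, -0.2],
              [1.3,  0.7],
              [0.9,  1.0]])
obs = np.array([10.0, 25.0, 22.0])  #observed sea-level rise (mm), made up
sigma = np.array([1.0, 2.0, 1.5])   #1-sigma observational uncertainty (mm), made up

n_iter = 20000
rng = np.random.default_rng(42)
estimates = np.empty((n_iter, F.shape[1]))
for i in range(n_iter):
    perturbed = obs + sigma*rng.standard_normal(obs.size)  #resample the data
    estimates[i], *_ = np.linalg.lstsq(F, perturbed, rcond=None)

print(estimates.mean(axis=0))  #ensemble-mean melt contribution per source
print(estimates.std(axis=0))   #and its spread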

Using the ensemble-mean inversion results from the 20,000 Monte Carlo simulations above, we modified the MWP-1A ice history in the ANU global ice model to incorporate our results. By combining the modified ice model with 120 Earth parameters, we predicted the relative sea-level (RSL) change history from the Last Glacial Maximum to post-MWP-1A:

RSL predictions using the modified ANU model compared with published sea-level index points. (f) The orange solid line indicates the RSL prediction generated using the optimum Earth model (65 km lithosphere thickness, 4/200 × 10^20 Pa s upper/lower mantle viscosity) instead of the ensemble mean used in a-e. Note the different axes. CI = confidence interval.

Latin Hypercube Sampling Radiocarbon Ages with Python

Radiocarbon dating is an important technique that offers insight into the ages of geological/archaeological records. One essential step in radiocarbon dating is calibration, which converts conventional 14C ages into accurate calendar ages, accounting for changes in the radiocarbon level of the atmosphere/ocean through time owing to processes such as the Earth's carbon cycle.

A calibrated radiocarbon age often has a complex probability distribution, which requires a large number of samples (using the standard Monte Carlo method) to represent it reasonably well. Here’s an example of a calibrated radiocarbon age distribution generated using the OxCal 4.4 program:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

#load output from OxCal
sample_dis = pd.read_csv('Oxcal_output', sep='\t', lineterminator='\n', header=None)

#----------Display the probability density function----------------
pdf_x, pdf_y = 1950 - sample_dis.iloc[:,0], sample_dis.iloc[:,1]  #1950 - calendar year converts AD to BP
plt.plot(pdf_x, pdf_y)
plt.xlabel('Calibrated Radiocarbon Age (BP)')
plt.ylabel('Probability')
plt.show()

Drawing 50 Monte Carlo samples from this probability density function (PDF), we get:

# Define a function to do Monte Carlo sampling

def MC_sample_pdf(pdf_x, pdf_y, n_samples=50):
    from scipy import interpolate
    pdf_x = np.array(pdf_x)
    pdf_y = np.array(pdf_y)
    order = np.argsort(pdf_x)  #interp1d needs monotonically increasing x
    pdf_x, pdf_y = pdf_x[order], pdf_y[order]
    bins = np.mean(np.diff(pdf_x))  #mean bin width
    cum_values = np.cumsum(pdf_y*bins)
    cum_values /= cum_values[-1]  #normalise so the CDF ends exactly at 1
    inv_cdf = interpolate.interp1d(cum_values, pdf_x, bounds_error=False,
                                   fill_value=(pdf_x[0], pdf_x[-1]))
    r = np.random.rand(n_samples)  #uniform draws mapped through the inverse CDF
    return inv_cdf(r)

MC_samples = MC_sample_pdf(pdf_x, pdf_y)

plt.plot(pdf_x,pdf_y) 
plt.hist(MC_samples,density=True)
plt.xlabel('Calibrated Radiocarbon Age (BP)')
plt.ylabel('Probability')
plt.show()

It can be seen that, for this random seed, the original PDF is somewhat misrepresented: there should be more samples near ~13,700 BP. Because standard Monte Carlo sampling draws each point independently from the input PDF, it requires a fairly large number of samples to represent the PDF well.

For this 14C age, we can simply increase the sample size to perhaps 5,000 and get a decent representation of the input PDF. However, when each sample feeds a time-consuming physical model, increasing the sample size is expensive, and we need a more efficient sampling scheme. A nice and simple method is Latin Hypercube Sampling (LHS), which spreads samples more evenly across all possible values by drawing one sample from each of n equal-probability sub-intervals. Here’s a comparison of Monte Carlo and Latin Hypercube sampling, each generating 100 samples from a normal distribution N(0,1).

n = 100
MC_samples = np.random.normal(0, 1, n)  #standard Monte Carlo sampling

#---------------Latin Hypercube sampling---------------
#define n equal-probability intervals spanning (almost) the whole CDF
min_prob = 0.0001
max_prob = 0.9999
lower_limit = np.linspace(min_prob, max_prob, n+1)[:n]
upper_limit = np.linspace(min_prob, max_prob, n+1)[1:n+1]
#draw one sample from each interval
sample = np.random.uniform(low=lower_limit, high=upper_limit, size=[1,n])

#project the points through the inverse cumulative density function
from scipy import interpolate
from scipy.stats import norm

cdf_x = np.linspace(norm.ppf(min_prob), norm.ppf(max_prob), 100)
cdf = norm.cdf(cdf_x)
inv_cdf = interpolate.interp1d(cdf, cdf_x)
Latin_samples = inv_cdf(sample)[0]

#----------------
import seaborn as sns

t_pdf = norm.pdf(cdf_x)  #true pdf

plt.plot(cdf_x, t_pdf, label='True PDF')
sns.kdeplot(MC_samples, label='MC')      #kernel density estimate
sns.kdeplot(Latin_samples, label='LHS')  #kernel density estimate
plt.ylabel('Probability')
plt.legend()
plt.show()

After the sampling process, each set of 100 samples was turned into an estimated PDF using a Gaussian kernel density estimate. It is evident that, with 100 samples, the LHS method fits the true PDF better.
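To put a number on this comparison, one quick option (not part of the original analysis) is a Kolmogorov-Smirnov test of each sample set against the true N(0,1) distribution; the smaller the KS statistic, the closer the empirical CDF is to the truth:

from scipy.stats import kstest

print(kstest(MC_samples, 'norm'))     #standard Monte Carlo samples
print(kstest(Latin_samples, 'norm'))  #Latin Hypercube samples

For most random seeds, the LHS statistic comes out noticeably smaller at this sample size.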

We can then use LHS to sample 14C age:

# Define a function to do Latin Hypercube Sampling

def LHS_sample_pdf(pdf_x, pdf_y, n_samples=50):
    from scipy import interpolate
    pdf_x = np.array(pdf_x)
    pdf_y = np.array(pdf_y)
    order = np.argsort(pdf_x)  #interp1d needs monotonically increasing x
    pdf_x, pdf_y = pdf_x[order], pdf_y[order]
    bins = np.mean(np.diff(pdf_x))
    cum_values = np.cumsum(pdf_y*bins)
    cum_values /= cum_values[-1]  #normalise so the CDF ends exactly at 1
    inv_cdf = interpolate.interp1d(cum_values, pdf_x, bounds_error=False,
                                   fill_value=(pdf_x[0], pdf_x[-1]))
    #one uniform draw from each of the n equal-probability intervals
    lower_limit = np.arange(0, n_samples)/n_samples
    upper_limit = np.arange(1, n_samples+1)/n_samples
    points = np.random.uniform(low=lower_limit, high=upper_limit)
    return inv_cdf(points)

LHS_sample = LHS_sample_pdf(pdf_x,pdf_y,50)

plt.plot(pdf_x,pdf_y,linewidth=2,zorder=2)
plt.hist(LHS_sample,density=True,bins=10)
plt.xlabel('Calibrated Radiocarbon Age (BP)')
plt.ylabel('Probability')
plt.show()
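Once the samples are drawn, downstream statistics follow directly; for example, a mean age and a 95% interval (just an illustrative summary, using the positive-BP sign convention adopted above):

mean_age = np.mean(LHS_sample)
lo, hi = np.percentile(LHS_sample, [2.5, 97.5])
print(f'{mean_age:.0f} BP (95% interval: {lo:.0f}-{hi:.0f} BP)')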