Note

This tutorial was generated from an IPython notebook that can be downloaded here.

In this tutorial, we will demonstrate how to fit radial velocity observations of an exoplanetary system using exoplanet. We will follow the getting started tutorial from the excellent RadVel package where they fit for the parameters of the two planets in the K2-24 system.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Download the K2-24 radial velocities from the RadVel example data
# (adjust this URL or path if you have the data stored elsewhere)
url = (
    "https://raw.githubusercontent.com/California-Planet-Search/"
    "radvel/master/example_data/epic203771098.csv"
)
data = pd.read_csv(url, index_col=0)

x = np.array(data.t)
y = np.array(data.vel)
yerr = np.array(data.errvel)

# Compute a reference time that will be used to normalize the trends model
x_ref = 0.5 * (x.min() + x.max())

# Also make a fine grid that spans the observation window for plotting purposes
t = np.linspace(x.min() - 5, x.max() + 5, 1000)

plt.errorbar(x, y, yerr=yerr, fmt=".k")
plt.xlabel("time [days]")

Now, we know the periods and transit times for the planets from the K2 light curve, so let’s start by using the exoplanet.estimate_semi_amplitude() function to estimate the expected RV semi-amplitudes for the planets.

import exoplanet as xo

periods = [20.8851, 42.3633]
period_errs = [0.0003, 0.0006]
t0s = [2072.7948, 2082.6251]
t0_errs = [0.0007, 0.0004]
Ks = xo.estimate_semi_amplitude(periods, x, y, yerr, t0s=t0s)
print(Ks, "m/s")

[5.05069163 5.50983542] m/s
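As a rough sanity check, we can compare these estimates against the standard Keplerian semi-amplitude formula. The planet and stellar masses below are illustrative assumptions (not measured values for K2-24), but they show that a Neptune-scale planet on these orbits produces a signal of a few m/s, consistent with the estimates above:

```python
import numpy as np

# Physical constants (SI units)
G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
M_earth = 5.972e24  # kg

def semi_amplitude(P_days, m_p_earth, M_s_sun=1.0, e=0.0):
    """RV semi-amplitude in m/s, assuming sin(i) = 1."""
    P = P_days * 86400.0
    m_p = m_p_earth * M_earth
    M_s = M_s_sun * M_sun
    return (
        (2 * np.pi * G / P) ** (1 / 3)
        * m_p
        / (M_s + m_p) ** (2 / 3)
        / np.sqrt(1 - e ** 2)
    )

# An assumed ~20 Earth-mass planet on the inner (~21 day) orbit around a
# solar-mass star gives a semi-amplitude of a few m/s
print(semi_amplitude(20.8851, 20.0), "m/s")
```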


## The radial velocity model in PyMC3

Now that we have the data and an estimate of the initial values for the parameters, let’s start defining the probabilistic model in PyMC3 (take a look at A quick intro to PyMC3 for exoplaneteers if you’re new to PyMC3). First, we’ll define our priors on the parameters:

import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:

    # Gaussian priors based on transit data (from Petigura et al.)
    t0 = pm.Normal("t0", mu=np.array(t0s), sd=np.array(t0_errs), shape=2)
    P = pm.Bound(pm.Normal, lower=0)(
        "P",
        mu=np.array(periods),
        sd=np.array(period_errs),
        shape=2,
        testval=np.array(periods),
    )

    # Wide log-normal prior for semi-amplitude
    logK = pm.Bound(pm.Normal, lower=0)(
        "logK", mu=np.log(Ks), sd=10.0, shape=2, testval=np.log(Ks)
    )

    # Eccentricity & argument of periastron
    ecc = xo.distributions.UnitUniform(
        "ecc", shape=2, testval=np.array([0.1, 0.1])
    )
    omega = xo.distributions.Angle("omega", shape=2)

    # Jitter & a quadratic RV trend
    logs = pm.Normal("logs", mu=np.log(np.median(yerr)), sd=5.0)
    trend = pm.Normal("trend", mu=0, sd=10.0 ** -np.arange(3)[::-1], shape=3)

    # Then we define the orbit
    orbit = xo.orbits.KeplerianOrbit(period=P, t0=t0, ecc=ecc, omega=omega)

    # And a function for computing the full RV model
    def get_rv_model(t, name=""):
        # First the RVs induced by the planets
        vrad = orbit.get_radial_velocity(t, K=tt.exp(logK))
        pm.Deterministic("vrad" + name, vrad)

        # Define the background model
        A = np.vander(t - x_ref, 3)
        bkg = pm.Deterministic("bkg" + name, tt.dot(A, trend))

        # Sum over planets and add the background to get the full model
        return pm.Deterministic("rv_model" + name, tt.sum(vrad, axis=-1) + bkg)

    # Define the RVs at the observed times
    rv_model = get_rv_model(x)

    # Also define the model on a fine grid as computed above (for plotting)
    rv_model_pred = get_rv_model(t, name="_pred")

    # Finally add in the observation model. This next line adds a new contribution
    # to the log probability of the PyMC3 model
    err = tt.sqrt(yerr ** 2 + tt.exp(2 * logs))
    pm.Normal("obs", mu=rv_model, sd=err, observed=y)
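A quick aside on the trend model: `np.vander` orders its columns from highest power to lowest, which is why the `trend` prior widths above shrink for the higher-order coefficients. A minimal standalone check of the column ordering:

```python
import numpy as np

# np.vander with 3 columns produces rows [dt**2, dt, 1], so trend[0]
# multiplies dt**2, trend[1] multiplies dt, and trend[2] is the offset
dt = np.array([-1.0, 0.0, 2.0])
A = np.vander(dt, 3)
print(A)
# [[ 1. -1.  1.]
#  [ 0.  0.  1.]
#  [ 4.  2.  1.]]
```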


Now, we can plot the initial model:

plt.errorbar(x, y, yerr=yerr, fmt=".k")

with model:
    plt.plot(t, xo.eval_in_model(model.vrad_pred), "--k", alpha=0.5)
    plt.plot(t, xo.eval_in_model(model.bkg_pred), ":k", alpha=0.5)
    plt.plot(t, xo.eval_in_model(model.rv_model_pred), label="model")

plt.legend(fontsize=10)
plt.xlim(t.min(), t.max())
plt.xlabel("time [days]")
_ = plt.title("initial model")

In this plot, the background is the dotted line, the individual planets are the dashed lines, and the full model is the blue line.

It doesn’t look amazing, so let’s fit for the maximum a posteriori parameters.

with model:
    map_soln = xo.optimize(start=model.test_point, vars=[trend])
    map_soln = xo.optimize(start=map_soln)

optimizing logp for variables: [trend]
12it [00:03,  3.46it/s, logp=-6.484820e+01]
message: Optimization terminated successfully.
logp: -79.73266285618838 -> -64.8482026233154
optimizing logp for variables: [trend, logs, omega, ecc, logK, P, t0]
93it [00:00, 217.01it/s, logp=-1.427676e+01]
message: Optimization terminated successfully.
logp: -64.8482026233154 -> -14.276760262380932

plt.errorbar(x, y, yerr=yerr, fmt=".k")
plt.plot(t, map_soln["bkg_pred"], ":k", alpha=0.5)
plt.plot(t, map_soln["rv_model_pred"], label="model")

plt.legend(fontsize=10)
plt.xlim(t.min(), t.max())
plt.xlabel("time [days]")
_ = plt.title("MAP model")

That looks better.

## Sampling

Now that we have our model set up and a good estimate of the initial parameters, let’s start sampling. There are substantial covariances between some of the parameters, so we’ll use exoplanet.get_dense_nuts_step() to tune the sampler (see the PyMC3 extras tutorial for more information).

np.random.seed(42)
with model:
    trace = pm.sample(
        tune=4000,
        draws=4000,
        cores=2,
        chains=2,
        step=xo.get_dense_nuts_step(target_accept=0.95),
    )

Multiprocess sampling (2 chains in 2 jobs)
NUTS: [trend, logs, omega, ecc, logK, P, t0]
Sampling 2 chains, 3 divergences: 100%|██████████| 16000/16000 [01:48<00:00, 147.84draws/s]
There was 1 divergence after tuning. Increase target_accept or reparameterize.
There were 2 divergences after tuning. Increase target_accept or reparameterize.
The number of effective samples is smaller than 25% for some parameters.

After sampling, it’s always a good idea to do some convergence checks. First, let’s check the number of effective samples and the Gelman-Rubin statistic for our parameters of interest:
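For intuition about what the Gelman-Rubin statistic measures, here is a simplified NumPy sketch (without the rank normalization and chain splitting that PyMC3's reported estimator uses): it compares between-chain and within-chain variance, and should be close to 1 for converged chains.

```python
import numpy as np

def gelman_rubin(chains):
    """Simplified R-hat for an array of shape (n_chains, n_draws)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)         # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(42)
good = rng.normal(size=(2, 4000))  # two chains sampling the same target
print(gelman_rubin(good))          # close to 1.0
```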

pm.summary(
    trace, varnames=["trend", "logs", "omega", "ecc", "t0", "logK", "P"]
)

              mean     sd    hpd_3%   hpd_97%  mcse_mean  mcse_sd  ess_mean  ess_sd  ess_bulk  ess_tail  r_hat
trend[0]     0.001  0.001    -0.000     0.002      0.000    0.000    4690.0  4690.0    4720.0    5047.0    1.0
trend[1]    -0.039  0.022    -0.081     0.003      0.000    0.000    6301.0  5991.0    6362.0    5583.0    1.0
trend[2]    -1.991  0.793    -3.542    -0.585      0.011    0.008    5568.0  5568.0    5638.0    5242.0    1.0
logs         1.037  0.223     0.582     1.424      0.004    0.003    3193.0  3193.0    3237.0    3956.0    1.0
omega[0]    -0.285  0.784    -1.436     1.292      0.021    0.016    1433.0  1215.0    1705.0    1254.0    1.0
omega[1]    -0.645  2.126    -3.135     2.885      0.043    0.030    2495.0  2495.0    3891.0    7147.0    1.0
ecc[0]       0.238  0.114     0.019     0.433      0.002    0.001    3106.0  3106.0    2999.0    1980.0    1.0
ecc[1]       0.196  0.143     0.000     0.459      0.003    0.002    2764.0  2764.0    2539.0    3093.0    1.0
t0[0]     2072.795  0.001  2072.793  2072.796      0.000    0.000    8058.0  8058.0    8061.0    5554.0    1.0
t0[1]     2082.625  0.000  2082.624  2082.626      0.000    0.000    8114.0  8114.0    8117.0    5931.0    1.0
logK[0]      1.555  0.254     1.071     1.990      0.006    0.004    1882.0  1882.0    2577.0    1737.0    1.0
logK[1]      1.565  0.233     1.143     2.009      0.004    0.003    2844.0  2844.0    3716.0    2027.0    1.0
P[0]        20.885  0.000    20.885    20.886      0.000    0.000    7987.0  7987.0    7996.0    6077.0    1.0
P[1]        42.363  0.001    42.362    42.364      0.000    0.000    7475.0  7475.0    7476.0    5873.0    1.0

It looks like everything is pretty much converged here. Not bad for 14 parameters and about a minute of runtime…

Then we can make a corner plot of any combination of the parameters. For example, let’s look at period, semi-amplitude, and eccentricity:

import corner

samples = pm.trace_to_dataframe(trace, varnames=["P", "logK", "ecc", "omega"])
_ = corner.corner(samples)

Finally, let’s plot the posterior constraints on the RV model and compare those to the data:

plt.errorbar(x, y, yerr=yerr, fmt=".k")

# Compute the posterior predictions for the RV model
pred = np.percentile(trace["rv_model_pred"], [16, 50, 84], axis=0)
plt.plot(t, pred[1], color="C0", label="model")
art = plt.fill_between(t, pred[0], pred[2], color="C0", alpha=0.3)
art.set_edgecolor("none")

plt.legend(fontsize=10)
plt.xlim(t.min(), t.max())
plt.xlabel("time [days]")
_ = plt.title("posterior constraints")

## Phase plots

It might also be interesting to look at the phased plots for this system. Here we’ll fold the dataset on the median of the posterior period and then overplot the posterior constraint on the folded model orbits.
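The folding arithmetic used below maps every timestamp to a phase in [-P/2, P/2) centered on the reference time, so the transit time lands at phase zero. A small standalone example:

```python
import numpy as np

# Fold times onto the orbital period: phases lie in [-p/2, p/2),
# with the reference time t0 mapping to phase 0
p, t0 = 20.8851, 2072.7948
x = np.array([t0, t0 + p, t0 + 0.25 * p, t0 - 0.25 * p])
x_fold = (x - t0 + 0.5 * p) % p - 0.5 * p
print(x_fold)  # approximately [0, 0, p/4, -p/4]
```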

for n, letter in enumerate("bc"):
    plt.figure()

    # Get the posterior median orbital parameters
    p = np.median(trace["P"][:, n])
    t0 = np.median(trace["t0"][:, n])

    # Compute the median of posterior estimate of the background RV
    # and the contribution from the other planet. Then we can remove
    # this from the data to plot just the planet we care about.
    other = np.median(trace["vrad"][:, :, (n + 1) % 2], axis=0)
    other += np.median(trace["bkg"], axis=0)

    # Plot the folded data
    x_fold = (x - t0 + 0.5 * p) % p - 0.5 * p
    plt.errorbar(x_fold, y - other, yerr=yerr, fmt=".k")

    # Compute the posterior prediction for the folded RV model for this
    # planet
    t_fold = (t - t0 + 0.5 * p) % p - 0.5 * p
    inds = np.argsort(t_fold)
    pred = np.percentile(trace["vrad_pred"][:, inds, n], [16, 50, 84], axis=0)
    plt.plot(t_fold[inds], pred[1], color="C0", label="model")
    art = plt.fill_between(
        t_fold[inds], pred[0], pred[2], color="C0", alpha=0.3
    )
    art.set_edgecolor("none")

    plt.legend(fontsize=10)
    plt.xlim(-0.5 * p, 0.5 * p)
    plt.xlabel("phase [days]")
    _ = plt.title("K2-24{0}".format(letter))