Gaussian Process

This class implements Gaussian process regression/conditioning for inverse problems, i.e. conditioning on a linear operator of the field. The setting is the following: we have a Gaussian process on some space and a linear operator, denoted \(F\), acting on the process and mapping it to another space. This linear operator induces a Gaussian process on the target space. We use the term model when we refer to the original Gaussian process and data when we refer to the induced one. We discretize model space and data space, so that \(F\) becomes a matrix.

Notation and Implementation

We denote by \(K\) the covariance matrix of the original field. We assume a constant prior mean vector \(\mu_0 = m_0 1_m\), where \(1_m\) stands for the vector of ones in model space. Hence the prior mean is described by a single scalar parameter \(m_0\).

The induced Gaussian process then has covariance matrix \(K_d = F K F^t\) and prior mean vector \(m_0 F 1_m\).
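
As a minimal sketch of this setup (placeholder shapes and a hypothetical squared-exponential kernel, not the package's own code):

    import torch

    # Hypothetical sizes; coordinates and kernel are placeholders.
    n_m, n_d = 200, 30
    coords = torch.rand(n_m, 3)                 # model cell coordinates
    d2 = torch.cdist(coords, coords) ** 2       # pairwise squared distances
    K = torch.exp(-d2 / (2 * 0.1 ** 2))         # model covariance matrix
    F = torch.rand(n_d, n_m)                    # discretized linear operator
    m0 = 2.0                                    # scalar prior mean parameter

    K_d = F @ K @ F.T                           # induced covariance F K F^t
    mu0_d = m0 * (F @ torch.ones(n_m))          # induced prior mean m0 F 1_m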

Regression/Conditioning

We observe the value of the induced field at some points in data space. We then condition our GP on those observations and obtain a posterior mean vector and a posterior covariance matrix.
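
Writing \(d_{obs}\) for the observations, \(\epsilon\) for the noise standard deviation and \(R = \epsilon^2 I_d + \sigma_0^2 K_d\) (see the Noise paragraph and the attribute R below), the standard Gaussian conditioning identities give

\[ \mu_{post}^d = \mu_0^d + \sigma_0^2 K_d R^{-1} (d_{obs} - \mu_0^d), \]

\[ \mu_{post}^m = \mu_0^m + \sigma_0^2 K F^t R^{-1} (d_{obs} - \mu_0^d). \]

The explicit \(\sigma_0^2\) factors come from the stripping convention described next.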

Covariance Matrices

Most kernels have a variance parameter \(\sigma_0^2\) that appears as a simple multiplicative constant. To make its optimization easier, we strip it from the covariance matrix, store it as a model parameter (i.e. as an attribute of the class) and include it manually in the expressions where it shows up.

This means that when one sees a covariance matrix in the code, it generally does not include the \(\sigma_0^2\) factor, which has to be included by hand.
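
A toy illustration of the convention (names are hypothetical):

    import torch

    sigma0 = 1.5                            # stored separately, as a model parameter
    K_d_stripped = torch.eye(3)             # placeholder stripped covariance matrix
    K_d_full = sigma0 ** 2 * K_d_stripped   # full covariance, reassembled by hand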

Covariance Pushforward

The model covariance matrix \(K\) is too big to be stored, but during conditioning it only shows up in the form \(K F^t\). It is thus sufficient to compute this product once and for all. We call it the covariance pushforward.
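
One way such a product can be formed without ever materializing \(K\) is to generate the kernel rows in chunks. A sketch under a hypothetical squared-exponential kernel (this is not the package's actual routine):

    import torch

    def cov_pushforward_sketch(coords, F, lambda0, chunk_size=64):
        """Compute K @ F.T chunk by chunk, so that only chunk_size rows
        of K exist in memory at any time (kernel stripped of sigma0**2)."""
        n_m = coords.shape[0]
        out = torch.empty(n_m, F.shape[0])
        for start in range(0, n_m, chunk_size):
            end = min(start + chunk_size, n_m)
            d2 = torch.cdist(coords[start:end], coords) ** 2
            K_chunk = torch.exp(-d2 / (2 * lambda0 ** 2))
            out[start:end] = K_chunk @ F.T
        return out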

Noise

Independent homoscedastic noise on the data is assumed. It is specified by data_std_orig. If the matrices are ill-conditioned, the noise will have to be increased, hence the current value of the noise is stored in data_std.

Important Implementation Detail

Products of vectors with the inversion operator, \(R^{-1} x\), should be computed using inv_op_vector_mult(x).
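
The idea behind that method is a Cholesky solve. A minimal sketch (mirroring, not reproducing, the class internals):

    import torch

    def inv_op_vector_mult_sketch(L, x):
        """Compute R^{-1} x from a precomputed lower Cholesky factor L of R
        by solving two triangular systems instead of forming R^{-1}."""
        # cholesky_solve expects a matrix right-hand side, hence the unsqueeze.
        return torch.cholesky_solve(x.unsqueeze(1), L, upper=False).squeeze(1)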

Conditioning

Conditioning always computes the inversion operator, as well as m0 via concentration. Hence every call to conditioning (condition_data or condition_model) will update the following attributes (a usage sketch follows the list):

  • inv_op_L

  • inversion_operator

  • m0

  • mu0_d

  • prior_misfit

  • weights

  • mu_post_d (TODO: maybe shouldn’t be an attribute of the GP.)
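
A hypothetical call sequence illustrating these side effects (F, d_obs and the stripped K_d are assumed to be precomputed, as in the sketches above):

    from volcapy.inverse.gaussian_process import GaussianProcess

    gp = GaussianProcess(F, d_obs, sigma0_init=1.0, data_std=0.1)
    mu_post_d = gp.condition_data(K_d, sigma0=1.0, concentrate=True)

    # The attributes listed above (m0, inversion_operator, weights, ...)
    # are now up to date.
    print(gp.m0)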


Module Implementation Details

class volcapy.inverse.gaussian_process.GaussianProcess(F, d_obs, sigma0_init, data_std=0.1, logger=None)[source]
Attributes
sigma0_init

Original value of the sigma0 parameter to use when starting optimization.

sigma0: float

Current value of the prior standard deviation of the process. Will get optimized. We store it so we can use it as a starting value when optimizing for similar values of lambda0.

data_std_orig: float

Original data noise standard deviation.

data_std: float

Current data noise standard deviation. Can differ from the original value, since we may have to increase it to make matrices invertible/better conditioned.

F: ndarray

Forward operator matrix, of shape n_d * n_m.

d_obs: tensor

Vector of observed data values

n_model: int

Number of model cells.

n_data: int

Number of data observations.

mu0_d_stripped: Tensor

The pushforwarded mean vector, stripped of the constant prior mean parameter, i.e. F * 1_m. Multiply by the prior mean m0 to get the data-side mean.

data_ones
inv_op_L: Tensor

Lower Cholesky factor of the inverse inversion operator R.

inversion_operator
mu0_d
m0
prior_misfit
weights
mu_post_d
mu_post_m
stripped_inv
mu0_m
R: Tensor

Inverse inversion operator, given by data_std^2 * I_d + sigma0^2 * K_d, where I_d denotes the identity matrix on data space.

logger

An instance of logging.Logger, used to output training progression.

Methods

compute_post_cov_diag(cov_pushfwd, …)

Compute diagonal of the posterior covariance matrix.

concentrate_m0()

Compute m0 (prior mean parameter) by MLE via concentration.

condition_data(K_d, sigma0[, m0, concentrate])

Condition model on the data side.

condition_model(cov_pushfwd, F, sigma0[, …])

Condition model on the model side.

condition_number(cov_pushfwd, F, sigma0)

Compute condition number of the inversion matrix R.

get_inversion_op_cholesky(K_d, sigma0)

Compute the Cholesky decomposition of the inverse of the inversion operator.

inv_op_vector_mult(x)

Multiply a vector by the inversion operator, using a Cholesky approach (numerically more stable than computing the inverse).

loo_error()

Leave one out cross validation RMSE.

loo_predict(loo_ind)

Leave one out kriging prediction.

neg_log_likelihood()

Computes the negative log-likelihood of the current state of the model.

optimize(K_d, n_epochs, device[, …])

Given lambda0, optimize the two remaining hyperparams via MLE.

post_cov(cov_pushfwd, cells_coords, lambda0, …)

Get posterior model covariance between two specified cells.

to_device(device)

Transfer all model attributes to the given device.

train_RMSE()

Compute current error on training set.

compute_post_cov_diag(cov_pushfwd, cells_coords, lambda0, sigma0, cl)[source]

Compute diagonal of the posterior covariance matrix.

Parameters
cov_pushfwd: tensor

Pushforwarded covariance matrix, i.e. K F^T

cells_coords: tensor

Cell coordinates, of shape n_cells * n_dims.

lambda0: float

Length-scale parameter.

sigma0: float

Prior standard deviation parameter.

cl: module

Covariance module implementing the kernel we are working with. The goal is to make this function kernel-independent, though it's a bit awkward and should be changed to an object-oriented approach.

Returns
post_cov_diag: tensor

Vector containing the posterior variance of model cell i as its i-th element.

concentrate_m0()[source]

Compute m0 (prior mean parameter) by MLE via concentration.

Note that the inversion operator should have been updated first.
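
For reference, with \(s = F 1_m\) (the attribute mu0_d_stripped) and \(R\) as above, concentrating the likelihood in \(m_0\) is a generalized least-squares fit of the constant mean, with the standard closed form

\[ \hat{m}_0 = \frac{s^t R^{-1} d_{obs}}{s^t R^{-1} s}. \]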

condition_data(K_d, sigma0, m0=0.1, concentrate=False)[source]

Condition model on the data side.

Parameters
K_d: tensor

Covariance matrix on the data side.

sigma0: float

Standard deviation parameter for the kernel.

m0: float

Prior mean parameter.

concentrate: bool

If True, m0 will be computed by MLE via concentration of the log-likelihood.

Returns
mu_post_d: tensor

Posterior mean data vector

condition_model(cov_pushfwd, F, sigma0, m0=0.1, concentrate=False, NtV_crit=-1.0)[source]

Condition model on the model side.

Parameters
cov_pushfwd: tensor

Pushforwarded covariance matrix, i.e. K F^T

F

Forward operator

sigma0: float

Standard deviation parameter for the kernel.

m0: float

Prior mean parameter.

concentrate: bool

If True, m0 will be computed by MLE via concentration of the log-likelihood.

NtV_crit: (deprecated)

Critical noise-to-variance ratio (epsilon/sigma0). We can fix it to avoid numerical instability in the matrix inversion. If provided, the ratio will not be allowed to go below the critical value.

Returns
mu_post_m

Posterior mean model vector

mu_post_d

Posterior mean data vector

condition_number(cov_pushfwd, F, sigma0)[source]

Compute condition number of the inversion matrix R.

Used to detect numerical instability.

Parameters
cov_pushfwd: tensor

Pushforwarded covariance matrix, i.e. K F^T

F: tensor

Forward operator

sigma0: float

Standard deviation parameter for the kernel.

Returns
float

Condition number of the inversion matrix R.

get_inversion_op_cholesky(K_d, sigma0)[source]

Compute the Cholesky decomposition of the inverse of the inversion operator. Increases noise level if necessary to make matrix invertible.

Note that this method updates the noise level if necessary.

Parameters
K_d: Tensor

The data covariance matrix (pushforwarded from model space), stripped of sigma0.

sigma0: Tensor

The (model side) standard deviation.

Returns
Tensor

L such that R = LL^t. L is lower triangular.
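
A sketch of the retry strategy, under an assumed noise inflation rule (the factor actually used by the class may differ):

    import torch

    def inversion_op_cholesky_sketch(K_d, sigma0, data_std, max_tries=5):
        """Factor R = data_std^2 * I + sigma0^2 * K_d, inflating the noise
        level whenever the factorization fails."""
        n = K_d.shape[0]
        for _ in range(max_tries):
            R = data_std ** 2 * torch.eye(n) + sigma0 ** 2 * K_d
            try:
                return torch.linalg.cholesky(R), data_std  # L is lower triangular
            except RuntimeError:   # R numerically not positive definite
                data_std *= 10.0   # hypothetical inflation factor
        raise RuntimeError("could not make R positive definite")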

inv_op_vector_mult(x)[source]

Multiply a vector by the inversion operator, using a Cholesky approach (numerically more stable than computing the inverse).

Parameters
x: Tensor

The vector to multiply.

Returns
Tensor

Multiplied vector R^(-1) * x.

loo_error()[source]

Leave one out cross validation RMSE.

Take the trained hyperparameters. Remove one point from the training set, krige/condition on the remaining points and predict the left-out point. Compute the squared error, repeat for all data points (leaving one out at a time) and average.

WARNING: The forward pass of the model should have been run once beforehand, otherwise some of the operators we need (the inversion operator) won't have been computed. The forward pass should also be re-run after updating hyperparameters.

Returns
float

RMSE cross-validation error.
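
Conceptually, the computation amounts to the following loop (a sketch assuming d_obs indexes like a 1D tensor; not the actual internals):

    import torch

    def loo_rmse_sketch(gp):
        """Brute-force leave-one-out RMSE built on top of loo_predict."""
        sq_err = [
            (gp.loo_predict(i) - gp.d_obs[i].item()) ** 2
            for i in range(gp.n_data)
        ]
        return torch.tensor(sq_err).mean().sqrt().item()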

loo_predict(loo_ind)[source]

Leave one out kriging prediction.

Take the trained hyperparameters. Remove one point from the training set, krige/condition on the remaining points and predict the left-out point.

WARNING: The forward pass of the model should have been run once beforehand, otherwise some of the operators we need (the inversion operator) won't have been computed. The forward pass should also be re-run after updating hyperparameters.

Parameters
loo_ind: int

Index (in the training set) of the data point to leave out.

Returns
float

Prediction at left out data point.

neg_log_likelihood()[source]

Computes the negative log-likelihood of the current state of the model. Note that this function should be called AFTER having run a conditioning, since it depends on the inversion operator computed there.
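
For reference, the quantity in question is the standard Gaussian negative log-likelihood of the data (up to the treatment of constant terms, which may differ in the implementation):

\[ -\log p(d_{obs}) = \tfrac{1}{2} \log \det R + \tfrac{1}{2} (d_{obs} - \mu_0^d)^t R^{-1} (d_{obs} - \mu_0^d) + \tfrac{n_d}{2} \log 2\pi. \]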

Returns
float

optimize(K_d, n_epochs, device, sigma0_init=None, lr=0.007, NtV_crit=-1.0)[source]

Given lambda0, optimize the two remaining hyperparameters via MLE. Here, instead of passing lambda0 itself, we pass a (stripped) covariance matrix, where stripped means without the sigma0 factor.

The user can choose between CPU and GPU.

Parameters
K_d: tensor

(stripped) Covariance matrix in data space.

n_epochs: int

Number of training epochs.

device: torch.device

Device to use for optimization, either CPU or GPU.

sigma0_init: float

Starting value for gradient descent. If None, use the value stored by the model class (that is, the one resulting from the previous optimization run).

lr: float

Learning rate.

NtV_crit: (deprecated)
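
A hypothetical invocation (the epoch count is arbitrary; gp and the stripped K_d are as in the earlier sketches):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    gp.optimize(K_d, n_epochs=500, device=device, sigma0_init=1.0, lr=0.007)
    print(gp.sigma0)   # optimized standard deviation parameter
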
post_cov(cov_pushfwd, cells_coords, lambda0, sigma0, i, j, cl)[source]

Get posterior model covariance between two specified cells.

Parameters
cov_pushfwd: tensor

Pushforwarded covariance matrix, i.e. K F^T

cells_coords: tensor

Cell coordinates, of shape n_cells * n_dims.

lambda0: float

Length-scale parameter.

sigma0: float

Prior standard deviation parameter.

i: int

Index of first cell (index in the cells_coords array).

j: int

Index of second cell.

cl: module

Covariance module implementing the kernel we are working with. The goal is to make this function kernel-independent, though it's a bit awkward and should be changed to an object-oriented approach.

Returns
post_cov: float

Posterior covariance between model cells i and j.

to_device(device)[source]

Transfer all model attributes to the given device. Can be used to switch between CPU and GPU.

Parameters
device: torch.device

train_RMSE()[source]

Compute current error on training set.

Returns
float

RMSE on training set.