train

Some code to make training easier

A loss function meant to be used with the LatentODE from Latent ODEs for Irregularly-Sampled Time Series (Rubanova et al.). Some modifications were applied.


source

LatentODELoss

 LatentODELoss (noise_std:torch.Tensor,
                prior:torch.distributions.normal.Normal)

A loss function meant to be paired with Rubanova’s LatentODE

           Type    Details
noise_std  Tensor  Standard deviation of the noise assumed when computing the likelihood
prior      Normal  Prior distribution for the initial state
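For context, these two arguments correspond to the two terms of the evidence lower bound optimised when training a latent ODE: noise_std parameterises the Gaussian observation likelihood, and prior is the distribution the approximate posterior over the initial latent state is pulled towards. The snippet below only illustrates that correspondence with placeholder tensors; it is not the internal code of the class.

import torch

noise_std = torch.tensor(0.01)
prior = torch.distributions.Normal(torch.tensor(0.0), torch.tensor(1.0))

# reconstruction term: a Gaussian likelihood centred on (placeholder) predictions,
# with the assumed observation noise standard deviation
predictions, observations = torch.zeros(4), torch.ones(4)
log_likelihood = torch.distributions.Normal(predictions, noise_std).log_prob(observations)

# regularisation term: KL divergence between an (arbitrary, placeholder)
# approximate posterior over the initial latent state and the prior
posterior = torch.distributions.Normal(torch.zeros(4), torch.ones(4))
kl = torch.distributions.kl_divergence(posterior, prior)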

We need the prior distribution and the standard deviation of the noise…

import torch

prior = torch.distributions.normal.Normal(torch.tensor(0.0), torch.tensor(1.0))
noise_std = torch.tensor(0.01)

…to instantiate the class

loss_func = LatentODELoss(noise_std, prior)
loss_func
LatentODELoss with:
    noise standard deviation = 0.009999999776482582
    prior: Normal(loc: 0.0, scale: 1.0)

Some random data for testing purposes

n_time_instants = 12
n_trials = 3
batch_size = 32
features_size = 2
latent_size = 13

# predictions, one per trial (i.e., per sampled latent trajectory)
pred = torch.randn(n_time_instants, n_trials, batch_size, features_size)
# parameters of the approximate posterior over the initial latent state
mean = torch.randn(1, batch_size, latent_size)
std = torch.rand_like(mean)
# observed time series and a mask flagging which entries are actually observed
target = torch.randn(batch_size, n_time_instants, features_size)
target_mask = (torch.randn_like(target) > 0.).bool()
# weight of the KL term in the loss
kl_weight = 0.2

The loss function is applied

loss_func(pred, mean, std, target, target_mask, kl_weight)
(tensor(9830.3252), {'kl_average': tensor(1.2236), 'mse': tensor(2.0446)})
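The call returns a pair: the scalar loss to backpropagate through, and a dictionary of metrics (the average KL divergence and the mean squared error) that is convenient for logging. A plausible reading of the computation, written only as a simplified sketch and not the class's actual code, is an ELBO whose reconstruction likelihood is evaluated on the masked-in entries and whose KL term is scaled by kl_weight (the sketch also glosses over the extra trials dimension of pred and assumes it has the same shape as target):

import torch

def latent_ode_loss_sketch(pred, mean, std, target, target_mask, kl_weight, noise_std, prior):
    # Gaussian log-likelihood of the observations, restricted to the observed
    # (masked-in) entries
    likelihood = torch.distributions.Normal(pred, noise_std)
    log_prob = (likelihood.log_prob(target) * target_mask.float()).sum(dim=(-2, -1))
    # KL divergence between the approximate posterior over the initial latent
    # state, N(mean, std), and the prior
    posterior = torch.distributions.Normal(mean, std)
    kl = torch.distributions.kl_divergence(posterior, prior).sum(dim=-1)
    # negative ELBO, with the KL term scaled by kl_weight
    return (-log_prob + kl_weight * kl).mean()

In a training loop, the first element of the returned pair is the tensor on which backward() would be called, while the dictionary entries ('kl_average' and 'mse' above) are only for monitoring.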