# Priors over starting points
Besides decorating each observation with the appropriate information, for stochastic processes we also need to describe the starting position of the process (for instance, because no observation may be made at the initial time). This is done through prior distributions.
All priors over starting points inherit from `ObservationSchemes.StartingPtPrior`:

`ObservationSchemes.StartingPtPrior` — Type

```julia
StartingPtPrior{T}
```

Types inheriting from the abstract type `StartingPtPrior` indicate the prior that is put on the starting point of the observed path of some stochastic process. `T` denotes the `DataType` of the starting point.
They all must implement the following methods:

`Base.rand` — Method

```julia
Base.rand(G::StartingPtPrior, [z, ρ=0.0])
```

Sample a new starting point according to its prior distribution. An implementation with arguments `z`, `ρ` implements a preconditioned Crank-Nicolson scheme with memory parameter `ρ` and a current non-centred variable `z`. `z` is also referred to as the driving noise.

`ObservationSchemes.start_pt` — Method

```julia
start_pt(z, G::StartingPtPrior, P)
```

Compute a new starting point from the white noise `z` for a given posterior distribution obtained from combining the prior `G` and the likelihood encoded by the object `P`.

`ObservationSchemes.start_pt` — Method

```julia
start_pt(z, G::StartingPtPrior)
```

Compute a new starting point from the white noise `z` for a given prior distribution `G`.

`Distributions.logpdf` — Method

```julia
logpdf(G::StartingPtPrior, y)
```

Log-probability density function of the prior distribution `G`, evaluated at `y`.
and should also implement:

`ObservationSchemes.inv_start_pt` — Method

```julia
inv_start_pt(y, G::StartingPtPrior, P)
```

Compute the driving noise that is needed to obtain the starting point `y` under the prior `G` and the likelihood in `P`, for use in an MCMC setting.

In this package we provide implementations for the following types of starting points (a sketch of a custom prior implementing the interface above follows the list):
- Known, fixed starting points
- Gaussian priors over starting points
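For illustration, a minimal custom prior implementing this interface might look as follows. This is only a sketch: `LogNormalStartingPt` is a hypothetical type, not part of the package, and its posterior version of `start_pt` simply ignores the likelihood `P`.

```julia
using Distributions
using ObservationSchemes
const OBS = ObservationSchemes

# Hypothetical example: a log-normal prior over a scalar starting point.
struct LogNormalStartingPt{T} <: OBS.StartingPtPrior{T}
    μ::T
    σ::T
end

# Transform white noise z ∼ N(0, 1) into a log-normally distributed point.
OBS.start_pt(z, G::LogNormalStartingPt) = exp(G.μ + G.σ * z)

# This sketch ignores the likelihood `P`; a full implementation would
# combine it with the prior to target the posterior.
OBS.start_pt(z, G::LogNormalStartingPt, P) = OBS.start_pt(z, G)

# Recover the driving noise that yields the starting point y.
OBS.inv_start_pt(y, G::LogNormalStartingPt, P) = (log(y) - G.μ) / G.σ

# Fresh driving noise, and a preconditioned Crank-Nicolson perturbation of z
# (following the non-centred convention described above).
Base.rand(G::LogNormalStartingPt) = randn()
Base.rand(G::LogNormalStartingPt, z, ρ=0.0) = ρ * z + sqrt(1 - ρ^2) * randn()

# Log-density of the prior at y.
Distributions.logpdf(G::LogNormalStartingPt, y) = logpdf(LogNormal(G.μ, G.σ), y)
```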
## Known starting point
This is the simplest setting in which the starting point is assumed to be known.
`ObservationSchemes.KnownStartingPt` — Type

```julia
struct KnownStartingPt{T} <: StartingPtPrior{T}
    y::T
end
```

Indicates that the starting point is known and stores its value in `y`.

```julia
KnownStartingPt(y::T) where T
```

Base constructor.
It can be defined with

```julia
x0 = [1.0, 2.0]
x0_prior = KnownStartingPt(x0)
```

A call to `rand` or `start_pt` will simply return the fixed starting point, and `logpdf(x0_prior, y)` evaluates to `0` so long as `x0 == y`.
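For instance, continuing the snippet above (and assuming `Distributions` is loaded for `logpdf`), the documented behaviour can be checked directly:

```julia
rand(x0_prior)        # → [1.0, 2.0]; nothing is random here
logpdf(x0_prior, x0)  # → 0.0, since the argument equals the known point
```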
## Gaussian priors
A Gaussian prior over the starting point.
`ObservationSchemes.GsnStartingPt` — Type

```julia
struct GsnStartingPt{T,S,TM} <: StartingPtPrior{T}
    μ::T
    Σ::S
    Λ::S
    μ₀::T
    Σ₀::UniformScaling
end
```

Indicates that the starting point is equipped with a Gaussian prior with mean `μ` and covariance matrix `Σ` (and a pre-computed precision `Λ := Σ⁻¹`). Sampling is always done via the non-centred parametrisation: white noise `z` is first sampled from a Gaussian with zero mean `μ₀` and identity covariance `Σ₀`, and is then transformed to a variable with mean `μ` and covariance `Σ`.

```julia
GsnStartingPt(μ::T, Σ::S)
```

Base constructor. It initialises the mean `μ` and covariance `Σ` parameters and sets `Λ := Σ⁻¹`.
It can be defined with

```julia
μ, Σ = [1.0, 2.0], [1.0 0.0; 0.0 1.0]
x0_prior = GsnStartingPt(μ, Σ)
```

to set the mean and covariance to `μ` and `Σ` respectively. The underlying idea behind Gaussian starting-point priors is that of a non-centred parametrisation, which makes local updates possible. More precisely, any sampling is done with `z ∼ N(0, Id)` variables, which are then transformed to `N(μ, Σ)` via linear transformations. In particular, sampling with `rand` can be done with local perturbations via the Crank-Nicolson scheme.
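Concretely, with memory parameter `ρ ∈ [0, 1)` a preconditioned Crank-Nicolson step perturbs the current noise `z` with a fresh draw `W ∼ N(0, Id)`; in the standard parametrisation it reads

```math
z^{\circ} = \rho\, z + \sqrt{1 - \rho^{2}}\, W, \qquad W \sim N(0, \mathrm{Id}),
```

so that `ρ = 0` recovers an independent draw from the prior, while `ρ` close to `1` produces small, local moves (this is the textbook form of the update; the exact convention for `ρ` used internally may differ).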
`Base.rand` — Function

```julia
Base.rand(G::StartingPtPrior, [z, ρ=0.0])
```

Sample a new starting point according to its prior distribution. An implementation with arguments `z`, `ρ` implements a preconditioned Crank-Nicolson scheme with memory parameter `ρ` and a current non-centred variable `z`. `z` is also referred to as the driving noise.

```julia
rand([rng::Random.AbstractRNG], G::GsnStartingPt, z, ρ=0.0)
```

Sample new white noise using the Crank-Nicolson scheme with memory parameter `ρ` and the previous value `z` of the white noise.

```julia
rand([rng::Random.AbstractRNG], G::GsnStartingPt)
```

Sample a new starting point according to its prior distribution.

```julia
rand([rng::Random.AbstractRNG], G::KnownStartingPt, args...)
```

The starting point is known, so nothing can be sampled; the known starting point is returned.

```julia
Base.rand([rng::Random.AbstractRNG], o::LinearGsnObs, X)
```

Sample an observation according to the linear Gaussian observation scheme, with $L$, $μ$ and $Σ$ defined in `o`.
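Taken together, in an MCMC setting the Gaussian-prior methods are typically used along the following lines (a sketch using only the prior-based versions of `start_pt`, so no likelihood object `P` is needed):

```julia
using ObservationSchemes

x0_prior = GsnStartingPt([1.0, 2.0], [1.0 0.0; 0.0 1.0])

z  = rand(x0_prior)          # fresh driving noise
x0 = start_pt(z, x0_prior)   # transform to a draw from N(μ, Σ)

# local move: Crank-Nicolson perturbation of z with memory ρ = 0.9,
# yielding a proposal correlated with the current x0
z′  = rand(x0_prior, z, 0.9)
x0′ = start_pt(z′, x0_prior)
```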
`inv_start_pt` returns the non-centrally parametrised noise `z` that produces a given starting point `x0`:
`ObservationSchemes.inv_start_pt` — Function

```julia
inv_start_pt(y, G::StartingPtPrior, P)
```

Compute the driving noise that is needed to obtain the starting point `y` under the prior `G` and the likelihood in `P`.

```julia
inv_start_pt(y, G::GsnStartingPt, P)
```

Compute the driving noise that is needed to obtain the starting point `y` under the prior `G` and the likelihood in `P`.

```julia
inv_start_pt(y, G::KnownStartingPt, P)
```

The starting point is known, so there is no white noise to deal with; by convention, `y` itself is returned.

and `start_pt` is the reverse operation:

`ObservationSchemes.start_pt` — Function

```julia
start_pt(z, G::StartingPtPrior, P)
```

Compute a new starting point from the white noise `z` for a given posterior distribution obtained from combining the prior `G` and the likelihood encoded by the object `P`.

```julia
start_pt(z, G::StartingPtPrior)
```

Compute a new starting point from the white noise `z` for a given prior distribution `G`.

```julia
start_pt(z, G::GsnStartingPt, P)
```

Compute a new starting point from the white noise `z` for a given posterior distribution obtained from combining the prior `G` and the likelihood encoded by the object `P`.

```julia
start_pt(z, G::GsnStartingPt)
```

Compute a new starting point from the white noise `z` for a given prior distribution `G`.

```julia
start_pt(z, G::KnownStartingPt, P)
```

Return the known starting point.

```julia
start_pt(G::KnownStartingPt, P)
```

Return the known starting point.
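For a known starting point the two operations are trivially inverse to one another. A quick check (a sketch, passing `nothing` for `P` on the assumption that the `KnownStartingPt` methods above do not use it):

```julia
using ObservationSchemes

x0_prior = KnownStartingPt([1.0, 2.0])

z = inv_start_pt([1.0, 2.0], x0_prior, nothing)  # convention: returns y itself
start_pt(z, x0_prior, nothing)                   # → [1.0, 2.0]
```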
To see how to define your own priors over starting points, see the how-to guide on Defining custom priors over starting points.