API: Turing.Inference
Turing.Inference.CSMC — Type
CSMC(...)
Equivalent to PG.
Turing.Inference.ESS — Type
ESS
Elliptical slice sampling algorithm.
Examples
julia> @model function gdemo(x)
m ~ Normal()
x ~ Normal(m, 0.5)
end
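A minimal usage sketch (assuming the model above is conditioned on a single observation, e.g. x = 1.0; the call is illustrative):
chain = sample(gdemo(1.0), ESS(), 1_000)   # elliptical slice sampling
mean(chain)                                # posterior mean of m, roughly as tabulated below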
│ Row │ parameters │ mean     │
│     │ Symbol     │ Float64  │
├─────┼────────────┼──────────┤
│ 1   │ m          │ 0.824853 │
Turing.Inference.Emcee — Type
Emcee(n_walkers::Int, stretch_length=2.0)
Affine-invariant ensemble sampling algorithm.
Reference
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125(925), 306. https://doi.org/10.1086/670067
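A usage sketch (assuming the gdemo model from the ESS example above; the walker count and chain length are illustrative):
chain = sample(gdemo(1.0), Emcee(10, 2.0), 1_000)   # 10 ensemble walkers, stretch length 2.0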
Turing.Inference.ExternalSampler — Type
ExternalSampler{S<:AbstractSampler,AD<:ADTypes.AbstractADType,Unconstrained}
Represents a sampler that is not an implementation of InferenceAlgorithm.
The Unconstrained type parameter indicates whether the sampler requires unconstrained space.
Fields
sampler::AbstractMCMC.AbstractSampler: the sampler to wrap
adtype::ADTypes.AbstractADType: the automatic differentiation (AD) backend to use
Turing.Inference.Gibbs — Type
Gibbs(algs...)
Compositional MCMC interface. Gibbs sampling combines one or more sampling algorithms, each of which samples from a different set of variables in a model.
Example:
@model function gibbs_example(x)
v1 ~ Normal(0,1)
v2 ~ Categorical(5)
end
# Use PG for a 'v2' variable, and use HMC for the 'v1' variable.
# Note that v2 is discrete, so the PG sampler is more appropriate
# than is HMC.
alg = Gibbs(HMC(0.2, 3, :v1), PG(20, :v2))
One can also pass the number of iterations for each Gibbs component using the following syntax:
alg = Gibbs((HMC(0.2, 3, :v1), n_hmc), (PG(20, :v2), n_pg))
where n_hmc and n_pg are the number of HMC and PG iterations for each Gibbs iteration.
Tips:
HMC and NUTS are fast samplers and can throw off particle-based methods like Particle Gibbs. You can increase the effectiveness of particle sampling by including more particles in the particle sampler.
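A usage sketch with the gibbs_example model above (the observation and chain length are illustrative):
chain = sample(gibbs_example(2.0), Gibbs(HMC(0.2, 3, :v1), PG(20, :v2)), 1_000)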
Turing.Inference.GibbsConditional — Type
GibbsConditional(sym, conditional)
A "pseudo-sampler" to manually provide analytical Gibbs conditionals to Gibbs. GibbsConditional(:x, cond) will sample the variable x according to the conditional cond, which must therefore be a function from a NamedTuple of the conditioned variables to a Distribution.
The NamedTuple that is passed in contains all random variables from the model in an unspecified order, taken from the VarInfo object over which the model is run. Scalars and vectors are stored in their respective shapes. The tuple also contains the value of the conditioned variable itself, which can be useful, but using it creates something that is not a Gibbs sampler anymore (see here).
Examples
α_0 = 2.0
θ_0 = inv(3.0)
x = [1.5, 2.0]
N = length(x)
m = inverse_gdemo(x)
sample(m, Gibbs(GibbsConditional(:λ, cond_λ), GibbsConditional(:m, cond_m)), 10)
Turing.Inference.GibbsState — Type
GibbsState{V<:VarInfo, S<:Tuple{Vararg{Sampler}}}
Stores a VarInfo for use in sampling, and a Tuple of Samplers that the Gibbs sampler iterates through for each step!.
Turing.Inference.HMC — Type
HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff())
Hamiltonian Monte Carlo sampler with static trajectory.
Arguments
ϵ: The leapfrog step size to use.
n_leapfrog: The number of leapfrog steps to use.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Usage
HMC(0.05, 10)
Tips
If you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.
# Original step size
sample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)
# Reduced step size
sample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)
Turing.Inference.HMCDA — Type
HMCDA(
n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0,
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.
Usage
HMCDA(200, 0.65, 0.3)
Arguments
n_adapts: Number of samples to use for adaptation.
δ: Target acceptance rate. 65% is often recommended.
λ: Target leapfrog length.
ϵ: Initial step size; 0 means automatically search by Turing.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Reference
For more information, please view the following paper (arXiv link):
Hoffman, Matthew D., and Andrew Gelman. "The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.
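A usage sketch, assuming the two-observation gdemo model used in the HMC example above (the chain length is illustrative):
chain = sample(gdemo([1.5, 2]), HMCDA(200, 0.65, 0.3), 1_000)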
Turing.Inference.IS — Type
IS()
Importance sampling algorithm.
Usage:
IS()
Example:
# Define a simple Normal model with unknown mean and variance.
@model function gdemo(x)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
return s², m
end
sample(gdemo([1.5, 2]), IS(), 1000)
Turing.Inference.MH — Method
MH(space...)
Construct a Metropolis-Hastings algorithm.
The arguments space can be:
- Blank (i.e. MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.
- A set of one or more symbols to sample with MH in conjunction with Gibbs, i.e. Gibbs(MH(:m), PG(10, :s)).
- An iterable of pairs or tuples mapping a Symbol to an AdvancedMH.Proposal, Distribution, or Function that returns a conditional proposal distribution.
- A covariance matrix to use for mean-zero multivariate normal proposals.
Examples
The default MH will draw proposal samples from the prior distribution using AdvancedMH.StaticProposal.
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
),
1_000
)
mean(chain)
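A minimal sketch of the default call, where the prior is used as the proposal (the data values and chain length are illustrative):
chain = sample(gdemo(1.5, 2.0), MH(), 1_000)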
Turing.Inference.MHLogDensityFunction — Type
MHLogDensityFunction
A log density function for the MH sampler.
This variant uses the set_namedtuple! function to update the VarInfo.
Turing.Inference.NUTS — Type
NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff())
No-U-Turn Sampler (NUTS).
Usage:
NUTS()            # Use the default NUTS configuration.
NUTS(1000, 0.65)  # Use 1000 adaptation steps and a target acceptance ratio of 0.65.
Arguments:
n_adapts::Int: The number of samples to use with adaptation.
δ::Float64: Target acceptance rate for dual averaging.
max_depth::Int: Maximum doubling tree depth.
Δ_max::Float64: Maximum divergence during doubling tree.
init_ϵ::Float64: Initial step size; 0 means automatically searching using a heuristic procedure.
adtype::ADTypes.AbstractADType: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
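A usage sketch, assuming the two-observation gdemo model used in the HMC example above (the chain length is illustrative):
chain = sample(gdemo([1.5, 2]), NUTS(1_000, 0.65), 2_000)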
Turing.Inference.PG — Type
PG(n, space...)
PG(n, [resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
PG(n, [resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])
Create a Particle Gibbs sampler of type PG with n particles for the variables in space.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.
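A usage sketch, assuming the two-observation gdemo model used in the HMC example above (the particle count and chain length are illustrative):
chain = sample(gdemo([1.5, 2]), PG(20), 1_000)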
Turing.Inference.PG — Type
struct PG{space, R} <: Turing.Inference.ParticleInference
Particle Gibbs sampler.
Fields
nparticles::Int64: Number of particles.
resampler::Any: Resampling algorithm.
Turing.Inference.PolynomialStepsize — Method
PolynomialStepsize(a[, b=0, γ=0.55])
Create a polynomially decaying stepsize function.
At iteration t, the step size is
\[a (b + t)^{-γ}.\]
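A usage sketch (the constants are illustrative); the resulting stepsize can be passed to the SGLD sampler documented below:
# Step size at iteration t is 0.005 * (10 + t)^(-0.55).
steps = PolynomialStepsize(0.005, 10, 0.55)
alg = SGLD(stepsize = steps)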
Turing.Inference.Prior — Type
Prior()
Algorithm for sampling from the prior.
Turing.Inference.SGHMC — Type
SGHMC{AD,space}
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
Fields
learning_rate::Real
momentum_decay::Real
adtype::Any
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
Turing.Inference.SGHMC — Method
SGHMC(
space::Symbol...;
learning_rate::Real,
momentum_decay::Real,
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Create a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
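A usage sketch, assuming the two-observation gdemo model used in the HMC example above (the hyperparameters are illustrative):
alg = SGHMC(learning_rate = 0.01, momentum_decay = 0.1)
chain = sample(gdemo([1.5, 2]), alg, 1_000)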
Turing.Inference.SGLD — Type
SGLD
Stochastic gradient Langevin dynamics (SGLD) sampler.
Fields
stepsize::Any: Step size function.
adtype::Any
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
Turing.Inference.SGLD — Method
SGLD(
space::Symbol...;
stepsize = PolynomialStepsize(0.01),
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Stochastic gradient Langevin dynamics (SGLD) sampler.
By default, a polynomially decaying stepsize is used.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
See also: PolynomialStepsize
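A usage sketch, assuming the two-observation gdemo model used in the HMC example above (the stepsize and chain length are illustrative):
alg = SGLD(stepsize = PolynomialStepsize(0.0001))
chain = sample(gdemo([1.5, 2]), alg, 10_000)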
Turing.Inference.SMC — Type
SMC(space...)
SMC([resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
SMC([resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])
Create a sequential Monte Carlo sampler of type SMC for the variables in space.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.
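A usage sketch, assuming the two-observation gdemo model used in the HMC example above (the chain length is illustrative):
chain = sample(gdemo([1.5, 2]), SMC(), 1_000)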
Turing.Inference.SMC — Type
struct SMC{space, R} <: Turing.Inference.ParticleInference
Sequential Monte Carlo sampler.
Fields
resampler::Any
StatsAPI.predict — Method
predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)
Execute model conditioned on each sample in chain, and return the resulting Chains.
If include_all is false, the returned Chains will contain only those variables that were sampled, i.e. not already present in chain.
Details
Internally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.
Example
julia> using Turing; Turing.setprogress!(false);
[ Info: [Turing]: progress logging is disabled globally
julia> @model function linear_reg(x, y, σ = 0.1)
y[1] 20.0342 20.1188 20.2135 20.2588 20.4188
y[2] 20.1870 20.3178 20.3839 20.4466 20.5895
julia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));
julia> sum(abs2, ys_test - ys_pred) ≤ 0.1
true
Turing.Inference.dist_val_tuple — Method
dist_val_tuple(spl::Sampler{<:MH}, vi::VarInfo)
Return two NamedTuples.
The first NamedTuple has symbols as keys and distributions as values. The second NamedTuple has model symbols as keys and their stored values as values.
Turing.Inference.externalsampler — Method
externalsampler(sampler::AbstractSampler; adtype=AutoForwardDiff(), unconstrained=true)
Wrap a sampler so it can be used as an inference algorithm.
Arguments
sampler::AbstractSampler: The sampler to wrap.
Keyword Arguments
adtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.
unconstrained::Bool=true: Whether the sampler requires unconstrained space.
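A usage sketch, assuming the AdvancedMH and LinearAlgebra packages are loaded and that the wrapped random-walk sampler matches the model's two-dimensional parameter space (all illustrative):
using AdvancedMH, Distributions, LinearAlgebra
rwmh = AdvancedMH.RWMH(MvNormal(zeros(2), I))                  # external sampler to wrap
chain = sample(gdemo([1.5, 2]), externalsampler(rwmh), 1_000)  # runs in unconstrained space by default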
Turing.Inference.getparams — Method
getparams(model, t)
Return a named tuple of parameters.
Turing.Inference.gibbs_rerun — Method
gibbs_rerun(prev_alg, alg)
Check if the model should be rerun to recompute the log density before sampling with the Gibbs component alg and after sampling from Gibbs component prev_alg.
By default, the function returns true.
Turing.Inference.gibbs_state — Method
gibbs_state(model, sampler, state, varinfo)
Return an updated state, taking into account the variables sampled by other Gibbs components.
Arguments
model: model targeted by the Gibbs sampler.
sampler: the sampler for this Gibbs component.
state: the state of sampler computed in the previous iteration.
varinfo: the variables, including the ones sampled by other Gibbs components.
Turing.Inference.gibbs_varinfo — Method
gibbs_varinfo(model, sampler, state)
Return the variables corresponding to the current state of the Gibbs component sampler.
Turing.Inference.group_varnames_by_symbol — Method
group_varnames_by_symbol(vns)
Group the varnames by their symbol.
Arguments
vns: Iterable of VarName.
Returns
OrderedDict{Symbol, Vector{VarName}}: A dictionary mapping each symbol to a vector of varnames.
Turing.Inference.isgibbscomponent — Method
isgibbscomponent(alg)
Determine whether algorithm alg is allowed as a Gibbs component.
Turing.Inference.mh_accept — Method
mh_accept(logp_current::Real, logp_proposal::Real, log_proposal_ratio::Real)
Decide whether a proposal $x'$ with log probability $\log p(x') =$ logp_proposal and log proposal ratio $\log k(x', x) - \log k(x, x') =$ log_proposal_ratio is accepted in a Metropolis-Hastings algorithm with Markov kernel $k(x_t, x_{t+1})$ and current state $x$ with log probability $\log p(x) =$ logp_current, by evaluating the Metropolis-Hastings acceptance criterion
\[\log U \leq \log p(x') - \log p(x) + \log k(x', x) - \log k(x, x')\]
for a uniform random number $U \in [0, 1)$.
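An illustrative restatement of the criterion (a sketch, not Turing's internal code):
# Accept x' when log(U) ≤ log p(x') - log p(x) + log k(x', x) - log k(x, x').
mh_accept_sketch(logp_current, logp_proposal, log_proposal_ratio) =
    log(rand()) ≤ logp_proposal - logp_current + log_proposal_ratio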
Turing.Inference.recompute_logprob!! — Method
recompute_logprob!!(rng, model, sampler, state)
Recompute the log-probability of the model based on the given state and return the resulting state.
Turing.Inference.requires_unconstrained_space — Method
requires_unconstrained_space(sampler::ExternalSampler)
Return true if the sampler requires unconstrained space, and false otherwise.
Turing.Inference.set_namedtuple! — Method
set_namedtuple!(vi::VarInfo, nt::NamedTuple)
Places the values of a NamedTuple into the relevant places of a VarInfo.
Turing.Inference.transitions_from_chain — Method
transitions_from_chain(
[rng::AbstractRNG,]
model::Model,
chain::MCMCChains.Chains;
julia> [first(t.θ.x) for t in transitions] # extract samples for `x`
2-element Array{Array{Float64,1},1}:
[-2.0844148956440796]
[-1.704630494695469]