From 02170ed3f06bb3e72856d7b5584dca4e9ad28cd4 Mon Sep 17 00:00:00 2001
From: David Widmann
Date: Fri, 25 Feb 2022 08:06:16 +0100
Subject: [PATCH] Fix advanced usage guide (#1785)

---
 docs/src/using-turing/advanced.md | 53 ++++++++++++------------------
 1 file changed, 21 insertions(+), 32 deletions(-)

diff --git a/docs/src/using-turing/advanced.md b/docs/src/using-turing/advanced.md
index b5c0b5196..49ada472d 100644
--- a/docs/src/using-turing/advanced.md
+++ b/docs/src/using-turing/advanced.md
@@ -20,8 +20,7 @@ First, define a type of the distribution, as a subtype of a corresponding distri
 
 
 ```julia
-struct CustomUniform <: ContinuousUnivariateDistribution
-end
+struct CustomUniform <: ContinuousUnivariateDistribution end
 ```
 
 ### 2. Implement Sampling and Evaluation of the log-pdf
@@ -31,8 +30,11 @@ Second, define `rand` and `logpdf`, which will be used to run the model.
 
 
 ```julia
-Distributions.rand(rng::AbstractRNG, d::CustomUniform) = rand(rng) # sample in [0, 1]
-Distributions.logpdf(d::CustomUniform, x::Real) = zero(x) # p(x) = 1 → logp(x) = 0
+# sample in [0, 1]
+Distributions.rand(rng::AbstractRNG, d::CustomUniform) = rand(rng)
+
+# p(x) = 1 → logp(x) = 0
+Distributions.logpdf(d::CustomUniform, x::Real) = zero(x)
 ```
 
 ### 3. Define Helper Functions
@@ -72,20 +74,10 @@ and then we can call `rand` and `logpdf` as usual, where
 
 To read more about Bijectors.jl, check out [the project README](https://github.com/TuringLang/Bijectors.jl).
 
-#### 3.2 Vectorization Support
-
-
-The vectorization syntax follows `rv ~ [distribution]`, which requires `rand` and `logpdf` to be called on multiple data points at once. An appropriate implementation for `Flat` is shown below.
-
-
-```julia
-Distributions.logpdf(d::Flat, x::AbstractVector{<:Real}) = zero(x)
-```
-
 ## Update the accumulated log probability in the model definition
 
 Turing accumulates log probabilities internally in an internal data structure that is accessible through
-the internal variable `_varinfo` inside of the model definition (see below for more details about model internals).
+the internal variable `__varinfo__` inside of the model definition (see below for more details about model internals).
 However, since users should not have to deal with internal data structures, a macro `Turing.@addlogprob!` is
 provided that increases the accumulated log probability. For instance, this allows you to
 [include arbitrary terms in the likelihood](https://github.com/TuringLang/Turing.jl/issues/1332)
@@ -123,11 +115,11 @@ end
 Note that `@addlogprob!` always increases the accumulated log probability, regardless of the provided
 sampling context. For instance, if you do not want to apply `Turing.@addlogprob!` when evaluating the
 prior of your model but only when computing the log likelihood and the log joint probability, then you
-should [check the type of the internal variable `_context`](https://github.com/TuringLang/DynamicPPL.jl/issues/154)
+should [check the type of the internal variable `__context__`](https://github.com/TuringLang/DynamicPPL.jl/issues/154)
 such as
 
 ```julia
-if !isa(_context, Turing.PriorContext)
+if DynamicPPL.leafcontext(__context__) !== Turing.PriorContext()
     Turing.@addlogprob! myloglikelihood(x, μ)
 end
 ```
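+
+For illustration, a complete model using this check might look as follows.
+This is only a sketch: `myloglikelihood` is the hypothetical helper from the snippet above, here taken to be the log likelihood of the data under a unit-variance normal.
+
+```julia
+using Turing
+using DynamicPPL
+using Distributions
+
+# Hypothetical helper: log likelihood of the data x under Normal(μ, 1).
+myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x)
+
+@model function demo(x)
+    μ ~ Normal()
+    # Add the likelihood term only when not just evaluating the prior.
+    if DynamicPPL.leafcontext(__context__) !== Turing.PriorContext()
+        Turing.@addlogprob! myloglikelihood(x, μ)
+    end
+end
+
+model = demo([1.5, 2.0])
+```
+
+With this definition, the extra term contributes to the log likelihood and the log joint probability but not to the log prior.
+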
@@ -135,7 +127,9 @@ end
 
 ## Model Internals
 
-The `@model` macro accepts a function definition and rewrites it such that call of the function generates a `Model` struct for use by the sampler. Models can be constructed by hand without the use of a macro. Taking the `gdemo` model as an example, the macro-based definition
+The `@model` macro accepts a function definition and rewrites it such that a call of the function generates a `Model` struct for use by the sampler.
+Models can be constructed by hand without the use of a macro.
+Taking the `gdemo` model as an example, the macro-based definition
 
 ```julia
 using Turing
@@ -152,41 +146,36 @@ end
 model = gdemo([1.5, 2.0])
 ```
 
-is equivalent to the macro-free version
+can also be implemented (albeit a bit less generally) with the macro-free version
 
 ```julia
 using Turing
 
 # Create the model function.
-function modelf(rng, model, varinfo, sampler, context, x)
-    # Assume s has an InverseGamma distribution.
-    s² = Turing.DynamicPPL.tilde_assume(
-        rng,
+function gdemo(model, varinfo, context, x)
+    # Assume s² has an InverseGamma distribution.
+    s², varinfo = DynamicPPL.tilde_assume!!(
         context,
-        sampler,
         InverseGamma(2, 3),
-        Turing.@varname(s),
-        (),
+        Turing.@varname(s²),
         varinfo,
     )
 
     # Assume m has a Normal distribution.
-    m = Turing.DynamicPPL.tilde_assume(
-        rng,
+    m, varinfo = DynamicPPL.tilde_assume!!(
         context,
-        sampler,
         Normal(0, sqrt(s²)),
         Turing.@varname(m),
-        (),
         varinfo,
     )
 
     # Observe each value of x[i] according to a Normal distribution.
-    Turing.DynamicPPL.dot_tilde_observe(context, sampler, Normal(m, sqrt(s²)), x, varinfo)
+    DynamicPPL.dot_tilde_observe!!(context, Normal(m, sqrt(s²)), x, Turing.@varname(x), varinfo)
 end
+gdemo(x) = Turing.Model(:gdemo, gdemo, (; x))
 
 # Instantiate a Model object with our data variables.
-model = Turing.Model(modelf, (x = [1.5, 2.0],))
+model = gdemo([1.5, 2.0])
 ```
 
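+Assuming everything above is wired up correctly, the resulting `model` should behave like its macro-based counterpart.
+For example, one should be able to sample from it with the usual interface:
+
+```julia
+chain = sample(model, NUTS(), 1000)
+```
+
 ## Task Copying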