Commit 1c6fad9

Author: Aoife
Merge branch 'add_notebook_dl' of github.com:TuringLang/docs into add_notebook_dl
2 parents 64c62a4 + 708d22b

7 files changed: +18 -6 lines changed

_quarto.yml
Lines changed: 2 additions & 0 deletions

@@ -134,6 +134,8 @@ website:
 - developers/inference/variational-inference/index.qmd
 - developers/inference/implementing-samplers/index.qmd
 
+- faq/index.qmd
+
 back-to-top-navigation: true
 repo-url: https://github.com/TuringLang/docs
 repo-actions: [edit, issue]

core-functionality/index.qmd
Lines changed: 1 addition & 0 deletions

@@ -52,6 +52,7 @@ These respectively correspond to likelihood and prior terms.
 
 It is easier to start by explaining when a variable is treated as an observed value.
 This can happen in one of two ways:
+
 - The variable is passed as one of the arguments to the model function; or
 - The value of the variable in the model is explicitly conditioned or fixed.
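For reference, a minimal sketch of the two ways listed in this hunk; the model names and the value `1.5` are hypothetical, not from the docs being edited:

```julia
using Turing

# Way 1: the variable is an argument of the model function
@model function demo_arg(x)
    m ~ Normal(0, 1)
    x ~ Normal(m, 1)   # x is observed because a value was passed in
end
model1 = demo_arg(1.5)

# Way 2: the variable is left free, then conditioned explicitly
@model function demo_free()
    m ~ Normal(0, 1)
    x ~ Normal(m, 1)
end
model2 = demo_free() | (; x = 1.5)   # `condition`; `fix` behaves analogously
```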

developers/inference/abstractmcmc-turing/index.qmd
Lines changed: 1 addition & 1 deletion

@@ -262,7 +262,7 @@ $$
 
 with $\theta_{\text{prop}}$ a sample from the proposal and $x_{\text{obs}}$ the observed data.
 
-This begs the question: how can these functions access model information during sampling? Recall that the model is stored as an instance `m` of `Model`. One of the attributes of `m` is the model evaluation function `m.f`, which is built by compiling the `@model` macro. Executing `f` runs the tilde statements of the model in order, and adds model information to the sampler (the instance of `Sampler` that stores information about the ongoing sampling process) at each step (see [here](https://turinglang.org/dev/docs/for-developers/compiler) for more information about how the `@model` macro is compiled). The DynamicPPL functions `assume` and `observe` determine what kind of information to add to the sampler for every tilde statement.
+This begs the question: how can these functions access model information during sampling? Recall that the model is stored as an instance `m` of `Model`. One of the attributes of `m` is the model evaluation function `m.f`, which is built by compiling the `@model` macro. Executing `f` runs the tilde statements of the model in order, and adds model information to the sampler (the instance of `Sampler` that stores information about the ongoing sampling process) at each step (see [here]({{<meta dev-model-manual>}}) for more information about how the `@model` macro is compiled). The DynamicPPL functions `assume` and `observe` determine what kind of information to add to the sampler for every tilde statement.
 
 Consider an instance `m` of `Model` and a sampler `spl`, with associated `VarInfo` `vi = spl.state.vi`. At some point during the sampling process, an AbstractMCMC function such as `step!` calls `m(vi, ...)`, which calls the model evaluation function `m.f(vi, ...)`.
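As a concrete illustration of the `assume`/`observe` split described in this hunk, a minimal sketch — the model `demo` is hypothetical, and `VarInfo`/`getlogp` are assumed from DynamicPPL's public API:

```julia
using Turing, DynamicPPL

@model function demo(x)
    m ~ Normal(0, 1)   # no data attached: handled by `assume` (a draw is recorded)
    x ~ Normal(m, 1)   # data passed in: handled by `observe` (log-likelihood added)
end

m = demo(1.5)
vi = DynamicPPL.VarInfo(m)   # one run of the evaluation function m.f populates vi
DynamicPPL.getlogp(vi)       # log joint density accumulated during that evaluation
```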

faq/index.qmd
Lines changed: 9 additions & 2 deletions

@@ -41,6 +41,7 @@ sample(m2, NUTS(), 100) # This doesn't work!
 The key insight is that `filldist` creates a single distribution (not N independent distributions), which is why you cannot condition on individual elements. The distinction is not just about what appears on the LHS of `~`, but whether you're dealing with separate distributions (`.~` with univariate) or a single distribution over multiple values (`~` with multivariate or `filldist`).
 
 To understand more about how Turing determines whether a variable is treated as random or observed, see:
+
 - [Core Functionality]({{< meta core-functionality >}}) - basic explanation of the `~` notation and conditioning
 
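A minimal sketch of the `filldist` distinction discussed in this hunk; the model names `m1`/`m2` are hypothetical:

```julia
using Turing, DynamicPPL

@model function m1()
    x = Vector{Float64}(undef, 3)
    x .~ Normal()               # three separate variables: x[1], x[2], x[3]
end

@model function m2()
    x ~ filldist(Normal(), 3)   # one distribution over the whole vector x
end

# Conditioning on a single element only makes sense for m1, where each
# x[i] is its own variable; in m2 the only variable is the full vector x.
c1 = condition(m1(), Dict(@varname(x[1]) => 0.5))
```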

@@ -50,10 +51,11 @@ Yes, but with important caveats! There are two types of parallelism to consider:
 
 ### 1. Parallel Sampling (Multiple Chains)
 Turing.jl fully supports sampling multiple chains in parallel:
+
 - **Multithreaded sampling**: Use `MCMCThreads()` to run one chain per thread
 - **Distributed sampling**: Use `MCMCDistributed()` for distributed computing
 
-See the [Core Functionality guide]({{< meta core-functionality >}}/#sampling-multiple-chains) for examples.
+See the [Core Functionality guide]({{<meta core-functionality>}}#sampling-multiple-chains) for examples.
 
 ### 2. Threading Within Models
 Using threads inside your model (e.g., `Threads.@threads`) requires more care:
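Before the threading caveats continue in the next hunk, here is how the multi-chain API named in part 1 is used in practice — a minimal sketch with a hypothetical `coin` model and data:

```julia
using Turing

@model function coin(y)
    p ~ Beta(1, 1)
    y .~ Bernoulli(p)
end

# Four chains, one per thread; start Julia with enough threads, e.g. `julia -t 4`.
chains = sample(coin([1, 0, 1, 1]), NUTS(), MCMCThreads(), 1000, 4)
```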
@@ -69,6 +71,7 @@ end
 ```
 
 **Important limitations:**
+
 - **Observe statements**: Generally safe to use in threaded loops
 - **Assume statements** (sampling statements): Often crash unpredictably or produce incorrect results
 - **AD backend compatibility**: Many AD backends don't support threading. Check the [multithreaded column in ADTests](https://turinglang.org/ADTests/) for compatibility
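A sketch of the "observe statements only" pattern that these limitations permit; the model is hypothetical, and the assumed variable `m` is deliberately drawn outside the threaded loop:

```julia
using Turing

@model function threaded_obs(x)
    m ~ Normal(0, 1)                  # assume statement: keep it outside the loop
    Threads.@threads for i in eachindex(x)
        x[i] ~ Normal(m, 1)           # observe statements: generally thread-safe
    end
end
```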
@@ -78,12 +81,14 @@ For safe parallelism within models, consider vectorized operations instead of ex
 ## How do I check the type stability of my Turing model?
 
 Type stability is crucial for performance. Check out:
+
 - [Performance Tips]({{< meta usage-performance-tips >}}) - includes specific advice on type stability
 - Use `DynamicPPL.DebugUtils.model_warntype` to check type stability of your model
 
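For example, `model_warntype` (named above) is called on an instantiated model; the `demo` model here is hypothetical:

```julia
using Turing, DynamicPPL

@model function demo(x)
    m ~ Normal(0, 1)
    x ~ Normal(m, 1)
end

# Prints the inferred types of the model evaluation function, à la @code_warntype:
DynamicPPL.DebugUtils.model_warntype(demo(1.0))
```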
 ## How do I debug my Turing model?
 
 For debugging both statistical and syntactical issues:
+
 - [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) - common errors and their solutions
 - For more advanced debugging, DynamicPPL provides [the `DynamicPPL.DebugUtils` module](https://turinglang.org/DynamicPPL.jl/stable/api/#Debugging-Utilities) for inspecting model internals
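One entry point from that module, assuming the `check_model` utility documented in the linked DebugUtils API (the `demo` model is the hypothetical one from above):

```julia
using Turing, DynamicPPL

@model function demo(x)
    m ~ Normal(0, 1)
    x ~ Normal(m, 1)
end

# Evaluates the model once and reports common issues it can detect:
DynamicPPL.DebugUtils.check_model(demo(1.0))
```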

@@ -125,16 +130,18 @@ end
 ## Which automatic differentiation backend should I use?
 
 The choice of AD backend can significantly impact performance. See:
+
 - [Automatic Differentiation Guide]({{< meta usage-automatic-differentiation >}}) - comprehensive comparison of ForwardDiff, Mooncake, ReverseDiff, and other backends
 - [Performance Tips]({{< meta usage-performance-tips >}}#choose-your-ad-backend) - quick guide on choosing backends
 - [AD Backend Benchmarks](https://turinglang.org/ADTests/) - performance comparisons across various models
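In practice the backend is selected per sampler via the `adtype` keyword; a sketch using the hypothetical `demo` model from earlier — the comments state the usual rule of thumb, not a guarantee:

```julia
using Turing

@model function demo(x)
    m ~ Normal(0, 1)
    x ~ Normal(m, 1)
end

model = demo(1.0)
sample(model, NUTS(; adtype=AutoForwardDiff()), 1000)                # default; good for small models
sample(model, NUTS(; adtype=AutoReverseDiff(; compile=true)), 1000)  # often better with many parameters
```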

132138
## I changed one line of my model and now it's so much slower; why?
133139

134140
Small changes can have big performance impacts. Common culprits include:
141+
135142
- Type instability introduced by the change
136143
- Switching from vectorized to scalar operations (or vice versa)
137144
- Inadvertently causing AD backend incompatibilities
138145
- Breaking assumptions that allowed compiler optimizations
139146

140-
See our [Performance Tips]({{< meta usage-performance-tips >}}) and [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) for debugging performance regressions.
147+
See our [Performance Tips]({{< meta usage-performance-tips >}}) and [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) for debugging performance regressions.

tutorials/bayesian-differential-equations/index.qmd
Lines changed: 3 additions & 1 deletion

@@ -85,6 +85,7 @@ To make the example more realistic, we generate data as random Poisson counts ba
 Poisson-distributed data are common in ecology (for instance, counts of animals detected by a camera trap).
 We'll assume that the rate $\lambda$, which parameterizes the Poisson distribution, is proportional to the underlying animal densities via a constant factor $q = 1.7$.
 
+
 ```{julia}
 sol = solve(prob, Tsit5(); saveat=0.1)
 q = 1.7
@@ -111,11 +112,12 @@ Practically, this helps us to illustrate the results without needing to run over
 Note we also have to take special care with the ODE solver.
 For certain parameter combinations, the numerical solver may predict animal densities that are just barely below zero.
 This causes errors with the Poisson distribution, which needs a non-negative mean $\lambda$.
-To avoid this happening, we tell the solver to aim for small abolute and relative errors (`abstol=1e-6, reltol=1e-6`).
+To avoid this happening, we tell the solver to aim for small absolute and relative errors (`abstol=1e-6, reltol=1e-6`).
 We also add a fudge factor `ϵ = 1e-5` to the predicted data.
 Since `ϵ` is greater than the solver's tolerance, it should overcome any remaining numerical error, making sure all predicted values are positive.
 At the same time, it is so small compared to the data that it should have a negligible effect on inference.
 If this approach doesn't work, there are some more ideas to try [here](https://docs.sciml.ai/DiffEqDocs/stable/basics/faq/#My-ODE-goes-negative-but-should-stay-positive,-what-tools-can-help?).
+In the case of continuous observations (e.g. data derived from modelling chemical reactions), it is sufficient to use a normal distribution with the mean as the data point and an appropriately chosen variance (which can itself also be a parameter with a prior distribution).
 
 ```{julia}
 @model function fitlv(data, prob)
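A sketch of the tolerance-plus-fudge-factor recipe from this hunk, under the tutorial's assumptions that `prob` is the Lotka-Volterra `ODEProblem` and `q = 1.7` is defined earlier:

```julia
using DifferentialEquations, Distributions

# Tight tolerances keep the solution from dipping below zero numerically;
# ϵ nudges any remaining near-zero values strictly positive.
sol = solve(prob, Tsit5(); saveat=0.1, abstol=1e-6, reltol=1e-6)
ϵ = 1e-5
λ = q .* Array(sol) .+ ϵ
odedata = rand.(Poisson.(λ))   # synthetic Poisson counts with mean λ
```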

tutorials/bayesian-logistic-regression/index.qmd
Lines changed: 1 addition & 1 deletion

@@ -155,7 +155,7 @@ chain
 ::: {.callout-warning collapse="true"}
 ## Sampling With Multiple Threads
 The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
-will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.](https://turinglang.org/dev/docs/using-turing/guide/#sampling-multiple-chains)
+will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.]({{<meta core-functionality>}}#sampling-multiple-chains)
 :::
 
 ```{julia}

tutorials/bayesian-poisson-regression/index.qmd
Lines changed: 1 addition & 1 deletion

@@ -175,7 +175,7 @@ chain
 ::: {.callout-warning collapse="true"}
 ## Sampling With Multiple Threads
 The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
-will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.](https://turinglang.org/dev/docs/using-turing/guide/#sampling-multiple-chains)
+will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.]({{<meta core-functionality>}}#sampling-multiple-chains)
 :::
 
 # Viewing the Diagnostics
