developers/inference/abstractmcmc-turing/index.qmd
1 addition & 1 deletion
@@ -262,7 +262,7 @@ $$
with $\theta_{\text{prop}}$ a sample from the proposal and $x_{\text{obs}}$ the observed data.
-This begs the question: how can these functions access model information during sampling? Recall that the model is stored as an instance `m` of `Model`. One of the attributes of `m` is the model evaluation function `m.f`, which is built by compiling the `@model` macro. Executing `f` runs the tilde statements of the model in order, and adds model information to the sampler (the instance of `Sampler` that stores information about the ongoing sampling process) at each step (see [here](https://turinglang.org/dev/docs/for-developers/compiler) for more information about how the `@model` macro is compiled). The DynamicPPL functions `assume` and `observe` determine what kind of information to add to the sampler for every tilde statement.
+This begs the question: how can these functions access model information during sampling? Recall that the model is stored as an instance `m` of `Model`. One of the attributes of `m` is the model evaluation function `m.f`, which is built by compiling the `@model` macro. Executing `f` runs the tilde statements of the model in order, and adds model information to the sampler (the instance of `Sampler` that stores information about the ongoing sampling process) at each step (see [here]({{< meta dev-model-manual >}}) for more information about how the `@model` macro is compiled). The DynamicPPL functions `assume` and `observe` determine what kind of information to add to the sampler for every tilde statement.
Consider an instance `m` of `Model` and a sampler `spl`, with associated `VarInfo` `vi = spl.state.vi`. At some point during the sampling process, an AbstractMCMC function such as `step!` calls `m(vi, ...)`, which calls the model evaluation function `m.f(vi, ...)`.
The key insight is that `filldist` creates a single distribution (not N independent distributions), which is why you cannot condition on individual elements. The distinction is not just about what appears on the LHS of `~`, but whether you're dealing with separate distributions (`.~` with univariate) or a single distribution over multiple values (`~` with multivariate or `filldist`).
To understand more about how Turing determines whether a variable is treated as random or observed, see:
+
- [Core Functionality]({{< meta core-functionality >}}) - basic explanation of the `~` notation and conditioning
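The `filldist` distinction discussed above can be sketched as follows. This is a minimal illustration, not from the original docs; the model names are hypothetical:

```julia
using Turing

# N independent univariate distributions: each x[i] is its own random
# variable, so individual elements can be conditioned on separately.
@model function independent_normals(N)
    x = Vector{Float64}(undef, N)
    x .~ Normal(0, 1)
    return x
end

# One multivariate distribution over N values: y is a single random
# variable, so you cannot condition on y[1] alone.
@model function filled_normals(N)
    y ~ filldist(Normal(0, 1), N)
    return y
end
```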
@@ -50,10 +51,11 @@ Yes, but with important caveats! There are two types of parallelism to consider:
### 1. Parallel Sampling (Multiple Chains)
Turing.jl fully supports sampling multiple chains in parallel:
+
- **Multithreaded sampling**: Use `MCMCThreads()` to run one chain per thread
- **Distributed sampling**: Use `MCMCDistributed()` for distributed computing
-See the [Core Functionality guide]({{< meta core-functionality >}}/#sampling-multiple-chains) for examples.
+See the [Core Functionality guide]({{< meta core-functionality >}}#sampling-multiple-chains) for examples.
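The two parallel-sampling options above share one call shape. A sketch with a hypothetical model (multithreaded sampling requires Julia started with multiple threads, e.g. `julia --threads 4`):

```julia
using Turing

# A hypothetical model for illustration.
@model function coinflip(y)
    p ~ Beta(1, 1)
    y .~ Bernoulli(p)
end

model = coinflip([1, 0, 1, 1])

# Multithreaded: 4 chains of 1000 samples, one chain per thread.
chains = sample(model, NUTS(), MCMCThreads(), 1000, 4)

# Distributed alternative: same shape, but across worker processes
# (requires `using Distributed; addprocs(...)` and Turing on each worker).
# chains = sample(model, NUTS(), MCMCDistributed(), 1000, 4)
```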
### 2. Threading Within Models
Using threads inside your model (e.g., `Threads.@threads`) requires more care:
@@ -69,6 +71,7 @@ end
```
**Important limitations:**
+
- **Observe statements**: Generally safe to use in threaded loops
- **Assume statements** (sampling statements): Often crash unpredictably or produce incorrect results
- **AD backend compatibility**: Many AD backends don't support threading. Check the [multithreaded column in ADTests](https://turinglang.org/ADTests/) for compatibility
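The limitations above suggest keeping assume statements outside any threaded loop and threading only over observe statements. A hedged sketch with a hypothetical model (verify thread-safety with your chosen AD backend before relying on this):

```julia
using Turing

@model function threaded_likelihood(y)
    μ ~ Normal(0, 1)                       # assume: do NOT thread this
    σ ~ truncated(Normal(0, 1); lower=0)   # assume: do NOT thread this
    Threads.@threads for i in eachindex(y)
        y[i] ~ Normal(μ, σ)                # observe: generally thread-safe
    end
end
```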
@@ -78,12 +81,14 @@ For safe parallelism within models, consider vectorized operations instead of ex
## How do I check the type stability of my Turing model?
Type stability is crucial for performance. Check out:
+
- [Performance Tips]({{< meta usage-performance-tips >}}) - includes specific advice on type stability
- Use `DynamicPPL.DebugUtils.model_warntype` to check type stability of your model
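The `model_warntype` check mentioned above can be sketched like this, with a hypothetical model:

```julia
using Turing
using DynamicPPL

# A hypothetical model whose type stability we want to inspect.
@model function demo(x)
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

# Prints the compiled model evaluation function with inferred types,
# similar to @code_warntype; Any-typed values indicate instability.
DynamicPPL.DebugUtils.model_warntype(demo(1.0))
```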
## How do I debug my Turing model?
For debugging both statistical and syntactical issues:
+
- [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) - common errors and their solutions
- For more advanced debugging, DynamicPPL provides [the `DynamicPPL.DebugUtils` module](https://turinglang.org/DynamicPPL.jl/stable/api/#Debugging-Utilities) for inspecting model internals
@@ -125,16 +130,18 @@ end
## Which automatic differentiation backend should I use?
The choice of AD backend can significantly impact performance. See:
+
- [Automatic Differentiation Guide]({{< meta usage-automatic-differentiation >}}) - comprehensive comparison of ForwardDiff, Mooncake, ReverseDiff, and other backends
- [Performance Tips]({{< meta usage-performance-tips >}}#choose-your-ad-backend) - quick guide on choosing backends
- [AD Backend Benchmarks](https://turinglang.org/ADTests/) - performance comparisons across various models
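Switching backends is a one-keyword change on the sampler. A sketch with a hypothetical model, assuming ReverseDiff as the chosen backend:

```julia
using Turing
import ReverseDiff  # the chosen backend's package must be loaded

@model function demo(x)
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

# The AD backend is selected per sampler via the adtype keyword;
# ForwardDiff is used if none is given.
chain = sample(demo(1.0), NUTS(; adtype=AutoReverseDiff()), 1000)
```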
## I changed one line of my model and now it's so much slower; why?
Small changes can have big performance impacts. Common culprits include:
+
- Type instability introduced by the change
136
143
- Switching from vectorized to scalar operations (or vice versa)
137
144
- Inadvertently causing AD backend incompatibilities
138
145
- Breaking assumptions that allowed compiler optimizations
-See our [Performance Tips]({{< meta usage-performance-tips >}}) and [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) for debugging performance regressions.
+See our [Performance Tips]({{< meta usage-performance-tips >}}) and [Troubleshooting Guide]({{< meta usage-troubleshooting >}}) for debugging performance regressions.
tutorials/bayesian-differential-equations/index.qmd
3 additions & 1 deletion
@@ -85,6 +85,7 @@ To make the example more realistic, we generate data as random Poisson counts ba
Poisson-distributed data are common in ecology (for instance, counts of animals detected by a camera trap).
We'll assume that the rate $\lambda$, which parameterizes the Poisson distribution, is proportional to the underlying animal densities via a constant factor $q = 1.7$.
+
```{julia}
sol = solve(prob, Tsit5(); saveat=0.1)
q = 1.7
@@ -111,11 +112,12 @@ Practically, this helps us to illustrate the results without needing to run over
Note we also have to take special care with the ODE solver.
For certain parameter combinations, the numerical solver may predict animal densities that are just barely below zero.
This causes errors with the Poisson distribution, which needs a non-negative mean $\lambda$.
-To avoid this happening, we tell the solver to aim for small abolute and relative errors (`abstol=1e-6, reltol=1e-6`).
+To avoid this happening, we tell the solver to aim for small absolute and relative errors (`abstol=1e-6, reltol=1e-6`).
We also add a fudge factor `ϵ = 1e-5` to the predicted data.
Since `ϵ` is greater than the solver's tolerance, it should overcome any remaining numerical error, making sure all predicted values are positive.
At the same time, it is so small compared to the data that it should have a negligible effect on inference.
If this approach doesn't work, there are some more ideas to try [here](https://docs.sciml.ai/DiffEqDocs/stable/basics/faq/#My-ODE-goes-negative-but-should-stay-positive,-what-tools-can-help?).
+In the case of continuous observations (e.g. data derived from modelling chemical reactions), it is sufficient to use a normal distribution with the mean as the data point and an appropriately chosen variance (which can itself also be a parameter with a prior distribution).
tutorials/bayesian-logistic-regression/index.qmd
1 addition & 1 deletion
@@ -155,7 +155,7 @@ chain
::: {.callout-warning collapse="true"}
## Sampling With Multiple Threads
The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
-will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.](https://turinglang.org/dev/docs/using-turing/guide/#sampling-multiple-chains)
+will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.]({{< meta core-functionality >}}#sampling-multiple-chains)
tutorials/bayesian-poisson-regression/index.qmd
1 addition & 1 deletion
@@ -175,7 +175,7 @@ chain
::: {.callout-warning collapse="true"}
## Sampling With Multiple Threads
The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
-will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.](https://turinglang.org/dev/docs/using-turing/guide/#sampling-multiple-chains)
+will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.]({{< meta core-functionality >}}#sampling-multiple-chains)