Commit 71bbd1c

docs: improve tutorial (#907)
1 parent: 01aa04a

File tree: DifferentiationInterface/docs/src/tutorials (1 file changed, +11 / -4 lines)
DifferentiationInterface/docs/src/tutorials/basic.md

Lines changed: 11 additions & 4 deletions
@@ -31,7 +31,7 @@ backend = AutoForwardDiff()
 ```
 
 !!! tip
-
+
     To avoid name conflicts, load AD packages with `import` instead of `using`.
     Indeed, most AD packages also export operators like `gradient` and `jacobian`, but you only want to use the ones from DifferentiationInterface.jl.
 
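The tip above boils down to a loading pattern like the following. This is an illustrative sketch rather than part of the commit; `f` is a placeholder function and ForwardDiff stands in for any AD package.

```julia
# Sketch of the `import` vs `using` convention recommended in the tip.
using DifferentiationInterface         # exports `gradient`, `jacobian`, and backend types like `AutoForwardDiff`
import ForwardDiff                     # `import` keeps ForwardDiff.gradient behind its own namespace

f(x) = sum(abs2, x)                    # placeholder function
backend = AutoForwardDiff()
gradient(f, backend, [1.0, 2.0, 3.0])  # unambiguously the DifferentiationInterface operator
```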
@@ -81,19 +81,26 @@ These objects can be reused between gradient computations, even on different inputs.
 We abstract away the preparation step behind a backend-agnostic syntax:
 
 ```@example tuto_basic
-prep = prepare_gradient(f, backend, zero(x))
+using Random
+typical_x = randn!(similar(x))
+prep = prepare_gradient(f, backend, typical_x)
 ```
 
 You don't need to know what this object is, you just need to pass it to the gradient operator.
 Note that preparation does not depend on the actual components of the vector `x`, just on its type and size.
-You can thus reuse the `prep` for different values of the input.
+
+You can then reuse the `prep` for different values of the input.
 
 ```@example tuto_basic
 grad = similar(x)
 gradient!(f, grad, prep, backend, x)
 grad # has been mutated
 ```
 
+!!! warning
+    Reusing the `prep` object on inputs of a different type will throw an error.
+    Reusing the `prep` object on inputs of a different size may either work, fail silently or fail loudly, possibly even crash your REPL. Do not try it.
+
 Preparation makes the gradient computation much faster, and (in this case) allocation-free.
 
 ```@example tuto_basic
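Assembled from the hunk above, the prepared-gradient workflow now reads roughly as follows; `f` and `x` are placeholders rather than values taken from the commit.

```julia
# Sketch of the preparation workflow after this change (placeholder `f` and `x`).
using DifferentiationInterface, Random
import ForwardDiff

f(x) = sum(abs2, x)                         # placeholder function
backend = AutoForwardDiff()
x = collect(1.0:5.0)                        # placeholder input

typical_x = randn!(similar(x))              # same type and size as `x`, arbitrary values
prep = prepare_gradient(f, backend, typical_x)

grad = similar(x)
gradient!(f, grad, prep, backend, x)        # reuse `prep` on the real input
gradient!(f, grad, prep, backend, 2 .* x)   # and on other inputs of the same type and size
```

As the new warning states, `prep` should only be reused on inputs matching the type and size it was prepared with.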
@@ -122,7 +129,7 @@ gradient(f, backend2, x)
 And you can run the same benchmarks to see what you gained (although such a small input may not be realistic):
 
 ```@example tuto_basic
-prep2 = prepare_gradient(f, backend2, zero(x))
+prep2 = prepare_gradient(f, backend2, randn!(similar(x)))
 
 @benchmark gradient!($f, $grad, $prep2, $backend2, $x)
 ```
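For context, the updated benchmark comparison would be run roughly like this; BenchmarkTools and `backend2` are assumed to be set up earlier in the tutorial, as the hunk header suggests.

```julia
# Sketch of the benchmark comparison (assumes BenchmarkTools is loaded and
# `backend2` is the tutorial's second backend).
using BenchmarkTools, Random

prep2 = prepare_gradient(f, backend2, randn!(similar(x)))

@benchmark gradient($f, $backend2, $x)                   # unprepared call
@benchmark gradient!($f, $grad, $prep2, $backend2, $x)   # prepared, in-place call
```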
