|
1 | | -# API |
| 1 | +# API Reference |
2 | 2 |
|
3 | 3 | ```@docs |
4 | 4 | FiniteDiff |
5 | 5 | ``` |
6 | 6 |
|
7 | | -## Derivatives |
| 7 | +FiniteDiff.jl provides fast, non-allocating finite difference calculations with support for sparsity patterns and various array types. The API is organized into several categories: |
8 | 8 |
|
9 | | -```@docs |
10 | | -FiniteDiff.finite_difference_derivative |
11 | | -FiniteDiff.finite_difference_derivative! |
12 | | -FiniteDiff.DerivativeCache |
13 | | -``` |
| 9 | +## Function Categories |
14 | 10 |
|
15 | | -## Gradients |
| 11 | +### [Derivatives](@ref derivatives) |
| 12 | +Single and multi-point derivatives of scalar functions. |
16 | 13 |
|
17 | | -```@docs |
18 | | -FiniteDiff.finite_difference_gradient |
19 | | -FiniteDiff.finite_difference_gradient! |
20 | | -FiniteDiff.GradientCache |
21 | | -``` |
| 14 | +### [Gradients](@ref gradients) |
| 15 | +Gradients of scalar-valued functions with respect to vector inputs. |
22 | 16 |
|
23 | | -Gradients are either a vector->scalar map `f(x)`, or a scalar->vector map `f(fx,x)` if `inplace=Val{true}` and `fx=f(x)` if `inplace=Val{false}`. |
| 17 | +### [Jacobians](@ref jacobians) |
| 18 | +Jacobian matrices of vector-valued functions, including sparse Jacobian support. |
24 | 19 |
|
25 | | -Note that here `fx` is a cached function call of `f`. If you provide `fx`, then |
26 | | -`fx` will be used in the forward differencing method to skip a function call. |
27 | | -It is on you to make sure that you update `cache.fx` every time before |
28 | | -calling `FiniteDiff.finite_difference_gradient!`. If `fx` is an immutable, e.g. a scalar or |
29 | | -a `StaticArray`, `cache.fx` should be updated using `@set` from [Setfield.jl](https://github.com/jw3126/Setfield.jl). |
30 | | -A good use of this is if you have a cache array for the output of `fx` already being used, you can make it alias |
31 | | -into the differencing algorithm here. |
| 20 | +### [Hessians](@ref hessians) |
| 21 | +Hessian matrices of scalar-valued functions. |
32 | 22 |
|
33 | | -## Jacobians |
| 23 | +### [Jacobian-Vector Products](@ref jvp) |
| 24 | +Efficient computation of directional derivatives without forming full Jacobians. |
34 | 25 |
|
35 | | -```@docs |
36 | | -FiniteDiff.finite_difference_jacobian |
37 | | -FiniteDiff.finite_difference_jacobian! |
38 | | -FiniteDiff.JacobianCache |
39 | | -``` |
| 26 | +### [Utilities](@ref utilities) |
| 27 | +Internal utilities and helper functions. |
40 | 28 |
|
41 | | -Jacobians are for functions `f!(fx,x)` when using in-place `finite_difference_jacobian!`, |
42 | | -and `fx = f(x)` when using out-of-place `finite_difference_jacobian`. The out-of-place |
43 | | -jacobian will return a similar type as `jac_prototype` if it is not a `nothing`. For non-square |
44 | | -Jacobians, a cache which specifies the vector `fx` is required. |
| 29 | +## Quick Start |
45 | 30 |
|
46 | | -For sparse differentiation, pass a `colorvec` of matrix colors. `sparsity` should be a sparse |
47 | | -or structured matrix (`Tridiagonal`, `Banded`, etc. according to the ArrayInterfaceCore.jl specs) |
48 | | -to allow for decompression, otherwise the result will be the colorvec compressed Jacobian. |
| 31 | +All functions follow a consistent API pattern: |
49 | 32 |
|
50 | | -## Hessians |
| 33 | +- **Cache-less versions**: `finite_difference_*` - convenient but allocate temporary arrays |
| 34 | +- **In-place versions**: `finite_difference_*!` - efficient, non-allocating when used with caches |
| 35 | +- **Cache constructors**: `*Cache` - pre-allocate work arrays for repeated computations |
51 | 36 |
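The cache-less vs. cached pattern added above can be sketched as follows. This is a minimal illustration, assuming the `finite_difference_gradient`, `GradientCache`, and `finite_difference_gradient!` signatures listed in the removed `@docs` blocks; consult the rendered API reference for the authoritative signatures.

```julia
using FiniteDiff

f(x) = sum(abs2, x)          # scalar-valued test function, ∇f(x) = 2x
x = [1.0, 2.0, 3.0]

# Cache-less: convenient, but allocates a fresh gradient on every call
g1 = FiniteDiff.finite_difference_gradient(f, x)

# Cached: pre-allocate the work arrays once, then reuse them
df = similar(x)
cache = FiniteDiff.GradientCache(df, x)
FiniteDiff.finite_difference_gradient!(df, f, x, cache)  # non-allocating per call
```

The same three-tier pattern (cache-less call, in-place `!` call, `*Cache` constructor) applies to derivatives, Jacobians, and Hessians.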
|
52 | | -```@docs |
53 | | -FiniteDiff.finite_difference_hessian |
54 | | -FiniteDiff.finite_difference_hessian! |
55 | | -FiniteDiff.HessianCache |
56 | | -``` |
| 37 | +## Method Selection |
57 | 38 |
|
58 | | -Hessians are for functions `f(x)` which return a scalar. |
| 39 | +Choose your finite difference method based on accuracy and performance needs: |
59 | 40 |
|
60 | | -## Jacobian-Vector Products (JVP) |
| 41 | +- **Forward differences**: Fastest; `O(h)` truncation error, ~`n` function evaluations per gradient |
| 42 | +- **Central differences**: More accurate, `O(h²)` truncation error, ~`2n` function evaluations |
| 43 | +- **Complex step**: Accurate to machine precision, ~`n` evaluations, but requires a complex-analytic function |
61 | 44 |
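The accuracy trade-offs above can be seen with hand-rolled one-dimensional stencils (in FiniteDiff itself the method is selected via an `fdtype` argument such as `Val(:forward)`, `Val(:central)`, or `Val(:complex)`):

```julia
# Hand-rolled stencils illustrating the three methods on f(x) = sin(x),
# whose exact derivative at x0 is cos(x0).
f(x) = sin(x)
x0, h = 1.0, 1e-6
exact = cos(x0)

fwd   = (f(x0 + h) - f(x0)) / h            # forward: O(h) error, 1 extra eval
ctr   = (f(x0 + h) - f(x0 - h)) / (2h)     # central: O(h^2) error, 2 evals
cstep = imag(f(x0 + im * 1e-20)) / 1e-20   # complex step: ~machine precision

# central beats forward, and complex step beats both
(abs(fwd - exact), abs(ctr - exact), abs(cstep - exact))
```

The complex step's precision comes from avoiding the subtractive cancellation inherent in the difference quotients, which is why it only needs a tiny step yet requires `f` to be complex-analytic.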
|
62 | | -```@docs |
63 | | -FiniteDiff.finite_difference_jvp |
64 | | -FiniteDiff.finite_difference_jvp! |
65 | | -FiniteDiff.JVPCache |
66 | | -``` |
| 45 | +## Performance Tips |
67 | 46 |
|
68 | | -JVP functions compute the Jacobian-vector product `J(x) * v` efficiently without computing the full Jacobian matrix. This is particularly useful when you only need directional derivatives. |
| 47 | +1. **Use caches** for repeated computations to avoid allocations |
| 48 | +2. **Consider sparsity** for large Jacobians with known sparsity patterns |
| 49 | +3. **Choose appropriate methods** based on your accuracy requirements |
| 50 | +4. **Leverage JVPs** when you only need directional derivatives |
69 | 51 |
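Tip 4 rests on the fact that a Jacobian-vector product is a single directional derivative, so it needs only a couple of function evaluations regardless of dimension. A hand-rolled sketch of the forward-difference JVP (the idea behind `FiniteDiff.finite_difference_jvp`; the names `f`, `x`, and `v` here are illustrative):

```julia
# J(x) * v ≈ (f(x + h*v) - f(x)) / h  --  two evaluations of f,
# versus ~n evaluations to form the full Jacobian column by column.
f(x) = [x[1]^2 + x[2], x[2]^3]
x = [1.0, 2.0]
v = [1.0, 0.0]
h = sqrt(eps())                      # standard forward-difference step scale
jvp = (f(x .+ h .* v) - f(x)) ./ h   # ≈ J(x) * v without ever forming J
```

Here `J(x) = [2x₁ 1; 0 3x₂²]`, so `J(x) * v = [2, 0]`, which the sketch recovers to forward-difference accuracy.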
|