
Commit 5409eb4

Restructure API documentation into separate pages
- Split single API page into organized category pages:
  - Overview (api.md) - Quick start and navigation
  - Derivatives - Single/multi-point derivative functions
  - Gradients - Gradient computation with detailed notes
  - Jacobians - Jacobian matrices including sparse support
  - Hessians - Hessian matrices with mathematical background
  - JVP - Jacobian-vector products for efficient directional derivatives
  - Utilities - Internal functions and helpers
- Updated pages.jl with hierarchical documentation structure
- Enhanced each page with category-specific guidance and examples
- Improved navigation with cross-references between sections
- Documentation builds successfully with clean sidebar organization

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 410c999 commit 5409eb4

File tree

8 files changed: +272 −49 lines changed

docs/pages.jl

Lines changed: 13 additions & 1 deletion
@@ -1,3 +1,15 @@
 # Put in a separate page so it can be used by SciMLDocs.jl
 
-pages = ["Home" => "index.md", "tutorials.md", "api.md"]
+pages = [
+    "Home" => "index.md",
+    "Tutorials" => "tutorials.md",
+    "API Reference" => [
+        "Overview" => "api.md",
+        "Derivatives" => "derivatives.md",
+        "Gradients" => "gradients.md",
+        "Jacobians" => "jacobians.md",
+        "Hessians" => "hessians.md",
+        "Jacobian-Vector Products" => "jvp.md",
+        "Utilities" => "utilities.md"
+    ]
+]

docs/src/api.md

Lines changed: 30 additions & 48 deletions
@@ -1,69 +1,51 @@
-# API
+# API Reference
 
 ```@docs
 FiniteDiff
 ```
 
-## Derivatives
+FiniteDiff.jl provides fast, non-allocating finite difference calculations with support for sparsity patterns and various array types. The API is organized into several categories:
 
-```@docs
-FiniteDiff.finite_difference_derivative
-FiniteDiff.finite_difference_derivative!
-FiniteDiff.DerivativeCache
-```
+## Function Categories
 
-## Gradients
+### [Derivatives](@ref derivatives)
+Single and multi-point derivatives of scalar functions.
 
-```@docs
-FiniteDiff.finite_difference_gradient
-FiniteDiff.finite_difference_gradient!
-FiniteDiff.GradientCache
-```
+### [Gradients](@ref gradients)
+Gradients of scalar-valued functions with respect to vector inputs.
 
-Gradients are either a vector->scalar map `f(x)`, or a scalar->vector map `f(fx,x)` if `inplace=Val{true}` and `fx=f(x)` if `inplace=Val{false}`.
+### [Jacobians](@ref jacobians)
+Jacobian matrices of vector-valued functions, including sparse Jacobian support.
 
-Note that here `fx` is a cached function call of `f`. If you provide `fx`, then
-`fx` will be used in the forward differencing method to skip a function call.
-It is on you to make sure that you update `cache.fx` every time before
-calling `FiniteDiff.finite_difference_gradient!`. If `fx` is an immutable, e.g. a scalar or
-a `StaticArray`, `cache.fx` should be updated using `@set` from [Setfield.jl](https://github.com/jw3126/Setfield.jl).
-A good use of this is if you have a cache array for the output of `fx` already being used, you can make it alias
-into the differencing algorithm here.
+### [Hessians](@ref hessians)
+Hessian matrices of scalar-valued functions.
 
-## Jacobians
+### [Jacobian-Vector Products](@ref jvp)
+Efficient computation of directional derivatives without forming full Jacobians.
 
-```@docs
-FiniteDiff.finite_difference_jacobian
-FiniteDiff.finite_difference_jacobian!
-FiniteDiff.JacobianCache
-```
+### [Utilities](@ref utilities)
+Internal utilities and helper functions.
 
-Jacobians are for functions `f!(fx,x)` when using in-place `finite_difference_jacobian!`,
-and `fx = f(x)` when using out-of-place `finite_difference_jacobian`. The out-of-place
-jacobian will return a similar type as `jac_prototype` if it is not a `nothing`. For non-square
-Jacobians, a cache which specifies the vector `fx` is required.
+## Quick Start
 
-For sparse differentiation, pass a `colorvec` of matrix colors. `sparsity` should be a sparse
-or structured matrix (`Tridiagonal`, `Banded`, etc. according to the ArrayInterfaceCore.jl specs)
-to allow for decompression, otherwise the result will be the colorvec compressed Jacobian.
+All functions follow a consistent API pattern:
 
-## Hessians
+- **Cache-less versions**: `finite_difference_*` - convenient but allocate temporary arrays
+- **In-place versions**: `finite_difference_*!` - efficient, non-allocating when used with caches
+- **Cache constructors**: `*Cache` - pre-allocate work arrays for repeated computations
 
-```@docs
-FiniteDiff.finite_difference_hessian
-FiniteDiff.finite_difference_hessian!
-FiniteDiff.HessianCache
-```
+## Method Selection
 
-Hessians are for functions `f(x)` which return a scalar.
+Choose your finite difference method based on accuracy and performance needs:
 
-## Jacobian-Vector Products (JVP)
+- **Forward differences**: fast, `O(h)` accuracy, requires `n` function evaluations
+- **Central differences**: more accurate `O(h²)`, requires `2n` function evaluations
+- **Complex step**: machine precision accuracy, `n` evaluations, requires complex-analytic functions
 
-```@docs
-FiniteDiff.finite_difference_jvp
-FiniteDiff.finite_difference_jvp!
-FiniteDiff.JVPCache
-```
+## Performance Tips
 
-JVP functions compute the Jacobian-vector product `J(x) * v` efficiently without computing the full Jacobian matrix. This is particularly useful when you only need directional derivatives.
+1. **Use caches** for repeated computations to avoid allocations
+2. **Consider sparsity** for large Jacobians with known sparsity patterns
+3. **Choose appropriate methods** based on your accuracy requirements
+4. **Leverage JVPs** when you only need directional derivatives
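The accuracy trade-off in the new Method Selection list can be sanity-checked with a short standalone sketch (plain Python rather than Julia, independent of FiniteDiff.jl; the test function and step size are illustrative):

```python
import math

def forward_diff(f, x, h):
    # one-sided difference: O(h) truncation error
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # symmetric difference: O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-5
exact = math.cos(x)  # d/dx sin(x) = cos(x)
err_forward = abs(forward_diff(math.sin, x, h) - exact)
err_central = abs(central_diff(math.sin, x, h) - exact)
# at the same h, the central error is several orders of magnitude smaller
```

The same trade-off motivates the method-dependent default step sizes described in the Utilities page.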

docs/src/derivatives.md

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
# Derivatives

Functions for computing derivatives of scalar-valued functions.

## Functions

```@docs
FiniteDiff.finite_difference_derivative
FiniteDiff.finite_difference_derivative!
```

## Cache

```@docs
FiniteDiff.DerivativeCache
```

## Notes

Derivatives are computed for scalar→scalar maps `f(x)` where `x` can be a single point or a collection of points. The derivative functions support:

- **Forward differences**: 1 function evaluation per point, `O(h)` accuracy
- **Central differences**: 2 function evaluations per point, `O(h²)` accuracy
- **Complex step**: 1 function evaluation per point, machine precision accuracy

For optimal performance with repeated computations, use the cached versions with `DerivativeCache`.
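The complex-step entry above can be illustrated with a minimal standalone sketch (plain Python with `cmath` standing in for the real implementation; note `h` can be made absurdly small because no subtraction of nearby values occurs):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-200):
    # f'(x) ≈ Im(f(x + i*h)) / h — no subtractive cancellation,
    # so h can sit far below sqrt(eps) without losing accuracy
    return f(complex(x, h)).imag / h

x = 0.7
approx = complex_step_derivative(cmath.sin, x)
exact = math.cos(x)  # derivative of sin
# agreement is at machine precision, unlike real-valued differences
```

This only works because `sin` is complex analytic; see the warning on the Utilities page.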

docs/src/gradients.md

Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
# Gradients

Functions for computing gradients of scalar-valued functions with respect to vector inputs.

## Functions

```@docs
FiniteDiff.finite_difference_gradient
FiniteDiff.finite_difference_gradient!
```

## Cache

```@docs
FiniteDiff.GradientCache
```

## Function Types

Gradients support two types of function mappings:

- **Vector→scalar**: `f(x)` where `x` is a vector and `f` returns a scalar
- **Scalar→vector**: `f(fx, x)` for in-place evaluation or `fx = f(x)` for out-of-place

## Performance Notes

- **Forward differences**: `n` function evaluations, `O(h)` accuracy
- **Central differences**: `2n` function evaluations, `O(h²)` accuracy
- **Complex step**: `n` function evaluations, machine precision accuracy

## Cache Management

When using `GradientCache` with pre-computed function values:

- If you provide `fx`, it will be used in forward differencing to skip a function call
- You must update `cache.fx` before each call to `finite_difference_gradient!`
- For immutable types (scalars, `StaticArray`s), use `@set` from [Setfield.jl](https://github.com/jw3126/Setfield.jl)
- Consider aliasing existing arrays into the cache for memory efficiency
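The cache-management idea above — reusing a precomputed base value to skip one evaluation in forward differencing — can be sketched standalone (plain Python; `grad_forward` is an illustrative helper, not the FiniteDiff.jl implementation):

```python
def grad_forward(f, x, h=1e-8, fx=None):
    # forward-difference gradient: n perturbed evaluations,
    # plus one base evaluation unless a cached fx is supplied
    if fx is None:
        fx = f(x)
    grad = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] += h
        grad.append((f(x_pert) - fx) / h)
    return grad

f = lambda x: sum(xi * xi for xi in x)  # gradient is 2x
x = [1.0, 2.0, 3.0]
g = grad_forward(f, x, fx=f(x))  # cached base value reused
```

As with `cache.fx`, a stale cached value silently corrupts the result, which is why the notes above insist on updating it before each call.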

docs/src/hessians.md

Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
# Hessians

Functions for computing Hessian matrices of scalar-valued functions.

## Functions

```@docs
FiniteDiff.finite_difference_hessian
FiniteDiff.finite_difference_hessian!
```

## Cache

```@docs
FiniteDiff.HessianCache
```

## Function Requirements

Hessian functions are designed for scalar-valued functions `f(x)` where:

- `x` is a vector of parameters
- `f(x)` returns a scalar value
- The Hessian `H[i,j] = ∂²f/(∂x[i]∂x[j])` is automatically symmetrized

## Mathematical Background

For a scalar function `f: ℝⁿ → ℝ`, the Hessian central difference approximation is:

```
H[i,j] ≈ (f(x + eᵢhᵢ + eⱼhⱼ) - f(x + eᵢhᵢ - eⱼhⱼ) - f(x - eᵢhᵢ + eⱼhⱼ) + f(x - eᵢhᵢ - eⱼhⱼ)) / (4hᵢhⱼ)
```

where `eᵢ` is the i-th unit vector and `hᵢ` is the step size in dimension i.

## Performance Considerations

- **Complexity**: Requires `O(n²)` function evaluations for an n-dimensional input
- **Accuracy**: Central differences provide `O(h²)` accuracy for second derivatives
- **Memory**: The result is returned as a `Symmetric` matrix view
- **Alternative**: For large problems, consider computing the gradient twice instead

## StaticArrays Support

The cache constructor automatically detects `StaticArray` types and adjusts the `inplace` parameter accordingly for optimal performance.
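The four-point stencil quoted above can be checked against a function with a known Hessian (standalone Python sketch; `hessian_entry` is illustrative, not the package's implementation, and `h = 1e-4` matches the `eps^(1/4)` scale noted on the Utilities page):

```python
def hessian_entry(f, x, i, j, h=1e-4):
    # central four-point stencil for H[i,j] = ∂²f/(∂x_i ∂x_j)
    def shifted(si, sj):
        xs = list(x)
        xs[i] += si * h
        xs[j] += sj * h
        return f(xs)
    return (shifted(1, 1) - shifted(1, -1)
            - shifted(-1, 1) + shifted(-1, -1)) / (4 * h * h)

f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]  # Hessian = [[2, 3], [3, 0]]
x = [0.5, -1.0]
h01 = hessian_entry(f, x, 0, 1)  # mixed partial, exactly 3 for this f
h00 = hessian_entry(f, x, 0, 0)  # diagonal entry, exactly 2 for this f
```

For `i == j` the stencil collapses to the usual second-difference formula with step `2h`, which is why a single stencil covers the whole matrix.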

docs/src/jacobians.md

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
# Jacobians

Functions for computing Jacobian matrices of vector-valued functions.

## Functions

```@docs
FiniteDiff.finite_difference_jacobian
FiniteDiff.finite_difference_jacobian!
```

## Cache

```@docs
FiniteDiff.JacobianCache
```

## Function Types

Jacobians support the following function signatures:

- **Out-of-place**: `fx = f(x)` where both `x` and `fx` are vectors
- **In-place**: `f!(fx, x)` where `f!` modifies `fx` in-place

## Sparse Jacobians

FiniteDiff.jl provides efficient sparse Jacobian computation using graph coloring:

- Pass a `colorvec` of matrix colors to enable column compression
- Provide `sparsity` as a sparse or structured matrix (`Tridiagonal`, `Banded`, etc.)
- Structured-matrix types are handled via the ArrayInterfaceCore.jl traits
- Results are automatically decompressed unless `sparsity=nothing`

## Performance Notes

- **Forward differences**: `n` function evaluations, `O(h)` accuracy
- **Central differences**: `2n` function evaluations, `O(h²)` accuracy
- **Complex step**: `n` function evaluations, machine precision accuracy
- **Sparse Jacobians**: graph coloring significantly reduces the number of function evaluations

For non-square Jacobians, specify the output vector `fx` when creating the cache to ensure proper sizing.

docs/src/jvp.md

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
# Jacobian-Vector Products (JVP)

Functions for computing Jacobian-vector products efficiently without forming the full Jacobian matrix.

## Functions

```@docs
FiniteDiff.finite_difference_jvp
FiniteDiff.finite_difference_jvp!
```

## Cache

```@docs
FiniteDiff.JVPCache
```

## Mathematical Background

The JVP computes `J(x) * v` where `J(x)` is the Jacobian of function `f` at point `x` and `v` is a direction vector. This is computed using finite difference approximations:

- **Forward**: `J(x) * v ≈ (f(x + h*v) - f(x)) / h`
- **Central**: `J(x) * v ≈ (f(x + h*v) - f(x - h*v)) / (2h)`

where `h` is the step size and `v` is the direction vector.

## Performance Benefits

JVP functions are particularly efficient when you only need directional derivatives:

- **Function evaluations**: only 2 function evaluations (vs `n` for the full Jacobian)
- **Forward differences**: 2 function evaluations, `O(h)` accuracy
- **Central differences**: 2 function evaluations, `O(h²)` accuracy
- **Memory efficient**: no need to store the full Jacobian matrix

## Use Cases

JVP is particularly useful for:

- **Optimization**: computing directional derivatives along search directions
- **Sparse directions**: when `v` has few non-zero entries
- **Memory constraints**: avoiding storage of large Jacobian matrices
- **Newton–Krylov methods**: solving `J * s = -f` iteratively using only JVPs

## Limitations

- **Complex step**: JVP does not currently support complex step differentiation (`Val(:complex)`)
- **In-place functions**: For in-place function evaluation, ensure proper cache sizing
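The forward JVP formula above, applied to a function with a hand-computable Jacobian (standalone Python sketch; two evaluations regardless of dimension):

```python
def jvp_forward(f, x, v, h=1e-7):
    # J(x) * v ≈ (f(x + h*v) - f(x)) / h — two evaluations total,
    # no Jacobian matrix is ever formed
    x_shift = [xi + h * vi for xi, vi in zip(x, v)]
    fx, fx_shift = f(x), f(x_shift)
    return [(a - b) / h for a, b in zip(fx_shift, fx)]

# f: R^2 -> R^2 with Jacobian [[2*x0, 0], [x1, x0]]
f = lambda x: [x[0] ** 2, x[0] * x[1]]
x, v = [1.0, 2.0], [1.0, -1.0]
jv = jvp_forward(f, x, v)  # exact product here is [2.0, 1.0]
```

Because only `f(x)` and `f(x + h*v)` are needed, the cost is independent of the number of columns of `J`, which is the whole advantage over forming the Jacobian.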

docs/src/utilities.md

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
# Utilities

Internal utility functions and types that may be useful for advanced users.

## Step Size Computation

While these functions are primarily internal, they can be useful for understanding and customizing finite difference computations:

### Default Step Sizes

Different finite difference methods use different optimal step sizes to balance truncation error and round-off error:

- **Forward differences**: `sqrt(eps(T))` - balances O(h) truncation error with round-off
- **Central differences**: `cbrt(eps(T))` - balances O(h²) truncation error with round-off
- **Hessian central**: `eps(T)^(1/4)` - balances O(h²) truncation error for second derivatives

### Complex Step Differentiation

Complex step differentiation uses machine epsilon since it avoids subtractive cancellation:

⚠️ **Important**: `f` must be a function of a real variable that is also complex analytic when the input is complex!

## Array Utilities

Internal utilities for handling different array types and ensuring compatibility:

- `_vec(x)`: Vectorizes arrays while preserving scalars
- `_mat(x)`: Ensures matrix format, converting vectors to column matrices
- `setindex(x, v, i...)`: Non-mutating setindex for immutable arrays

These functions help FiniteDiff.jl work with various array types including `StaticArrays`, structured matrices, and GPU arrays.
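The truncation-vs-round-off balance behind the default step sizes above can be demonstrated standalone (plain Python; the values mirror the `Float64` defaults listed, and the test function is illustrative):

```python
import math

eps64 = 2.0 ** -52            # Float64 machine epsilon, eps(Float64) in Julia
h_forward = math.sqrt(eps64)  # ≈ 1.5e-8, near-optimal for forward differences
h_central = eps64 ** (1 / 3)  # ≈ 6.1e-6, near-optimal for central differences

def forward_error(h):
    # absolute error of a forward difference of sin at x = 1
    x, exact = 1.0, math.cos(1.0)
    return abs((math.sin(x + h) - math.sin(x)) / h - exact)

# shrinking h below the sqrt(eps) scale lets round-off dominate,
# so a far-too-small step is *less* accurate than the default
err_default = forward_error(h_forward)
err_tiny = forward_error(1e-13)
```

The same reasoning with the `O(h²)` truncation term gives the `cbrt(eps)` and `eps^(1/4)` defaults for central differences and Hessians.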
