Commit d9827d5

bump version and update changelog
1 parent 5ec1dd2 commit d9827d5

File tree

2 files changed: +31 −1 lines changed


changelog.md

Lines changed: 30 additions & 0 deletions
@@ -1,3 +1,33 @@
# v0.8.0 - 09.05.2022

## Optimization has joined the chat

Multi-variate optimization and differentiation have been introduced.

- `numericalnim/differentiate` offers `tensorGradient(f, x)`, which calculates the gradient of `f` w.r.t. `x` using finite differences, along with `tensorJacobian` (returns the transpose of the gradient), `tensorHessian` and `mixedDerivative`. It also provides `checkGradient(f, analyticGrad, x, tol)` to verify that an analytic gradient is correct by comparing it to the finite difference approximation.
- `numericalnim/optimize` now has several multi-variate optimization methods:
    - `steepestDescent`
    - `newton`
    - `bfgs`
    - `lbfgs`
    - They all have function signatures like:
    ```nim
    proc bfgs*[U; T: not Tensor](f: proc(x: Tensor[U]): T, x0: Tensor[U], options: OptimOptions[U, StandardOptions] = bfgsOptions[U](), analyticGradient: proc(x: Tensor[U]): Tensor[T] = nil): Tensor[U]
    ```
    where `f` is the function to be minimized, `x0` is the starting guess and `options` contains settings such as the tolerance (each method has its own options type, created by e.g. `lbfgsOptions` or `newtonOptions`). `analyticGradient` can be supplied to avoid having to do finite difference approximations of the derivatives.
    - There are 4 different line search methods supported, and they are set in the `options`: `Armijo`, `Wolfe`, `WolfeStrong`, `NoLineSearch`.
- `levmarq`: non-linear least squares optimizer
    ```nim
    proc levmarq*[U; T: not Tensor](f: proc(params: Tensor[U], x: U): T, params0: Tensor[U], xData: Tensor[U], yData: Tensor[T], options: OptimOptions[U, LevmarqOptions[U]] = levmarqOptions[U]()): Tensor[U]
    ```
    - `f` is the function you want to fit, where `params` are the parameters and `x` is the value to evaluate the function at.
    - `params0` is the initial guess for the parameters.
    - `xData` is a 1D Tensor with the x points and `yData` is a 1D Tensor with the y points.
    - `options` can be created using `levmarqOptions`.
    - Returns the final parameters.
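A minimal usage sketch based on the signatures above (the data values and convergence behavior are illustrative assumptions, not taken from the library's tests; untested):

```nim
import std/math
import arraymancer
import numericalnim

# Minimize a convex quadratic whose minimum is at [1.0, -2.0].
proc f(x: Tensor[float]): float =
  (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0)

let x0 = [0.0, 0.0].toTensor
# Uses the default bfgsOptions and finite difference gradients.
let xOpt = bfgs(f, x0)

# Fit y = a * exp(b * x) to some made-up data points with levmarq.
proc model(params: Tensor[float], x: float): float =
  params[0] * exp(params[1] * x)

let xData = [0.0, 1.0, 2.0, 3.0].toTensor
let yData = [1.0, 2.7, 7.4, 20.1].toTensor
let fitted = levmarq(model, [0.5, 0.5].toTensor, xData, yData)
```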

Note: There are basic tests to ensure these methods converge for simple problems, but they have not been tested on more complex problems and should be considered experimental until more testing has been done. Please try them out, but don't rely on them for anything important for now. Also, the API isn't set in stone yet, so expect that it may change in future versions.
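The `tensorGradient` and `checkGradient` utilities described above can be sketched like this (a hypothetical example, untested; `checkGradient`'s return value is an assumption based on its description, so it is discarded here):

```nim
import std/math
import arraymancer
import numericalnim

# f(x) = x0^2 + sin(x1), with analytic gradient [2*x0, cos(x1)].
proc f(x: Tensor[float]): float =
  x[0] * x[0] + sin(x[1])

proc analyticGrad(x: Tensor[float]): Tensor[float] =
  [2.0 * x[0], cos(x[1])].toTensor

let x = [1.0, 0.5].toTensor
let g = tensorGradient(f, x)  # finite difference approximation of the gradient
# Compare the analytic gradient against the finite difference one:
discard checkGradient(f, analyticGrad, x, 1e-6)
```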
# v0.7.1 - 25.01.2022

Add a `nimCI` task for the Nim CI to run now that the tests have external dependencies.

numericalnim.nimble

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
  # Package Information
- version = "0.7.1"
+ version = "0.8.0"
  author = "Hugo Granström"
  description = "A collection of numerical methods written in Nim. Current features: integration, ode, optimization."
  license = "MIT"
