
Commit cb0a929

SebastianM-C and claude committed: Fix namespacing issues

Co-authored-by: Claude <noreply@anthropic.com>

1 parent 5e27dd5, commit cb0a929

30 files changed: +130 -127 lines changed
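The thrust of the commit: call sites that previously reached types through the umbrella `Optimization` module now qualify each name with the package that actually defines it (`SciMLBase` for the problem/function types, `ADTypes` for the AD-choice types). A minimal before/after sketch of the new idiom, assuming the relevant SciML packages are installed:

```julia
# Before (old style, reaching everything through Optimization):
#   optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
#   prob = Optimization.OptimizationProblem(optf, x0, p)

# After: qualify each name with its defining package.
using SciMLBase, OptimizationBase, ADTypes, ForwardDiff

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff())
prob = SciMLBase.OptimizationProblem(optf, zeros(2), [1.0, 100.0])
```

The behavior is unchanged; only the namespaces the docs teach users to depend on are different.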

docs/make.jl

Lines changed: 5 additions & 6 deletions
@@ -1,16 +1,15 @@
 using Documenter, Optimization
-using FiniteDiff, ForwardDiff, ModelingToolkit, ReverseDiff, Tracker, Zygote
-using ADTypes
+using OptimizationLBFGSB, OptimizationSophia

-cp("./docs/Manifest.toml", "./docs/src/assets/Manifest.toml", force = true)
-cp("./docs/Project.toml", "./docs/src/assets/Project.toml", force = true)
+cp(joinpath(@__DIR__, "Manifest.toml"), joinpath(@__DIR__, "src/assets/Manifest.toml"), force = true)
+cp(joinpath(@__DIR__, "Project.toml"), joinpath(@__DIR__, "src/assets/Project.toml"), force = true)

 include("pages.jl")

 makedocs(sitename = "Optimization.jl",
     authors = "Chris Rackauckas, Vaibhav Kumar Dixit et al.",
-    modules = [Optimization, Optimization.SciMLBase, Optimization.OptimizationBase,
-        FiniteDiff, ForwardDiff, ModelingToolkit, ReverseDiff, Tracker, Zygote, ADTypes],
+    modules = [Optimization, Optimization.SciMLBase, Optimization.OptimizationBase, Optimization.ADTypes,
+        OptimizationLBFGSB, OptimizationSophia],
     clean = true, doctest = false, linkcheck = true,
     warnonly = [:missing_docs, :cross_references],
     format = Documenter.HTML(assets = ["assets/favicon.ico"],
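Besides the module list, the `cp` calls in `make.jl` switch from paths relative to the current working directory to paths anchored at the script's own directory, so the build works no matter where it is invoked from. A minimal sketch of that pattern (hypothetical file names, standard-library Julia only):

```julia
# @__DIR__ expands to the directory containing the current source file,
# so these paths are stable even if the caller's working directory differs.
src = joinpath(@__DIR__, "Manifest.toml")
dst = joinpath(@__DIR__, "src", "assets", "Manifest.toml")

# Copy only if the source exists; force = true overwrites a stale copy.
isfile(src) && cp(src, dst; force = true)
```

The old `"./docs/..."` form silently depended on `make.jl` being run from the repository root.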

docs/src/API/ad.md

Lines changed: 8 additions & 8 deletions
@@ -13,15 +13,15 @@ The choices for the auto-AD fill-ins with quick descriptions are:

 ## Automatic Differentiation Choice API

-The following sections describe the Auto-AD choices in detail.
+The following sections describe the Auto-AD choices in detail. These types are defined in the [ADTypes.jl](https://github.com/SciML/ADTypes.jl) package.

 ```@docs
-OptimizationBase.AutoForwardDiff
-OptimizationBase.AutoFiniteDiff
-OptimizationBase.AutoReverseDiff
-OptimizationBase.AutoZygote
-OptimizationBase.AutoTracker
-OptimizationBase.AutoSymbolics
-OptimizationBase.AutoEnzyme
+ADTypes.AutoForwardDiff
+ADTypes.AutoFiniteDiff
+ADTypes.AutoReverseDiff
+ADTypes.AutoZygote
+ADTypes.AutoTracker
+ADTypes.AutoSymbolics
+ADTypes.AutoEnzyme
 ADTypes.AutoMooncake
 ```

docs/src/API/optimization_state.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # [OptimizationState](@id optstate)

 ```@docs
-Optimization.OptimizationState
+OptimizationBase.OptimizationState
 ```

docs/src/examples/rosenbrock.md

Lines changed: 23 additions & 22 deletions
@@ -41,15 +41,16 @@ An optimization problem can now be defined and solved to estimate the values for

 ```@example rosenbrock
 # Define the problem to solve
-using Optimization, ForwardDiff, Zygote
+using SciMLBase, OptimizationBase
+using ADTypes, ForwardDiff, Zygote

 rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
 _p = [1.0, 100.0]

-f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
+f = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff())
 l1 = rosenbrock(x0, _p)
-prob = OptimizationProblem(f, x0, _p)
+prob = SciMLBase.OptimizationProblem(f, x0, _p)
 ```

 ## Optim.jl Solvers
@@ -59,19 +60,19 @@ prob = OptimizationProblem(f, x0, _p)
 ```@example rosenbrock
 using OptimizationOptimJL
 sol = solve(prob, SimulatedAnnealing())
-prob = OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
+prob = SciMLBase.OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
 sol = solve(prob, SAMIN())

 l1 = rosenbrock(x0, _p)
-prob = OptimizationProblem(rosenbrock, x0, _p)
+prob = SciMLBase.OptimizationProblem(rosenbrock, x0, _p)
 sol = solve(prob, NelderMead())
 ```

 ### Now a gradient-based optimizer with forward-mode automatic differentiation

 ```@example rosenbrock
-optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
-prob = OptimizationProblem(optf, x0, _p)
+optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff())
+prob = SciMLBase.OptimizationProblem(optf, x0, _p)
 sol = solve(prob, BFGS())
 ```

@@ -91,19 +92,19 @@ sol = solve(prob, Optim.KrylovTrustRegion())

 ```@example rosenbrock
 cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
-optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)
+optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff(); cons = cons)

-prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf])
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf])
 sol = solve(prob, IPNewton()) # Note that -Inf < x[1]^2 + x[2]^2 < Inf is always true

-prob = OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0])
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0])
 sol = solve(prob, IPNewton()) # Again, -5.0 < x[1]^2 + x[2]^2 < 10.0

-prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf],
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf],
     lb = [-500.0, -500.0], ub = [50.0, 50.0])
 sol = solve(prob, IPNewton())

-prob = OptimizationProblem(optf, x0, _p, lcons = [0.5], ucons = [0.5],
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [0.5], ucons = [0.5],
     lb = [-500.0, -500.0], ub = [50.0, 50.0])
 sol = solve(prob, IPNewton())

@@ -118,8 +119,8 @@ function con_c(res, x, p)
     res .= [x[1]^2 + x[2]^2]
 end

-optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = con_c)
-prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2])
+optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff(); cons = con_c)
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2])
 sol = solve(prob, IPNewton()) # -Inf < cons_circ(sol.u, _p) = 0.25^2
 ```

@@ -139,17 +140,17 @@ function con2_c(res, x, p)
     res .= [x[1]^2 + x[2]^2, x[2] * sin(x[1]) - x[1]]
 end

-optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote(); cons = con2_c)
-prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [100.0, 100.0])
+optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoZygote(); cons = con2_c)
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [100.0, 100.0])
 sol = solve(prob, Ipopt.Optimizer())
 ```

 ## Now let's switch over to OptimizationOptimisers with reverse-mode AD

 ```@example rosenbrock
 import OptimizationOptimisers
-optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
-prob = OptimizationProblem(optf, x0, _p)
+optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoZygote())
+prob = SciMLBase.OptimizationProblem(optf, x0, _p)
 sol = solve(prob, OptimizationOptimisers.Adam(0.05), maxiters = 1000, progress = false)
 ```

@@ -164,8 +165,8 @@ sol = solve(prob, CMAEvolutionStrategyOpt())

 ```@example rosenbrock
 using OptimizationNLopt, ModelingToolkit
-optf = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
-prob = OptimizationProblem(optf, x0, _p)
+optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoSymbolics())
+prob = SciMLBase.OptimizationProblem(optf, x0, _p)

 sol = solve(prob, Opt(:LN_BOBYQA, 2))
 sol = solve(prob, Opt(:LD_LBFGS, 2))
@@ -174,7 +175,7 @@ sol = solve(prob, Opt(:LD_LBFGS, 2))
 ### Add some box constraints and solve with a few NLopt.jl methods

 ```@example rosenbrock
-prob = OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
+prob = SciMLBase.OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
 sol = solve(prob, Opt(:LD_LBFGS, 2))
 sol = solve(prob, Opt(:G_MLSL_LDS, 2), local_method = Opt(:LD_LBFGS, 2), maxiters = 10000) #a global optimizer with random starts of local optimization
 ```
@@ -183,7 +184,7 @@ sol = solve(prob, Opt(:G_MLSL_LDS, 2), local_method = Opt(:LD_LBFGS, 2), maxiter

 ```@example rosenbrock
 using OptimizationBBO
-prob = Optimization.OptimizationProblem(rosenbrock, [0.0, 0.3], _p, lb = [-1.0, 0.2],
+prob = SciMLBase.OptimizationProblem(rosenbrock, [0.0, 0.3], _p, lb = [-1.0, 0.2],
     ub = [0.8, 0.43])
 sol = solve(prob, BBO_adaptive_de_rand_1_bin()) # -1.0 ≤ x[1] ≤ 0.8, 0.2 ≤ x[2] ≤ 0.43
 ```

docs/src/getting_started.md

Lines changed: 5 additions & 5 deletions
@@ -14,12 +14,12 @@ The simplest copy-pasteable code using a quasi-Newton method (LBFGS) to solve th

 ```@example intro
 # Import the package and define the problem to optimize
-using Optimization, OptimizationLBFGSB, Zygote
+using OptimizationBase, OptimizationLBFGSB, ADTypes, Zygote
 rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
 u0 = zeros(2)
 p = [1.0, 100.0]

-optf = OptimizationFunction(rosenbrock, AutoZygote())
+optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote())
 prob = OptimizationProblem(optf, u0, p)

 sol = solve(prob, OptimizationLBFGSB.LBFGSB())
@@ -131,8 +131,8 @@ automatically construct the derivative functions using ForwardDiff.jl. This
 looks like:

 ```@example intro
-using ForwardDiff
-optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
+using ForwardDiff, ADTypes
+optf = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff())
 prob = OptimizationProblem(optf, u0, p)
 sol = solve(prob, OptimizationOptimJL.BFGS())
 ```
@@ -155,7 +155,7 @@ We can demonstrate this via:

 ```@example intro
 using Zygote
-optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
+optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote())
 prob = OptimizationProblem(optf, u0, p)
 sol = solve(prob, OptimizationOptimJL.BFGS())
 ```

docs/src/optimization_packages/blackboxoptim.md

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
 p = [1.0, 100.0]
 f = OptimizationFunction(rosenbrock)
-prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
+prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
 sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 100000,
     maxtime = 1000.0)
 ```

docs/src/optimization_packages/cmaevolutionstrategy.md

Lines changed: 1 addition & 1 deletion
@@ -30,6 +30,6 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
 p = [1.0, 100.0]
 f = OptimizationFunction(rosenbrock)
-prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
+prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
 sol = solve(prob, CMAEvolutionStrategyOpt())
 ```

docs/src/optimization_packages/evolutionary.md

Lines changed: 1 addition & 1 deletion
@@ -38,6 +38,6 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
 p = [1.0, 100.0]
 f = OptimizationFunction(rosenbrock)
-prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
+prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
 sol = solve(prob, Evolutionary.CMAES(μ = 40, λ = 100))
 ```

docs/src/optimization_packages/gcmaes.md

Lines changed: 4 additions & 3 deletions
@@ -30,14 +30,15 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
 p = [1.0, 100.0]
 f = OptimizationFunction(rosenbrock)
-prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
+prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
 sol = solve(prob, GCMAESOpt())
 ```

 We can also utilize the gradient information of the optimization problem to aid the optimization as follows:

 ```@example GCMAES
-f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
-prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
+using ADTypes, ForwardDiff
+f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff())
+prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
 sol = solve(prob, GCMAESOpt())
 ```

docs/src/optimization_packages/ipopt.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ The algorithm supports:
 ### Basic Usage

 ```julia
-using Optimization, OptimizationIpopt
+using OptimizationBase, OptimizationIpopt

 # Create optimizer with default settings
 opt = IpoptOptimizer()

0 commit comments