
Installation

To install MadNLP, simply run:

pkg> add MadNLP
 
Note

The default installation ships with only two linear solvers (Umfpack and Lapack), which are not adapted to the KKT systems arising in large-scale nonlinear problems. We recommend using a specialized linear solver to speed up the solution of the KKT systems.

In addition to Lapack and Umfpack, the user can install the following extensions to use a specialized linear solver.
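Once a specialized solver is installed, it is selected through the linear_solver option when calling MadNLP. A minimal sketch, assuming MadNLPHSL is installed and nlp is any AbstractNLPModel:

using MadNLP, MadNLPHSL
results = madnlp(nlp; linear_solver=Ma27Solver)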

HSL linear solver

Obtain a license and download HSL_jll.jl from https://licences.stfc.ac.uk/product/julia-hsl. There are two versions available: LBT and OpenBLAS. LBT is the recommended option for Julia >= v1.9. Install this download into your current environment using:

import Pkg
 Pkg.develop(path = "/full/path/to/HSL_jll.jl")

If the user has already compiled the HSL solver library, one can simply override the path to the artifact by editing ~/.julia/artifacts/Overrides.toml

# replace HSL_jll artifact /usr/local/lib/libhsl.so
ecece3e2c69a413a0e935cf52e03a3ad5492e137 = "/usr/local"

Mumps linear solver

Mumps is an open-source sparse linear solver, whose binaries are kindly provided as a Julia artifact. Installing Mumps simply amounts to

pkg> add MadNLPMumps

Pardiso linear solver

To use Pardiso, the user needs to obtain the Pardiso shared libraries from https://panua.ch/ and provide the absolute path to the shared library:

julia> ENV["MADNLP_PARDISO_LIBRARY_PATH"] = "/usr/lib/libpardiso600-GNU800-X86-64.so"

and place the license file in the home directory. After obtaining the library and the license file, run

pkg> build MadNLPPardiso

The build process requires a C compiler.


Callbacks

In MadNLP, a nonlinear program is implemented with a given AbstractNLPModel. The model may have a form unsuitable for the interior-point algorithm. For that reason, MadNLP wraps the AbstractNLPModel internally using custom data structures, encoded as an AbstractCallback. Depending on the setting, the AbstractNLPModel is wrapped either as a DenseCallback or as a SparseCallback.

MadNLP.AbstractCallbackType
AbstractCallback{T, VT}

Wrap the AbstractNLPModel passed by the user in a form amenable to MadNLP.

An AbstractCallback handles the scaling of the problem and the reformulations of the equality constraints and fixed variables.

source

The function create_callback instantiates an AbstractCallback from a given NLPModel:

MadNLP.create_callbackFunction
create_callback(
     ::Type{Callback},
     nlp::AbstractNLPModel{T, VT};
     fixed_variable_treatment=MakeParameter,
     equality_treatment=EnforceEquality,
) where {T, VT}

Wrap the nonlinear program nlp using the callback wrapper with type Callback. The option fixed_variable_treatment decides if the fixed variables are relaxed (RelaxBound) or removed (MakeParameter). The option equality_treatment decides if the equality constraints are kept as is (EnforceEquality) or relaxed (RelaxEquality).

source

Internally, an AbstractCallback reformulates the inequality constraints as equality constraints by introducing additional slack variables. The fixed variables are reformulated as parameters (using MakeParameter) or are relaxed (using RelaxBound). The equality constraints can be kept as is with EnforceEquality (default option) or relaxed as inequality constraints with RelaxEquality. In the latter case, MadNLP solves a relaxation of the original problem.
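As a minimal sketch, assuming nlp is any AbstractNLPModel, wrapping the model in a sparse callback with the default treatments writes:

cb = MadNLP.create_callback(
    MadNLP.SparseCallback,
    nlp;
    fixed_variable_treatment=MadNLP.MakeParameter,
    equality_treatment=MadNLP.EnforceEquality,
)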

MadNLP.MakeParameterType
MakeParameter{VT, VI} <: AbstractFixedVariableTreatment

Remove the fixed variables from the optimization variables and define them as problem's parameters.

source
MadNLP.RelaxBoundType
RelaxBound <: AbstractFixedVariableTreatment

Relax the fixed variables $x = x_{fixed}$ as bounded variables $x_{fixed} - ϵ ≤ x ≤ x_{fixed} + ϵ$, with $ϵ$ a small-enough parameter.

source
MadNLP.EnforceEqualityType
EnforceEquality <: AbstractEqualityTreatment

Keep the equality constraints intact.

The solution returned by MadNLP will respect the equality constraints.

source
MadNLP.RelaxEqualityType
RelaxEquality <: AbstractEqualityTreatment

Relax the equality constraints $g(x) = 0$ with two inequality constraints, as $-ϵ ≤ g(x) ≤ ϵ$. The parameter $ϵ$ is usually small.

The solution returned by MadNLP will satisfy the equality constraints only up to a tolerance $ϵ$.

source

MadNLP has to keep in memory all the indices associated with the equality and inequality constraints. Similarly, MadNLP has to keep track of the indices of the bounded variables and the fixed variables. MadNLP provides a utility get_index_constraints to gather all the indices required by MadNLP. Each index vector is encoded as a Vector{Int}.

MadNLP.get_index_constraintsFunction
get_index_constraints(nlp::AbstractNLPModel)

Analyze the bounds of the variables and the constraints in the AbstractNLPModel nlp. Return a named-tuple with the following keys:

  • ind_eq: indices of equality constraints.
  • ind_ineq: indices of inequality constraints.
  • ind_fixed: indices of fixed variables.
  • ind_lb: indices of variables with a lower-bound.
  • ind_ub: indices of variables with an upper-bound.
  • ind_llb: indices of variables with only a lower-bound.
  • ind_uub: indices of variables with only an upper-bound.
source
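As a short sketch, assuming nlp is an AbstractNLPModel:

ind_cons = MadNLP.get_index_constraints(nlp)
length(ind_cons.ind_ineq)   # number of inequality constraints
ind_cons.ind_fixed          # indices of the fixed variables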

MadNLP solver

MadNLP takes as input a nonlinear program encoded as an AbstractNLPModel and solves it with an interior-point method. The main entry point is the function madnlp:

MadNLP.MadNLPExecutionStatsType
MadNLPExecutionStats{T, VT} <: AbstractExecutionStats

Store the results returned by MadNLP once the interior-point algorithm has terminated.

source

In detail, the function madnlp builds a MadNLPSolver storing all the structures required by the solution algorithm. Once the MadNLPSolver is instantiated, the function solve! is applied to solve the nonlinear program with MadNLP's interior-point algorithm.
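A minimal sketch of this two-step usage, assuming nlp is an AbstractNLPModel:

using MadNLP
solver = MadNLPSolver(nlp; tol=1e-8)   # options are passed as keyword arguments
results = MadNLP.solve!(solver)        # returns a MadNLPExecutionStats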

MadNLP.MadNLPSolverType
MadNLPSolver(nlp::AbstractNLPModel{T, VT}; options...) where {T, VT}

Instantiate a new MadNLPSolver associated to the nonlinear program nlp::AbstractNLPModel. The options are passed as optional arguments.

The constructor allocates all the memory required in the interior-point algorithm, so the main algorithm remains allocation free.

source

KKT systems

MadNLP manipulates KKT systems using two abstractions: an AbstractKKTSystem storing the KKT system's matrix and an AbstractKKTVector storing the KKT system's right-hand-side.

AbstractKKTSystem

MadNLP.AbstractKKTSystemType
AbstractKKTSystem{T, VT<:AbstractVector{T}, MT<:AbstractMatrix{T}, QN<:AbstractHessian{T}}

Abstract type for KKT system.

source

MadNLP implements three different types of AbstractKKTSystem, depending on how far the KKT system is reduced.

MadNLP.AbstractUnreducedKKTSystemType
AbstractUnreducedKKTSystem{T, VT, MT, QN} <: AbstractKKTSystem{T, VT, MT, QN}

Augmented KKT system associated with the linearization of the KKT conditions at the current primal-dual iterate $(x, s, y, z, ν, w)$.

The associated matrix is

[Wₓₓ  0  Aₑ'  Aᵢ'  -I   0 ]  [Δx]
 [ 0   0   0   -I    0  -I ]  [Δs]
 [Aₑ   0   0    0    0   0 ]  [Δy]
 [Aᵢ  -I   0    0    0   0 ]  [Δz]
 [V    0   0    0    X   0 ]  [Δν]
[0    W   0    0    0   S ]  [Δw]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints
  • $Aᵢ$: Jacobian of the inequality constraints
  • $X = diag(x)$
  • $S = diag(s)$
  • $V = diag(ν)$
  • $W = diag(w)$
source
MadNLP.AbstractReducedKKTSystemType
AbstractReducedKKTSystem{T, VT, MT, QN} <: AbstractKKTSystem{T, VT, MT, QN}

The reduced KKT system is a simplification of the original augmented KKT system. Compared to AbstractUnreducedKKTSystem, AbstractReducedKKTSystem removes the last two rows, associated with the bounds' duals $(ν, w)$.

At a primal-dual iterate $(x, s, y, z)$, the matrix writes

[Wₓₓ + Σₓ   0    Aₑ'   Aᵢ']  [Δx]
 [ 0         Σₛ    0    -I ]  [Δs]
 [Aₑ         0     0     0 ]  [Δy]
[Aᵢ        -I     0     0 ]  [Δz]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints
  • $Aᵢ$: Jacobian of the inequality constraints
  • $Σₓ = X⁻¹ V$
  • $Σₛ = S⁻¹ W$
source
MadNLP.AbstractCondensedKKTSystemType
AbstractCondensedKKTSystem{T, VT, MT, QN} <: AbstractKKTSystem{T, VT, MT, QN}

The condensed KKT system further simplifies the AbstractReducedKKTSystem by removing the rows associated with the slack variables $s$ and the inequalities.

At the primal-dual iterate $(x, y)$, the matrix writes

[Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ    Aₑ']  [Δx]
[         Aₑ              0 ]  [Δy]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints
  • $Aᵢ$: Jacobian of the inequality constraints
  • $Σₓ = X⁻¹ V$
  • $Σₛ = S⁻¹ W$
source

Each AbstractKKTSystem follows the interface described below:

MadNLP.create_kkt_systemFunction
create_kkt_system(
     ::Type{KKT},
     cb::AbstractCallback,
     ind_cons::NamedTuple,
     linear_solver::Type{LinSol};
     opt_linear_solver=default_options(linear_solver),
     hessian_approximation=ExactHessian,
) where {KKT<:AbstractKKTSystem, LinSol<:AbstractLinearSolver}

Instantiate a new KKT system with type KKT, associated with the nonlinear program encoded inside the callback cb. The NamedTuple ind_cons stores the indices of all the variables and constraints in the callback cb. In addition, the user should pass the linear solver linear_solver that will be used to solve the KKT system after it has been assembled.

source
MadNLP.get_kktFunction
get_kkt(kkt::AbstractKKTSystem)::AbstractMatrix

Return a pointer to the KKT matrix implemented in kkt. The pointer is passed afterward to a linear solver.

source
MadNLP.initialize!Function
initialize!(kkt::AbstractKKTSystem)

Initialize KKT system with default values. Called when we initialize the MadNLPSolver storing the current KKT system kkt.

source
MadNLP.build_kkt!Function
build_kkt!(kkt::AbstractKKTSystem)

Assemble the KKT matrix before calling the factorization routine.

source
MadNLP.compress_hessian!Function
compress_hessian!(kkt::AbstractKKTSystem)

Compress the Hessian inside kkt's internals. This function is called every time a new Hessian is evaluated.

The default implementation does nothing.

source
MadNLP.compress_jacobian!Function
compress_jacobian!(kkt::AbstractKKTSystem)

Compress the Jacobian inside kkt's internals. This function is called every time a new Jacobian is evaluated.

By default, the function updates in the Jacobian the coefficients associated to the slack variables.

source
MadNLP.jtprod!Function
jtprod!(y::AbstractVector, kkt::AbstractKKTSystem, x::AbstractVector)

Multiply x with the transpose of the Jacobian and store the result in y, such that $y = A' x$ (with $A$ the current Jacobian).

source
MadNLP.regularize_diagonal!Function
regularize_diagonal!(kkt::AbstractKKTSystem, primal_values::Number, dual_values::Number)

Regularize the values in the diagonal of the KKT system. Called internally inside the interior-point routine.

source
MadNLP.is_inertia_correctFunction
is_inertia_correct(kkt::AbstractKKTSystem, n::Int, m::Int, p::Int)

Check if the inertia $(n, m, p)$ returned by the linear solver is adapted to the KKT system implemented in kkt.

source

Sparse KKT systems

By default, MadNLP stores an AbstractReducedKKTSystem in sparse format, as implemented by SparseKKTSystem:

Alternatively, the user has the choice to store the KKT system as a SparseUnreducedKKTSystem or as a SparseCondensedKKTSystem:
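The choice of KKT system is exposed through the kkt_system option; a sketch, assuming nlp is an AbstractNLPModel (see the Options page for the valid values):

using MadNLP
results = madnlp(nlp; kkt_system=MadNLP.SparseCondensedKKTSystem)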

Dense KKT systems

MadNLP also provides two structures to store the KKT system in a dense matrix. Although less efficient than their sparse counterparts, these two structures allow the KKT system to be stored efficiently when the problem is instantiated on the GPU.

AbstractKKTVector

Each instance of AbstractKKTVector implements the following interface.

MadNLP.number_primalFunction
number_primal(X::AbstractKKTVector)

Get total number of primal values $(x, s)$ in KKT vector X.

source
MadNLP.number_dualFunction
number_dual(X::AbstractKKTVector)

Get total number of dual values $(y, z)$ in KKT vector X.

source
MadNLP.fullFunction
full(X::AbstractKKTVector)

Return all the values stored inside the KKT vector X.

source
MadNLP.primalFunction
primal(X::AbstractKKTVector)

Return the primal values $(x, s)$ stored in the KKT vector X.

source
MadNLP.dualFunction
dual(X::AbstractKKTVector)

Return the dual values $(y, z)$ stored in the KKT vector X.

source
MadNLP.primal_dualFunction
primal_dual(X::AbstractKKTVector)

Return both the primal and the dual values $(x, s, y, z)$ stored in the KKT vector X.

source
MadNLP.dual_lbFunction
dual_lb(X::AbstractKKTVector)

Return the dual values $ν$ associated with the lower bounds, stored in the KKT vector X.

source
MadNLP.dual_ubFunction
dual_ub(X::AbstractKKTVector)

Return the dual values $w$ associated with the upper bounds, stored in the KKT vector X.

source

By default, MadNLP provides one implementation of an AbstractKKTVector.
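As a sketch, assuming kkt is a KKT system built as in the manual, the interface writes:

b = MadNLP.UnreducedKKTVector(kkt)
fill!(MadNLP.full(b), 1.0)    # set every primal-dual value to one
MadNLP.number_primal(b)       # length of the primal block (x, s)
MadNLP.dual(b)                # view on the dual block (y, z)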


Linear solvers

Direct linear solvers

Each linear solver employed in MadNLP implements the following interface.

MadNLP.factorize!Function
factorize!(::AbstractLinearSolver)

Factorize the matrix $A$ and update the factors inside the AbstractLinearSolver instance.

source
SolverCore.solve!Function
solve!(::AbstractLinearSolver, x::AbstractVector)

Solve the linear system $Ax = b$.

This function assumes the linear system has been factorized previously with factorize!.

source
MadNLP.is_inertiaFunction
is_inertia(::AbstractLinearSolver)

Return true if the linear solver supports the computation of the inertia of the linear system.

source
MadNLP.inertiaFunction
inertia(::AbstractLinearSolver)

Return the inertia (n, m, p) of the linear system as a tuple.

Note

The inertia is defined as a tuple $(n, m, p)$, with

  • $n$: number of positive eigenvalues
  • $m$: number of negative eigenvalues
  • $p$: number of zero eigenvalues
source
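As a sketch, the interface is typically exercised in this order, assuming linear_solver wraps an assembled matrix $K$ and b is a vector of matching size:

MadNLP.factorize!(linear_solver)
if MadNLP.is_inertia(linear_solver)
    (n, m, p) = MadNLP.inertia(linear_solver)
end
MadNLP.solve!(linear_solver, b)   # b is overwritten with the solution of Kx = b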

Iterative refinement

MadNLP uses iterative refinement to improve the accuracy of the solution returned by the linear solver.

MadNLP.solve_refine!Function
solve_refine!(x::VT, ::AbstractIterator, b::VT, w::VT) where {VT <: AbstractKKTVector}

Solve the linear system $Ax = b$ using iterative refinement. The object AbstractIterator stores an instance of an AbstractLinearSolver for the backsolve operations.

Notes

This function assumes the matrix stored in the linear solver has been factorized previously.

source
The KKT system is instantiated from the callback cb, the indices ind_cons, and the linear solver:

kkt = MadNLP.create_kkt_system(
    MadNLP.SparseKKTSystem,
    cb,
    ind_cons,
    linear_solver,
)
MadNLP.SparseKKTSystem{Float64, Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int32}, MadNLP.ExactHessian{Float64, Vector{Float64}}, LapackCPUSolver{Float64, SparseArrays.SparseMatrixCSC{Float64, Int32}}, Vector{Int64}, Vector{Int32}}([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0], MadNLP.ExactHessian{Float64, Vector{Float64}}(), [6.9201371817422e-310, 6.9201428570616e-310, 6.9201443992468e-310, 6.92014439925314e-310], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0], [6.9201331019702e-310, 6.9201331019702e-310], [6.92014439798596e-310], [6.9201119326446e-310, 6.92011193265095e-310], [0.0], [0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0], sparse(Int32[1, 2, 5, 6, 2, 5, 6, 3, 5, 4, 6, 5, 6], [1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6], [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0], 6, 6), [1, 5, 8, 10, 1, 2, 5, 3, 6, 4, 7, 9, 11, 12, 13], [0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0], sparse(Int32[1, 2, 2], [1, 1, 2], [1.0, 2.0, 3.0], 4, 4), [1, 2, 3], [0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0], sparse(Int32[1, 2, 1, 2, 1, 2], [1, 1, 2, 2, 3, 4], [1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 2, 4), [1, 3, 2, 4, 5, 6], LapackCPUSolver{Float64, SparseArrays.SparseMatrixCSC{Float64, Int32}}(sparse(Int32[1, 2, 5, 6, 2, 5, 6, 3, 5, 4, 6, 5, 6], [1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6], [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0], 6, 6), [4.6683907402e-313 6.3659873744e-313 … 1.506617011936e-312 1.82491638066e-312; 5.72938863684e-313 1.5e-322 … 1.527836969846e-312 1.80369642274e-312; … ; 1.01855797989e-312 1.31563739065e-312 … 1.97345608606e-312 2.228095581e-312; 6.3659873744e-313 1.082217853713e-312 … 1.86735629651e-312 2.334195375e-313], [6.9201443976508e-310], -1, Base.RefValue{Int64}(0), Dict{Symbol, Any}(), MadNLP.LapackOptions(MadNLP.BUNCHKAUFMAN), MadNLP.MadNLPLogger(MadNLP.INFO, MadNLP.INFO, nothing)), [1, 2], [3, 4], [1])

Once the KKT system is built, one has to initialize it before using it inside the interior-point algorithm:

MadNLP.initialize!(kkt);
3-element Vector{Float64}:
  0.0
  0.0
  0.0

The user can query the KKT matrix inside kkt, simply as

kkt_matrix = MadNLP.get_kkt(kkt)
6×6 SparseArrays.SparseMatrixCSC{Float64, Int32} with 13 stored entries:
   ⋅      ⋅    2.0    ⋅    ⋅    ⋅ 
   ⋅      ⋅     ⋅    2.0   ⋅    ⋅ 
  0.0    0.0  -1.0    ⋅   0.0   ⋅ 
 1.0    0.0    ⋅   -1.0   ⋅   0.0

Internally, a SparseKKTSystem stores the KKT system in a sparse COO format. When build_kkt! is called, the sparse COO matrix is transferred to SparseMatrixCSC if the linear solver is sparse, or alternatively to a Matrix if the linear solver is dense.

Note

The KKT system stores only the lower-triangular part of the KKT system, as it is symmetric.
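As a sketch, a typical refresh cycle after the callbacks have evaluated new derivatives writes:

MadNLP.compress_hessian!(kkt)           # import the new Hessian values
MadNLP.compress_jacobian!(kkt)          # import the new Jacobian values
MadNLP.build_kkt!(kkt)                  # transfer the COO data to the solver's format
MadNLP.factorize!(kkt.linear_solver)    # factorize the refreshed KKT matrix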

Solution of the KKT system

Now that the KKT system is assembled in a matrix $K$ (here stored in kkt_matrix), we want to solve a linear system $K x = b$, for instance to evaluate the next descent direction. To do so, we use the linear solver stored internally inside kkt (here an instance of LapackCPUSolver).

We start by factorizing the KKT matrix $K$:

MadNLP.factorize!(kkt.linear_solver)
LapackCPUSolver{Float64, SparseArrays.SparseMatrixCSC{Float64, Int32}}(sparse(Int32[1, 2, 5, 6, 2, 5, 6, 3, 5, 4, 6, 5, 6], [1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6], [4.0, 0.0, 0.0, 1.0, 202.0, 0.0, 0.0, 2.0, -1.0, 2.0, -1.0, 0.0, 0.0], 6, 6), [4.0 0.0 … 0.0 0.0; 0.0 202.0 … 0.0 0.0; … ; 0.0 0.0 … -0.5 0.0; 0.25 0.0 … -0.0 -0.75], [384.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, -1.4637090229072349e147, 0.0, 0.0, 1.19e-321, 1.32327143e-316, 1.57111136e-316, 2.17101615e-316, 0.0], 384, Base.RefValue{Int64}(0), Dict{Symbol, Any}(:ipiv => [1, 2, 3, 4, 5, 6]), MadNLP.LapackOptions(MadNLP.BUNCHKAUFMAN), MadNLP.MadNLPLogger(MadNLP.INFO, MadNLP.INFO, nothing))

By default, MadNLP uses an LBL (Bunch-Kaufman) factorization to decompose the symmetric indefinite KKT matrix.

Once the KKT matrix has been factorized, we can compute the solution of the linear system with a backsolve. The function takes as input an AbstractKKTVector, an object used to do algebraic manipulations with an AbstractKKTSystem. We start by instantiating two UnreducedKKTVectors (encoding the right-hand-side and the solution, respectively):

b = MadNLP.UnreducedKKTVector(kkt)
 fill!(MadNLP.full(b), 1.0)
 x = copy(b)
MadNLP.UnreducedKKTVector{Float64, Vector{Float64}, Vector{Int64}}([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0], [1.0], [1.0, 1.0], [1.0, 1.0], [1.0])

The right-hand-side encodes a vector of ones:

MadNLP.full(b)
9-element Vector{Float64}:
  1.0
  1.0
  1.0
  1.0
 1.0

We recover a vector filled with ones, which was the initial right-hand-side.

We suppose a KKT matrix K has been assembled previously; the end of its display reads:

   ⋅      ⋅    0.0    ⋅    ⋅    ⋅ 
   ⋅      ⋅     ⋅    0.0   ⋅    ⋅ 
  0.0    0.0  -1.0    ⋅   0.0   ⋅ 
  1.0    0.0    ⋅   -1.0   ⋅   0.0

Then, if we want to pass the KKT matrix K to Lapack, this translates to

linear_solver = LapackCPUSolver(K)
LapackCPUSolver{Float64, SparseArrays.SparseMatrixCSC{Float64, Int32}}(sparse(Int32[1, 2, 5, 6, 2, 5, 6, 3, 5, 4, 6, 5, 6], [1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6], [2.0, 0.0, 0.0, 1.0, 200.0, 0.0, 0.0, 0.0, -1.0, 0.0, -1.0, 0.0, 0.0], 6, 6), [6.92014439763813e-310 6.9201443976571e-310 … 6.920144397714e-310 6.920144397733e-310; 6.9201443976413e-310 6.92014439766027e-310 … 6.9201443977172e-310 6.92014439773616e-310; … ; 6.9201443976508e-310 6.92014439766975e-310 … 6.92014439772667e-310 6.92014439774564e-310; 6.92014439765394e-310 6.9201443976729e-310 … 6.92014439772983e-310 6.9201443977488e-310], [0.0], -1, Base.RefValue{Int64}(0), Dict{Symbol, Any}(), MadNLP.LapackOptions(MadNLP.BUNCHKAUFMAN), MadNLP.MadNLPLogger(MadNLP.INFO, MadNLP.INFO, nothing))

The instance linear_solver does not copy the matrix $K$ and instead keeps a reference to it.

linear_solver.A === K
true

That way every time we re-assemble the matrix $K$ in kkt, the values are directly updated inside linear_solver.

To compute the factorization inside linear_solver, one simply has to call:

MadNLP.factorize!(linear_solver)
LapackCPUSolver{Float64, SparseArrays.SparseMatrixCSC{Float64, Int32}}(sparse(Int32[1, 2, 5, 6, 2, 5, 6, 3, 5, 4, 6, 5, 6], [1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6], [2.0, 0.0, 0.0, 1.0, 200.0, 0.0, 0.0, 0.0, -1.0, 0.0, -1.0, 0.0, 0.0], 6, 6), [2.0 0.0 … 0.0 0.0; 0.0 200.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.5 0.0 … -1.0 -0.5], [384.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0719706e-316, 8.487983164e-314, 2.07209867e-316, 2.4e-322  …  1.8901086e-316, 2.49181574e-316, 2.49183234e-316, 1.77075025e-316, 2.4326251e-316, 2.43263063e-316, 2.4326757e-316, 2.4326812e-316, 2.43268675e-316, 2.4326923e-316], 384, Base.RefValue{Int64}(0), Dict{Symbol, Any}(:ipiv => [1, 2, -5, -5, -6, -6]), MadNLP.LapackOptions(MadNLP.BUNCHKAUFMAN), MadNLP.MadNLPLogger(MadNLP.INFO, MadNLP.INFO, nothing))

Once the factorization is computed, a backsolve for a right-hand-side b amounts to

nk = size(kkt, 1)
 b = rand(nk)
 MadNLP.solve!(linear_solver, b)
6-element Vector{Float64}:
  0.42052979423581754
  0.0018794528558321439
 -0.0035163128500687035
 -0.115799340341954
 -0.3215005367050907
 -0.12029905693038101

The values of b are modified in place to store the solution $x$ of the linear system $Kx = b$.
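To check the backsolve, one can compare against a saved copy of the right-hand-side (a sketch; Symmetric(K, :L) rebuilds the full matrix from the lower triangle stored in K):

using LinearAlgebra
b = rand(nk)
r = copy(b)                        # keep the original right-hand-side
MadNLP.solve!(linear_solver, b)    # b now stores the solution x
norm(Symmetric(K, :L) * b - r)     # residual, ≈ 0 up to round-off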

Convergence is monitored by evaluating the optimality error $E_0(x_k, s_k; y_k, \nu_k, w_k)$, defined as the maximum of the scaled dual, primal, and complementarity infeasibilities (following Wächter and Biegler, 2006),

and stopping the algorithm whenever

\[E_0(x_k, s_k; y_k, \nu_k, w_k) < \varepsilon_{tol}\]

Step 2: How to update the barrier parameter $\mu$?

TODO

Step 4: How do we solve the KKT system?

Solving the KKT system happens in two substeps:

  1. Assembling the KKT matrix $K$.
  2. Solving the system $Kx = b$ with a linear solver.

In substep 1, MadNLP reads the Hessian and the Jacobian computed at Step 3 and builds the associated KKT system in an AbstractKKTSystem. As a result, we get a matrix $K_k$ encoding the KKT system at iteration $k$.

In substep 2, the matrix $K_k$ is factorized by a compatible linear solver. Then a solution $x$ is returned by applying a backsolve.
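In code, the two substeps roughly translate to the following sketch, with the names used in the KKT manual and d a hypothetical direction vector:

MadNLP.build_kkt!(kkt)                  # substep 1: assemble the matrix Kₖ
MadNLP.factorize!(kkt.linear_solver)    # substep 2: factorize Kₖ
MadNLP.solve!(kkt.linear_solver, d)     # substep 2: backsolve for the direction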

Step 5: How to regularize the KKT system?

TODO

Step 6: What is a filter line-search?

TODO

What happens if the line-search failed?

If, inside the line-search algorithm, the step $\alpha_k$ becomes negligible ($<10^{-8}$), we consider that the line-search has failed to find the next iterate along the current direction $d_k$. If that happens during several consecutive iterations, MadNLP enters a feasible restoration phase. The goal of feasible restoration is to decrease the primal infeasibility, so as to move the current iterate closer to the feasible set.


MadNLP Options


Primary options

These options are used to set the values for other options. The default values are inferred from the NLP model.

  • tol::Float64
    Termination tolerance. The default value is 1e-8 for double precision. The solver terminates if the scaled primal, dual, complementary infeasibility is less than tol. Valid range is $(0,\infty)$.
  • callback::Type
    Valid values are: MadNLP.{DenseCallback,SparseCallback}.
  • kkt_system::Type
    The type of KKT system. Valid values are MadNLP.{SparseKKTSystem,SparseUnreducedKKTSystem,SparseCondensedKKTSystem,DenseKKTSystem,DenseCondensedKKTSystem}.
  • linear_solver::Type
    Linear solver used for solving the primal-dual system. Valid values are: {MadNLP.UmfpackSolver, MadNLP.LDLSolver, MadNLP.CHOLMODSolver, MadNLP.MumpsSolver, MadNLP.PardisoSolver, MadNLP.PardisoMKLSolver, MadNLP.Ma27Solver, MadNLP.Ma57Solver, MadNLP.Ma77Solver, MadNLP.Ma86Solver, MadNLP.Ma97Solver, MadNLP.LapackCPUSolver, MadNLPGPU.LapackGPUSolver, MadNLPGPU.RFSolver, MadNLPGPU.GLUSolver, MadNLPGPU.CuCholeskySolver, MadNLPGPU.CUDSSSolver} (some may require extension packages). The selected solver should be properly built by the build procedure; see the README.md file.
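Primary options are passed as keyword arguments; a sketch, assuming nlp is an AbstractNLPModel:

using MadNLP
results = madnlp(
    nlp;
    tol=1e-8,
    kkt_system=MadNLP.SparseKKTSystem,
    linear_solver=MadNLP.UmfpackSolver,
)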

General options

  • iterator::Type = RichardsonIterator
    Iterator used for iterative refinement. Valid values are: {MadNLPRichardson,MadNLPKrylov}.
  • blas_num_threads::Int = 1
    Number of threads used for BLAS routines. Valid range is $[1,\infty)$.
  • disable_garbage_collector::Bool = false
    If true, Julia garbage collector is temporarily disabled while solving the problem, and then enabled back once the solution is complete.
  • rethrow_error::Bool = true
    If false, any internal error thrown by MadNLP or interruption exception (triggered by the user via ^C) is caught and not rethrown. If an error is caught, the solver terminates with an error message.

Output options

  • print_level::LogLevels = INFO
    stdout print level. Any message with level less than print_level is not printed on stdout. Valid values are: MadNLP.{TRACE, DEBUG, INFO, NOTICE, WARN, ERROR}.
  • output_file::String = ""
    If not "", the output log is teed to the file at the path specified in output_file.
  • file_print_level::LogLevels = TRACE
    File print level; any message with level less than file_print_level is not printed on the file specified in output_file. Valid values are: MadNLP.{TRACE, DEBUG, INFO, NOTICE, WARN, ERROR}.

Termination options

  • max_iter::Int = 3000
    Maximum number of interior point iterations. The solver terminates with exit symbol :Maximum_Iterations_Exceeded if the interior point iteration count exceeds max_iter.
  • acceptable_tol::Float64 = 1e-6
    Acceptable tolerance. The solver terminates if the scaled primal, dual, complementary infeasibility is less than acceptable_tol, for acceptable_iter consecutive interior point iteration steps.
  • acceptable_iter::Int = 15
    Acceptable iteration tolerance. Valid range is $[1,\infty)$.
  • diverging_iterates_tol::Float64 = 1e20
    Diverging iteration tolerance. The solver terminates with exit symbol :Diverging_Iterates if the NLP error is greater than diverging_iterates_tol.
  • max_wall_time::Float64 = 1e6
    Maximum wall time for interior point solver. The solver terminates with exit symbol :Maximum_WallTime_Exceeded if the total solver wall time exceeds max_wall_time.
  • s_max::Float64 = 100.

NLP options

  • kappa_d::Float64 = 1e-5
  • fixed_variable_treatment::FixedVariableTreatments = MakeParameter
    Valid values are: MadNLP.{RelaxBound,MakeParameter}.
  • equality_treatment::EqualityTreatments = EnforceEquality
    Valid values are: MadNLP.{RelaxEquality,EnforceEquality}.
  • jacobian_constant::Bool = false
    If true, constraint Jacobian is only evaluated once and reused.
  • hessian_constant::Bool = false
    If true, Lagrangian Hessian is only evaluated once and reused.
  • bound_push::Float64 = 1e-2
  • bound_fac::Float64 = 1e-2
  • hessian_approximation::Type = ExactHessian
  • quasi_newton_options::QuasiNewtonOptions = QuasiNewtonOptions()
  • inertia_correction_method::InertiaCorrectionMethods = INERTIA_AUTO
    Valid values are: MadNLP.{INERTIA_AUTO,INERTIA_BASED, INERTIA_FREE}.
    • INERTIA_BASED uses the strategy in Ipopt.
    • INERTIA_FREE uses the strategy in Chiang (2016).
    • INERTIA_AUTO uses INERTIA_BASED if inertia information is available and uses INERTIA_FREE otherwise.
  • inertia_free_tol::Float64 = 0.

Initialization Options

  • dual_initialized::Bool = false
  • dual_initialization_method::Type = kkt_system <: MadNLP.SparseCondensedKKTSystem ? DualInitializeSetZero : DualInitializeLeastSquares
  • constr_mult_init_max::Float64 = 1e3
  • nlp_scaling::Bool = true:
    If true, MadNLP scales the nonlinear problem during the resolution.
  • nlp_scaling_max_gradient::Float64 = 100.

Hessian perturbation options

  • min_hessian_perturbation::Float64 = 1e-20
  • first_hessian_perturbation::Float64 = 1e-4
  • max_hessian_perturbation::Float64 = 1e20
  • perturb_inc_fact_first::Float64 = 1e2
  • perturb_inc_fact::Float64 = 8.
  • perturb_dec_fact::Float64 = 1/3
  • jacobian_regularization_exponent::Float64 = 1/4
  • jacobian_regularization_value::Float64 = 1e-8

Restoration options

  • soft_resto_pderror_reduction_factor::Float64 = 0.9999
  • required_infeasibility_reduction::Float64 = 0.9

Line-search options

  • obj_max_inc::Float64 = 5.
  • kappha_soc::Float64 = 0.99
  • max_soc::Int = 4
  • alpha_min_frac::Float64 = 0.05
  • s_theta::Float64 = 1.1
  • s_phi::Float64 = 2.3
  • eta_phi::Float64 = 1e-4
  • kappa_soc::Float64 = 0.99
  • gamma_theta::Float64 = 1e-5
  • gamma_phi::Float64 = 1e-5
  • delta::Float64 = 1
  • kappa_sigma::Float64 = 1e10
  • barrier_tol_factor::Float64 = 10.
  • rho::Float64 = 1000.

Barrier options

  • mu_init::Float64 = 1e-1
  • mu_min::Float64 = 1e-9
  • mu_superlinear_decrease_power::Float64 = 1.5
  • tau_min::Float64 = 0.99
  • mu_linear_decrease_factor::Float64 = .2

Linear Solver Options

Linear solver options are specific to the linear solver chosen at linear_solver option. Irrelevant options are ignored and a warning message is printed.

Ma27 (requires MadNLPHSL)

  • ma27_pivtol::Float64 = 1e-8
  • ma27_pivtolmax::Float64 = 1e-4
  • ma27_liw_init_factor::Float64 = 5.
  • ma27_la_init_factor::Float64 = 5.
  • ma27_meminc_factor::Float64 = 2.

Ma57 (requires MadNLPHSL)

  • ma57_pivtol::Float64 = 1e-8
  • ma57_pivtolmax::Float64 = 1e-4
  • ma57_pre_alloc::Float64 = 1.05
  • ma57_pivot_order::Int = 5
  • ma57_automatic_scaling::Bool = false
  • ma57_block_size::Int = 16
  • ma57_node_amalgamation::Int = 16
  • ma57_small_pivot_flag::Int = 0

Ma77 (requires MadNLPHSL)

  • ma77_buffer_lpage::Int = 4096
  • ma77_buffer_npage::Int = 1600
  • ma77_file_size::Int = 2097152
  • ma77_maxstore::Int = 0
  • ma77_nemin::Int = 8
  • ma77_order::Ma77.Ordering = Ma77.METIS
  • ma77_print_level::Int = -1
  • ma77_small::Float64 = 1e-20
  • ma77_static::Float64 = 0.
  • ma77_u::Float64 = 1e-8
  • ma77_umax::Float64 = 1e-4

Ma86 (requires MadNLPHSL)

  • ma86_num_threads::Int = 1
  • ma86_print_level::Float64 = -1
  • ma86_nemin::Int = 32
  • ma86_order::Ma86.Ordering = Ma86.METIS
  • ma86_scaling::Ma86.Scaling = Ma86.SCALING_NONE
  • ma86_small::Float64 = 1e-20
  • ma86_static::Float64 = 0.
  • ma86_u::Float64 = 1e-8
  • ma86_umax::Float64 = 1e-4

Ma97 (requires MadNLPHSL)

  • ma97_num_threads::Int = 1
  • ma97_print_level::Int = -1
  • ma97_nemin::Int = 8
  • ma97_order::Ma97.Ordering = Ma97.METIS
  • ma97_scaling::Ma97.Scaling = Ma97.SCALING_NONE
  • ma97_small::Float64 = 1e-20
  • ma97_u::Float64 = 1e-8
  • ma97_umax::Float64 = 1e-4

Mumps (requires MadNLPMumps)

  • mumps_dep_tol::Float64 = 0.
  • mumps_mem_percent::Int = 1000
  • mumps_permuting_scaling::Int = 7
  • mumps_pivot_order::Int = 7
  • mumps_pivtol::Float64 = 1e-6
  • mumps_pivtolmax::Float64 = .1
  • mumps_scaling::Int = 77

Umfpack (requires MadNLPUmfpack)

  • umfpack_pivtol::Float64 = 1e-4
  • umfpack_pivtolmax::Float64 = 1e-1
  • umfpack_sym_pivtol::Float64 = 1e-3
  • umfpack_block_size::Float64 = 16
  • umfpack_strategy::Float64 = 2.

Pardiso (requires MadNLPPardiso)

  • pardiso_matching_strategy::Pardiso.MatchingStrategy = COMPLETE2x2
  • pardiso_max_inner_refinement_steps::Int = 1
  • pardiso_msglvl::Int = 0
  • pardiso_order::Int = 2

PardisoMKL

  • pardisomkl_num_threads::Int = 1
  • pardiso_matching_strategy::PardisoMKL.MatchingStrategy = COMPLETE2x2
  • pardisomkl_max_iterative_refinement_steps::Int = 1
  • pardisomkl_msglvl::Int = 0
  • pardisomkl_order::Int = 2

LapackGPU (requires MadNLPGPU)

  • lapackgpu_algorithm::LapackGPU.Algorithms = BUNCHKAUFMAN

LapackCPU

  • lapackcpu_algorithm::LapackCPU.Algorithms = BUNCHKAUFMAN

Iterator Options

Richardson

  • richardson_max_iter::Int = 10
    Maximum number of Richardson iteration steps. Valid range is $[1,\infty)$.
  • richardson_tol::Float64 = 1e-10
    Convergence tolerance of Richardson iteration. Valid range is $(0,\infty)$.
  • richardson_acceptable_tol::Float64 = 1e-5
    Acceptable convergence tolerance of Richardson iteration. If the Richardson iteration counter exceeds richardson_max_iter without satisfying the convergence criteria set with richardson_tol, the Richardson solver checks whether the acceptable convergence criteria set with richardson_acceptable_tol is satisfied; if the acceptable convergence criteria is satisfied, the computed step is used; otherwise, the augmented system is treated to be singular. Valid range is $(0,\infty)$.

Krylov (requires MadNLPIterative)

  • krylov_max_iter::Int = 10
    Maximum number of Krylov iteration steps. Valid range is $[1,\infty)$.
  • krylov_tol::Float64 = 1e-10
    Convergence tolerance of Krylov iteration. Valid range is $(0,\infty)$.
  • krylov_acceptable_tol::Float64 = 1e-5
    Acceptable convergence tolerance of Krylov iteration. If the Krylov iteration counter exceeds krylov_max_iter without satisfying the convergence criteria set with krylov_tol, the Krylov solver checks whether the acceptable convergence criteria set with krylov_acceptable_tol is satisfied; if the acceptable convergence criteria is satisfied, the computed step is used; otherwise, the augmented system is treated to be singular. Valid range is $(0,\infty)$.
  • krylov_restart::Int = 5
    Maximum Krylov iteration before restarting. Valid range is $[1,\infty)$.

Reference

[Bunch, 1977]: J R Bunch and L Kaufman, Some stable methods for calculating inertia and solving symmetric linear systems, Mathematics of Computation 31:137 (1977), 163-179.

[Greif, 2014]: Greif, Chen, Erin Moulding, and Dominique Orban. "Bounds on eigenvalues of matrices arising from interior-point methods." SIAM Journal on Optimization 24.1 (2014): 49-83.

[Wächter, 2006]: Wächter, Andreas, and Lorenz T. Biegler. "On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming." Mathematical programming 106.1 (2006): 25-57.

[Chiang, 2016]: Chiang, Nai-Yuan, and Victor M. Zavala. "An inertia-free filter line-search algorithm for large-scale nonlinear programming." Computational Optimization and Applications 64.2 (2016): 327-354.

x2² + x1 ≥ 0
x1 ≤ 0.5
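The JuMP model definition is elided in this excerpt; a minimal formulation consistent with the constraints above could be sketched as follows (the standard HS15 objective is assumed):

using JuMP
model = Model()
@variable(model, x1 <= 0.5)
@variable(model, x2)
@objective(model, Min, 100.0 * (x2 - x1^2)^2 + (1.0 - x1)^2)
@constraint(model, x1 * x2 >= 1.0)
@constraint(model, x2^2 + x1 >= 0.0)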

Then, solving HS15 with MadNLP directly translates to

using MadNLP
 JuMP.set_optimizer(model, MadNLP.Optimizer)
JuMP.optimize!(model)
This is MadNLP version v0.8.3, running with umfpack
 
 Number of nonzeros in constraint Jacobian............:        4
 Number of nonzeros in Lagrangian Hessian.............:        5
 Number of constraint evaluations                     = 47
 Number of constraint Jacobian evaluations            = 20
 Number of Lagrangian Hessian evaluations             = 19
Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  9.125
Total wall-clock secs in linear solver                      =  0.017
Total wall-clock secs in NLP function evaluations           =  0.000
Total wall-clock secs                                       =  9.142
 
 EXIT: Optimal Solution Found (tol = 1.0e-08).

Under the hood, JuMP builds a nonlinear model with a sparse AD backend to evaluate the first and second-order derivatives of the objective and the constraints. Internally, MadNLP takes as input the callbacks generated by JuMP and wraps them inside a MadNLP.MOIModel.

Specifying the derivatives by hand: NLPModels.jl

Alternatively, we can compute the derivatives manually and directly define an NLPModel associated to our problem. This second option, although more complicated, gives us more flexibility and comes without boilerplate.

We define a new NLPModel structure simply as:

struct HS15Model <: NLPModels.AbstractNLPModel{Float64,Vector{Float64}}
     meta::NLPModels.NLPModelMeta{Float64, Vector{Float64}}
  -0.7921232178470455
  -1.2624298435831807

and

results.multipliers
2-element Vector{Float64}:
  -477.17046873911977
  -3.126200044003132e-9