Add SparseCondensedSystem and several changes (#272)
* HSL update

* some further instructions for custom compiled HSL

* ma77 fix for windows

* working on sparse condensed

* almost working

* works for simple case

* sparse condensed works

* playing with parameters

* a few updates

* load OpenBLAS32 for lbt

* confirmed that openmp works for the new version of artifacts

* sparse condensed added

* debugging

* started working on backsolve

* in progress

* converges

* in progress; need to address the fixed variable first

* condensed sparse barely works

* works

* works up to high precision

* cleaning up factorization.jl

* looking at solver time

* testing

* testing

* Testing

* weird initialization

* more analytics

* more analytics

* more analytics

* more analytics

* more analytics

* now we can solve up to 1e-6 🎉

* iterators working

* making things general

* solver created well

* fixing jac_raw

* going well

* added extension

* converges on GPU

* gpu works

* slide for secretary

* improving example.jl

* improving wrapper

* improving perf

* performance addressed

* full compatibility

* wrapper improved

* mumps improved

* except for infeasible

* mumps test passing

* starting to attempt Richardson on RR

* restoration and inertia free works

* restoration works

* finding sign error in unreduced

* unreduced work

* something off

* all kkt systems work

* cleaned up wrapper a bit

* callback introduced

* test passes except lbfgs

* debugging ieee118

* works on ieee118

* fixing tests

* gpu and madnlp test works

* unreduced fix

* some fix for making gmres work

* del w moved

* fix the initialization issue

* spotted issue with solve

* some error

* reg issue fixed

* before running case study

* init_time added

* fine tuning

* experimenting with relaxation strategy

* version of pscc case study

* several fixes for testing

* sparse condensed test added and passes

* benchmark update

* var/ineq counting bug fix

* reenabled force_lower_triangular!

* solve_refine added, improve! is used

* improve fixed

* minor changes

* add GLU into MadNLPGPU

* experimental changes

* benchmark improvement

* inertia corrector added

* on par with Ipopt on CUTEst

* benchmark improved

* dropping lts

* glu commented out

* mumps 5.4 deprecated

* empty file removed

* doc 1.9

* remove env variable JULIA_CUDA_USE_BINARYBUILDER

* depot directory fixed

* benchmark Ma57 issue fixed

* cusolverrf renamed

* inbounds added

* removed unnecessary unions

* empty line issue

* union error fix

* using ALG1 for solve

* solve timer moved

* symv! issue fixed

* fixed weird issue in testing

* is_valid for GPUs as well

* clean up the directory, improve test, and improve KKT

* dropping support for v1.6

* removed unnecessary enums

* improved option sanity

* options sanity improved

* more comments on the experimental second chance

* buffers reduced

* addressed Francois' comments

* reintroduced several kernels

* improved README and removed outdated info

* Update README.md

* fix LBFGS with iterative refinement

* fix warning with type variable declaration

---------

Co-authored-by: fpacaud <[email protected]>
sshin23 and frapac authored Nov 10, 2023
1 parent 182dafc commit 6d694cd
Showing 57 changed files with 4,720 additions and 2,533 deletions.
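
Before the per-file diffs, here is a rough sketch of what the headline feature means for a user. The option name `kkt_system` and the type `MadNLP.SparseCondensedKKTSystem` are taken from the MadNLPGPU preset later in this diff; the toy problem and the `NLPModelsJuMP` bridge are illustrative assumptions, not part of this commit:

```julia
using MadNLP, JuMP
using NLPModelsJuMP   # assumed bridge from JuMP models to NLPModels

# A small inequality-constrained toy problem (illustrative only).
model = Model()
@variable(model, x[1:2] >= 0)
@objective(model, Min, (x[1] - 1)^2 + (x[2] - 2)^2)
@constraint(model, x[1] + x[2] <= 3)
nlp = MathOptNLPModel(model)

# Opt into the sparse condensed KKT system introduced by this PR.
result = madnlp(nlp; kkt_system = MadNLP.SparseCondensedKKTSystem)
println(result.status)
```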
3 changes: 2 additions & 1 deletion .ci/ci.jl
@@ -8,7 +8,8 @@ Pkg.activate(@__DIR__)


if ARGS[1] == "full"
pkgs = ["MadNLPHSL","MadNLPPardiso","MadNLPMumps","MadNLPKrylov"]
pkgs = ["MadNLPHSL","MadNLPPardiso","MadNLPMumps"]
# ,"MadNLPKrylov"] # Krylov has been discontinued since the introduction of iterative refinement on the full space.
elseif ARGS[1] == "basic"
pkgs = ["MadNLPMumps","MadNLPKrylov"]
elseif ARGS[1] == "cuda"
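
The comment above credits iterative refinement on the full space for making the Krylov extension unnecessary. Below is a generic, self-contained sketch of that technique (the textbook scheme, not the `solve_refine` routine mentioned in the commit log):

```julia
using LinearAlgebra

# Iterative refinement: polish an approximate solution x ≈ A \ b by
# repeatedly solving for a correction against the true residual.
function refine!(x, A, b, solve!; max_iter = 5, tol = 1e-12)
    dx = similar(x)
    for _ in 1:max_iter
        r = b - A * x                      # residual in working precision
        norm(r) <= tol * norm(b) && break
        solve!(dx, r)                      # approximate solve A * dx = r
        x .+= dx                           # apply the correction
    end
    return x
end

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
F = lu(A)                                  # stands in for the KKT factorization
x = F \ b
refine!(x, A, b, (dx, r) -> ldiv!(dx, F, r))
```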
2 changes: 1 addition & 1 deletion .github/workflows/docs.yml
@@ -14,7 +14,7 @@ jobs:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
with:
-version: '1.6'
+version: '1.9'
- name: Install dependencies
run: julia --project=docs/ docs/install.jl
- name: Build and deploy
11 changes: 5 additions & 6 deletions .github/workflows/test.yml
@@ -12,7 +12,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
-julia-version: ['1.6','^1.7']
+julia-version: ['1.9']
julia-arch: [x64]
os: [ubuntu-latest,macos-latest,windows-latest]
steps:
@@ -23,11 +23,11 @@
- run: julia --color=yes --project=.ci .ci/ci.jl basic
test-moonshot:
env:
-JULIA_DEPOT_PATH: /scratch/sshin/github-actions/julia_depot_madnlp
+JULIA_DEPOT_PATH: /home/sshin/action-runners/MadNLP/julia-depot/
runs-on: self-hosted
strategy:
matrix:
-julia-version: ['1.6','^1.7']
+julia-version: ['1.9']
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
@@ -43,12 +43,11 @@
test-moonshot-cuda:
env:
CUDA_VISIBLE_DEVICES: 1
-JULIA_DEPOT_PATH: /scratch/sshin/github-actions/julia_depot_madnlp
-JULIA_CUDA_USE_BINARYBUILDER: true
+JULIA_DEPOT_PATH: /home/sshin/action-runners/MadNLP/julia-depot/
runs-on: self-hosted
strategy:
matrix:
-julia-version: ['^1.7']
+julia-version: ['1.9']
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
2 changes: 1 addition & 1 deletion Project.toml
@@ -20,7 +20,7 @@ MadNLPTests = "0.3, 0.4"
MathOptInterface = "1"
NLPModels = "~0.17.2, 0.18, 0.19, 0.20"
SolverCore = "~0.3"
julia = "1.6"
julia = "1.9"

[extras]
MINLPTests = "ee0a3090-8ee9-5cdb-b8cb-8eeba3165522"
54 changes: 11 additions & 43 deletions README.md
@@ -1,14 +1,12 @@
<img src="https://github.com/MadNLP/MadNLP.jl/blob/master/logo-full.svg?raw=true"/>
![logo](https://github.com/MadNLP/MadNLP.jl/blob/master/logo-full.svg)

-| **Documentation** | **Build Status** | **Coverage** | **DOI** |
-|:-----------------:|:----------------:|:----------------:|:----------------:|
-| [![doc](https://img.shields.io/badge/docs-dev-blue.svg)](https://madnlp.github.io/MadNLP.jl/dev) | [![build](https://github.com/MadNLP/MadNLP.jl/actions/workflows/test.yml/badge.svg)](https://github.com/MadNLP/MadNLP.jl/actions/workflows/test.yml) | [![codecov](https://codecov.io/gh/MadNLP/MadNLP.jl/branch/master/graph/badge.svg?token=MBxH2AAu8Z)](https://codecov.io/gh/MadNLP/MadNLP.jl) | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5825776.svg)](https://doi.org/10.5281/zenodo.5825776) |
+*A [nonlinear programming](https://en.wikipedia.org/wiki/Nonlinear_programming) solver based on the filter line-search [interior point method](https://en.wikipedia.org/wiki/Interior-point_method) (as in [Ipopt](https://github.com/coin-or/Ipopt)) that can handle/exploit diverse classes of data structures, either on [host](https://en.wikipedia.org/wiki/Central_processing_unit) or [device](https://en.wikipedia.org/wiki/Graphics_processing_unit) memories.*

-MadNLP is a [nonlinear programming](https://en.wikipedia.org/wiki/Nonlinear_programming) (NLP) solver, purely implemented in [Julia](https://julialang.org/). MadNLP implements a filter line-search algorithm, as that used in [Ipopt](https://github.com/coin-or/Ipopt). MadNLP seeks to streamline the development of modeling and algorithmic paradigms in order to exploit structures and to make efficient use of high-performance computers.
+---

-## License

-MadNLP is available under the [MIT license](https://github.com/MadNLP/MadNLP.jl/blob/master/LICENSE).
+| **License** | **Documentation** | **Build Status** | **Coverage** | **DOI** |
+|:-----------------:|:-----------------:|:----------------:|:----------------:|:----------------:|
+| [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/MadNLP/MadNLP.jl/blob/master/LICENSE) | [![doc](https://img.shields.io/badge/docs-dev-blue.svg)](https://madnlp.github.io/MadNLP.jl/stable) [![doc](https://img.shields.io/badge/docs-dev-blue.svg)](https://madnlp.github.io/MadNLP.jl/dev) | [![build](https://github.com/MadNLP/MadNLP.jl/actions/workflows/test.yml/badge.svg)](https://github.com/MadNLP/MadNLP.jl/actions/workflows/test.yml) | [![codecov](https://codecov.io/gh/MadNLP/MadNLP.jl/branch/master/graph/badge.svg?token=MBxH2AAu8Z)](https://codecov.io/gh/MadNLP/MadNLP.jl) | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5825776.svg)](https://doi.org/10.5281/zenodo.5825776) |

## Installation

@@ -18,12 +16,13 @@ pkg> add MadNLP

Optionally, various extension packages can be installed together:
```julia
-pkg> add MadNLPHSL, MadNLPPardiso, MadNLPMumps, MadNLPGPU, MadNLPGraph, MadNLPKrylov
+pkg> add MadNLPHSL, MadNLPPardiso, MadNLPMumps, MadNLPGPU
```

These packages are stored in the `lib` subdirectory within the main MadNLP repository. Some extension packages may require additional dependencies or specific hardware. For the instructions for the build procedure, see the following links:

* [MadNLPHSL](https://github.com/MadNLP/MadNLP.jl/tree/master/lib/MadNLPHSL)
+* [MadNLPMumps](https://github.com/MadNLP/MadNLP.jl/tree/master/lib/MadNLPMumps)
* [MadNLPPardiso](https://github.com/MadNLP/MadNLP.jl/tree/master/lib/MadNLPHSL)
* [MadNLPGPU](https://github.com/MadNLP/MadNLP.jl/tree/master/lib/MadNLPGPU)

@@ -34,7 +33,6 @@ These packages are stored in the `lib` subdirectory within the main MadNLP repos
MadNLP is interfaced with modeling packages:

- [JuMP](https://github.com/jump-dev/JuMP.jl)
-- [Plasmo](https://github.com/zavalab/Plasmo.jl)
- [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl).

Users can pass various options to MadNLP also through the modeling packages. The interface-specific syntax are shown below. To see the list of MadNLP solver options, check the [OPTIONS.md](https://github.com/MadNLP/MadNLP/blob/master/OPTIONS.md) file.
@@ -58,37 +56,20 @@ model = CUTEstModel("PRIMALC1")
madnlp(model, print_level=MadNLP.WARN, max_wall_time=3600)
```

-#### Plasmo interface (requires extension `MadNLPGraph`)

-```julia
-using MadNLP, MadNLPGraph, Plasmo
-graph = OptiGraph()
-@optinode(graph,n1)
-@optinode(graph,n2)
-@variable(n1,0 <= x <= 2)
-@variable(n1,0 <= y <= 3)
-@constraint(n1,x+y <= 4)
-@objective(n1,Min,x)
-@variable(n2,x)
-@NLnodeconstraint(n2,exp(x) >= 2)
-@linkconstraint(graph,n1[:x] == n2[:x])
-MadNLP.optimize!(graph; print_level=MadNLP.DEBUG, max_iter=100)
-```

### Linear Solvers

MadNLP is interfaced with non-Julia sparse/dense linear solvers:
- [Umfpack](https://people.engr.tamu.edu/davis/suitesparse.html)
-- [MKL-Pardiso](https://software.intel.com/content/www/us/en/develop/documentation/mkl-developer-reference-fortran/top/sparse-solver-routines/intel-mkl-pardiso-parallel-direct-sparse-solver-interface.html)
-- [MKL-Lapack](https://software.intel.com/content/www/us/en/develop/documentation/mkl-developer-reference-fortran/top/lapack-routines.html)
+- [Lapack](https://software.intel.com/content/www/us/en/develop/documentation/mkl-developer-reference-fortran/top/lapack-routines.html)
- [HSL solvers](http://www.hsl.rl.ac.uk/ipopt/) (requires extension)
- [Pardiso](https://www.pardiso-project.org/) (requires extension)
+- [Pardiso-MKL](https://software.intel.com/content/www/us/en/develop/documentation/mkl-developer-reference-fortran/top/sparse-solver-routines/intel-mkl-pardiso-parallel-direct-sparse-solver-interface.html) (requires extension)
- [Mumps](http://mumps.enseeiht.fr/) (requires extension)
- [cuSOLVER](https://docs.nvidia.com/cuda/cusolver/index.html) (requires extension)

Each linear solver in MadNLP is a Julia type, and the `linear_solver` option should be specified by the actual type. Note that the linear solvers are always exported to `Main`.

-#### Built-in Solvers: Umfpack, PardisoMKL, LapackCPU
+#### Built-in Solvers: Umfpack, LapackCPU

```julia
using MadNLP, JuMP
Expand Down Expand Up @@ -134,19 +115,6 @@ using MadNLP, MadNLPGPU, JuMP
model = Model(()->MadNLP.Optimizer(linear_solver=LapackGPUSolver))
```

-#### Schur and Schwarz (requires extension `MadNLPGraph`)

-```julia
-using MadNLP, MadNLPGraph, JuMP
-# ...
-model = Model(()->MadNLP.Optimizer(linear_solver=MadNLPSchwarz))
-model = Model(()->MadNLP.Optimizer(linear_solver=MadNLPSchur))
-```
-The solvers in `MadNLPGraph` (`Schur` and `Schwarz`) use multi-thread parallelism; thus, Julia session should be started with `-t` flag.
-```sh
-julia -t 16 # to use 16 threads
-```

## Citing MadNLP.jl

If you use MadNLP.jl in your research, we would greatly appreciate your citing it.
9 changes: 5 additions & 4 deletions benchmark/benchmark-cutest.jl
@@ -1,12 +1,12 @@
include("config.jl")
+Pkg.add(PackageSpec(name="CUTEst",rev="main")) # will be removed once the new CUTEst version is released

@everywhere using CUTEst

if SOLVER == "master" || SOLVER == "current"
@everywhere begin
using MadNLP, MadNLPHSL
-solver = nlp -> madnlp(nlp,linear_solver=MadNLPMa57,max_wall_time=900., print_level=PRINT_LEVEL)
+LinSol = @isdefined(MadNLPMa57) ? MadNLPMa57 : Ma57Solver # for older version of MadNLP
+solver = nlp -> madnlp(nlp,linear_solver=LinSol,max_wall_time=900., print_level=PRINT_LEVEL, tol=1e-6)
function get_status(code::MadNLP.Status)
if code == MadNLP.SOLVE_SUCCEEDED
return 1
@@ -19,7 +19,7 @@ if SOLVER == "master" || SOLVER == "current"
end
elseif SOLVER == "ipopt"
@everywhere begin
-solver = nlp -> ipopt(nlp,linear_solver="ma57",max_cpu_time=900., print_level=PRINT_LEVEL)
+solver = nlp -> ipopt(nlp,linear_solver="ma57",max_cpu_time=900., print_level=PRINT_LEVEL, tol=1e-6)
using NLPModelsIpopt
function get_status(code::Symbol)
if code == :first_order
@@ -58,8 +58,9 @@ end
return (status=get_status(retval.status),time=t,mem=mem,iter=retval.iter)
catch e
finalize(nlp)
-throw(e)
+return (status=3,time=0.,mem=0,iter=0)
end
println("Solved $name")
end

function benchmark(solver,probs;warm_up_probs = [], decode = false)
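
The `@isdefined` ternary above is a small compatibility shim: the script stays runnable whether the installed MadNLP release exports the legacy solver name (`MadNLPMa57`) or the renamed one (`Ma57Solver`). A stand-alone illustration of the pattern, with hypothetical names:

```julia
# Pretend only the renamed binding exists in this session.
new_solver() = "renamed solver"

# Prefer the legacy binding when present; otherwise fall back to the new one.
chosen = @isdefined(old_solver) ? old_solver : new_solver
println(chosen())   # -> "renamed solver"
```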
24 changes: 19 additions & 5 deletions benchmark/benchmark-power.jl
@@ -39,9 +39,17 @@ end
if SOLVER == "master" || SOLVER == "current"
@everywhere begin
using MadNLP, MadNLPHSL
+LinSol = @isdefined(MadNLPMa57) ? MadNLPMa57 : Ma57Solver

solver = pm -> begin
-set_optimizer(pm.model,()->
-MadNLP.Optimizer(linear_solver=MadNLPMa57,max_wall_time=900.,tol=1e-6, print_level=PRINT_LEVEL))
+set_optimizer(
+pm.model,()-> MadNLP.Optimizer(
+linear_solver=LinSol,
+max_wall_time=900.,
+tol=1e-6,
+print_level=PRINT_LEVEL
+)
+)
mem=@allocated begin
t=@elapsed begin
optimize_model!(pm)
@@ -56,7 +64,7 @@ elseif SOLVER == "ipopt"

const ITER = [-1]
function ipopt_callback(
-prob::IpoptProblem,alg_mod::Cint,iter_count::Cint,obj_value::Float64,
+alg_mod::Cint,iter_count::Cint,obj_value::Float64,
inf_pr::Float64,inf_du::Float64,mu::Float64,d_norm::Float64,
regularization_size::Float64,alpha_du::Float64,alpha_pr::Float64,ls_trials::Cint)

@@ -66,8 +74,14 @@ elseif SOLVER == "ipopt"

solver = pm -> begin
ITER[] = 0
-set_optimizer(pm.model,()->
-Ipopt.Optimizer(linear_solver="ma57",max_cpu_time=900.,tol=1e-6, print_level=PRINT_LEVEL))
+set_optimizer(pm.model, Ipopt.Optimizer)
+set_optimizer_attributes(
+pm.model,
+"linear_solver"=>"ma57",
+"max_cpu_time"=>900.,
+"tol"=>1e-6,
+"print_level"=>PRINT_LEVEL
+)
MOI.set(pm.model, Ipopt.CallbackFunction(), ipopt_callback)
mem=@allocated begin
t=@elapsed begin
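
Both benchmark drivers wrap the solve in nested `@allocated`/`@elapsed` blocks so a single run yields both metrics. A minimal, self-contained version of that idiom (the workload is a stand-in for `optimize_model!`):

```julia
# Measure heap allocations of a call with @allocated while capturing
# its wall time with @elapsed; the workload runs exactly once.
function run_once(f)
    local t
    mem = @allocated begin
        t = @elapsed f()
    end
    return (time = t, bytes = mem)
end

stats = run_once(() -> sum(rand(10_000)))
println("time = $(stats.time) s, allocations = $(stats.bytes) bytes")
```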
8 changes: 4 additions & 4 deletions benchmark/config.jl
@@ -7,7 +7,6 @@ const QUICK = ARGS[4] == "true"
const GCOFF = ARGS[5] == "true"
const DECODE = ARGS[6] == "true"

-addprocs(parse(Int,NP),exeflags="--project=.")
Pkg.instantiate()

if SOLVER == "master"
@@ -21,7 +20,9 @@ elseif SOLVER == "current"
elseif SOLVER == "ipopt"
elseif SOLVER == "knitro"
else
error("Proper ARGS should be given")
Pkg.add(PackageSpec(name="MadNLP",rev="$SOLVER"))
Pkg.add(PackageSpec(name="MadNLPHSL",rev="$SOLVER"))
Pkg.build("MadNLPHSL")
end

# Set verbose option
@@ -34,5 +35,4 @@ else
const PRINT_LEVEL = VERBOSE ? MadNLP.INFO : MadNLP.ERROR
end

-# Set quick option
-
+addprocs(parse(Int,NP))
2 changes: 1 addition & 1 deletion benchmark/runbenchmarks.jl
@@ -59,7 +59,7 @@ function main()
joinpath(PROJECT_PATH, "Project.toml"),
force=true
)

for class in CLASSES
for solver in SOLVERS
launch_script = joinpath(PROJECT_PATH, "benchmark-$class.jl")
5 changes: 4 additions & 1 deletion lib/MadNLPGPU/Project.toml
@@ -4,20 +4,23 @@ version = "0.6"

[deps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
+CUSOLVERRF = "a8cc9031-bad2-4722-94f5-40deabb4245c"
KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"

[compat]
CUDA = "~4"
CUSOLVERRF = "0.2"
KernelAbstractions = "0.9"
MadNLP = "0.7"
MadNLPTests = "0.3, 0.4"
julia = "1.7"

[extras]
MadNLPTests = "b52a2a03-04ab-4a5f-9698-6a2deff93217"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Test", "MadNLPTests"]
test = ["Test", "MadNLPTests", "CUDA"]
48 changes: 40 additions & 8 deletions lib/MadNLPGPU/src/MadNLPGPU.jl
@@ -2,15 +2,16 @@ module MadNLPGPU

import LinearAlgebra
# CUDA
-import CUDA: CUDA, CUBLAS, CUSOLVER, CuVector, CuMatrix, CuArray, R_64F, has_cuda, @allowscalar, runtime_version
-import CUDA: CUDABackend
+import CUDA: CUDA, CUSPARSE, CUBLAS, CUSOLVER, CuVector, CuMatrix, CuArray, R_64F,
+has_cuda, @allowscalar, runtime_version, CUDABackend
import .CUSOLVER:
libcusolver, cusolverStatus_t, CuPtr, cudaDataType, cublasFillMode_t, cusolverDnHandle_t, dense_handle
import .CUBLAS: handle, CUBLAS_DIAG_NON_UNIT,
CUBLAS_FILL_MODE_LOWER, CUBLAS_FILL_MODE_UPPER, CUBLAS_SIDE_LEFT, CUBLAS_OP_N, CUBLAS_OP_T
+import CUSOLVERRF

# Kernels
-import KernelAbstractions: @kernel, @index, synchronize
+import KernelAbstractions: @kernel, @index, synchronize, @Const

import MadNLP: NLPModels
import MadNLP
@@ -23,15 +24,46 @@ import MadNLP:

symul!(y, A, x::CuVector{T}, α = 1., β = 0.) where T = CUBLAS.symv!('L', T(α), A, x, T(β), y)
MadNLP._ger!(alpha::Number, x::CuVector{T}, y::CuVector{T}, A::CuMatrix{T}) where T = CUBLAS.ger!(alpha, x, y, A)

+function MadNLP._madnlp_unsafe_wrap(vec::VT, n, shift=1) where {T, VT <: CuVector{T}}
+    return view(vec,shift:shift+n-1)
+end

include("kernels.jl")
include("callbacks.jl")

-export CuMadNLPSolver

include("interface.jl")
include("lapackgpu.jl")
include("cusolverrf.jl")

+# option preset
+function MadNLP.MadNLPOptions(nlp::AbstractNLPModel{T,VT}) where {T, VT <: CuVector{T}}
+
+    # if dense callback is defined, we use dense callback
+    is_dense_callback =
+        hasmethod(MadNLP.jac_dense!, Tuple{typeof(nlp), AbstractVector, AbstractMatrix}) &&
+        hasmethod(MadNLP.hess_dense!, Tuple{typeof(nlp), AbstractVector, AbstractVector, AbstractMatrix})
+
+    callback = is_dense_callback ? MadNLP.DenseCallback : MadNLP.SparseCallback
+
+    # if dense callback is used, we use dense condensed kkt system
+    kkt_system = is_dense_callback ? MadNLP.DenseCondensedKKTSystem : MadNLP.SparseCondensedKKTSystem
+
+    # if dense kkt system, we use a dense linear solver
+    linear_solver = is_dense_callback ? LapackGPUSolver : RFSolver
+
+    equality_treatment = is_dense_callback ? MadNLP.EnforceEquality : MadNLP.RelaxEquality
+
+    fixed_variable_treatment = is_dense_callback ? MadNLP.MakeParameter : MadNLP.RelaxBound
+
+    tol = MadNLP.get_tolerance(T,kkt_system)
+
+    return MadNLP.MadNLPOptions(
+        callback = callback,
+        kkt_system = kkt_system,
+        linear_solver = linear_solver,
+        equality_treatment = equality_treatment,
+        fixed_variable_treatment = fixed_variable_treatment,
+        tol = tol
+    )
+end

export LapackGPUSolver

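
The new `MadNLP.MadNLPOptions` preset above keys the entire GPU configuration off whether the model defines dense callbacks. A self-contained sketch of that `hasmethod`-based capability detection, using stand-in types (none of these names are MadNLP API):

```julia
abstract type AbstractToyModel end

struct DenseToyModel  <: AbstractToyModel end
struct SparseToyModel <: AbstractToyModel end

# Only the dense model provides a dense-Jacobian callback.
jac_dense!(::DenseToyModel, x::AbstractVector, J::AbstractMatrix) = (J .= 0.0; J)

# Mirror of the preset's rule: dense callbacks select the dense condensed
# formulation and a dense solver; otherwise the sparse condensed path wins.
function preset(m::AbstractToyModel)
    dense = hasmethod(jac_dense!, Tuple{typeof(m), AbstractVector, AbstractMatrix})
    return dense ? (:DenseCondensedKKTSystem, :LapackGPUSolver) :
                   (:SparseCondensedKKTSystem, :RFSolver)
end

@assert preset(DenseToyModel())  == (:DenseCondensedKKTSystem, :LapackGPUSolver)
@assert preset(SparseToyModel()) == (:SparseCondensedKKTSystem, :RFSolver)
```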
