Conversation

avik-pal commented Nov 17, 2023 (edited)
- Basic Operations
  - Broadcasting
  - Type Stability?
  - Reduction Operations
  - Unary Operations
  - Broadcasting
- Linear Algebra
  - QR
  - LU
  - Cholesky
  - Direct Ldiv
  - Batched Matrix Multiplication
- CUDA
  - QR
    - Batched Solve?
      - Square
      - Long Rectangle
  - LU
    - Batched Solve?
  - Cholesky
  - Direct Ldiv
    - Square
    - Long Rectangle
    - Wide Rectangle
  - Batched Matrix Multiply
  - QR
- Tests
  - Aqua
  - Compatibility with LinearSolve.jl
  - Krylov GMRES gives incomplete results
  - Basic Usage example with NonlinearSolve.jl
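The broadcasting and batched-matmul items above amount to the following plain-Julia semantics, where the last dimension of a raw array stands in for the batch dimension (a sketch of the intended behavior, not BatchedArrays' actual implementation):

```julia
# Plain-array stand-in for a BatchedArray: the last dim indexes the batch.
A = rand(4, 4, 16)          # 16 batched 4×4 matrices
b = rand(4, 16)             # 16 batched length-4 vectors

# Broadcasting applies elementwise across every batch at once:
C = A .+ 1.0

# Batched matrix multiplication is 16 independent per-batch products:
Y = reduce(hcat, (A[:, :, k] * b[:, k] for k in 1:16))   # 4×16
```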
For LinearSolve.jl, I currently make the assumption that the batch sizes of `A` and `b` need not match: a batch size of 1 broadcasts against the other, e.g. `LinearProblem(BatchedArray(rand(4, 4, 1)), BatchedArray(rand(4, 16)))` pairs a single `A` with 16 `b`s.
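In plain-Julia terms, the assumed broadcast is the standard single-matrix, many-right-hand-sides solve (a sketch of the semantics, not BatchedArrays' actual code path):

```julia
A = rand(4, 4)      # batch size 1: a single matrix
B = rand(4, 16)     # 16 right-hand sides

# The lone A is reused for every b in the batch:
X = A \ B           # 4×16, column j solves A * x = B[:, j]
```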
SimpleNonlinearSolve.jl example usage:

```julia
using BatchedArrays, SimpleNonlinearSolve

u0 = BatchedArray(rand(3, 5))
prob1 = NonlinearProblem((u, p) -> u .^ 2 .- p, u0, 2.0)

solve(prob1, SimpleBroyden())
solve(prob1, SimpleDFSane())
solve(prob1, SimpleLimitedMemoryBroyden(; threshold = 2))
solve(prob1, SimpleNewtonRaphson())
solve(prob1, SimpleKlement())
solve(prob1, SimpleHalley())
```

I am leaving out TR for now since there is a potential correctness issue that needs careful investigation. In short, branching is almost impossible to handle nicely when there are conditional computations inside the branch.

Fun part: methods that use the Jacobian will be much faster with BatchedArrays, since we can automatically color and propagate all the batch duals together. So the current SimpleNewtonRaphson with BatchedArrays is faster than the pre-1.0 BatchedSimpleNewtonRaphson.
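The "batch duals" remark rests on the Jacobian of a batched problem being block diagonal: batches never interact, so one AD pass can carry the duals for every batch at once. A plain-Julia sketch (no BatchedArrays or AD package needed; the analytic Jacobian here is specific to this toy `f`):

```julia
using LinearAlgebra

p = 2.0
f(u) = u .^ 2 .- p                 # acts independently on each batch column
U = rand(3, 5)                     # 3 states × 5 batches

# Each batch has its own independent Jacobian block; for this f the
# block is simply Diagonal(2 .* u). The full Jacobian over vec(U) is
# block diagonal, which is why propagating batch duals together works.
J_blocks = [Diagonal(2 .* U[:, k]) for k in 1:5]

# Sanity-check one column of the first block against a finite difference:
h = 1e-6
e1 = [1.0, 0.0, 0.0]
fd = (f(U[:, 1] .+ h .* e1) .- f(U[:, 1])) ./ h
```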