Merge branch 'master' of https://github.com/gridap/Gridap.jl into Update_RK_ODE_solver
oriolcg committed Sep 5, 2023
2 parents 5963fff + 0b5dca3 commit d5d1b19
Showing 34 changed files with 1,817 additions and 286 deletions.
31 changes: 28 additions & 3 deletions NEWS.md
@@ -9,21 +9,46 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0)

### Added

- Jacobi polynomial bases. Since PR [#896](https://github.com/gridap/Gridap.jl/pull/896).
- Block assembly now generalised to work with `AbstractBlockArrays`, to include changes in GridapDistributed. Since PR [939](https://github.com/gridap/Gridap.jl/pull/939).

### Fixed

- Using Broadcasting(\circ) instead of \circ in one of the lazy_maps used to transform a coarse field into a fine field. Since PR [#938](https://github.com/gridap/Gridap.jl/pull/938).
- Better infinite norm computation in `Algebra._check_convergence`. Now works for any `AbstractArray` type, including `PVector`. Since PR [#940](https://github.com/gridap/Gridap.jl/pull/940).

## [0.17.19] - 2023-08-23

### Fixed

- Reimplemented `DomainStyle` for `CellQuadrature` to fix breaking low-level Poisson tutorial. Since PR [#937](https://github.com/gridap/Gridap.jl/pull/937).

## [0.17.18] - 2023-08-15

### Added

- Jacobi polynomial bases. Since PR [#896](https://github.com/gridap/Gridap.jl/pull/896).
- Replaced newest vertex bisection mesh adaptation in
`src/Geometry/NewestVertexBisection.jl` with appropriate changes to
  `src/Adaptivity/EdgeBasedRefinement.jl`. Since PR
[#901](https://github.com/gridap/Gridap.jl/pull/901).
- When refining `DiscreteModels`, the `FaceLabeling` of the resulting `AdaptedDiscreteModel` will now correctly inherit the tags of the parent model. This has been made possible by the addition of the method `get_d_to_face_to_parent_face`. Since PR [#886](https://github.com/gridap/Gridap.jl/pull/886).
- Added support for mixed adaptivity (i.e. coarsening and refining), as well as non-conforming adaptivity. Since PR [#886](https://github.com/gridap/Gridap.jl/pull/886).
- Added support for block assembly of FE systems. Two new types `BlockMultiFieldStyle` and `BlockSparseMatrixAssemblers` have been added. Since PR [#915](https://github.com/gridap/Gridap.jl/pull/915).

### Changed

- The API of `CellQuadrature` now takes both the data and integration domain styles as keyword arguments. Old signatures are deprecated. Since PR [#885](https://github.com/gridap/Gridap.jl/pull/885).

### Fixed

- ODE operators now cache the linear system at the initial time, or at the time stored by the operator. Previously, the linear system was cached at time `t = 0.0`, which cannot be done if the operator is not well-defined at `t = 0.0`. Since PR [#891](https://github.com/gridap/Gridap.jl/pull/891).
- Fixed the method `get_normal_vector` for `AdaptedTriangulation`. The method `get_facet_normal` was using the default implementation; it now uses the specialized implementation for the underlying triangulation type. Since PR [#884](https://github.com/gridap/Gridap.jl/pull/884).

- Fixed `cell_dof_ids` for the case of vectorial `ConstantFESpace`. Since PR [#888](https://github.com/gridap/Gridap.jl/pull/888).
- Fixed generation of Modal C0 bases for Julia 1.9. Since PR [#918](https://github.com/gridap/Gridap.jl/pull/918).
- Fixed some edge cases for `change_domain` between `AdaptedTriangulations` where inefficient coordinate transformations would be applied between physical and reference domains. Since PR [#886](https://github.com/gridap/Gridap.jl/pull/886).
- Fixed: Domain limits can now be of any type (notably, floats) when refining `CartesianDiscreteModels`. Since PR [#886](https://github.com/gridap/Gridap.jl/pull/886).

## [0.17.17] - 2023-02-24

2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "Gridap"
uuid = "56d4f2e9-7ea1-5844-9cf6-b9c51ca7ce8e"
authors = ["Santiago Badia <[email protected]>", "Francesc Verdugo <[email protected]>", "Alberto F. Martin <[email protected]>"]
version = "0.17.17"
version = "0.17.19"

[deps]
AbstractTrees = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
4 changes: 4 additions & 0 deletions docs/make.jl
@@ -19,6 +19,10 @@ pages = [
"Gridap.Visualization" => "Visualization.md",
"Gridap.FESpaces" => "FESpaces.md",
"Gridap.MultiField" => "MultiField.md",
"Gridap.Adaptivity" => "Adaptivity.md",
"Developper notes" => Any[
"dev-notes/block-assemblers.md",
],
]

makedocs(
112 changes: 112 additions & 0 deletions docs/src/Adaptivity.md
@@ -0,0 +1,112 @@
# Gridap.Adaptivity

```@meta
CurrentModule = Gridap.Adaptivity
```

The adaptivity module provides a framework to work with adapted (refined/coarsened/mixed) meshes.

It provides

- A generic interface to represent adapted meshes and a set of tools to work with Finite Element spaces defined on them. In particular, moving `CellFields` between parent and child meshes.
- Particular implementations for conformally refining/coarsening 2D/3D meshes using several well-known strategies. In particular, Red-Green refinement and longest-edge bisection.

## Interface

The following types are defined in the module:

```@docs
RefinementRule
AdaptivityGlue
AdaptedDiscreteModel
AdaptedTriangulation
```

The high-level interface is provided by the following methods:

```@docs
refine
coarsen
adapt
```

## Edge-Based refinement

The module provides a `refine` method for `UnstructuredDiscreteModel`. The method takes a string `refinement_method`
that determines the refinement strategy to be used. The following strategies are available:

- `"red_green"` :: Red-Green refinement, default.
- `"nvb"` :: Longest-edge bisection (only for meshes of TRIangles)

Additionally, the method takes a kwarg `cells_to_refine` that determines which cells will be refined.
Possible input types are:

- `Nothing` :: All cells get refined.
- `AbstractArray{<:Bool}` of size `num_cells(model)` :: Only cells such that `cells_to_refine[iC] == true` get refined.
- `AbstractArray{<:Integer}` :: Cells for which `gid ∈ cells_to_refine` get refined

The algorithms try to respect the `cells_to_refine` input as much as possible, but some additional cells
might get refined in order to guarantee that the mesh remains conforming.

```julia
function refine(model::UnstructuredDiscreteModel;refinement_method="red_green",kwargs...)
[...]
end
```
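A minimal usage sketch of this method follows. The model construction and the set of cells to refine are illustrative assumptions, not taken from the source:

```julia
using Gridap, Gridap.Adaptivity
using Gridap.Geometry

# Illustrative setup: a QUAD mesh converted to an unstructured model.
model = UnstructuredDiscreteModel(CartesianDiscreteModel((0,1,0,1),(4,4)))

# Refine a subset of cells; neighbouring cells may also get refined to keep the mesh conforming.
fmodel = refine(model; refinement_method="red_green", cells_to_refine=[1,2,5,6])
```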

## CartesianDiscreteModel refining

The module provides a `refine` method for `CartesianDiscreteModel`. The method takes a `Tuple` of size `Dc`
(the dimension of the model cells) that will determine how many times cells will be refined in
each direction. For example, for a 2D model, `refine(model,(2,3))` will refine each QUAD cell into
a 2x3 grid of cells.

```julia
function refine(model::CartesianDiscreteModel{Dc}, cell_partition::Tuple) where Dc
[...]
end
```
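For illustration, a short sketch matching the `refine(model,(2,3))` example mentioned above (model sizes are assumptions):

```julia
using Gridap, Gridap.Adaptivity

# Each QUAD cell of a 10x10 Cartesian model is split into a 2x3 grid of cells.
model  = CartesianDiscreteModel((0,1,0,1),(10,10))
fmodel = refine(model,(2,3))   # AdaptedDiscreteModel with a 20x30 underlying grid
```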

## Notes for users

Most of the tools provided by this module are showcased in the tests of the module itself, as well as the following tutorial (coming soon).

However, we want to stress a couple of key performance-critical points:

- The refining/coarsening routines are not optimized for performance. In particular, they are not parallelized.
If you require an optimized/parallel implementation, please consider leveraging specialised meshing libraries. For instance, we provide an implementation of `refine/coarsen` using P4est in the [GridapP4est.jl](https://github.com/gridap/GridapP4est.jl) library.

- Although the toolbox allows you to evaluate `CellFields` defined on both fine/coarse meshes on their parent/children mesh, the two directions of evaluation are not equivalent. As a user, you should always try to evaluate/integrate on the finest mesh for maximal performance, as sketched below. Evaluating a fine `CellField` on a coarse mesh relies on local tree searches, and is therefore a very expensive operation that should be avoided whenever possible.
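A hedged sketch of the recommended direction (a coarse `CellField` integrated on the fine mesh); the model sizes, function and quadrature degree are illustrative assumptions:

```julia
using Gridap, Gridap.Adaptivity

cmodel = CartesianDiscreteModel((0,1,0,1),(4,4))
fmodel = refine(cmodel,(2,2))

# A coarse FE function
reffe = ReferenceFE(lagrangian,Float64,1)
Vc = TestFESpace(cmodel,reffe)
uc = interpolate(x -> x[1]+x[2],Vc)

# Cheap direction: evaluate/integrate the coarse field on the fine mesh
Ωf  = Triangulation(fmodel)
dΩf = Measure(Ωf,2)
sum(∫(uc)*dΩf)   # coarse-to-fine evaluation, no tree searches needed
```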

## Notes for developers

### RefinementRule API

Given a `RefinementRule`, the library provides a set of methods to compute the mappings between parent (coarse) face ids and child (fine) face ids (and vice-versa). The ids are local to the `RefinementRule`.

```@docs
get_d_to_face_to_child_faces
get_d_to_face_to_parent_face
get_face_subface_ldof_to_cell_ldof
```

### AdaptivityGlue API

```@docs
get_n2o_reference_coordinate_map
get_old_cell_refinement_rules
get_new_cell_refinement_rules
get_d_to_fface_to_cface
n2o_reindex
o2n_reindex
```

### New-to-old field evaluations

When a cell is refined, we need to be able to evaluate the fields defined on the children cells on the parent cell. To do so, we bundle the fields defined on the children cells into a new type of `Field` called `FineToCoarseField`. When evaluated at a `Point`, a `FineToCoarseField` selects the child cell that contains the `Point`, maps the `Point` to the reference space of that child, and evaluates the corresponding child field at the mapped point.

```@docs
FineToCoarseField
FineToCoarseDofBasis
FineToCoarseRefFE
```
162 changes: 162 additions & 0 deletions docs/src/dev-notes/block-assemblers.md
@@ -0,0 +1,162 @@
# Block Assemblers

This file contains implementation details for block assemblers. We will first have a look at how standard sparse matrix assembly works, then compare it to block assembly.

## SparseMatrixAssemblers

Let's understand the main workflow for `SparseMatrixAssemblers` by looking at the `assemble_matrix` method:

```julia
function assemble_matrix(a::SparseMatrixAssembler,matdata)
1 m1 = nz_counter(get_matrix_builder(a),(get_rows(a),get_cols(a)))
2 symbolic_loop_matrix!(m1,a,matdata)
3 m2 = nz_allocation(m1)
4 numeric_loop_matrix!(m2,a,matdata)
5 m3 = create_from_nz(m2)
return m3
end
```

By line number:

1) We retrieve the `SparseMatrixBuilder/ArrayBuilder` from the assembler. This structure has information on the type of array that will be allocated at the end: for instance, CSR vs. CSC sparse matrix storage, the element type the array should hold (Float64, Float32, ...), etc. With this information, we create a non-zero counter object `m1`, which will manage the counting of sparse entries and eventually the allocation of the array.
2) We do a symbolic loop over the matrix contributions in `matdata`. During this loop, we save the row/col indexes of the non-zeros that will be present in the final matrix.
3) We allocate the necessary space for the nonzero entries we have counted. In the case of CSR and CSC storage types, we do not allocate the final matrix yet but rather do everything in COO format (which is more efficient for random access assembly). `m2` is now an array allocator object.
4) We do a second loop over the matrix contributions in `matdata`. During this loop, we actually assemble the entries that will end up in the matrix.
5) We use the matrix allocator to allocate and fill the final sparse matrix.

So the objects involved and the overall workflow are given by

```md
SparseMatrixBuilder -> ArrayCounter -> ArrayAllocator -> Array
```
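For orientation, a minimal user-level call that ends up driving this workflow could look as follows. This is a sketch under assumed choices (a Poisson-like bilinear form, order-1 Lagrangian elements, quadrature degree 2), not code from the source:

```julia
using Gridap

model = CartesianDiscreteModel((0,1,0,1),(8,8))
Ω  = Triangulation(model)
dΩ = Measure(Ω,2)

reffe = ReferenceFE(lagrangian,Float64,1)
V = TestFESpace(model,reffe;dirichlet_tags="boundary")
U = TrialFESpace(V)

a(u,v) = ∫( ∇(u)⋅∇(v) )dΩ
assem  = SparseMatrixAssembler(U,V)
# Internally: nz_counter -> symbolic loop -> nz_allocation -> numeric loop -> create_from_nz
A = assemble_matrix(a,assem,U,V)
```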

The second part of the puzzle is given by the loops over the data, i.e. `symbolic_loop_X!` and `numeric_loop_X!`. Both loops are quite similar, so we will focus on the numeric loop, which is implemented in the following functions:

```julia
function numeric_loop_matrix!(A,a::SparseMatrixAssembler,matdata)
get_mat(a::Tuple) = a[1]
get_mat(a) = a
if LoopStyle(A) == DoNotLoop()
return A
end
1 strategy = get_assembly_strategy(a)
for (cellmat,_cellidsrows,_cellidscols) in zip(matdata...)
2 cellidsrows = map_cell_rows(strategy,_cellidsrows)
2 cellidscols = map_cell_cols(strategy,_cellidscols)
@assert length(cellidscols) == length(cellidsrows)
if length(cellidscols) > 0
rows_cache = array_cache(cellidsrows)
cols_cache = array_cache(cellidscols)
vals_cache = array_cache(cellmat)
mat1 = get_mat(first(cellmat))
rows1 = getindex!(rows_cache,cellidsrows,1)
cols1 = getindex!(cols_cache,cellidscols,1)
3 add! = AddEntriesMap(+)
3 add_cache = return_cache(add!,A,mat1,rows1,cols1)
3 caches = add_cache, vals_cache, rows_cache, cols_cache
3 _numeric_loop_matrix!(A,caches,cellmat,cellidsrows,cellidscols)
end
end
A
end

@noinline function _numeric_loop_matrix!(mat,caches,cell_vals,cell_rows,cell_cols)
add_cache, vals_cache, rows_cache, cols_cache = caches
add! = AddEntriesMap(+)
4 for cell in 1:length(cell_cols)
4 rows = getindex!(rows_cache,cell_rows,cell)
4 cols = getindex!(cols_cache,cell_cols,cell)
4 vals = getindex!(vals_cache,cell_vals,cell)
4 evaluate!(add_cache,add!,mat,vals,rows,cols)
4 end
end
```

By line number:

1) We retrieve the `AssemblyStrategy` object from the assembler. This object contains all the information necessary to map the DoF ids from our mesh to the final columns/rows of the matrix. In serial this map is almost always the identity, but in parallel it is crucial to handle ghosts and local/global indexes.
2) We use the col/row maps in the strategy to map the cell DoF ids of each cell contribution to the corresponding rows/cols in the final matrix.
3) We prepare a `TouchEntriesMap` (symbolic loop) or an `AddEntriesMap` (numeric loop). These maps are executed on each cell contribution and are responsible for allocating/assembling the contributions into the array counter.
4) For each cell, we retrieve the rows, cols and values and execute the map on them. This allocates/assembles the contributions of this cell into the counter. In the case of `MultiFieldFESpaces`, the cell indices `rows`/`cols` will be `VectorBlocks` with as many blocks as there are fields in the `MultiFieldFESpace`, and `vals` will be a `MatrixBlock` holding an array of blocks of size (# blocks in rows, # blocks in cols). The `TouchEntriesMap` and `AddEntriesMap` maps are specialised on these types, and assemble all blocks into the same `ArrayCounter`.

## BlockSparseMatrixAssemblers

To activate the block assemblers, we have created a new `MultiFieldStyle` called `BlockMultiFieldStyle`. The purpose of this style is two-fold:

1) It activates the block assembly automatically when calling `SparseMatrixAssembler`, so that everything fits with the high-level API.
2) It manages the numbering of the cell DoFs when performing the integrals, ensuring the DoF ids are local to each block.

To create a block-assembled `MultiFieldFESpace`, you can use the following constructor:

```julia
mfs = BlockMultiFieldStyle()
Yb = MultiFieldFESpace(tests;style=mfs)
Xb = MultiFieldFESpace(trials;style=mfs)
```

By default, the final matrix and vector will have a block for each input `FESpace`. However, you can introduce some parameters when building your `BlockMultiFieldStyle` so that multiple fields are assembled in the same block (see section B).
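As a rough end-to-end sketch, a block-assembled system might be built as follows. A Stokes-like weak form is chosen only for illustration; the spaces, orders, boundary tags and quadrature degree are assumptions, not from the source:

```julia
using Gridap

model = CartesianDiscreteModel((0,1,0,1),(8,8))
Ω  = Triangulation(model)
dΩ = Measure(Ω,4)

reffe_u = ReferenceFE(lagrangian,VectorValue{2,Float64},2)
reffe_p = ReferenceFE(lagrangian,Float64,1)
V = TestFESpace(model,reffe_u;dirichlet_tags="boundary")
Q = TestFESpace(model,reffe_p)
U = TrialFESpace(V,VectorValue(0.0,0.0))
P = TrialFESpace(Q)

mfs = BlockMultiFieldStyle()
Yb  = MultiFieldFESpace([V,Q];style=mfs)
Xb  = MultiFieldFESpace([U,P];style=mfs)

a((u,p),(v,q)) = ∫( ∇(u)⊙∇(v) - p*(∇⋅v) + q*(∇⋅u) )dΩ
assem = SparseMatrixAssembler(Xb,Yb)   # block-aware, thanks to BlockMultiFieldStyle
A = assemble_matrix(a,assem,Xb,Yb)     # one block per (trial,test) field pair
```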

### A) One to one Block <-> FESpace correspondence

The design of `BlockSparseMatrixAssemblers` is quite simple: instead of having a single `SparseMatrixBuilder`, `ArrayCounter` and `Array` in which we assemble the entries coming from all fields in the `MultiFieldFESpace`, the assembler creates one of these objects for each final block and puts them in an `ArrayBlock` object. We then dispatch on the `ArrayBlock` type so that the contributions from each field are assembled into the block we want.

For instance, for the `nz_counter` function we dispatch as follows:

```julia
function Algebra.nz_counter(builder::MatrixBlock,axs)
s = size(builder)
rows = axs[1]
cols = axs[2]
counters = [nz_counter(builder.array[i,j],(rows[i],cols[j])) for i in 1:s[1], j in 1:s[2]]
return ArrayBlock(counters,fill(true,size(counters)))
end
```

In this function, the variable `builder` is a `MatrixBlock{<:ArrayBuilder}`, which holds an array with the array builders for each final block.
We then simply select the rows/cols for each of the blocks, apply the `nz_counter` function to the corresponding `ArrayBuilder`, and return a `MatrixBlock{<:ArrayCounter}` which holds the array counters for each block.
Similar dispatches are provided for `nz_allocation`, `create_from_nz`, `map_cell_rows` and `map_cell_cols`.

We also specialise the evaluation of the `TouchEntriesMap` and `AddEntriesMap` maps when the counters are `ArrayBlock`s. For instance, let's have a look at the following functions:

```julia
# A) Default implementation for MultiFieldFESpaces
function Fields.evaluate!(cache,k::AddEntriesMap,A,v::MatrixBlock,I::VectorBlock,J::VectorBlock)
ni,nj = size(v.touched)
for j in 1:nj
for i in 1:ni
if v.touched[i,j]
evaluate!(cache[i,j],k,A,v.array[i,j],I.array[i],J.array[j])
end
end
end
end

# B) Dispatching for block assemblers
function Fields.evaluate!(cache,k::AddEntriesMap,A::MatrixBlock,v::MatrixBlock,I::VectorBlock,J::VectorBlock)
ni,nj = size(v.touched)
for j in 1:nj
for i in 1:ni
if v.touched[i,j]
evaluate!(cache[i,j],k,A.array[i,j],v.array[i,j],I.array[i],J.array[j])
end
end
end
end
```

In the monolithic assembly of `MultiFieldFESpaces`, the variable `A` is an `ArrayCounter`. As you can see, all contribution blocks (coming from different fields) are assembled into the same `ArrayCounter`. In the block-assembly counterpart, the input `A` is a `MatrixBlock{<:ArrayCounter}`, and each contribution block is assembled into its corresponding `ArrayCounter` (notice the `A.array[i,j]`).

### B) Assembling multiple FE Fields into the same Block

The `BlockMultiFieldStyle` constructor can take up to three parameters:

1) `NB` :: Integer, representing the number of final blocks. Then the matrix and vector will have `NBxNB` and `NB` blocks respectively.
2) `SB` :: Tuple of integers, of length `NB`. In each position, `SB[ib]` is the number of fields that will be assembled in that block.
3) `P` :: Tuple of integers, of length the number of fields. This represents a field permutation, such that the fields will be reordered as `[P[1],P[2],...,P[n]]`.

Using these three parameters, one can assemble an arbitrary number of fields into any number of blocks.

**Example**: Consider we are solving an MHD problem with variables `(u,p,j,q)`, i.e. (fluid velocity, fluid pressure, magnetic current, electric potential). Although the variables appear in this specific order in the `MultiFieldFESpace`, we want to build a block preconditioner that solves `(u,j)` together in a single block, and `p` and `q` separately in two other blocks. We then need to assemble our system using `NB=3`, `SB=(2,1,1)` and `P=(1,3,2,4)`.
With this configuration, we will create 3 blocks. The first block will have size 2 and hold variables `[P[1],P[2]] = [1,3] = [u,j]`. The second block will have size 1 and hold variables `[P[3]] = [2] = [p]`. Finally, the third block will hold variables `[P[4]] = [4] = [q]`.
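In code, a minimal sketch of this configuration could look as follows (assuming `U`, `P`, `J`, `Q` are the trial FE spaces for the four variables, and that `NB`, `SB` and `P` are passed positionally to the constructor):

```julia
# U, P, J, Q: hypothetical trial FE spaces for (u,p,j,q)
mfs = BlockMultiFieldStyle(3,(2,1,1),(1,3,2,4))   # NB=3, SB=(2,1,1), P=(1,3,2,4)
X   = MultiFieldFESpace([U,P,J,Q];style=mfs)      # blocks: [u,j], [p], [q]
```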

In terms of implementation, everything is the same. We use `ArrayBlockViews` (which is a view counterpart of `ArrayBlock`) so that an array of `NBxNB` array builders / array counters can be indexed using the field indexes. This allows us to use the same dispatches as we had in part A.
2 changes: 0 additions & 2 deletions docs/src/getting-started.md
@@ -20,5 +20,3 @@ For further information about how to install and manage Julia packages, see the
## Further steps

We recommend following the [Gridap Tutorials](https://gridap.github.io/Tutorials/dev/) to get familiar with the library.


2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -52,6 +52,6 @@ Pages = [
"Visualization.md",
"FESpaces.md",
"MultiField.md",
"Adaptivity.md",
]
```
