All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
- Added missing methods `allocate_in_range` and `allocate_in_domain` for distributed types. Since PR #139.
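A hedged sketch of what these methods enable, assuming `A` is a distributed matrix (e.g., a `PSparseMatrix`) and following the PartitionedArrays naming:

```julia
using LinearAlgebra: mul!

b = allocate_in_range(A)    # vector conformal with the rows of A
x = allocate_in_domain(A)   # vector conformal with the columns of A
fill!(x, 1.0)
mul!(b, A, x)               # distributed matrix-vector product
```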
- Exported the `redistribute` function. Since PR #136.
- Added missing methods for `DistributedTransientFESpace`s. Since PR #135.
- Added support for distributed block-assembly. Since PR #124.
- Added the possibility of using `OwnAndGhostVector` as vector partition for `FESpace` dofs. Since PR #124.
- Implemented `BlockPArray <: AbstractBlockArray`, a new type that behaves as a `BlockArray{PArray}` and fulfills the APIs of both `PArray` and `AbstractBlockArray`. This new type will be used to implement distributed block-assembly. Since PR #124.
- `DistributedMultiFieldFESpace{<:BlockMultiFieldStyle}` now has a `BlockPRange` as gids and a `BlockPVector` as vector type. This is necessary to create consistency between FE space and system vectors, which in turn avoids memory allocations/copies when transferring between FE space and linear system layouts. Since PR #124.
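As an illustration (not part of the changelog), a minimal sketch of how such a block vector might be consumed, assuming `X` is a `DistributedMultiFieldFESpace` with `BlockMultiFieldStyle` and that `zero_free_values` returns a `BlockPVector`:

```julia
using BlockArrays: Block

v  = zero_free_values(X)   # assumed to be a BlockPVector
v1 = v[Block(1)]           # AbstractBlockArray API: the first field block
consistent!(v) |> wait     # PArray API: sync ghost values across parts
```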
- Merged the functionalities of `consistent_local_views` and `change_ghost`. `consistent_local_views` has been removed. `change_ghost` now has two keyword arguments, `is_consistent` and `make_consistent`, that cover all possible use cases. `change_ghost` has also been optimized to avoid unnecessary allocations where possible. Since PR #124.
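For illustration, a hedged sketch of the updated interface, assuming `v` is a `PVector` and `new_ids` the target `PRange` (e.g., the row partition of an assembled system):

```julia
# Change the ghost layout of v to match new_ids. The flags are the new
# keyword arguments: `is_consistent` declares that the input ghost values
# are already up to date; `make_consistent` requests a ghost exchange on
# the output.
w = change_ghost(v, new_ids; is_consistent=false, make_consistent=true)
```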
- Added missing `_get_cell_dof_ids_inner_space` method overload. Since PR #130.
- Added missing `remove_ghost_cells` overload for `AdaptedTriangulation`. Since PR #131.
- Updated compat for FillArrays to v1. Since PR #127.
- Ported the whole library to PartitionedArrays v0.3.x. Since PR #114.
- Added tools for redistributing FE functions among meshes; added mock tests for `RedistributeGlue`. This functionality was already available elsewhere in the Gridap ecosystem of packages (in GridapSolvers.jl in particular). Since PR #114.
- Added a variant of the PartitionedArrays `assemble_coo!` function, named `assemble_coo_with_column_owner!`, which also exchanges the processor column owners of the entries. This variant is required to circumvent the current limitation of GridapDistributed.jl assembly for the case in which the following is not fulfilled: "each processor can determine locally, with a single layer of ghost cells, the global indices and associated processor owners of the rows that it touches after assembly of integration terms posed on locally-owned entities." Since PR #115.
- Added missing parameter to `allocate_jacobian`, needed after Gridap v0.17.18. Since PR #126.
- Reverted some changes introduced in PR #98. Eliminated `DistributedGridapType`. The functions `local_views` and `get_parts` now take arguments of type `Any`. Since PR #117.
- Fixed a bug where operating on three or more `DistributedCellField`s would fail. Since PR #110.
- Added `DistributedCellDof`, a distributed wrapper for `Gridap.CellDof`. This new wrapper acts on a `DistributedCellField` in the same way `Gridap.CellDof` acts on a `CellField`. Also added the `get_fe_dof_basis` function, which extracts a `DistributedCellDof` from a `DistributedFESpace` (see the sketch after this group of items). Since PR #97.
- Added `gather_free_and_dirichlet_values!` and `gather_free_values!` wrapper functions. Since PR #97.
- Added compatibility with MPI v0.20 and PartitionedArrays v0.2.13. Since PR #104.
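A minimal sketch of the `DistributedCellDof` usage described above, assuming `V` is a `DistributedFESpace` and `uh` a `DistributedCellField` (names are illustrative):

```julia
σ = get_fe_dof_basis(V)   # a DistributedCellDof
dof_values = σ(uh)        # acts on the distributed field, mirroring how
                          # Gridap.CellDof acts on a CellField
```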
- `DistributedDiscreteModel` is now an abstract type. The concrete implementation is now given by `GenericDistributedDiscreteModel`. Since PR #98.
- New abstract type `DistributedGridapType`. All distributed structures now inherit from it. It defines two new API methods, `local_views` and `get_parts` (see the sketch after this group of items). Since PR #98.
- Added support for adaptivity. Created `DistributedAdaptedDiscreteModel`. Since PR #98.
- Added `RedistributeGlue`, which allows redistributing model data between different communicators. Since PR #98.
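By way of illustration, a hedged sketch of the `local_views` API, assuming `model` is a `DistributedDiscreteModel` (or any other distributed Gridap object):

```julia
# Map over the sequential Gridap objects owned by each part.
local_cell_counts = map(local_views(model)) do lmodel
  num_cells(lmodel)  # a plain Gridap call on the local model
end
```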
- Support for parallel ODE solvers (GridapDistributed + GridapODEs). Since PR #81.
- Support for parallel interface (surface) coupled problems. Since PR #84.
- Added the missing `zero_dirichlet_values` used in `MultiField.jl`. Since PR #87.
- The model now handles gids of all faces (not only cells), with support for FESpaces on lower-dimensional triangulations. Since PR #86.
- Fixed a minor bug in the definition of the Jacobian of the p-Laplacian problem. Since PR #88.
- Support for periodic boundary conditions for `CartesianDiscreteModel` (see the sketch below). Since PR #79.
- Skeleton of the documentation and some content. Since PR #77.
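A hedged sketch of periodic boundary conditions on a distributed Cartesian model; the `parts`-first constructor and the `isperiodic` keyword are assumptions based on the sequential Gridap API:

```julia
domain = (0, 1, 0, 1)
cells  = (8, 8)
# Periodic in x, non-periodic in y (assumed keyword, as in sequential Gridap)
model = CartesianDiscreteModel(parts, domain, cells; isperiodic=(true, false))
```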
- Added `interpolate_everywhere` and `interpolate_dirichlet` functions. Since PR #74.
- Added `createpvd` and `savepvd` functions to save collections of VTK files. Since PR #71.
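A hedged usage sketch for saving a time series of VTK files; the do-block form, `createvtk`, and the `solutions`/`Ω` names are assumptions based on how the sequential `paraview_collection` API works:

```julia
createpvd(parts, "results") do pvd
  for (t, uh) in solutions   # `solutions` is an illustrative placeholder
    pvd[t] = createvtk(Ω, "results_$t"; cellfields = ["u" => uh])
  end
end
```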
- Visualization of functions and numbers. Since PR #78
- Bug-fix in global dof numbering. Since PR #66
- Raviart-Thomas (RT) FEs in parallel. Since PR #64.
- Added a new overload for `SparseMatrixAssembler` to let one select the local matrix and vector types. Since PR #63.
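A hedged sketch of this overload, assuming trial and test spaces `U` and `V` are already built; the argument order follows the sequential Gridap constructor:

```julia
using SparseArrays: SparseMatrixCSC

Tm = SparseMatrixCSC{Float64, Int}   # local matrix type on each part
Tv = Vector{Float64}                 # local vector type on each part
assem = SparseMatrixAssembler(Tm, Tv, U, V)
```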
- Added `num_cells` method to `DistributedDiscreteModel`. Since PR #62.
This version introduces fully-parallel distributed-memory data structures for all the steps required in a finite element simulation (geometry handling, FE space setup, linear system assembly) except for the linear solver kernel, which is just a sparse LU solver applied to the global system gathered on a master task (and thus obviously not scalable, but very useful for debugging and testing purposes). Parallel solvers are available in the GridapPETSc.jl package. The distributed data structures in GridapDistributed.jl mirror their counterparts in the Gridap.jl software architecture and implement most of their abstract interfaces. This version of GridapDistributed.jl relies on PartitionedArrays.jl (https://github.com/fverdugo/PartitionedArrays.jl) as its distributed linear algebra backend (global distributed sparse matrices and vectors).
More details can also be found in #39.
A changelog is not maintained for this version.
This version, although functional, is fully deprecated.