
Commit

minor changes in paper.md
danielskatz authored Oct 4, 2024
1 parent 1e204dd commit 070bd46
Showing 1 changed file with 10 additions and 10 deletions.
20 changes: 10 additions & 10 deletions joss_paper/paper.md
@@ -32,16 +32,16 @@ aas-journal: Journal of Open Source Software

## Summary and statement of need

The ever-increasing demand for resolution and accuracy in mathematical models of physical processes governed by systems of Partial Differential Equations (PDEs) can only be addressed using fully-parallel advanced numerical discretization methods and scalable solvers, thus able to exploit the vast amount of computational resources in state-of-the-art supercomputers.
The ever-increasing demand for resolution and accuracy in mathematical models of physical processes governed by systems of Partial Differential Equations (PDEs) can only be addressed using fully-parallel advanced numerical discretization methods and scalable solvers, able to exploit the vast amount of computational resources in state-of-the-art supercomputers.

One of the biggest scalability bottlenecks within parallel Finite Element (FE) codes is the solution of linear systems arising from the discretization of PDEs.
The implementation of exact factorization-based solvers in parallel environments is an extremely challenging task, and even state-of-the-art libraries such as MUMPS [@MUMPS1; @MUMPS2] or PARDISO [@PARDISO] have severe limitations in terms of scalability and memory consumption above a certain number of CPU cores.
Hence the use of iterative methods is crucial to maintain scalability of FE codes. Unfortunately, the convergence of iterative methods is not guaranteed and rapidly deteriorates as the size of the linear system increases. To retain performance, the use of highly scalable preconditioners is mandatory.
For simple problems, algebraic solvers and preconditioners (i.e based solely on the algebraic system) are enough to obtain robust convergence. Many well-known libraries providing algebraic solvers already exist, such as PETSc [@petsc-user-ref], Trilinos [@trilinos], or Hypre [@hypre]. However, algebraic solvers are not always suited to deal with more challenging problems.
The implementation of exact factorization-based solvers in parallel environments is an extremely challenging task, and even state-of-the-art libraries such as MUMPS [@MUMPS1; @MUMPS2] and PARDISO [@PARDISO] have severe limitations in terms of scalability and memory consumption above a certain number of CPU cores.
Hence, the use of iterative methods is crucial to maintain scalability of FE codes. Unfortunately, the convergence of iterative methods is not guaranteed and rapidly deteriorates as the size of the linear system increases. To retain performance, the use of highly scalable preconditioners is mandatory.
For simple problems, algebraic solvers and preconditioners (i.e., those based solely on the algebraic system) are enough to obtain robust convergence. Many well-known libraries providing algebraic solvers already exist, such as PETSc [@petsc-user-ref], Trilinos [@trilinos], and Hypre [@hypre]. However, algebraic solvers are not always suited to deal with more challenging problems.

In these cases, solvers that exploit the physics and mathematical discretization of the particular problem are required. This is the case of many multiphysics problems involving differential operators with a large kernel such as the divergence [@Arnold1] and the curl [@Arnold2]. Examples can be found amongst highly relevant problems such as Navier-Stokes, Maxwell or Darcy. Scalable solvers for this type of multiphysics problems rely on exploiting the block structure of such systems to find a spectrally equivalent block-preconditioner, and are often tied to a specific discretization of the underlying equations.
In these cases, solvers that exploit the physics and mathematical discretization of the particular problem are required. This is the case of many multiphysics problems involving differential operators with a large kernel such as the divergence [@Arnold1] and the curl [@Arnold2]. Examples can be found amongst highly relevant problems such as Navier-Stokes, Maxwell, and Darcy. Scalable solvers for this type of multiphysics problem rely on exploiting the block structure of such systems to find a spectrally equivalent block-preconditioner, and are often tied to a specific discretization of the underlying equations.

As a consequence, high-quality open-source parallel finite element packages like FEniCS [@fenics-book] or deal.II [@dealII93] already provide implementations of several state-of-the-art physics-informed solvers [@fenics-patch; @dealII-patch]. The Gridap ecosystem [@Badia2020] aims to provide a similar level of functionality within the Julia programming language [@Bezanson2017].
As a consequence, high-quality open-source parallel finite element packages like FEniCS [@fenics-book] and deal.II [@dealII93] already provide implementations of several state-of-the-art physics-informed solvers [@fenics-patch; @dealII-patch]. The Gridap ecosystem [@Badia2020] aims to provide a similar level of functionality within the Julia programming language [@Bezanson2017].

To this end, GridapSolvers is a registered Julia software package which provides highly scalable physics-informed solvers tailored for the FE numerical solution of PDEs on parallel computers within the Gridap ecosystem of packages. Emphasis is put on the modular design of the library, which allows new preconditioners to be easily designed for the user's specific problem.

@@ -58,21 +58,21 @@ GridapSolvers complements GridapPETSc with a modular and extensible interface fo
- A modular, high-level interface for designing block-based preconditioners for multiphysics problems. These preconditioners can be used together with any solver compliant with Gridap's API, including those provided by GridapPETSc (see the sketch after this list).
- A generic interface to handle multi-level distributed meshes, with full support for Adaptive Mesh Refinement (AMR) using p4est [@p4est] through GridapP4est [@gridap4est].
- A modular implementation of Geometric MultiGrid (GMG) solvers [@gmg-book], allowing different types of smoothers and restriction/prolongation operators.
- A generic interface for patch-based subdomain decomposition methods, and an implementation of patch-based smoothers for GMG solvers. Here the term "patch-based" refers to the use of local overlapping subdomains (patches) built by aggregation of cells around a given vertex, face or cell. See [@fenics-patch; @dealII-patch] for more details.
- A generic interface for patch-based subdomain decomposition methods, and an implementation of patch-based smoothers for GMG solvers. Here the term "patch-based" refers to the use of local overlapping subdomains (patches) built by aggregation of cells around a given vertex, face, or cell. See @fenics-patch and @dealII-patch for more details.
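
As an illustration of this modularity, the sketch below nests Krylov solvers, with one solver acting as the preconditioner of another. This is a minimal sketch only: the solver names (`FGMRESSolver`, `CGSolver`, `JacobiLinearSolver`) follow the package's demo driver, and the keyword arguments are assumptions that may differ between GridapSolvers versions.

```julia
using GridapSolvers
using GridapSolvers.LinearSolvers

# Nested composition: a Jacobi-preconditioned CG solver is itself used as the
# preconditioner of an F-GMRES solver (tolerances are illustrative).
pc     = CGSolver(JacobiLinearSolver(); maxiter=10, rtol=1.e-4)
solver = FGMRESSolver(20, pc; rtol=1.e-8)

# Solvers plug into Gridap's standard linear-solver workflow, e.g.:
#   ss = symbolic_setup(solver, A); ns = numerical_setup(ss, A); solve!(x, ns, b)
```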

![GridapSolvers and its relation to other packages in the Julia package ecosystem. In this diagram, each node represents a Julia package, while the (directed) arrows represent relations (dependencies) among packages. Dashed arrows indicate that a package can be used but is not required. \label{fig:packages}](packages.png){ width=50% }

## Demo

The following code snippet shows how to solve a 2D incompressible Stokes cavity problem in a cartesian domain $\Omega = [0,1]^2$. We discretize the velocity and pressure in $H^1(\Omega)$ and $L^2(\Omega)$ respectively, and use the well known stable element pair $Q_k \times P_{k-1}^{-}$ with $k=2$. For the cavity problem, we fix the velocity to $u_t = \hat{x}$ on the top boundary $\Gamma_t = (0,1)\times\{1\}$, and homogeneous Dirichlet boundary conditions elsewhere. We impose a zero-mean pressure constraint to have a solvable system of equations. Given discrete spaces $V \times Q_0$, we find $(u,p) \in V \times Q_0$ such that
The following code snippet shows how to solve a 2D incompressible Stokes cavity problem in a Cartesian domain $\Omega = [0,1]^2$. We discretize the velocity and pressure in $H^1(\Omega)$ and $L^2(\Omega)$, respectively, and use the well-known stable element pair $Q_k \times P_{k-1}^{-}$ with $k=2$. For the cavity problem, we fix the velocity to $u_t = \hat{x}$ on the top boundary $\Gamma_t = (0,1)\times\{1\}$, and impose homogeneous Dirichlet boundary conditions elsewhere. We additionally enforce a zero-mean pressure constraint so that the system of equations is solvable. Given discrete spaces $V \times Q_0$, we find $(u,p) \in V \times Q_0$ such that

$$
\int_{\Omega} \nabla v : \nabla u - (\nabla \cdot v) p - (\nabla \cdot u) q = \int_{\Omega} v \cdot f \quad \forall v \in V_0, q \in Q_0
$$

where $V_0$ is the space of velocity functions with homogeneous boundary conditions everywhere.
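
As a point of reference, a serial Gridap setup for this problem could look like the following sketch. This is not the demo code itself: the actual demo runs in parallel through GridapDistributed and PartitionedArrays, and the mesh size and boundary entity ids used here are assumptions.

```julia
using Gridap
using Gridap.MultiField  # BlockMultiFieldStyle enables block-wise assembly

k = 2
model  = CartesianDiscreteModel((0,1,0,1),(64,64))
labels = get_face_labeling(model)
add_tag_from_tags!(labels,"top",[6])                # entity 6: top edge (assumed)
add_tag_from_tags!(labels,"walls",[1,2,3,4,5,7,8])  # remaining boundary entities

reffe_u = ReferenceFE(lagrangian,VectorValue{2,Float64},k)
reffe_p = ReferenceFE(lagrangian,Float64,k-1;space=:P)  # discontinuous P_{k-1}

V = TestFESpace(model,reffe_u,dirichlet_tags=["walls","top"])
Q = TestFESpace(model,reffe_p,conformity=:L2,constraint=:zeromean)
U = TrialFESpace(V,[VectorValue(0.0,0.0),VectorValue(1.0,0.0)])  # u = x̂ on top
P = TrialFESpace(Q)
X = MultiFieldFESpace([U,P];style=BlockMultiFieldStyle())
Y = MultiFieldFESpace([V,Q];style=BlockMultiFieldStyle())

Ω  = Triangulation(model)
dΩ = Measure(Ω,2*k)
f  = VectorValue(0.0,0.0)
a((u,p),(v,q)) = ∫( ∇(v)⊙∇(u) - (∇⋅v)*p - (∇⋅u)*q )dΩ
l((v,q)) = ∫( v⋅f )dΩ
op = AffineFEOperator(a,l,X,Y)  # block-assembled Stokes system
```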

The system is block-assembled and solved using a flexible Generalised Minimum Residual (F-GMRES) solver, together with a block-triangular Schur-complement-based preconditioner. We eliminate the velocity block and approximate the resulting Schur complement by a pressure mass matrix. A more detailed overview of this preconditioner as well as its spectral analysis can be found in [@Elman2014]. The resulting block structure for the system matrix $\mathcal{A}$ and our preconditioner $\mathcal{P}$ is
The system is block-assembled and solved using a flexible Generalised Minimum Residual (F-GMRES) solver, together with a block-triangular Schur-complement-based preconditioner. We eliminate the velocity block and approximate the resulting Schur complement by a pressure mass matrix. A more detailed overview of this preconditioner as well as its spectral analysis can be found in @Elman2014. The resulting block structure for the system matrix $\mathcal{A}$ and our preconditioner $\mathcal{P}$ is

$$
\mathcal{A} = \begin{bmatrix}
@@ -86,7 +86,7 @@ $$
\end{bmatrix}
$$

with $A$ the velocity laplacian block, and $M$ a pressure mass matrix.
with $A$ the velocity Laplacian block, and $M$ a pressure mass matrix.

Application of the above block-preconditioner requires both diagonal sub-matrices to be solved. The pressure block $M$ is solved using a Conjugate Gradient (CG) solver with a Jacobi preconditioner. The velocity block $A$ is solved by a 2-level V-cycle GMG solver, where the coarsest level is solved exactly on a single processor.
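
Continuing the sketch above, this solver composition might be expressed as follows. Again a hedged sketch: the block-solver names (`LinearSystemBlock`, `BiformBlock`, `BlockTriangularSolver`) and constructor arguments follow the package's demo driver and may differ across versions; a direct `LUSolver` stands in for the 2-level GMG solver of the velocity block, and the sign of the pressure mass form is an assumption that must match the preconditioner $\mathcal{P}$.

```julia
using GridapSolvers
using GridapSolvers.LinearSolvers
using GridapSolvers.BlockSolvers: LinearSystemBlock, BiformBlock, BlockTriangularSolver

solver_u = LUSolver()  # stand-in for the demo's 2-level GMG V-cycle solver
solver_p = CGSolver(JacobiLinearSolver(); maxiter=20, rtol=1.e-6)  # CG + Jacobi

# Diagonal blocks: reuse the velocity block of the assembled matrix, and
# assemble a pressure mass matrix approximating the Schur complement.
blocks = [LinearSystemBlock() LinearSystemBlock();
          LinearSystemBlock() BiformBlock((p,q) -> ∫(-p*q)dΩ, Q, Q)]
P_block = BlockTriangularSolver(blocks, [solver_u, solver_p], [1.0 1.0; 0.0 1.0], :upper)

solver = FGMRESSolver(20, P_block; rtol=1.e-8, verbose=true)
uh, ph = solve(LinearFESolver(solver), op)
```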
The code for this example can be found [here](https://github.com/gridap/GridapSolvers.jl/tree/joss-paper/joss_paper/demo.jl). It is set up to run in parallel with 4 MPI tasks and can be executed with the following command: `mpiexec -n 4 julia --project=. demo.jl`.
