Different results of serial and parallel for DG transport equation with upwinding #2656

Asked by lrtfm in Firedrake support. Answered by knepley.
I implemented an Euler scheme for the transport equation, following the code in the demo https://www.firedrakeproject.org/demos/DG_advection.py.html#id5. The serial run gives a reasonable error, but the parallel result is wrong; see the output below.

```
(firedrake-mini-petsc) [firedrake-env] [z2yang@ws2 001_dg_transport]$ mpiexec -n 2 python test_transport.py
error = 3.383363484335868e+75
(firedrake-mini-petsc) [firedrake-env] [z2yang@ws2 001_dg_transport]$ mpiexec -n 1 python test_transport.py
error = 0.0009364615351207077
```

Code:

```python
from firedrake import *
from firedrake.petsc import PETSc
import numpy as np


def printf(*args):
    PETSc.Sys.Print(*args)


def solve_transport(m, T=1/64, M=128, p=1):
    dt = T/M
    tau = Constant(dt)
    t = Constant(0)
    t_n = Constant(0)

    x, y = SpatialCoordinate(m)
    u = as_vector([1, 1])
    rho_exact = 2 + sin(x + y - 2*t)
    rho_n_in = 2 + sin(x + y - 2*t_n)

    V = FunctionSpace(m, 'DG', p)
    rho, w = TrialFunction(V), TestFunction(V)
    rho_n = Function(V, name='rho_n')

    n = FacetNormal(m)
    un = (abs(dot(u, n)) + dot(u, n))/2
    common = inner(rho_n('+')*un('+') + rho_n('-')*un('-'), w('+') - w('-'))*dS \
        - inner(rho_n*u, grad(w))*dx
    bdy_formula = inner(rho_n_in*dot(u, n), w)*(ds(1) + ds(3)) \
        + inner(rho_n*dot(u, n), w)*(ds(2) + ds(4))
    a = 1/tau*inner(rho - rho_n, w)*dx + common + bdy_formula

    sol = Function(V, name='rho_h')
    prob = LinearVariationalProblem(lhs(a), rhs(a), sol)
    params = {'ksp_type': 'preonly', 'pc_type': 'bjacobi', 'sub_pc_type': 'ilu'}
    solver = LinearVariationalSolver(prob, options_prefix='', solver_parameters=params)

    t.assign(0)
    t_n.assign(0)
    rho_n.interpolate(rho_exact)
    for i in range(M):
        t_n.assign(i*dt)
        solver.solve()
        rho_n.assign(sol)

    t.assign(T)
    err = errornorm(rho_exact, sol)
    return rho_n, err


def test_transport():
    T = 1
    N = 32
    m = RectangleMesh(N, N, 1, 1)
    rho_new, err = solve_transport(m, T=T, M=16*32)
    printf('error = ', err)


if __name__ == '__main__':
    test_transport()
```
Answered by knepley (Nov 28, 2022); answer selected by lrtfm:
```
-ksp_type gmres
-ksp_monitor_true_residual
-pc_type lu -pc_factor_mat_solver_type mumps
```

That way it is solved exactly in parallel as well.
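For reference, the same recommendation can be expressed as Firedrake `solver_parameters` rather than command-line flags. This is a minimal sketch, assuming the question's script (and its `prob` object) is otherwise unchanged and that the PETSc build includes MUMPS:

```python
# Direct solve via LU/MUMPS with GMRES wrapped around it and true-residual
# monitoring, mirroring the suggested options.
params = {
    'ksp_type': 'gmres',
    'ksp_monitor_true_residual': None,   # flag-style option, enabled by passing None
    'pc_type': 'lu',
    'pc_factor_mat_solver_type': 'mumps',
}
solver = LinearVariationalSolver(prob, options_prefix='', solver_parameters=params)
```

Since the solver in the question is created with `options_prefix=''`, the flags should also be picked up when given on the command line, e.g. `mpiexec -n 2 python test_transport.py -ksp_type gmres -pc_type lu -pc_factor_mat_solver_type mumps`.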