
Fourier Neural Operator for Parametric Partial Differential Equations, by Zongyi Li et al. #50

Open
QiXuanWang opened this issue Nov 25, 2020 · 0 comments


Link: https://arxiv.org/pdf/2010.08895.pdf
Code: https://github.com/zongyi-li/fourier_neural_operator
Tutorial: https://www.youtube.com/watch?v=IaS72aHrJKE

Published on Oct 20, 2020.
I think the major innovation is that the algorithm uses the FFT to reduce the infinite-dimensional problem to a finite number of Fourier modes, since the inverse FFT can drop the high-frequency components without much loss of accuracy. But I'm not sure about this.
And what is the input, actually? According to the YouTube video, it is the v-t curve at different time steps, or the image for the Navier-Stokes equation.
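To make the mode-truncation idea above concrete, here is a minimal numpy sketch of one spectral convolution in the spirit of the paper: FFT the input, keep only the lowest few frequencies, multiply them by weights, and inverse-FFT back. The `weights` array is a hypothetical stand-in for the learned complex parameters R in the paper, not the actual implementation.

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """Sketch of a spectral convolution: FFT, truncate to `modes`
    low frequencies, apply a linear transform there, inverse FFT."""
    x_ft = np.fft.rfft(x)                      # to Fourier space
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = x_ft[:modes] * weights    # truncate + transform
    return np.fft.irfft(out_ft, n=len(x))      # back to physical space

# Toy usage: a smooth signal is fully captured by a few low modes,
# so truncation loses almost nothing.
n, modes = 64, 8
grid = np.linspace(0, 2 * np.pi, n, endpoint=False)
x = np.sin(grid) + 0.5 * np.cos(2 * grid)
identity_weights = np.ones(modes, dtype=complex)  # identity transform, for illustration
y = spectral_conv_1d(x, identity_weights, modes)
```

With identity weights, `y` reconstructs `x` to floating-point precision, because the signal's energy lives entirely in the kept modes; this is the sense in which high-frequency truncation costs little accuracy for smooth functions.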

Problem:
Many problems in science and engineering involve solving complex partial differential equation (PDE) systems
repeatedly for different values of some parameters. Examples arise in molecular dynamics, micro-mechanics,
and turbulent flows. Often such systems require a fine discretization in order to capture the phenomenon
being modeled. As a consequence, traditional finite element methods (FEM) and finite difference methods
(FDM) are slow and sometimes inefficient.
Machine learning methods hold the key to revolutionizing many scientific disciplines by providing fast
solvers that approximate traditional ones. However, classical neural networks map between finite-dimensional
spaces and can therefore only learn solutions tied to a specific discretization. This is often an insurmountable
limitation for practical applications and therefore the development of mesh-invariant neural networks is
required.

Innovation:
We introduce the Fourier neural operator, a novel deep learning architecture able to
learn mappings between infinite-dimensional spaces of functions; the integral operator is instantiated through
a linear transformation in the Fourier domain as shown in Figure 1 (a).

  • By construction, the method shares the same learned network parameters irrespective of the discretization
    used on the input and output spaces for the purposes of computation.
  • The proposed Fourier neural operator consistently outperforms all existing deep learning methods for
    parametric PDEs. It achieves error rates that are 30% lower on Burgers’ Equation, 60% lower on Darcy
    Flow, and 30% lower on Navier-Stokes (turbulent regime with Reynolds number 10000) (Figure 1 (b)).
    When learning the mapping for the entire time series, the method achieves < 1% error with Reynolds
    number 1000 and 8% error with Reynolds number 10000.
  • On a 256×256 grid, the Fourier neural operator has an inference time of only 0.005s compared to the 2.2s
    of the pseudo-spectral method used to solve Navier-Stokes. Despite its tremendous speed advantage, it
    does not suffer from accuracy degradation when used in downstream applications such as solving a Bayesian
    inverse problem, as shown in Figure 3.
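The discretization-independence claim in the first bullet can be illustrated with a small sketch: because parameters only live on a fixed number of low Fourier modes, the very same weights can be applied to inputs sampled on grids of different resolution. The weight values below are made up for illustration, not taken from the paper.

```python
import numpy as np

def spectral_apply(x, weights):
    """Apply the same low-mode weights to an input of any length.
    The parameter count (len(weights)) is independent of len(x)."""
    modes = len(weights)
    x_ft = np.fft.rfft(x)
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = x_ft[:modes] * weights
    return np.fft.irfft(out_ft, n=len(x))

weights = np.arange(1, 9, dtype=complex)  # 8 "learned" mode weights (made up)

# Same function sampled on a coarse and a fine grid.
coarse = np.sin(np.linspace(0, 2 * np.pi, 64,  endpoint=False))
fine   = np.sin(np.linspace(0, 2 * np.pi, 256, endpoint=False))

y64  = spectral_apply(coarse, weights)   # 64-point output
y256 = spectral_apply(fine,   weights)   # 256-point output, same weights
```

Subsampling the fine-grid output every 4th point recovers the coarse-grid output, which is the mesh-invariance property the paper emphasizes: train at one resolution, evaluate at another.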

We observe that the Fourier neural operator captures global interactions through convolution with low-frequency functions and recovers high-frequency modes through composition with an activation function, allowing
it to approximate functions with slow Fourier mode decay (Section 5). Furthermore, local neural networks fix
the periodic boundary imposed by the inverse Fourier transform, allowing the method to approximate
functions with any boundary conditions.
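A minimal sketch of one full layer in this spirit: a global spectral path plus a pointwise linear path, followed by a nonlinearity. The local path and the activation are what let the output carry high frequencies and deviate from the periodic result of the inverse FFT. All weight values here are hypothetical stand-ins, and ReLU is used in place of the paper's activation for simplicity.

```python
import numpy as np

def fourier_layer(x, spectral_w, local_w, local_b):
    """One Fourier-style layer: spectral convolution (global, low
    modes only) + pointwise linear path, then a nonlinearity."""
    modes = len(spectral_w)
    x_ft = np.fft.rfft(x)
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = x_ft[:modes] * spectral_w
    spectral = np.fft.irfft(out_ft, n=len(x))   # periodic, low-frequency part
    local = local_w * x + local_b               # pointwise, non-periodic part
    return np.maximum(0.0, spectral + local)    # activation reintroduces high modes

x = np.linspace(-1.0, 1.0, 64)                  # non-periodic ramp input
y = fourier_layer(x, np.ones(8, dtype=complex), 0.5, 0.1)
```

The ramp input is deliberately non-periodic: the spectral path alone would wrap it around, while the pointwise path passes the boundary behavior through untouched, matching the remark above about local networks fixing the periodic boundary.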

Our methodology learns a mapping between two infinite-dimensional spaces from a finite collection of observed input-output pairs.
