Published on Oct. 20, 2020
I think the major innovation is that the algorithm uses the FFT to reduce the infinite-dimensional problem to a finite-dimensional one, since the inverse FFT can remove those high-frequency components without much accuracy loss. But I'm not so sure.
And what's the input, actually? According to the YouTube video, it's the v-t curve at different time steps, or the image for the Navier-Stokes equation.
Problem:
Many problems in science and engineering involve solving complex partial differential equation (PDE) systems
repeatedly for different values of some parameters. Examples arise in molecular dynamics, micro-mechanics,
and turbulent flows. Often such systems require a fine discretization in order to capture the phenomenon
being modeled. As a consequence, traditional finite element methods (FEM) and finite difference methods
(FDM) are slow and sometimes inefficient.
Machine learning methods hold the key to revolutionizing many scientific disciplines by providing fast
solvers that approximate traditional ones. However, classical neural networks map between finite-dimensional
spaces and can therefore only learn solutions tied to a specific discretization. This is often an insurmountable
limitation for practical applications, and the development of mesh-invariant neural networks is
therefore required.
Innovation:
We introduce the Fourier neural operator, a novel deep learning architecture able to
learn mappings between infinite-dimensional spaces of functions; the integral operator is instantiated through
a linear transformation in the Fourier domain as shown in Figure 1 (a).
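To make the Fourier-domain linear transformation concrete, here is a minimal 1D sketch in PyTorch. The class name, parameter shapes, and use of `rfft`/`irfft` are illustrative choices of mine; the official repository implements 2D and 3D variants.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT, linear transform on the lowest `modes`
    frequencies, inverse FFT. A sketch, not the repository's exact code."""

    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency modes kept after truncation
        scale = 1.0 / (channels * channels)
        # One learned complex (channels x channels) matrix per retained mode.
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                 # x: (batch, channels, grid_points)
        x_ft = torch.fft.rfft(x)          # real FFT along the spatial axis
        out_ft = torch.zeros_like(x_ft)   # higher frequencies stay zero (truncated)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        # Back to physical space on the same grid the input was sampled on.
        return torch.fft.irfft(out_ft, n=x.size(-1))
```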
By construction, the method shares the same learned network parameters irrespective of the discretization
used on the input and output spaces for the purposes of computation.
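As a quick illustration of this discretization invariance (the grid sizes and channel counts below are arbitrary), the same sketch layer can be applied to the same kind of function sampled at different resolutions:

```python
layer = SpectralConv1d(channels=32, modes=16)
coarse = torch.randn(4, 32, 64)    # a batch of functions on a 64-point grid
fine = torch.randn(4, 32, 256)     # the same kind of functions on 256 points
# Identical learned parameters, different discretizations of the domain.
print(layer(coarse).shape)         # torch.Size([4, 32, 64])
print(layer(fine).shape)           # torch.Size([4, 32, 256])
```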
The proposed Fourier neural operator consistently outperforms all existing deep learning methods for
parametric PDEs. It achieves error rates that are 30% lower on Burgers' equation, 60% lower on Darcy
Flow, and 30% lower on Navier-Stokes (turbulent regime with Reynolds number 10000) (Figure 1 (b)).
When learning the mapping for the entire time series, the method achieves < 1% error with Reynolds
number 1000 and 8% error with Reynolds number 10000.
On a 256×256 grid, the Fourier neural operator has an inference time of only 0.005s compared to the 2.2s
of the pseudo-spectral method used to solve Navier-Stokes. Despite its tremendous speed advantage, it
does not suffer from accuracy degradation when used in downstream applications such as solving a Bayesian
inverse problem, as shown in Figure 3.
We observe that the Fourier neural operator captures global interactions through convolution with
low-frequency functions and recovers high-frequency modes through composition with an activation function,
allowing it to approximate functions with slow Fourier mode decay (Section 5). Furthermore, local neural
networks fix the periodic boundary imposed by the inverse Fourier transform, allowing the method to
approximate functions with arbitrary boundary conditions.
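A minimal sketch of that composition, reusing the `SpectralConv1d` above; the 1×1 convolution stands in for the local linear transform, and ReLU is a placeholder for whichever activation the authors use:

```python
class FourierBlock(nn.Module):
    """Spectral convolution (global, inherits FFT periodicity) plus a
    pointwise local path that makes no periodicity assumption."""

    def __init__(self, channels, modes):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)             # global interactions
        self.local = nn.Conv1d(channels, channels, kernel_size=1)   # local linear transform

    def forward(self, x):
        # The nonlinearity reintroduces high-frequency content removed by the
        # mode truncation; the local path corrects the periodic boundary.
        return torch.relu(self.spectral(x) + self.local(x))
```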
Our methodology learns a mapping between two infinite-dimensional spaces from a finite collection of observed input-output pairs.
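Concretely, that training is plain supervised regression on the observed pairs. A hedged sketch reusing the blocks above (the random data, optimizer settings, and MSE loss are placeholder choices, not necessarily the paper's exact setup):

```python
model = FourierBlock(channels=32, modes=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair: `a` is the input function (e.g. a PDE coefficient or initial
# condition) and `u` the solution, both sampled on some grid.
pairs = [(torch.randn(4, 32, 64), torch.randn(4, 32, 64)) for _ in range(8)]

for a, u in pairs:                    # a finite collection of observations
    loss = torch.nn.functional.mse_loss(model(a), u)
    opt.zero_grad()
    loss.backward()
    opt.step()
```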
Link: https://arxiv.org/pdf/2010.08895.pdf
Code: https://github.com/zongyi-li/fourier_neural_operator
Tutorial: https://www.youtube.com/watch?v=IaS72aHrJKE