Least-Squares Neural Network (LSNN) Method for Linear Advection-Reaction Equation: Non-Constant Jumps
DOI: https://doi.org/10.4208/ijnam2024-1024
Keywords: Least-squares method, ReLU neural network, linear advection-reaction equation, discontinuous solution.
Abstract
The least-squares ReLU neural network (LSNN) method was introduced and studied in [4, 5] for solving the linear advection-reaction equation with discontinuous solutions. The method is based on an equivalent least-squares formulation, and [5] employs ReLU neural network (NN) functions with $\lceil \log_2(d+1)\rceil + 1$ layers to approximate solutions. In this paper, we show theoretically that the method is also capable of accurately approximating non-constant jumps along discontinuous interfaces that are not necessarily straight lines. The theoretical results are confirmed through multiple numerical examples with $d = 2, 3$ and various non-constant jumps and interface shapes, showing that the LSNN method with $\lceil \log_2(d+1)\rceil + 1$ layers approximates solutions accurately with fewer degrees of freedom than mesh-based methods and without the common Gibbs phenomenon along discontinuous interfaces having non-constant jumps.
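To make the idea of the abstract concrete, the following is a minimal sketch (not the paper's implementation) of the kind of discrete least-squares functional an LSNN-style method minimizes for $\beta\cdot\nabla u + \gamma u = f$ with inflow boundary data $u = g$: the mean-squared PDE residual at interior collocation points plus the mean-squared inflow mismatch. All names, the toy problem, and the finite-difference directional derivative are our own illustrative assumptions; the exact formulation and training procedure are given in [4, 5].

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def nn(x, W1, b1, w2, b2):
    """A tiny one-hidden-layer ReLU network u(x) = w2 . relu(W1 x + b1) + b2 (illustrative only)."""
    return w2 @ relu(W1 @ x + b1) + b2

def directional_derivative(u, x, beta, h=1e-6):
    """Central difference for beta . grad(u) at x; exact for piecewise-linear u away from kinks."""
    return (u(x + h * beta) - u(x - h * beta)) / (2 * h)

def lsnn_loss(u, pts, inflow_pts, beta, gamma, f, g):
    """Discrete least-squares functional: PDE residual + inflow boundary mismatch."""
    pde = np.mean([(directional_derivative(u, x, beta) + gamma * u(x) - f(x)) ** 2
                   for x in pts])
    bc = np.mean([(u(x) - g(x)) ** 2 for x in inflow_pts])
    return pde + bc

# Toy problem on (0,1)^2 with known solution u*(x, y) = relu(x + y - 1),
# representable exactly by the tiny network above.
beta = np.array([1.0, 0.0])
gamma = 1.0
W1 = np.array([[1.0, 1.0]]); b1 = np.array([-1.0])
w2 = np.array([1.0]); b2 = 0.0
u_star = lambda x: nn(x, W1, b1, w2, b2)
f = lambda x: float(x[0] + x[1] - 1 > 0) + relu(x[0] + x[1] - 1)  # beta.grad(u*) + gamma*u*
g = u_star                                                         # inflow data on x = 0

rng = np.random.default_rng(0)
pts = [p for p in rng.uniform(0.0, 1.0, (200, 2)) if abs(p[0] + p[1] - 1) > 1e-3]
inflow = [np.array([0.0, y]) for y in np.linspace(0.0, 1.0, 20)]

print(lsnn_loss(u_star, pts, inflow, beta, gamma, f, g))  # near zero for the exact solution
```

In an actual LSNN computation this functional would be minimized over the network parameters; here the network already represents the exact solution, so the loss is (up to floating-point error) zero, which illustrates why the least-squares value serves as a computable error indicator.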