Volume 28, Issue 5
Finite Neuron Method and Convergence Analysis

Jinchao Xu

Commun. Comput. Phys., 28 (2020), pp. 1707-1745.

Published online: 2020-11

  • Abstract

We study a family of $H^m$-conforming piecewise polynomials based on artificial neural networks, referred to as the finite neuron method (FNM), for the numerical solution of $2m$-th-order partial differential equations in $\mathbb{R}^d$ for any $m,d\ge 1$, and then provide a convergence analysis for this method. Given a general domain $\Omega\subset\mathbb{R}^d$ and a partition $\mathcal{T}_h$ of $\Omega$, it is in general still an open problem how to construct a conforming finite element subspace of $H^m(\Omega)$ that has adequate approximation properties. Using techniques from artificial neural networks, we construct a family of $H^m$-conforming functions consisting of piecewise polynomials of degree $k$ for any $k\ge m$, and we further obtain error estimates when these functions are applied to solve elliptic boundary value problems of any order in any dimension. For example, the error estimate $\|u-u_N\|_{H^m(\Omega)}=\mathcal{O}(N^{-\frac{1}{2}-\frac{1}{d}})$ is obtained for the error between the exact solution $u$ and the finite neuron approximation $u_N$. A discussion is also provided on the differences and relationship between the finite neuron method and finite element methods (FEM). For example, for the finite neuron method the underlying finite element grids are not given a priori, and the discrete solution can be obtained simply by solving a non-linear and non-convex optimization problem. Despite the many desirable theoretical properties of the finite neuron method analyzed in the paper, its practical value requires further investigation, as the aforementioned underlying non-linear and non-convex optimization problem can be expensive and challenging to solve. For completeness and the convenience of the reader, some basic known results and their proofs are included.
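To make the construction concrete, the following is a minimal illustrative sketch (not the paper's implementation) for the simplest case $m=d=1$: the trial function $u_N(x)=\sum_{i=1}^N a_i\,\mathrm{ReLU}(w_i x+b_i)$ is a free-knot piecewise linear, hence $H^1$-conforming, function, and the discrete solution of the model problem $-u''=f$ on $(0,1)$ with $u(0)=u(1)=0$ is obtained by minimizing a boundary-penalized Ritz energy, i.e. exactly the kind of non-linear, non-convex optimization problem mentioned above. The penalty weight, quadrature rule, and optimizer below are illustrative assumptions, not choices prescribed by the paper.

# Illustrative sketch of the finite neuron method for -u'' = f on (0,1),
# u(0) = u(1) = 0 (the case m = d = 1).  The trial function
#     u_N(x) = sum_i a_i * ReLU(w_i * x + b_i)
# is a free-knot piecewise linear (hence H^1-conforming) function; the
# discrete solution minimizes a boundary-penalized Ritz energy.  The
# penalty weight, quadrature, and optimizer are assumptions for this demo.
import numpy as np
from scipy.optimize import minimize

N = 20                                   # number of neurons
xq = np.linspace(0.0, 1.0, 401)          # uniform quadrature points
dx = xq[1] - xq[0]
f = np.pi ** 2 * np.sin(np.pi * xq)      # right-hand side; exact u = sin(pi x)
beta = 1.0e3                             # boundary penalty parameter (assumed)

def network(theta, x):
    """Value and a.e. derivative of the shallow ReLU network at points x."""
    w, b, a = np.split(theta, 3)
    z = np.outer(x, w) + b               # pre-activations, shape (len(x), N)
    u = np.maximum(z, 0.0) @ a
    du = ((z > 0.0) * w) @ a
    return u, du

def energy(theta):
    """Discretized Ritz energy J(u) = int (1/2 |u'|^2 - f u) dx + boundary penalty."""
    u, du = network(theta, xq)
    ritz = np.sum(0.5 * du ** 2 - f * u) * dx
    ub, _ = network(theta, np.array([0.0, 1.0]))
    return ritz + beta * np.sum(ub ** 2)

rng = np.random.default_rng(0)
theta0 = rng.normal(scale=0.5, size=3 * N)
res = minimize(energy, theta0, method="L-BFGS-B")   # non-convex minimization
uN, _ = network(res.x, xq)
err = np.sqrt(np.sum((uN - np.sin(np.pi * xq)) ** 2) * dx)
print(f"L2 error of the finite neuron approximation: {err:.3e}")

For higher-order problems ($m\ge 2$), the same construction would use $\mathrm{ReLU}^k$ activations with $k\ge m$, so that $u_N$ is an $H^m$-conforming piecewise polynomial of degree $k$ as described above.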

  • AMS Subject Headings

65D07, 65D15, 65N22, 65N30

  • Copyright

© Global Science Press

  • BibTex

@Article{CiCP-28-1707,
  author  = {Xu, Jinchao},
  title   = {Finite Neuron Method and Convergence Analysis},
  journal = {Communications in Computational Physics},
  year    = {2020},
  volume  = {28},
  number  = {5},
  pages   = {1707--1745},
  issn    = {1991-7120},
  doi     = {10.4208/cicp.OA-2020-0191},
  url     = {http://global-sci.org/intro/article_detail/cicp/18394.html}
}
  • RIS

TY  - JOUR
T1  - Finite Neuron Method and Convergence Analysis
AU  - Xu, Jinchao
JO  - Communications in Computational Physics
VL  - 28
IS  - 5
SP  - 1707
EP  - 1745
PY  - 2020
DA  - 2020/11
SN  - 1991-7120
DO  - 10.4208/cicp.OA-2020-0191
UR  - https://global-sci.org/intro/article_detail/cicp/18394.html
KW  - Finite neuron method
KW  - finite element method
KW  - neural network
KW  - error estimate
ER  -

  • TXT

Jinchao Xu. (2020). Finite Neuron Method and Convergence Analysis. Communications in Computational Physics. 28 (5). 1707-1745. doi:10.4208/cicp.OA-2020-0191