\begin{abstract}

  We present a neural network-based method for solving linear and
  nonlinear partial differential equations by combining
  the ideas of extreme learning machines (ELM), domain decomposition,
  and local neural networks.
  The field solution on each sub-domain is represented by a local feed-forward
  neural network, and $C^k$ continuity with an appropriate
  integer $k$ is imposed on the sub-domain boundaries.
  Each local neural network consists
  of a small number (one or more) of hidden layers, while its last hidden layer can
  be wide. The weight/bias coefficients in all the hidden layers of the
  local neural networks are
  pre-set to random values and fixed throughout the computation,
  and only the weight coefficients in the output layers of the local
  neural networks are adjustable training parameters.
  The overall neural network is trained by a linear or
  nonlinear least squares computation, rather than by back-propagation-type
  algorithms. We further introduce a block time-marching scheme, used
  together with the presented method, for long-time simulations of
  time-dependent linear/nonlinear partial differential equations.
  The current method exhibits a clear sense of convergence with respect to
  the degrees of freedom in the neural network.
  Its numerical errors typically decrease exponentially or nearly exponentially
  as the number of degrees of freedom (e.g.~the number of training
  parameters, the number of training data points, or the number of sub-domains)
  in the system increases. Extensive numerical experiments have been
  performed to demonstrate the computational performance of the current method
  and to study the effects of the simulation parameters. 
  We also present results demonstrating its capability
  for long-time dynamic simulations on several test problems.
  We compare the presented method with the deep Galerkin method (DGM)
  and the physics-informed neural network (PINN) method in terms of
  accuracy and computational cost. The current method exhibits a clear
  superiority, with its numerical errors and network training time
  considerably smaller (typically by orders of magnitude) than those of DGM and PINN.
  We also compare the current method with the classical finite element method (FEM).
  The computational performance of the current method is on par with,
  and often exceeds, that of FEM in terms of accuracy and
  computational cost. To achieve the same accuracy, the network training time of
  the current method is comparable to, and oftentimes less than,
  the FEM computation time. Under the same
  computational cost (training/computation time),
  the numerical errors of the current method are comparable to, and oftentimes
  markedly smaller than, the FEM errors.

\end{abstract}
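To make the core idea of the abstract concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): an ELM-style solver for the 1D Poisson problem $u''(x)=f(x)$ on $[0,1]$ with homogeneous Dirichlet boundary conditions, using a single domain (no domain decomposition) and a single hidden layer. The hidden-layer weights/biases are set to random values and fixed, and only the output-layer coefficients are computed, by a linear least squares solve. The width, collocation-point count, and random-weight range below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200                          # width of the single hidden layer (assumed)
W = rng.uniform(-5.0, 5.0, M)    # hidden weights: random, then held fixed
b = rng.uniform(-5.0, 5.0, M)    # hidden biases: random, then held fixed

# Collocation points on [0, 1]; exact solution u = sin(pi x), so f = -pi^2 sin(pi x)
x = np.linspace(0.0, 1.0, 100)
f = -np.pi**2 * np.sin(np.pi * x)

def features(x):
    # Hidden-layer output: tanh(W x + b), one column per hidden unit
    return np.tanh(np.outer(x, W) + b)

def features_xx(x):
    # Second derivative of tanh(W x + b) w.r.t. x:
    # d^2/dx^2 tanh(z) = W^2 * (-2 tanh(z) (1 - tanh(z)^2))
    t = np.tanh(np.outer(x, W) + b)
    return (W**2) * (-2.0 * t * (1.0 - t**2))

# Least squares system: PDE residual rows plus two Dirichlet boundary rows
A = np.vstack([features_xx(x), features(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # trainable output-layer weights

u_num = features(x) @ beta
err = np.max(np.abs(u_num - np.sin(np.pi * x)))
print(f"max pointwise error: {err:.2e}")
```

In the paper's method this linear solve is applied per sub-domain with $C^k$ continuity conditions coupling neighboring local networks, and a nonlinear least squares iteration replaces `lstsq` for nonlinear equations; the sketch above shows only the single-domain linear case.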
