\section{Introduction}
\label{sec:intro}

% what is the problem area you are considering?
% what is already known and what is unknown?
%   need to review DNN PDE solvers here
%   what are the outstanding problems?
%     accuracy, cost
%
% what is this paper about?
%   addressing accuracy and computational cost
%   build upon ELM idea
%   need to review ELM and related work here
%
% what is new? what are your contributions?
% what are your results?
% what are the implications of your results?
% how is the paper organized?
%
% Questions:
%  is it new?
%  does it work?
%  does it work better?
%  is it important?

% logic:
% (1) DNN-based PDE solvers:
%     achievements
%     weaknesses: low accuracy, high computational cost
% (2) Goal: to address accuracy and cost issues
%           develop method --> high accuracy, low computational cost
%     Approach: domain decomposition, local extreme learning machine
%               training by least squares computation
% (3) Related ELM works
% (4) What is new about this paper?

Neural network based numerical methods,
especially those based on deep learning~\cite{GoodfellowBC2016},
have attracted a significant amount of research in the past few years
for simulating the governing partial differential equations (PDE) of physical
phenomena. These methods provide a new way of approximating the field solutions,
in the form of deep neural networks (DNN), which differs from the ansatz spaces
of traditional numerical methods such as finite difference or finite element techniques.
This can be a promising approach, potentially more effective and more efficient
than the traditional methods, for solving the governing PDEs
of scientific and engineering importance.
DNN-based methods solve the PDE by transforming the solution finding problem
into an optimization problem.
They typically parameterize the PDE solution by the
training parameters in a deep neural network, in light of the universal approximation
property of DNNs~\cite{HornikSW1989,HornikSW1990,Cotter1990,Li1996}.
Then these methods attempt to minimize a loss function consisting of
the residual norms of the governing equations and of the associated boundary
and initial conditions,
% in strong or weak forms,
typically by some flavor of gradient-descent technique (with the gradients computed
by back propagation~\cite{Werbos1974,Haykin1999}).
This process constitutes the predominant computations in the DNN-based PDE
solvers, commonly known as the training of the neural network.
Upon convergence of the training process,
the solution is represented by the neural network,
with the training parameters set according to their converged values.
Several successful DNN-based PDE solvers have emerged 
in the past years,
such as the deep Galerkin method (DGM)~\cite{SirignanoS2018},
the physics-informed neural network (PINN)~\cite{RaissiPK2019}, and
related approaches
(see e.g.~\cite{LagarisLF1998,LagarisLP2000,RuddF2015,EY2018,HeX2019,ZangBYZ2020,Samaniegoetal2020,Xu2020},
among others).
Neural network-based PDE solutions are smooth analytical functions,
depending on the activation functions used therein.
The solution and its derivatives can be computed exactly,
by evaluation of the neural network or by auto-differentiation~\cite{BaydinPRS2018}.
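As a deliberately simplified illustration of this solution paradigm (our own sketch, not the method of this paper), the following parameterizes the solution of a one-dimensional Poisson problem by a tiny one-hidden-layer tanh network, forms the residual-plus-boundary loss at collocation points, and takes one gradient-descent step; the network width, collocation points, and step size are all arbitrary illustrative choices, and a finite-difference gradient stands in for auto-differentiation:

```python
import numpy as np

# Illustrative sketch: parameterize the solution of -u''(x) = f(x) on (0,1),
# u(0) = u(1) = 0, by a one-hidden-layer tanh network, and minimize the
# residual-plus-boundary loss by gradient descent. Manufactured solution:
# u(x) = sin(pi*x), so f(x) = pi^2*sin(pi*x).

rng = np.random.default_rng(0)
M = 20                                    # hidden-layer width
w1, b1 = rng.normal(size=M), rng.normal(size=M)
w2 = 0.1 * rng.normal(size=M)             # output weights (the trainables here)

def u(x, w2):
    return np.tanh(np.outer(x, w1) + b1) @ w2

def upp(x, w2):                           # u''(x), computed analytically
    t = np.tanh(np.outer(x, w1) + b1)
    return (-2.0 * t * (1.0 - t**2) * w1**2) @ w2

x_in = np.linspace(0.0, 1.0, 50)          # interior collocation points
f = np.pi**2 * np.sin(np.pi * x_in)

def loss(w2):
    res = -upp(x_in, w2) - f              # PDE residual at collocation points
    bc = u(np.array([0.0, 1.0]), w2)      # boundary-condition residual
    return np.mean(res**2) + np.mean(bc**2)

# one gradient-descent step (finite-difference gradient stands in for autodiff)
eps, lr = 1e-6, 1e-6
grad = np.array([(loss(w2 + eps*e) - loss(w2 - eps*e)) / (2*eps)
                 for e in np.eye(M)])
w2_new = w2 - lr * grad
```

In DGM/PINN-type solvers the network is deep, the derivatives come from auto-differentiation, and the optimizer runs for many iterations; the sketch only exhibits the structure of the loss being minimized.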

% issues with DNN-based solvers:
%   accuracy, cost, lack of sense of convergence
%   cannot compete with classical numerical methods

While promising, DNN-based PDE solvers, in their current
state, suffer from a number of limitations that make them
numerically less than satisfactory and
computationally uncompetitive.
The first limitation is the solution accuracy of DNN-based
methods~\cite{JagtapKK2020}.
A survey of the related literature indicates that the absolute error of
current DNN-based methods generally sits at, and rarely falls below,
the level of $10^{-3}\sim 10^{-4}$.
Increasing the resolution or the number of training epochs/iterations
does not notably improve this error level.
Such accuracy levels are less than satisfactory for scientific computing,
especially considering that classical numerical methods
can achieve machine accuracy given sufficient mesh resolution and
computation time.
Perhaps because of such limited accuracy levels, DNN-based PDE solvers
generally lack a sense of convergence with a definite convergence rate.
For example, when the number of layers, or the number of nodes within
the layers, or the number of training data points
is varied systematically, one can hardly observe
a consistent improvement in 
the accuracy of the obtained simulation results.
%
Another limitation concerns the computational cost, which is
extremely high for DNN-based PDE solvers.
The neural network of these
solvers takes a considerable amount of time to train, in order to reach a reasonable
level of accuracy.
For example, a DNN-based PDE solver can take hours to train
to reach a certain accuracy, while with a traditional numerical method
such as the finite element method it may take only a few seconds
to produce a solution with the same or better accuracy.
%
Because of their limited accuracy and large computational cost,
there seems to be a general sense that the DNN-based PDE solvers, at least
in their current state, cannot
compete with classical numerical methods, except perhaps for certain problems
such as high-dimensional
PDEs, which can be challenging for classical methods due to
the so-called curse of dimensionality.

% overview of current method

In the current work we concentrate on the accuracy and the computational cost
of neural network-based numerical methods.
%The accuracy and the computational cost of neural network-based methods
%are the focal issues of the current work.
We introduce a
neural network-based method for solving linear and nonlinear
PDEs whose computational performance differs markedly
from that of the above DNN-based PDE solvers.
%For example, 
The current method exhibits a clear sense of convergence with respect to
the degrees of freedom in the system.
Its numerical errors typically decrease exponentially or nearly
exponentially as the
number of degrees of freedom (e.g.~the number of training
parameters, number of training data points) in the network increases.
In terms of accuracy and computational cost,
it exhibits a clear superiority over the commonly used DNN-based PDE solvers.
Extensive comparisons with the deep Galerkin method~\cite{SirignanoS2018}
and the physics-informed neural network~\cite{RaissiPK2019} are presented
in this paper.
The numerical errors, and the network training time,
of the current method are typically orders of magnitude
smaller than those of DGM and PINN.
%while its training time is typically
%orders of magnitude less than those of DGM and PINN.
The computational performance of the current method
is competitive with that of traditional numerical methods.
Extensive comparisons with the classical finite element method (FEM)
are provided. The performance of the current method is on par with,
and often exceeds, the performance of FEM with regard to the
accuracy and computational cost.
For example,
to achieve the same accuracy, the network training time of the current
method is comparable to, and oftentimes smaller than, the FEM computation
time. With the same computational cost (training/computation time),
the numerical errors of 
the current method are comparable to, and oftentimes
markedly smaller than, those of the FEM.

% how did you achieve this?
% how does the method work?

The superior computational performance of the current
method  can be
attributed to several of its algorithmic characteristics:
\begin{itemize}

\item
  Network architecture and training parameters. The current method
  is based on shallow feed-forward neural networks. Here ``shallow''
  refers to a configuration in which the network contains only
  a small number (e.g.~one, two or three) of hidden layers, while the
  last hidden layer can be wide.
  The weight/bias coefficients in all the hidden layers
  are pre-set to random values and are fixed, and they are not
  training parameters. % of the network.
  The training parameters consist of the weight coefficients
  of the output layer.
  %which is assumed to be linear and contain no bias.

\item
  Training method. The network is trained and the values for the training
  parameters are determined by a least squares computation, not
  by the back propagation (gradient descent-type) algorithm.
  For linear PDEs, training the neural network involves a
  linear least squares computation.
  For nonlinear PDEs, the network training involves a nonlinear
  least squares computation.

\item
  Domain decomposition and local neural networks.
  We partition the overall domain into sub-domains, and represent the
  solution on each sub-domain
  locally by a shallow feed-forward neural network. $C^k$ continuity
  conditions, where $k\geqslant 0$ is an integer related
  to the PDE order, are enforced across sub-domain boundaries.
  The local neural networks collectively form a
  multi-input multi-output logical network model, and are trained
  in a coupled way with the linear or nonlinear least squares computation.

\item
  Block time marching. For long-time simulations of
  time-dependent PDEs, the current method adopts a block time-marching strategy.
  The overall spatial-temporal domain is first
  divided into a number of windows in time, referred to as
  time blocks. The PDE is then solved on the spatial-temporal
  domain of each time block, individually and successively.
  Block time marching is crucial to long-time simulations, especially for
  nonlinear time-dependent PDEs.
  
  
\end{itemize}
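To make these ingredients concrete, here is a minimal one-dimensional sketch of our own devising (the paper's formulation targets two spatial dimensions plus time): the domain of $u''(x)=f(x)$ is split into two sub-domains, each sub-domain carries a single-hidden-layer network with random, fixed hidden weights, $C^0$/$C^1$ continuity is imposed at the interface, and the output weights of both local networks are determined by one linear least squares solve. All widths, weight ranges, and collocation counts are illustrative choices.

```python
import numpy as np

# Sketch: u''(x) = f(x) on [0,1], u(0) = u(1) = 0, manufactured solution
# u(x) = sin(2*pi*x). Two sub-domains, one random-feature local network per
# sub-domain, C^0/C^1 continuity at the interface, one least squares solve.

rng = np.random.default_rng(1)
M, Q = 40, 40                       # hidden width, collocation pts per sub-domain
subs = [(0.0, 0.5), (0.5, 1.0)]     # sub-domain partition
W = rng.uniform(-3, 3, (2, M))      # random, fixed hidden weights/biases
B = rng.uniform(-3, 3, (2, M))

def phi(s, x, d=0):
    """Value (d=0), first (d=1) or second (d=2) x-derivative of the
    hidden-layer features of sub-domain s at points x."""
    t = np.tanh(np.outer(x, W[s]) + B[s])
    if d == 0: return t
    if d == 1: return (1 - t**2) * W[s]
    return -2 * t * (1 - t**2) * W[s]**2

exact = lambda x: np.sin(2*np.pi*x)
f = lambda x: -4*np.pi**2 * np.sin(2*np.pi*x)

rows, rhs = [], []
for s, (a, b) in enumerate(subs):   # PDE residual rows per sub-domain
    x = np.linspace(a, b, Q)
    blk = np.zeros((Q, 2*M)); blk[:, s*M:(s+1)*M] = phi(s, x, d=2)
    rows.append(blk); rhs.append(f(x))
for s, xb in [(0, 0.0), (1, 1.0)]:  # Dirichlet boundary rows
    blk = np.zeros((1, 2*M)); blk[:, s*M:(s+1)*M] = phi(s, np.array([xb]))
    rows.append(blk); rhs.append(np.zeros(1))
for d in (0, 1):                    # C^0 and C^1 continuity at x = 0.5
    blk = np.zeros((1, 2*M))
    blk[:, :M] = phi(0, np.array([0.5]), d)
    blk[:, M:] = -phi(1, np.array([0.5]), d)
    rows.append(blk); rhs.append(np.zeros(1))

A = np.vstack(rows); r = np.concatenate(rhs)
beta = np.linalg.lstsq(A, r, rcond=None)[0]   # the only "training" step

xt = np.linspace(0, 1, 201)
uh = np.where(xt <= 0.5, phi(0, xt) @ beta[:M], phi(1, xt) @ beta[M:])
err = np.max(np.abs(uh - exact(xt)))          # max error vs. exact solution
```

Note that the hidden-layer coefficients never change: the entire solve consists of assembling the residual/boundary/continuity rows and calling a least squares routine once.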

% comment on ELMs and domain decompositions etc
% review of ELMs

The idea of random weight/bias coefficients in the network
and the use of linear least
squares method for network training stem from the so-called
extreme learning machines (ELM)~\cite{HuangZS2006,HuangWL2011}.
ELM was developed for single-hidden layer feed-forward neural networks (SLFN),
and for linear problems. It transforms the linear
classification or regression
problem into a system of linear algebraic equations, which is then
solved by a linear least squares method or by
using the pseudo-inverse (Moore-Penrose inverse)
of the coefficient matrix~\cite{GolubL1996}.
ELM is one example of the so-called randomized neural
networks (see e.g.~\cite{PaoPS1994,IgelnikP1995,MaassM2004,JaegerLPS2007,ZhangS2016}),
which can be traced to Turing's unorganized machine and Rosenblatt's
perceptron~\cite{Webster2012,Rosenblatt1958} and have witnessed a revival
in neuro-computations in recent years.
%
% other ELM works in linear DE
% what is the difference here?
% then discuss idea of domain decomposition and local NN
The application of ELM to function approximation and linear differential
equations has been considered in several recent
works~\cite{BalasundaramK2011,YangHL2018,Sunetal2019,PanghalK2020,LiuXWL2020,DwivediS2020}.
%
Domain decomposition has found widespread applications in classical numerical
methods~\cite{SmithBG1996,ToselliW2005,Dong2010,DongS2015,Dong2018}. Its use in neural network-based methods, however,
has been very limited and is very recent
(see e.g.~\cite{LiTWL2020,JagtapKK2020,DwivediS2020}).


% what is new in current method?
% what is different from previous ELM or local NN?

The contribution of the current work lies in several aspects.
A main contribution of this work is the introduction of an ELM-like method
for nonlinear differential equations, based on domain decomposition and local neural networks.
In contrast, existing ELM-based methods for differential equations have been confined to
linear problems, with the neural networks limited to a single hidden layer.
For nonlinear problems, to solve the resultant nonlinear
algebraic system about the training parameters, we have adopted two methods:
(i) a nonlinear least squares method with perturbations (referred to as NLSQ-perturb),
and (ii) a combined Newton/linear least squares method (referred to as Newton-LLSQ).
We find that the random perturbations in the NLSQ-perturb method
are crucial for preventing the method from being trapped in local minima with cost values
exceeding the given tolerance, especially in under-resolved cases and in long-time simulations.
We present an algorithm for effective generation of the random perturbations
for the nonlinear least squares method.
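The restart logic can be sketched as follows (a simplified toy of our own: a basic damped Gauss-Newton loop stands in for the actual nonlinear least squares routine, and the test system, perturbation magnitude, and tolerances are all illustrative): whenever the attained cost still exceeds the tolerance, the current values are randomly perturbed and the solve is restarted from the perturbed guess.

```python
import numpy as np

# Simplified sketch of NLSQ-perturb-style restarts: run a nonlinear least
# squares solve (basic damped Gauss-Newton here); if the attained cost still
# exceeds the tolerance, perturb randomly and solve again. Toy 2x2 system.

def residual(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, np.sin(x) - y])

def jacobian(v, eps=1e-7):                # finite-difference Jacobian
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = eps
        J[:, j] = (residual(v + e) - residual(v - e)) / (2.0 * eps)
    return J

def gauss_newton(v0, iters=100):
    v = v0.astype(float)
    for _ in range(iters):
        r = residual(v)
        if np.linalg.norm(r) < 1e-12:
            break
        step = np.linalg.lstsq(jacobian(v), -r, rcond=None)[0]
        s = 1.0                           # crude backtracking line search
        while s > 1e-4 and np.linalg.norm(residual(v + s*step)) > np.linalg.norm(r):
            s *= 0.5
        v = v + s * step
    return v

rng = np.random.default_rng(3)
tol, delta = 1e-8, 1.0                    # cost tolerance, perturbation size
v = gauss_newton(np.array([0.1, 0.1]))
for _ in range(50):                       # bounded number of restarts
    if np.linalg.norm(residual(v)) <= tol:
        break
    v = gauss_newton(v + delta * rng.uniform(-1.0, 1.0, size=2))
cost = np.linalg.norm(residual(v))        # final attained cost
```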

Another contribution of the current work is the aforementioned
block time-marching scheme for long-time simulations of time-dependent
linear/nonlinear PDEs. When the temporal dimension of the spatial-temporal domain
is large, if the PDE is solved on the entire domain all at once,
we find that the neural network becomes very hard to train
with the ELM algorithm (and also with the back propagation-based algorithms),
in the sense that the obtained solution can contain pronounced errors, especially toward
later time instants in the spatial-temporal domain.
On the other hand, by using the block time-marching strategy
and with a moderate time block size,
the problem becomes much easier to solve and the neural network is
much easier to train with the ELM algorithm.
Accurate results can be attained with
the block time-marching scheme for very long-time simulations.
The block time marching strategy is often crucial to the simulations of
nonlinear time-dependent PDEs when the temporal dimension becomes
even moderately large.
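As a toy illustration of the strategy (our simplification, not the paper's solver: a scalar model problem, with an ELM-style random-feature representation and one linear least squares solve per block; all sizes are arbitrary), the sketch below splits the time interval into blocks and solves them successively, handing each block's final value to the next block as its initial condition:

```python
import numpy as np

# Toy block time marching for du/dt = -u, u(0) = 1, on [0, 10]: on each time
# block the solution is a small random-feature network trained by linear
# least squares; the block's initial value comes from the previous block.

rng = np.random.default_rng(4)
M, Q = 30, 30                         # features / collocation pts per block
nblocks, T = 5, 10.0
edges = np.linspace(0.0, T, nblocks + 1)

u0 = 1.0                              # initial condition for the first block
t_all, u_all = [], []
for a, b in zip(edges[:-1], edges[1:]):
    w = rng.uniform(-2.0, 2.0, M) / (b - a)   # features scaled to the block
    c = rng.uniform(-2.0, 2.0, M)
    t = np.linspace(a, b, Q)
    H = np.tanh(np.outer(t - a, w) + c)       # features in local block time
    Ht = (1.0 - H**2) * w                     # d/dt of the features
    # rows: ODE residual u' + u = 0 at collocation points, then the
    # initial condition u(a) = u0 carried over from the previous block
    A = np.vstack([Ht + H, np.tanh(c)[None, :]])
    r = np.concatenate([np.zeros(Q), [u0]])
    beta = np.linalg.lstsq(A, r, rcond=None)[0]
    t_all.append(t)
    u_all.append(H @ beta)
    u0 = np.tanh((b - a) * w + c) @ beta      # hand off to the next block

uh = np.concatenate(u_all)
err = np.max(np.abs(uh - np.exp(-np.concatenate(t_all))))  # vs. exact exp(-t)
```

Each block is a small, well-conditioned solve in its own local time coordinate, which is the essence of why the marching strategy is easier to train than one solve over the entire temporal domain.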

We would also like to emphasize that, with the current method,
each local neural network is not limited to a single hidden layer,
which is another notable difference from existing ELM-type methods.
Up to three hidden layers in the local neural networks have been tested
in the current paper. We observe that, with one or a small number of
hidden layers in the local neural networks, the current method
produces accurate simulation results.

% numerical tests,
% implementations with tensorflow/keras

Since the current method is a combination of the ideas of ELM, domain decomposition,
and local neural networks, we refer to this method as locELM (local extreme
learning machines) in the current paper.

We have performed extensive numerical experiments with linear and nonlinear, stationary
and time-dependent,
partial differential equations
to test the performance of the locELM method,
and to study the effects of the simulation parameters involved therein.
For certain test problems (e.g.~the advection equation) we present very long-time
simulations to demonstrate the capability and accuracy of the locELM method
together with the block time-marching scheme.
We compare extensively the current locELM method with
the deep Galerkin method~\cite{SirignanoS2018} and the physics-informed neural
network method~\cite{RaissiPK2019}, and demonstrate the superiority of
the current method in terms of both accuracy and the computational cost.
We also compare the current method with the classical finite element method,
and show that the computational performance of the locELM method is comparable to,
and often exceeds, the FEM performance.
The current locELM method, DGM and PINN have all been implemented in Python,
using the TensorFlow (www.tensorflow.org) and Keras (keras.io) libraries.
The finite element method is also implemented in Python,
by using the FEniCS library (fenicsproject.org).

The rest of this paper is structured as follows.
In Section \ref{sec:method} we outline the locELM representation of field functions
based on domain decomposition and local extreme learning machines, and then
discuss how to solve linear and nonlinear differential equations
using the locELM representation %for the field solution
and how to
train the overall neural network by the linear or nonlinear
least squares method. For nonlinear differential equations
we present the NLSQ-perturb method and the Newton-LLSQ method for solving
the resultant nonlinear algebraic system.
For time-dependent PDEs,
we present the block time-marching scheme, and discuss how to employ
the locELM method together with block time marching for long-time
simulations. We primarily use second-order differential equations in
two spatial dimensions,
plus time if the problem is time-dependent,
as examples in the presentation of the locELM method.
In Section \ref{sec:tests} we present extensive numerical
experiments with the linear and nonlinear Helmholtz equations,
the advection equation, the diffusion equation, the nonlinear spring equation,
and the viscous Burgers' equation
to test the performance of the locELM
method.
% and investigate the effects of the simulation parameters.
We compare the locELM method with DGM and PINN, and demonstrate the superiority
of locELM in terms of the accuracy and computational cost.
We also compare locELM with the classical finite element method,
and show that locELM is on par with, and often exceeds,
the FEM in computational performance.
Section \ref{sec:summary} concludes the main presentation with
a number of comments on the characteristics and properties of the current method.
The Appendix summarizes some additional numerical tests
not included in the main text.


