\section{Concluding Remarks}
\label{sec:summary}

% what have you done in this paper?
% how did you do it?
% what are the results?
% what are implications? what is the importance?
%   accuracy
%   computational cost
%   ELM for nonlinear problems
%
% what are the remaining issues?
%   R_m, how to determine R_m for a given new problem?
%   exponential convergence w.r.t. collocation points and training parameters
%   nonlinear least squares, perturbations, convergence to local minimum
%

In this paper we have developed an efficient method
based on domain decomposition and local extreme learning machines (termed locELM)
for solving linear and nonlinear partial differential equations.
The problem domain is partitioned into sub-domains,
and the field solution on each sub-domain is represented
by a local shallow feed-forward neural network,
consisting of a small number (one or more) of
hidden layers.
%The output layer of the local neural networks represents
%the field function to be solved for, and
%is assumed to be linear %(i.e.~no activation function)
%and contain no bias.
%The input layer represents the spatial-temporal coordinates of the problem.
$C^k$ continuity, with $k$ determined by the order of the PDE,
is imposed on the sub-domain boundaries.
The overall neural network constitutes a multi-input multi-output model
consisting of the local neural networks.
%Importantly,
The weight/bias coefficients in the hidden layers
of all the local neural networks are pre-set to random values,
%generated on some interval,
and are fixed throughout the computation.
The training parameters consist of
%in the overall neural network
the weight coefficients in the output layers of
the local neural networks.
%and the number of training parameters
%in each local neural network corresponds to the width the last hidden layer
%of the local neural network.

% what about the collocation points?
% how do you enforce the PDE?

We employ a set of collocation points within each sub-domain, the collection of
which constitutes
the input data to the neural network.
%In the current work we have mostly employed
%a regular uniform distribution for the collocation points in the majority
%of computations. But other distributions for the collocation points have also been
%considered, such as the distributions
%of Gaussian quadrature points  and random points.
%
The PDE is enforced on the collocation points in
each sub-domain, and the  derivatives involved therein are computed
by auto-differentiation. The boundary and initial conditions are enforced on
those collocation points that reside on the spatial
and temporal boundaries of the spatial-temporal domain.
The $C^k$ continuity conditions are enforced on those collocation
points that reside on the corresponding sub-domain boundaries.
%
%The enforcement of the partial differential equation, the boundary/initial conditions,
%and the $C^k$ continuity conditions on the corresponding collocation
%points
These operations result in a system of linear or nonlinear algebraic equations
in terms of the training parameters.
% of the overall neural network.
%
We seek a least squares solution to this system, and
compute this solution by a linear least squares routine for linear systems
or by a nonlinear least squares method for nonlinear ones.
Training the overall neural network
%and the determination of the training parameters therein
thus consists of these linear or nonlinear least squares computations.
%
It should be noted that this training method is different from
back-propagation-type algorithms.
%gradient descent-based training approaches,
%as often found with other related methods such as the deep Galerkin method
%or the physics-informed neural network method, which employ some flavor of the
%stochastic gradient descent or its variants in training the neural networks.
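To make the above ingredients concrete (random fixed hidden-layer coefficients, collocation enforcement of the PDE and boundary conditions, $C^k$ continuity at sub-domain boundaries, and a single linear least squares solve for the output-layer coefficients), the following self-contained sketch applies them to the model problem $u''=f$ on $[0,1]$ with two sub-domains. This is an illustration, not the implementation used in this work: the derivatives of the tanh features are written out analytically rather than computed by auto-differentiation, and all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, Rm = 80, 100, 4.0                  # features / collocation points per sub-domain
subs = [(0.0, 0.5), (0.5, 1.0)]          # two sub-domains of [0, 1]
u_exact = lambda x: np.sin(2 * np.pi * x)
f = lambda x: -4 * np.pi**2 * np.sin(2 * np.pi * x)   # so that u'' = f

# random hidden-layer coefficients, fixed throughout the computation
W = rng.uniform(-Rm, Rm, (2, M))
B = rng.uniform(-Rm, Rm, (2, M))

def feats(e, xv):
    """tanh features of sub-domain e, with their first and second derivatives."""
    th = np.tanh(np.outer(xv, W[e]) + B[e])
    s = 1.0 - th**2
    return th, W[e] * s, W[e]**2 * (-2.0 * th * s)

rows, rhs = [], []
for e, (a, c) in enumerate(subs):        # PDE enforced on the collocation points
    x = np.linspace(a, c, N)
    r = np.zeros((N, 2 * M))
    r[:, e * M:(e + 1) * M] = feats(e, x)[2]
    rows.append(r); rhs.append(f(x))
for e, xb in [(0, 0.0), (1, 1.0)]:       # Dirichlet boundary conditions
    r = np.zeros((1, 2 * M))
    r[:, e * M:(e + 1) * M] = feats(e, np.array([xb]))[0]
    rows.append(r); rhs.append([u_exact(xb)])
for d in (0, 1):                         # C^1 continuity at the interface x = 0.5
    r = np.zeros((1, 2 * M))
    r[:, :M] = feats(0, np.array([0.5]))[d]
    r[:, M:] = -feats(1, np.array([0.5]))[d]
    rows.append(r); rhs.append([0.0])

# training = one linear least squares solve for the output-layer coefficients
beta = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)[0]

xt = np.linspace(0.0, 1.0, 201)          # evaluate the piecewise solution
u = np.empty_like(xt)
for e, (a, c) in enumerate(subs):
    m = (xt >= a) & (xt <= c)
    u[m] = feats(e, xt[m])[0] @ beta[e * M:(e + 1) * M]
err = np.max(np.abs(u - u_exact(xt)))
```

Note that, with the hidden-layer coefficients frozen, the entire training reduces to the single `lstsq` call; no gradient descent is involved.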

%
% what about block time marching?

For  longer-time simulations of time-dependent PDEs,
we have developed a block time-marching scheme together with the locELM method.
The spatial-temporal domain is first divided into a number of windows in time,
referred to as time blocks, and we solve the PDE on each time block
separately and successively. The locELM method is then applied to the spatial-temporal
domain of each time block to compute the solution,
as discussed in the foregoing paragraphs.
We observe that, when the temporal dimension of the domain is large,
the neural network can become very difficult to train
without block time marching.
On the other hand, with block time marching and using a moderate time block size,
the problem is more manageable and much easier to solve. 
Block time marching requires re-training of the overall neural network on
each time block, so all network training becomes an online operation.
This is feasible with the current locELM method thanks to its high accuracy
and low computational cost. % (training time).
We have demonstrated the capability of the current method for long-time dynamic
simulations
with the advection equation.
%We can perform long-time simulations of time-dependent partial differential equations
%with the current method together with the block time marching strategy,
%and have demonstrated this capability with advection equation.
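The block time-marching loop described above can be sketched as follows for the 1D advection equation $u_t + c\,u_x = 0$. This is a hedged illustration under simplifying assumptions: a single global ELM per time block (rather than multiple sub-domains), analytically-written tanh-feature derivatives instead of auto-differentiation, and an illustrative manufactured solution $\sin(\pi(x-ct))$; the final state of each block serves as the initial condition of the next.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1.0
exact = lambda x, t: np.sin(np.pi * (x - c * t))   # manufactured advection solution

M, Rm = 300, 4.0              # random features per block, weight magnitude
Nx = Nt = 25                  # collocation points per direction within a block
blocks = [(0.0, 0.5), (0.5, 1.0)]

x = np.linspace(0.0, 1.0, Nx)
u0 = exact(x, 0.0)            # initial condition for the first time block

for t0, t1 in blocks:         # solve on each time block separately and successively
    t = np.linspace(t0, t1, Nt)
    wx, wt, b = (rng.uniform(-Rm, Rm, M) for _ in range(3))

    def feats(px, pt):
        th = np.tanh(px[:, None] * wx + pt[:, None] * wt + b)
        return th, 1.0 - th**2

    X, T = np.meshgrid(x, t, indexing="ij")
    th, s = feats(X.ravel(), T.ravel())
    A_pde = s * wt + c * (s * wx)             # residual rows of u_t + c u_x = 0
    A_ic = feats(x, np.full(Nx, t0))[0]       # initial condition at t = t0
    A_bc = feats(np.zeros(Nt), t)[0]          # inflow boundary at x = 0
    A = np.vstack([A_pde, A_ic, A_bc])
    rhs = np.concatenate([np.zeros(Nx * Nt), u0, exact(0.0, t)])
    beta = np.linalg.lstsq(A, rhs, rcond=None)[0]   # re-train on this block
    u0 = feats(x, np.full(Nx, t1))[0] @ beta  # final state -> next block's IC

err = np.max(np.abs(u0 - exact(x, 1.0)))
```

Each pass through the loop is a full (online) re-training on one time block; keeping the block size moderate keeps each least squares problem small and well behaved.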


% what are the results?

We have performed extensive numerical experiments to test the locELM method,
%and study the effects of the simulation parameters.
and compared it extensively with the deep Galerkin method (DGM), the
physics-informed neural network (PINN) method, the global ELM, and
the classical finite element method (FEM).
%the often-used deep neural
%network-based PDE solvers such as the deep Galerkin method (DGM) and
%the Physics-Informed Neural Network (PINN). We have also compared the current method,
%which is based on domain decomposition and local extreme learning machines,
%and the global extreme learning machine (i.e.~no domain decomposition, single sub-domain).
We have the following observations:
\begin{itemize}

\item
  %The degrees of freedom in the system (number of sub-domains,
  %collocation points/sub-domain, training parameters/sub-domain) influence
  %the simulation accuracy.
  The locELM method exhibits a clear sense of
  convergence with increasing number of degrees of freedom.
  Its errors typically decrease exponentially or nearly exponentially
  as the number of sub-domains, or the number of collocation points/sub-domain,
  or the number of training parameters/sub-domain increases.
  %generally decrease exponentially with increasing
  %number of collocation points (in each direction) per sub-domain,
  %when the number of sub-domains and the number of training parameters/sub-domain are fixed.
  %The errors generally decrease exponentially with increasing number
  %of training parameters/sub-domain,
  %with the number of sub-domains and the number of collocation
  %points/sub-domain fixed.
  %The errors generally decrease nearly exponentially with increasing number of
  %sub-domains, with the number of collocation points/sub-domain and
  %the number of training parameters/sub-domain fixed.

\item
  The random weight/bias coefficients in the hidden layers of local neural networks
  influence the simulation accuracy.
  In the current work, these weight/bias coefficients are set to uniform random
  values generated on $[-R_m,R_m]$.
  The simulation accuracy tends to decrease
  with very large or very small $R_m$ values. Higher accuracy is generally associated
  with a range of moderate $R_m$ values.
  % (typically around $R_m\approx 1\sim 5$).
  This range of optimal $R_m$ values tends to expand when the number of
  collocation points/sub-domain or the number of training parameters/sub-domain
  increases.

\item
The network training time generally increases
linearly (or super-linearly for some problems) with respect to the number of
sub-domains in the simulation. It also tends to increase with
the number of collocation points/sub-domain and
the number of training parameters/sub-domain, but the relationship
there is less regular.

\item
When the total degrees of freedom (total collocation points,
total training parameters) in the system are fixed,
increasing the number of sub-domains in the simulation, with
the number of collocation points and training parameters per sub-domain
correspondingly reduced,
generally leads to results of comparable accuracy,
but can dramatically reduce the network training time.
  %Note that the configuration of locELM with a single sub-domain corresponds to
  %a global extreme learning machine.
  Compared with global ELM, which corresponds to the
  locELM configuration with a single sub-domain, the use of
  domain decomposition and multiple sub-domains in locELM
  can significantly reduce the network training time, and
  produce results with comparable accuracy.

\item
The current locELM method shows a clear superiority to DGM and PINN,
two commonly-used DNN-based PDE solvers,
in terms of both accuracy and computational cost.
  The numerical errors and the network training time of locELM are considerably smaller,
  typically by orders of magnitude, than those of DGM and PINN.
  %and the network training time of locELM is
  %also considerably smaller, typically by around two orders of magnitude,
  %than those of DGM and PINN.

\item
The current locELM method exhibits a computational performance that is comparable,
and oftentimes superior, to that of the classical finite element method.
  %with regard to the accuracy and computational cost.
  %Extensive numerical experiments demonstrate that,
  With the same
  computational cost, the locELM errors
  are comparable to, and oftentimes considerably smaller than, the FEM errors.
  To achieve the same accuracy,
  the training time of locELM 
  is comparable to, and oftentimes markedly smaller than, the FEM computation time.


\end{itemize}

% further discussion of R_m effect
% further discussion of comparison with single sub-domain, what if number of domains
%     becomes very large? accuracy will deteriorate in this case with total DOF fixed.
% further discussion of perturbation in NLSQ-perturb: effect of degrees of freedom etc

We would like to make some further comments regarding $R_m$,
the maximum magnitude of the random weight/bias coefficients in the hidden layers of
the local neural networks.
As discussed above, the simulation results attain better accuracy
when $R_m$ falls into a range of moderate values for a given problem.
This raises the following question:
given a new problem (e.g.~a new PDE), how do we
find this range of optimal $R_m$ values in practice?
The approximate range of optimal $R_m$ values can be estimated readily
by preliminary numerical experiments. The basic idea is as follows.
Given a new problem, one can always add source terms to the PDE or to
the boundary/initial conditions, and thereby manufacture a solution to
the augmented problem.
One can then use the manufactured solution
to evaluate the accuracy of a set of preliminary simulations in which
$R_m$ is varied systematically.
This provides a reasonable estimate of the range of optimal $R_m$ values.
After that, one can carry out actual simulations of the given problem,
without the added source terms, using an $R_m$ value from
the estimated range.
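This estimation procedure can be sketched as follows for a hypothetical model problem $u''=f$ with the manufactured solution $u(x)=\sin(\pi x)$; the function name, candidate values, and parameters are illustrative, not those of the actual implementation.

```python
import numpy as np

def manufactured_test(Rm, M=60, N=100, seed=0):
    """Solve u'' = f on [0,1] with random tanh features drawn from [-Rm, Rm];
    return the max error against the manufactured solution u(x) = sin(pi x)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-Rm, Rm, M)
    b = rng.uniform(-Rm, Rm, M)
    x = np.linspace(0.0, 1.0, N)[:, None]
    th = np.tanh(x * w + b)
    phi_xx = w**2 * (-2.0 * th * (1.0 - th**2))    # second derivative of features
    A = np.vstack([phi_xx, th[:1], th[-1:]])       # PDE rows + two boundary rows
    rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * x[:, 0]), [0.0, 0.0]])
    beta = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.max(np.abs(th @ beta - np.sin(np.pi * x[:, 0])))

# vary R_m systematically in preliminary simulations ...
candidates = [0.1, 0.5, 1.0, 2.0, 5.0, 20.0, 100.0]
errors = {Rm: manufactured_test(Rm) for Rm in candidates}
# ... and keep the R_m value(s) yielding the smallest error
best = min(errors, key=errors.get)
```

An $R_m$ value near `best` would then be used in the actual simulations of the given problem, without the added source terms.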

% do we need to discuss the effect of sub-domains, when the number of sub-domains
%   becomes very large, while the total DOF is fixed?

Some further comments are also in order concerning the numerical tests
with fixed total degrees of
freedom in the domain, while the number of sub-domains is varied.
Because the total degrees of freedom in the domain are fixed,
the degrees of freedom
(number of collocation points, number of training parameters) per sub-domain
decrease as the number of sub-domains increases.
One can anticipate that, when the number of sub-domains becomes sufficiently large,
the number of degrees of freedom per sub-domain will become very small.
This is bound to adversely affect the simulation accuracy,
because the solution is represented locally by these degrees of
freedom on each sub-domain.
Therefore, if the total degrees of freedom in the domain  are fixed,
when the number of sub-domains in domain decomposition
increases beyond a certain point, the simulation accuracy will
start to deteriorate.
The aforementioned observation of comparable accuracy with an increasing number of
sub-domains, at fixed total degrees of freedom in the domain,
applies to cases where the number of sub-domains is below that point.

% what else to discuss here?

% what are implications? what is the importance?

% scientific machine learning, NN-based PDE solvers,
%   not very accurate (hard to go beyond 10^{-2} or 10^{-3}),
%   too slow, lack of convergence when layers/nodes increases
% accuracy and computational cost of DNNs
% current method --> high accuracy, low cost,
% it can be an alternative to DNN solvers and effective for PDEs in long-time simulations
% general sense that DNN cannot compete with classical numerical methods
%   --> high dimensional PDE with DNN
%
% competitive with FEM. is this the first time this is achieved?
% competitive with classical numerical methods for low-D problems.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{comment}
  
Deep neural network based PDE solvers have stimulated a great deal of
interest in the past few years, as it offers a new way to parameterize
and represent unknown field functions.
The computational performance of these DNN-based methods so far
is promising, but less than satisfying.
A survey of related literature indicates that the error of these
PDE solvers is generally on, and rarely goes beyond,
the level of $10^{-2}\sim 10^{-3}$. The accuracy of such levels may be deemed
remarkable for other application areas such as image classification.
For scientific computing, however, it is less than satisfactory,
especially considering that the classical numerical methods
can achieve the machine accuracy given sufficient mesh resolution and
computation time.
Perhaps because of such poor error levels, a sense of convergence with
a certain rate is generally lacking with the DNN-based PDE solvers.
For example, when the number of layers or the number of nodes within
the layers or the number of training data points
is varied systematically, one can hardly observe
an improvement or a systematic improvement in 
the accuracy of the obtained results.
%
% training time
Another factor concerns the computational cost. The computational cost
of DNN-based PDE solvers is very high, as demonstrated here and
also by other works in the literature. The neural network of these
solvers takes a considerable amount of time to train, in order to reach a reasonable
level of accuracy. 
%
Because of their poorer accuracy and large computational cost,
there seems to be a general sense that these DNN-based PDE solvers cannot
compete with classical numerical methods, except perhaps in certain problems
such as high-dimensional
PDEs which can be challenging to classical methods due to
the so-called curse of dimensionality.

% discussion of importance/implication of current methods
%
% performance on par with classical FEM, considerably superior to DNN-based PDE solvers
% current method is neural-network based

The current locELM method developed herein, also based on neural networks,
exhibits completely different performance characteristics. 
The current method demonstrates a clear sense of convergence,
often with an exponential convergence rate, with respect to the
degrees of freedom in the system such as the number of training parameters and
the number of collocation points.
As demonstrated by ample examples, the accuracy of the current method
is far superior to, typically by orders of magnitude, and its computational cost
is far less, also often by orders of magnitude, than those of the
deep Galerkin method and the Physics-Informed Neural Network, which are both
commonly-used DNN-based PDE solvers.

\end{comment}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

As demonstrated by ample examples in this paper,
the computational performance of the current locELM method is on par with,
and oftentimes exceeds, that of
the classical finite element method.
The importance of this point cannot be overstated. To the best of the authors'
knowledge, this seems to be the first time that a neural network-based method has
delivered the same performance as, or a better performance than, a traditional
numerical method on commonly-encountered computational problems in
low dimensions.
The current method demonstrates the great potential of neural network-based
methods, and perhaps points toward a path forward for them,
to be truly competitive, and to excel, in computational science and engineering
simulations.

% what else to discuss here?







