\section{Numerical Examples}
\label{sec:tests}

% overview of numerical tests
% how does locELM compare with ELM and DNN? any advantage?
% what is the accuracy?
% what is the computational cost?
% can you say something about the implementations of locELM?
% linear / nonlinear equations
% 1D/2D in space plus time
% long-time simulations for time-dependent problems
%

% how to define computational cost? what does it include?
% how did you do the FEM in general? what does its computational cost include?
% conditions for comparison:
%  FEM JIT compiled; locELM not
%  FEM also implemented in Python

In this section we present a number of numerical
examples to test the locELM method developed here.
These examples pertain to stationary and time-dependent,
linear and nonlinear differential equations.
They are in general one- or
two-dimensional (1D/2D) in space, plus time for the time-dependent problems.
For certain problems (e.g.~the advection equation) we provide results
from long-time simulations, to demonstrate the capability
of the locELM method combined with the block time-marching scheme.
We employ $\tanh$ as the activation function in all the local
neural networks of this section.

In our discussion we focus on the
accuracy and the computational cost.
For locELM, the computational cost here refers to the total training time
of the overall neural network, which includes the time for computing
the output functions of the last hidden layer and their derivatives
(e.g.~$V_j^{e_{mnl}}$, $\frac{\partial V_j^{e_{mnl}}}{\partial x}$, etc.),
the time for computing the coefficient matrix and the right-hand side
of the least squares problem, and the time for solving the linear/nonlinear
least squares problem. It does not include the post-training evaluation
of the neural network on a set of given points
for outputting the solution data.
The timing data is collected using the ``timeit'' module in Python.
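As a concrete illustration of the timing methodology, the following minimal sketch shows how a wall-clock measurement can be collected with the ``timeit'' module; the \texttt{train\_step} callable below is a stand-in of ours, not the actual training code.

```python
import timeit

def train_step():
    # Stand-in workload for the actual training computation
    # (feature evaluation, matrix assembly, least squares solve).
    return sum(i * i for i in range(10000))

# Total wall-clock seconds for 5 invocations of train_step.
elapsed = timeit.timeit(train_step, number=5)
```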

We compare the current locELM method with
the deep Galerkin method (DGM)~\cite{SirignanoS2018}
and the physics-informed neural network (PINN) method~\cite{RaissiPK2019},
which are both based on deep neural
networks (DNN), in terms of the accuracy and the neural-network training time.
The DGM and PINN are trained using both the Adam~\cite{KingmaB2014}
and the L-BFGS~\cite{NocedalW2006} optimizers. For L-BFGS, we have employed
the routine available from the Tensorflow-Probability library
(www.tensorflow.org/probability).
For DGM and PINN, the training time refers to the time interval
between the start and the end of the Adam or L-BFGS training loop
for a given number of epochs/iterations.
The locELM, the DGM and the PINN methods
are all implemented in Python
with the Tensorflow (www.tensorflow.org) and Keras (keras.io) libraries.

% about FEM comparison

Additionally, we compare the locELM method with the classical
finite element method (linear elements, which are second-order accurate),
in terms of the accuracy and the computational cost.
For the numerical tests reported below,
the finite element method (FEM) is also implemented in Python,
using the FEniCS library (fenicsproject.org).
When the FEM code is run for the first time, the
FEniCS library uses Just-In-Time (JIT) compilers to translate
certain key finite element operations in the Python code into C++ code,
which is in turn compiled by the C++ compiler and then cached.
This is done only once: the first run of the FEM code is slower
because of the JIT compilation, but subsequent runs are much faster.
For FEM, the computational cost here refers to the computation time
collected using the ``timeit'' module
after the code has been compiled by the JIT compilers.
All the timing data with the locELM, DGM, PINN and FEM methods
is collected on a Mac computer ($3.2$GHz Intel Core i5
CPU, $24$GB memory) at the authors' institution.

% overview of tests
% discuss the tests in the Appendix, for effects of the collocation point
%   distribution, number of hidden layers of locELM with
%   1D Helmholtz equation, and 2nd-order wave equation


%and demonstrate the superiority
%of the current method in terms of both the accuracy and
%the computational cost (training time).
%We also compare the current locELM method with the global ELM
%method (i.e.~with one sub-domain), and show that under the same
%total degrees of freedom the locELM method is computationally
%considerably cheaper (less training time) while achieving comparable
%accuracy in the simulation results.
%The locELM method and the DGM in all the tests below are implemented
%in Tensorflow (www.tensorflow.org), Keras (keras.io), and Python. 


%\subsection{Function Approximations}
% function approximation in 1D and 2D
% effect of types of collocation points: quadrature, uniform, random


%\subsection{Helmholtz Equation}
\input Helm1d


\subsection{Advection Equation}
%\subsubsection{Advection Equation}

We next test the locELM method using the advection equation
%(first-order wave equation)
in one spatial dimension plus time, and we will
demonstrate the capability of the method, when combined with the block
time-marching strategy, for long-time simulations.
Consider the spatial-temporal domain,
$\Omega=\{(x,t)\ |\ x\in[a_1,b_1], \ t\in[0,t_f]  \}$,
and the initial/boundary-value problem with the advection equation
on this domain,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial u}{\partial t} - c\frac{\partial u}{\partial x} = 0,
    \label{wav1_1} \\
    &
    u(a_1,t) = u(b_1,t), \\
    &
    u(x,0) = h(x) \label{wav1_2}
  \end{align}
\end{subequations}
where $u(x,t)$ is the field function to be solved for,
the constant $c$ denotes the wave speed, and we impose the periodic boundary condition
on the spatial domain boundaries
$x=a_1$ and $b_1$.
$h(x)$ denotes
the initial wave profile given by
\begin{equation}\label{eq_wav1_ic}
  h(x) = 2\sech\left[\frac{3}{\delta_0}\left(x-x_0 \right)\right],
\end{equation}
where $x_0$ is the peak location of the wave and $\delta_0$ is a constant
that controls the width of the wave profile.
The above equations and the domain specification contain several constant parameters,
and we employ the following values in this problem,
\begin{equation}\label{eq_wav1_par}
  a_1 = 0, \quad
  b_1 = 5, \quad
  c = -2, \quad
  \delta_0 = 1, \quad
  x_0 = 2.5, \quad
  t_f=2, \ \text{or}\ 10,\ \text{or}\ 100.
\end{equation}
The temporal domain size $t_f$ is varied in different tests  and will be specified in
the discussions below.
This problem has the following solution
\begin{equation}\label{wav1_3}
  u(x,t) = 2\sech\left[\frac{3}{\delta_0}\left(-\frac{L_1}{2}+ \xi \right)  \right], \quad
  \xi = \bmod\left(x-x_0+ct+\frac{L_1}{2}, L_1\right), \quad
  L_1 = b_1 - a_1,
\end{equation}
where $\bmod$ denotes the modulo operation.
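For reference, the exact solution \eqref{wav1_3} can be evaluated with the following minimal Python sketch (the function name and argument defaults are ours; the parameter values follow \eqref{eq_wav1_par}):

```python
import numpy as np

def exact_solution(x, t, a1=0.0, b1=5.0, c=-2.0, delta0=1.0, x0=2.5):
    """Exact advection solution: a sech pulse transported periodically
    on [a1, b1].  sech(z) = 1/cosh(z)."""
    L1 = b1 - a1
    xi = np.mod(x - x0 + c * t + L1 / 2.0, L1)
    return 2.0 / np.cosh(3.0 / delta0 * (-L1 / 2.0 + xi))
```

Note that the solution is periodic in time with period $L_1/|c| = 2.5$ for the parameter values of \eqref{eq_wav1_par}.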


\begin{figure}
  \centerline{
    \includegraphics[height=2.5in]{Figures/Wave/wave_soln_locelm_8elem_colloc20sq_randmag1.0_10tblocks.pdf}(a)
    \includegraphics[height=2.5in]{Figures/Wave/wave_error_locelm_8elem_colloc20sq_randmag1.0_10tblocks.pdf}(b)
  }
  \caption{
    Advection equation: Distributions of (a) the locELM solution
    and (b) its absolute error in the spatial-temporal plane.
    The temporal domain size is $t_f=10$, and
    $10$ time blocks are used in the simulation.
    %spatial-temporal domain: [0,5]x[0,10]. 10 time blocks.
    %In each time block,
    %4x2=8 uniform sub-domains. 20x20 uniform collocation points
    %in each sub-domain. local ELM structure: [2, 300, 1], i.e. 300
    %training parameters/sub-domain. rand-mag = 1.0.
    %analytic solution:
    %$u=2\sech\left[3(x-x_0+ct)\right]$,
    %where  $c=-2$ and $x_0=2.5$.
  }
  \label{fg_wav1_1}
\end{figure}

% how to simulate the problem?

We simulate this problem using the locELM method together with the
block time-marching strategy
from Section \ref{sec:unsteady}, by restricting the method to one spatial dimension.
We divide the overall spatial-temporal domain
into $N_b$ uniform blocks along the temporal direction, with a time block size
$\Gamma = \frac{t_f}{N_b}$. The spatial-temporal domain of each time block is then partitioned
into $N_x$ uniform sub-domains along the $x$ direction and $N_t$ uniform sub-domains in time,
leading to $N_e=N_xN_t$ uniform sub-domains in each time block.
$C^0$ continuity is imposed on the sub-domain boundaries in both the $x$ and $t$ directions.
Within each sub-domain, let $Q_x$ denote the number of uniform collocation points along
the $x$ direction and $Q_t$ denote the number of uniform collocation points in time,
leading to $Q=Q_xQ_t$ uniform collocation points in each sub-domain.
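The domain decomposition just described can be sketched as follows (a minimal Python illustration; the function name and the return layout are our own choices):

```python
import numpy as np

def subdomain_collocation(a1, b1, t_lo, t_hi, Nx, Nt, Qx, Qt):
    """Uniform collocation points for the Nx*Nt uniform sub-domains of
    one time block [a1, b1] x [t_lo, t_hi].  Returns a list of (X, T)
    meshgrid pairs, one per sub-domain, each of shape (Qx, Qt).  Points
    on sub-domain boundaries are shared by adjacent sub-domains."""
    x_edges = np.linspace(a1, b1, Nx + 1)
    t_edges = np.linspace(t_lo, t_hi, Nt + 1)
    points = []
    for m in range(Nx):
        for n in range(Nt):
            xs = np.linspace(x_edges[m], x_edges[m + 1], Qx)
            ts = np.linspace(t_edges[n], t_edges[n + 1], Qt)
            X, T = np.meshgrid(xs, ts, indexing="ij")
            points.append((X, T))
    return points

# e.g. one time block of size 1 with Nx=4, Nt=2 and 20x20 points each
pts = subdomain_collocation(0.0, 5.0, 0.0, 1.0, 4, 2, 20, 20)
```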

The local neural network corresponding to each sub-domain contains
an input layer of two nodes (representing $x$ and $t$),
a single hidden layer
with $M$ nodes and the $\tanh$ activation function,
and an output layer (representing the solution $u$) of a single node.
The output layer is linear and contains no bias.
%No activation function is applied to the output layer.
An additional affine mapping,
which normalizes the input $x$ and $t$ data of each sub-domain to
the interval $[-1,1]\times[-1,1]$,
has been incorporated into the local neural networks immediately behind
the input layer. The number of training parameters per sub-domain
is $M$, the width of the hidden layer.
% in the local neural networks.
The weight and bias coefficients in the hidden layer
%of the local neural networks
are pre-set to uniform random values generated on $[-R_m,R_m]$,
as in the previous section.
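The structure of a local network, with the affine input normalization, the random hidden-layer coefficients on $[-R_m,R_m]$, and the derivatives of the hidden-layer outputs needed for assembling the least squares problem, can be sketched as follows (a minimal NumPy illustration of ours, not the paper's TensorFlow implementation):

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed, as in the tests

def make_local_features(x_lo, x_hi, t_lo, t_hi, M=300, Rm=1.0):
    """Hidden-layer feature map of one local network (the function name
    is ours).  The hidden weight/bias coefficients are fixed random
    values on [-Rm, Rm]; only the linear output coefficients are
    trained.  Returns a closure giving V, dV/dx and dV/dt at 1-D arrays
    of points, with the chain rule applied through the affine
    normalization of (x, t) to [-1,1]^2."""
    w = rng.uniform(-Rm, Rm, size=(2, M))  # hidden weights for x and t
    b = rng.uniform(-Rm, Rm, size=M)       # hidden biases
    sx = 2.0 / (x_hi - x_lo)               # d(normalized x)/dx
    st = 2.0 / (t_hi - t_lo)               # d(normalized t)/dt

    def features(x, t):
        xn = sx * (x - x_lo) - 1.0         # affine map into [-1,1]
        tn = st * (t - t_lo) - 1.0
        V = np.tanh(np.outer(xn, w[0]) + np.outer(tn, w[1]) + b)
        dV = 1.0 - V**2                    # derivative of tanh
        return V, dV * w[0] * sx, dV * w[1] * st

    return features

# e.g. one sub-domain of a time block: [0, 1.25] x [0, 0.5]
feats = make_local_features(0.0, 1.25, 0.0, 0.5, M=10)
V, Vx, Vt = feats(np.array([0.3]), np.array([0.2]))
```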


The locELM simulation parameters include
the number of sub-domains ($N_x$, $N_t$, $N_e$), the number of collocation
points per sub-domain ($Q_x$, $Q_t$, $Q$), the number of training parameters
per sub-domain ($M$), and the maximum magnitude of the random coefficients ($R_m$).
The degrees of freedom within a sub-domain are characterized by
%the number of collocation points and the training parameters within the sub-domain
($Q,M$). The degrees of freedom in each time block are characterized by
%the total number of collocation points and the total number of training parameters
($N_eQ$, $N_eM$).
%within the time block.
We use a fixed seed value $1$ for the Tensorflow random number generators in all the tests
of this sub-section, so that all the numerical tests here are repeatable.

Figure \ref{fg_wav1_1} illustrates the solution from the locELM simulation.
Plotted here are the distributions of the locELM solution and its absolute error in
the spatial-temporal plane.
%Here the error is defined as the difference
%between the locELM solution and the exact solution in equation \eqref{wav1_3}.
In this test, the temporal domain size is $t_f=10$, and we  employ
$10$ uniform time blocks ($N_b=10$) in this domain. Within each time block,
we have employed $N_e=8$ uniform sub-domains (with $N_x=4$ and $N_t=2$),
and $Q=20\times 20$ uniform collocation points ($Q_x=Q_t=20$)
in each sub-domain. We employ $M=300$ training parameters per sub-domain,
and $R_m=1.0$ when generating the random weight/bias coefficients.
%for the hidden layers of the  local neural networks.
It is evident that the current method has captured the wave solution accurately. 
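Since the advection equation \eqref{wav1_1} is linear, the training amounts to a single linear least squares solve for the output-layer coefficients. The following self-contained sketch illustrates the idea on a single sub-domain covering one short time block, with reduced parameter values of our own choosing ($M=400$, $30\times 30$ collocation points, $R_m=3.0$); it is an illustration only, not the paper's implementation, which uses multiple sub-domains coupled by $C^0$ continuity conditions:

```python
import numpy as np

# Single-domain ELM solve of u_t - c*u_x = 0 on [0,5] x [0,0.5] with
# periodic BC and the sech initial profile (hypothetical reduced sizes).
a1, b1, tf, c = 0.0, 5.0, 0.5, -2.0
M, Qx, Qt, Rm = 400, 30, 30, 3.0
rng = np.random.default_rng(1)
w = rng.uniform(-Rm, Rm, size=(2, M))   # fixed random hidden weights
b = rng.uniform(-Rm, Rm, size=M)        # fixed random hidden biases
sx, st = 2.0 / (b1 - a1), 2.0 / tf      # input-normalization slopes

def feats(x, t):
    """Hidden-layer outputs and their x/t derivatives at points (x, t)."""
    xn, tn = sx * (x - a1) - 1.0, st * t - 1.0
    V = np.tanh(np.outer(xn, w[0]) + np.outer(tn, w[1]) + b)
    dV = 1.0 - V**2
    return V, dV * w[0] * sx, dV * w[1] * st

def h0(x):  # initial profile h(x) = 2 sech[3(x - x0)]
    return 2.0 / np.cosh(3.0 * (x - 2.5))

xs, ts = np.linspace(a1, b1, Qx), np.linspace(0.0, tf, Qt)
X, T = np.meshgrid(xs, ts, indexing="ij")
x, t = X.ravel(), T.ravel()

V, Vx, Vt = feats(x, t)
A = np.vstack([Vt - c * Vx,                                     # PDE rows
               feats(np.full(Qt, a1), ts)[0]
               - feats(np.full(Qt, b1), ts)[0],                 # periodic BC
               feats(xs, np.zeros(Qx))[0]])                     # IC rows
rhs = np.concatenate([np.zeros(Qx * Qt), np.zeros(Qt), h0(xs)])
beta = np.linalg.lstsq(A, rhs, rcond=None)[0]  # output coefficients

# compare the trained network with the exact advected pulse
u = feats(x, t)[0] @ beta
L1 = b1 - a1
xi = np.mod(x - 2.5 + c * t + L1 / 2, L1)
max_err = np.abs(u - 2.0 / np.cosh(3.0 * (-L1 / 2 + xi))).max()
```

In the actual method the matrix additionally contains rows enforcing $C^0$ continuity across sub-domain boundaries, and the solve is repeated block by block in time.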


\begin{figure}
  \centerline{
    \includegraphics[width=2.in]{Figures/Wave/wave_error_elem_randmag1.0_colloc20sq_trapar300perElem.pdf}(a)
    \includegraphics[width=2.in]{Figures/Wave/wave_error_colloc_8elem_randmag1.0_trapar300perElem_10tblocks.pdf}(b)
    \includegraphics[width=2.in]{Figures/Wave/wave_error_trapar_10tblocks_8elem_randmag1.0_colloc20sq.pdf}(c)
  }
  \caption{Effect of the degrees of freedom
    % per sub-domain
    (advection equation): the maximum and rms errors in the overall domain
    as a function of (a) the number of sub-domains,
    (b) the number of collocation points in each direction
    per sub-domain,
    and (c) the number of training parameters per sub-domain.
    %In these tests $10$ time blocks in the domain and $8$ sub-domains per time block
    %have been used.
    Temporal domain size is $t_f=10$, and $10$ time blocks have been used.
    In (a), the degrees of freedom per sub-domain are fixed.
    In (b) and (c), $N_e=8$ sub-domains per time block are used.
    %In (a) $Q=20\times 20$, $M=300$.
    %In (b) and (c), $N_e=8$ sub-domains, $(N_x,N_t)=(4,2)$.
    %In (b) the number of training parameters/sub-domain is fixed at $M=300$.
    %In (c) the number of collocation points/sub-domain is
    %fixed at $Q=20\times 20$.
  }
  \label{fg_wav1_3}
\end{figure}

The effect of the degrees of freedom  on the simulation accuracy 
is illustrated by Figure \ref{fg_wav1_3}.
In this group of tests, the temporal domain size is fixed at $t_f=10$. 
We have employed $N_b=10$ uniform time blocks within the domain,
one hidden layer in each local neural
network, and $R_m=1.0$
when generating the random weight/bias coefficients for the hidden layers.
%
Figure \ref{fg_wav1_3}(a) illustrates the effect of the number of sub-domains per time
block, when the degrees of freedom per sub-domain are fixed.
Here the number of sub-domains within each time block is varied systematically.
We employ a fixed set of $Q=20\times 20$ uniform 
collocation points per sub-domain ($Q_x=Q_t=20$),
and fix the number of training parameters per sub-domain
at $M=300$.
%and employ $R_m=1.0$ when generating the random coefficients.
%for the hidden layers of the local neural networks.
%Figure \ref{fg_wav1_2}(a)
This plot shows the maximum and rms errors in the domain
as a function of the number of sub-domains per time block.
Here the case with $N_e=2$ sub-domains/time-block corresponds to $(N_x,N_t)=(2,1)$.
The case with $N_e=4$ sub-domains corresponds to $(N_x,N_t)=(2,2)$,
and the case with $N_e=8$
sub-domains corresponds to $(N_x,N_t)=(4,2)$.
It can be observed that, as the number of sub-domains per time block
increases, the errors decrease approximately exponentially,
albeit not entirely regularly.
%Figure \ref{fg_wav1_2}(b) shows the training time as a function of the number of
%sub-domains per time block. With the degrees of freedom fixed within each sub-domain,
%the increase in the network training time is approximately
%quadratic as the number of sub-domains per time block increases.


Figure \ref{fg_wav1_3}(b) shows the maximum and rms errors in the entire
spatial-temporal domain as a function of the number of collocation
points in each direction (with $Q_x=Q_t$ maintained) in each sub-domain.
Figure \ref{fg_wav1_3}(c) shows the maximum and rms errors in the entire domain
as a function of the number of training parameters per sub-domain.
In these tests, %the temporal domain size is set to $t_f=10$, and we have employed
%$10$ time blocks in the spatial-temporal domain,
we have employed $8$ sub-domains
($N_x=4$, $N_t=2$) per time block.
%one hidden layer in each local neural
%network, and $R_m=1.0$ when generating the
%random weight/bias coefficients in the hidden layers of the local neural networks.
For the tests of Figure \ref{fg_wav1_3}(b), the number of training parameters/sub-domain
is fixed at $M=300$. For the tests of Figure \ref{fg_wav1_3}(c), the number of
collocation points/sub-domain is fixed at $Q=20\times 20$ ($Q_x=Q_t=20$).
With the increase of the collocation points in each direction,
or the increase of the training parameters per sub-domain, we can observe
an approximately exponential decrease in the maximum and rms
errors.
When the number of collocation points (or training parameters) increases above
a certain point, the errors start to stagnate, apparently because of
the fixed number of training parameters (or the fixed number of collocation points)
in these tests.
The convergence of the current locELM method with increasing
degrees of freedom is unmistakable.
%with increasing degrees of freedom in the domain, is quite evident.


\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Wave/wave_error_randmag_8elem_colloc20sq_trapar300_1hlayer.pdf}
  }
  \caption{Effect of the random coefficients
    (advection equation): the maximum and rms errors in the domain
as a function of $R_m$, the maximum magnitude of the random coefficients in
the hidden layers of the local neural networks.
    %computed using $10$ time blocks in domain
    %and $8$ sub-domains per time block.
    %8 sub-domains, 20x20 collocation points/sub-domain, 300 training
    %parameters/sub-domain, 1 hidden layer in locELM.
  }
  \label{fg_wav1_4}
\end{figure}

The effect of the random coefficients in the hidden layers of the local
neural networks on the simulation accuracy is illustrated in Figure \ref{fg_wav1_4}.
%We observe an effect of  the random coefficients in the hidden layers of the local
%neural networks on the simulation accuracy similar to that observed
%in the previous section.
%Figure \ref{fg_wav1_4} demonstrates this effect by 
This plot shows the maximum and rms errors in the domain
as a function of $R_m$, the maximum magnitude
of the random weight/bias coefficients.
%in the hidden layer of the local neural networks.
In this set of experiments, the temporal
domain size is $t_f=10$, and $N_b=10$ time blocks are used in the domain.
We have employed $8$ uniform sub-domains per time block ($N_x=4$, $N_t=2$),
$Q=20\times 20$ uniform collocation points per sub-domain ($Q_x=Q_t=20$),
and $M=300$ training parameters per sub-domain.
The weight/bias coefficients in the hidden layers of the local neural
networks are set to uniform random values generated on $[-R_m,R_m]$,
and $R_m$ is varied systematically in these tests.
Very large or very small values of $R_m$ have an adverse effect on
the simulation accuracy. Better accuracy generally
corresponds to a range of moderate $R_m$ values.



\begin{figure}
  \centerline{
    \includegraphics[width=6.2in]{Figures/Wave/wave_soln_dist_long_8elem_colloc20sq_randmag1.0_300trapar_A.pdf}(a)
  }
  \centerline{
    \includegraphics[width=6.2in]{Figures/Wave/wave_error_dist_long_8elem_colloc20sq_randmag1.0_300trapar_A.pdf}(b)
  }
\caption{Long-time simulation of the
    advection equation: distributions of (a) the locELM solution
    and (b) its absolute error in the spatial-temporal plane.
    In these tests $100$ time blocks in the domain and $8$ sub-domains
    per time block are used.
    %100 time blocks; time block size=1.0; 8 sub-domains per time block;
    %20x20 collocation points per sub-domain; 300 training parameters
    %per sub-domain; rand-mag=1.0; 1 hidden layer in each locELM.
  }
  \label{fg_wav1_5}
\end{figure}

\begin{figure}
  \centerline{
    \includegraphics[width=6in]{Figures/Wave/wave_soln_hist_long_x2.5_line_B.pdf}(a)
  }
  \centerline{
    \includegraphics[width=6in]{Figures/Wave/wave_error_hist_long_x2.5_line.pdf}(b)
  }
  \centerline{
    \includegraphics[width=2in]{Figures/Wave/wave_soln_prof_t100_line.pdf}(c)
    \includegraphics[width=2in]{Figures/Wave/wave_error_prof_t100_line.pdf}(d)
    \includegraphics[width=2in]{Figures/Wave/wave_error_tblock_hist_8elem_randmag_1.0_colloc20sq_trapar300perElem.pdf}(e)
  }
  \caption{Long-time simulation of the advection equation:
    Time histories of the locELM solution (a) and its absolute error
    against the exact solution (b) at the mid-point ($x=2.5$) of the spatial
    domain.
    Profiles of the locELM solution (c) and its absolute
    error against the exact solution (d)
    at the last time instant $t=100$.
    (e) Time histories of the maximum and rms errors
    in each time block.
    The problem settings correspond to those of Figure \ref{fg_wav1_5}.
  }
  \label{fg_wav1_6}
\end{figure}


Thanks to its accuracy and favorable computational cost,
it is feasible to perform
long-time simulations of time-dependent PDEs
using the current locELM method.
Figures \ref{fg_wav1_5} and \ref{fg_wav1_6} demonstrate
a long-time simulation
of the advection equation with the current method.
In this simulation, the temporal domain size is set to $t_f=100$,
which amounts to approximately $40$ periods of the wave propagation time.
In the simulation
we have employed $100$ uniform time blocks in the domain,
$8$ uniform sub-domains per time block (with $N_x=4$ and $N_t=2$),
$20\times 20$ uniform collocation points per sub-domain (i.e.~$Q_x=Q_t=20$),
$300$ training parameters per sub-domain ($M=300$),
a single hidden layer in each local neural network,
and $R_m=1.0$ when generating the random weight/bias coefficients
for the hidden layers of the local neural networks.
The total network training time for this locELM
computation is about $892$ seconds.
%
Figure \ref{fg_wav1_5} shows the distributions of the locELM solution
and its absolute error in the spatial-temporal plane. 
Figures \ref{fg_wav1_6}(a) and (b) show the time histories of
the locELM solution and its absolute error at the mid-point ($x=2.5$)
of the spatial domain. The time history of the exact solution
at this point is also shown in Figure \ref{fg_wav1_6}(a),
and can be observed to overlap with that of the locELM solution.
Figures \ref{fg_wav1_6}(c) and (d) show the locELM-computed wave profile and
its absolute-error profile at the last time instant $t=100$.
We have also computed and monitored the maximum and rms errors of the locELM solution
within each time block.
Figure \ref{fg_wav1_6}(e) shows these errors versus the time block index,
which represents essentially the time histories of these block-wise
maximum and rms errors.
%
All these results show that the current method has captured
the solution to the advection equation quite accurately
in the long-time simulation.
%and with a reasonable computational cost.
Accurate simulation of the advection equation over long-time
integration is challenging, even for classical
numerical methods.
The results presented here demonstrate the capability and the promise
of the current method in tackling long-time dynamical simulations
of these challenging problems.


%\begin{figure}
%  \centering
%  \includegraphics[width=6in]{Figures/Wave/wave_soln_hist_long_x0.0_line.eps}(a)
%  \includegraphics[width=6in]{Figures/Wave/wave_soln_hist_long_x1.25_line.eps}(b)
%  \includegraphics[width=6in]{Figures/Wave/wave_soln_hist_long_x2.5_line.eps}(c)
%  \includegraphics[width=6in]{Figures/Wave/wave_soln_hist_long_x3.75_line.eps}(d)
%\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%
\begin{comment}

\begin{figure}
  \centerline{
    %\includegraphics[width=1.5in]{Figures/Wave/wave_soln_dist_fixed_totalDOF_1elem_colloc50sq_trapar1600_10tblocks.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Wave/wave_error_dist_fixed_totalDOF_1elem_colloc50sq_trapar1600_10tblocks.pdf}(a)
  %}
  %\centerline{
    %\includegraphics[width=1.5in]{Figures/Wave/wave_soln_dist_fixed_totalDOF_2elem_colloc35sq_trapar800_randmag_2.0_10tblocks.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Wave/wave_error_dist_fixed_totalDOF_2elem_colloc35sq_trapar800_randmag_2.0_10tblocks.pdf}(b)
  }
  \caption{Comparison between locELM and global ELM solutions (advection equation):
    distributions of the absolute errors computed using the locELM method with
    1 sub-domain per time block (a),
    which is equivalent to a global ELM, and
    2 sub-domains per time block (b).
    %$10$ time blocks are used.
    Both cases have essentially the same total degrees of freedom in the domain.
    %the total number of training parameters in each time block is the same ($1600$), and
    %the total number of collocation points in each time block is also approximately
    %the same ($2500$ for 1 sub-domain/block, versus $2450$ for 2 sub-domains/block).
    %10 time blocks; total 1600 training parameters in domain,
    %total about 2500 collocation points in domain.
    %1 sub-domain/block: 50x50 collocation points, 1600 training parameters,
    %rand-mag=3.0.
    %2 sub-domains/block: 35x35 collocation points/sub-domain, 800 training
    %parameters/sub-domain, rand-mag=2.0.
  }
  \label{fg_wav1_8}
\end{figure}


% compare with global DNN

In Figure \ref{fg_wav1_8} we compare the the solutions and their errors
obtained using a single sub-domain per time block,
which is equivalent to that of a global extreme learning machine,
and using two sub-domains per time block in the locELM simulation.
Here the temporal domain size is $t_f=10$, and $10$ uniform time blocks
have been used in the overall domain.
%The case with a single sub-domain per time block is equivalent to
%the configuration of a global ELM in the computation.
This figure is basically a comparison between the global ELM and locELM results.
The total degrees of freedom in the overall domain are essentially the same
for these two cases.
In the case of 1 sub-domain/time-block, we have employed $1600$ training parameters
per sub-domain and $50\times 50$ uniform collocation points per sub-domain.
In the case of $2$ sub-domains/time-block we have employed $800$ training parameters
per sub-domain and $35\times 35$ uniform collocation points per sub-domain.
So both cases have the same
total number of training parameters per time block, and also comparable
total number of collocation points per time block
($2500$ for 1 sub-domain/block versus $2450$ for 2 sub-domains/block).
The local neural networks contain a single hidden layer.
For the case of 1 sub-domain/block we have employed $R_m=3.0$ when generating
the random coefficients for the hidden layer of the local neural network,
and for the case of $2$ sub-domains/block we have employed $R_m=2.0$ when
generating random coefficients for the hidden layers of the local neural networks.
These values are approximately in the optimal range of $R_m$ values
for these cases.
It is evident that both the locELM and the global ELM capture
the wave solution quite accurately, with the locELM solution
on two sub-domains/time-block  better.

\end{comment}
%%%%%%%%%%%%%%%%%%%%%%

\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Wave/wave_error_elem_fixed_totalDOF_colloc2500_trapar1600_10tblocks.pdf}(a)
    \includegraphics[width=2in]{Figures/Wave/wave_traintime_elem_fixed_totalDOF_colloc2500_trapar1600_10tblocks.pdf}(b)
  }
  \caption{Effect of the number of sub-domains,
    with fixed total degrees of freedom in the domain
    (advection equation): (a) the maximum and rms errors in the overall
     domain,
     and (b) the training time, as a function of the number of sub-domains per
     time-block.
    %with the total number of training parameters and collocation points
    %in the domain fixed.
    10 time blocks are used in the domain.
    For all cases,
    the total number of training parameters per time block is fixed at $1600$,
    and the total number of collocation points per time block is
    approximately $2500$.
    %1 sub-domain: 50x50 collocation points, 1600 training
    %parameters, rand-mag=3.0.
    %2 sub-domains: 2x1 configuration, 35x35 collocation points/sub-domain,
    %800 traing parameters/sub-domain, rand-mag=2.0.
    %4 sub-domains: 4x1 configuration, 25x25 collocation points/sub-domain,
    %400 training parameters/sub-domain, rang-mag=2.0.
  }
  \label{fg_wav1_9}
\end{figure}


Figure \ref{fg_wav1_9} provides a comparison between the
locELM and global ELM results.
%The problem settings here correspond to those of Figure \ref{fg_wav1_8}.
We fix the total degrees of freedom in each time block (temporal
domain size $t_f=10$, $10$ time blocks,
$1600$ training parameters/time-block, approximately $2500$ collocation
points/time-block), and vary the
number of sub-domains per time block.
Figure \ref{fg_wav1_9}(a) shows the maximum and rms errors  in the overall
spatial-temporal domain as a function of the number of sub-domains per
time block.
%The cases of one and $2$ sub-domains/time-block correspond
%to those of Figure \ref{fg_wav1_8}.
For the case with $4$ sub-domains
per time-block, we have employed the configuration of $N_x=4$ and $N_t=1$,
%$4$ sub-domains along the $x$ direction
%one sub-domain in time (i.e.~$N_x=4, N_t=1$),
$M=400$ training parameters
per sub-domain, $Q=25\times 25$ uniform collocation points per sub-domain,
and $R_m=2.0$ when generating the random coefficients in the hidden layers
of the local neural networks.
The error levels with one sub-domain and with multiple sub-domains per time block
are observed to be comparable, with the results for $2$ sub-domains
per time block being the most accurate.
%
Figure \ref{fg_wav1_9}(b) compares the network training times for
different numbers of sub-domains per time block.
The use of multiple sub-domains is observed to significantly
reduce the training time of the neural network,
from around $105$ seconds with a single sub-domain
per time block to around $40$ seconds with $4$ sub-domains per
time block.
%
The results here confirm what has been observed
in the previous section: with the same total degrees of freedom in the domain,
the use of multiple sub-domains and local neural networks in
the current locELM method can significantly reduce
the training/computation time, while producing results with accuracy
comparable to that of the global ELM method.


\begin{figure}
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Wave/wave_soln_dist_dnn_adam_4hlayer_40width.pdf}(a)
    \includegraphics[width=2.1in]{Figures/Wave/wave_error_dist_dnn_adam_4hlayer_40width_A.pdf}(a)
  %}
  %\centerline{
    %\includegraphics[width=2.5in]{Figures/Wave/wave_soln_dist_dnn_dgm_lbfgs_4hlayer_40width.pdf}(c)
    \includegraphics[width=2.1in]{Figures/Wave/wave_error_dist_dnn_dgm_lbfgs_4hlayer_40width_A.pdf}(b)
  %}
  %\centerline{
    %\includegraphics[width=2.5in]{Figures/Wave/wave_soln_dist_locelm_16elem_rangmag2.0_colloc20sq_trapar250perElem.pdf}(e)
    \includegraphics[width=2.1in]{Figures/Wave/wave_error_dist_locelm_16elem_rangmag2.0_colloc20sq_trapar250perElem_A.pdf}(c)
  }
  \caption{Comparison between locELM and DGM (advection equation):
    distributions of the  absolute errors, computed using the deep Galerkin
    method (DGM)~\cite{SirignanoS2018} with the Adam
    optimizer (a) and the L-BFGS optimizer (b), and computed using
    the current locELM method (c).
    %In the locELM simulation, a single time block in the domain and
    %$16$ sub-domains ($N_x=N_t=4$) per time block have been used.
    %DNN-DGM: 4 hidden layers, 40 neurons each, global DNN, 8 sub-domains (4x2) for
    %integration, 30x30 quadrature points in each sub-domain, ic-penalty=0.1,
    %equation-penalty=0.9, periodic BC implemented with $C^{\infty}$ periodic layer;
    %Adam optimizer: total 60,000 epochs, learning rates:
    %first 5000 epochs, 1.0*default-lr;
    %next 5000 epochs, 0.5*default-lr; next 10000 epochs, 0.25*default-lr;
    %next 10000 epochs, 0.125*default-lr; next 10000 epochs, 0.1*default-lr;
    %next 10000 epochs, 0.05*default-lr; next 10000 epochs, 0.025*default-lr.
    %L-BFGS optimizer: total 12,000 iterations.
    %locELM: 1 time block, block size=2.0, 16 sub-domains/block, 4x4 configuration
    %for sub-domains, rand-mag=2.0, 20x20 uniform collocation points/sub-domain,
    %250 training parameters/sub-domain.
  }
  \label{fg_wav1_10}
\end{figure}


\begin{table}
  \centering
  \begin{tabular}{lllll}
    \hline
    method & maximum error & rms error & epochs/iterations & training time (seconds) \\
    DGM (Adam) & $8.37e-3$ & $1.64e-3$ & $60,000$ & $2527.8$ \\
    DGM (L-BFGS) & $2.59e-3$ & $5.37e-4$ & $12,000$ & $1675.9$ \\
    locELM (no block time-marching) & $2.74e-4$ & $6.05e-5$ & $0$ & $43.4$ \\
    locELM (with block time-marching) & $1.83e-4$ & $4.34e-5$ & $0$ & $19.3$ \\
    \hline
  \end{tabular}
  \caption{Advection equation:
    comparison between locELM and DGM.
    %in terms of the maximum/rms errors in the domain and the training time.
    The two DGM cases and the locELM case without block time marching
    correspond to those of Figure \ref{fg_wav1_10}.
    In the locELM case with block time marching, two time blocks
    in the domain and $8$ sub-domains per time block are used.
    The total degrees of freedom for this case are identical to
    those of the locELM case without block time marching.
    %With the current locELM method, the problem is solved both with and
    %without block time marching. The row ``locELM'' corresponds to
    %the case without block time marching. In this case,
    %one time block in the entire spatial-temporal domain,
    %which is partitioned into 16 sub-domains (4x4), 20x20 uniform collocation
    %points/sub-domain, and 250 training parameters/sub-domain, and rand-mag=2.0.
    %In the case with block time-marching, two time blocks are used,
    %%with each time block partitioned into 8 sub-domains, with 20x20 uniform
    %collocation points/sub-domain, and 250 training parameters/sub-domain,
    %and rand-mag=2.0.
  }
  \label{tab_wav1_11}
\end{table}


Finally we compare the current locELM method with the deep Galerkin method
(DGM)~\cite{SirignanoS2018}, another often-used DNN-based PDE solver,
for solving the advection equation.
Figure \ref{fg_wav1_10} shows distributions of the solutions and their
absolute errors obtained using DGM with the Adam and the
L-BFGS optimizers and using the current locELM method.
The temporal domain size is $t_f=2.0$ in these tests.
With DGM, the neural network consists of four hidden layers, each with
a width of $40$ nodes and the $\tanh$ activation function.
When computing the residual norms in the DGM loss function, we have partitioned
the domain into $8$ sub-regions ($4$ in $x$ and $2$ in time) and
used $30\times 30$ Gaussian quadrature points in each sub-region
for calculating the integrals.
The periodic boundary condition is enforced exactly using the method
from~\cite{DongN2020} for DGM.
With the Adam optimizer, the neural network has been trained for $60,000$
epochs, with the learning rate decreasing gradually from $0.001$
at the beginning to $2.5\times 10^{-5}$ at the end of training.
With the L-BFGS optimizer, the neural network has been trained for
$12,000$ iterations.
In the locELM simulation, a single time block has been used in the spatial-temporal
domain, i.e.~without block time marching. We employ $16$ sub-domains
per time block (with $4$ sub-domains in $x$ and $4$ in time),
$20\times 20$ uniform collocation points in each sub-domain,
$250$ training parameters per sub-domain, a single hidden layer
in each local neural network, and $R_m=2.0$ for
generating the random weight/bias coefficients for the hidden layer
of the local neural networks.
One can observe that the current method produces considerably more
accurate results than DGM for the advection equation.


Table \ref{tab_wav1_11} provides further comparisons between
locELM and DGM. Here we list the maximum and rms
errors in the overall spatial-temporal domain, the number of epochs or iterations
in the network training, and the training time obtained using
DGM (Adam/L-BFGS optimizers) and using locELM, both without and with
block time marching.
The DGM cases and the locELM case without block time marching
correspond to those in Figure \ref{fg_wav1_10}.
In the locELM case with block time marching, we
have employed $2$ uniform time blocks in the spatial-temporal domain,
$8$ sub-domains ($N_x=4$, $N_t=2$)
per time block, $20\times 20$ uniform collocation points per sub-domain,
$250$ training parameters per sub-domain, a single hidden layer in
the local neural networks, and $R_m=2.0$
when generating the random weight/bias coefficients.
%for the hidden layer of the local neural networks.
So the total degrees of freedom
in this case are identical to
those of the locELM case without block time marching.
The data in the table shows that the current locELM method (with and without
block time marching) is much more accurate than DGM (by an order of magnitude),
and is dramatically faster to train than DGM (by nearly two
orders of magnitude).
%for the advection equation.
In addition, we observe that locELM
with block time marching and
a moderate time block size
can significantly reduce
the training time while achieving
the same or better accuracy,
compared with locELM without block time marching.

% what else to discuss here?

% second-order wave equation

%\input Wave2nd


\subsection{Diffusion Equation}

% will only do diffusion equations in 2D+t

% probably do both 1D+t and 2D+t, because can only compare with
% DNN for 1D+t. but do not separate these two into different sub-sections.
% focus on 1D+t, and then show some results for 2D+t toward the end.

In this subsection we test the locELM method using the diffusion
equation in one and two spatial dimensions (plus time).
Let us first study the 1D diffusion equation.
We consider the spatial-temporal domain,
$\Omega=\{ (x,t)\ |\ x\in[a_1,b_1], \ t\in[0,t_f] \}$,
and the following initial/boundary-value problem,
%with the 1D diffusion equation,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial u}{\partial t} - \nu\frac{\partial^2 u}{\partial x^2}
    = f(x,t), \label{eq_diffu_1} \\
    &
    u(a_1,t) = g_1(t), \\
    &
    u(b_1,t) = g_2(t), \\
    &
    u(x,0) = h(x), \label{eq_diffu_2}
  \end{align}
\end{subequations}
where $u(x,t)$ is the field function to be solved for, $f(x,t)$
is a prescribed source term,
the constant $\nu$ is the diffusion coefficient,
$g_1(t)$ and $g_2(t)$ are the boundary conditions, and
$h(x)$ is the initial field distribution.
The values for the constant parameters involved in the above equations
and in the domain specification are,
\begin{equation*}
  a_1=0,\quad
  b_1=5, \quad
  \nu = 0.01, \quad
  t_f = 10 \ \text{or}\ 1.
\end{equation*}
We choose the source term $f$ such that the following function satisfies
equation \eqref{eq_diffu_1},
\begin{equation}\label{eq_diffu_3}
  u(x,t) = \left[2\cos\left(\pi x+\frac{\pi}{5}\right)
    + \frac32\cos\left(2\pi x - \frac{3\pi}{5} \right) \right]
  \left[2\cos\left(\pi t+\frac{\pi}{5}\right)
    + \frac32\cos\left(2\pi t - \frac{3\pi}{5}\right) \right].
\end{equation}
We choose the boundary conditions $g_1(t)$ and $g_2(t)$ and the initial
condition $h(x)$ according to equation \eqref{eq_diffu_3},
by restricting this expression to the corresponding boundaries
of the spatial-temporal domain.
Therefore, the function given by \eqref{eq_diffu_3} solves
the initial/boundary value problem represented by
equations \eqref{eq_diffu_1}--\eqref{eq_diffu_2}.
%under these settings.
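As a concrete illustration of this manufactured-solution setup, the source term $f$ and the boundary/initial data can be derived symbolically from the exact solution \eqref{eq_diffu_3}. The following sketch, using sympy, is illustrative only and not part of the original implementation.

```python
# Sketch: derive f(x,t) by the method of manufactured solutions, so that the
# prescribed u satisfies the diffusion equation u_t - nu*u_xx = f exactly.
import sympy as sp

x, t = sp.symbols("x t")
nu = sp.Rational(1, 100)  # diffusion coefficient nu = 0.01

# Exact solution of Eq. (diffu_3)
u = (2*sp.cos(sp.pi*x + sp.pi/5) + sp.Rational(3, 2)*sp.cos(2*sp.pi*x - 3*sp.pi/5)) \
  * (2*sp.cos(sp.pi*t + sp.pi/5) + sp.Rational(3, 2)*sp.cos(2*sp.pi*t - 3*sp.pi/5))

# Source term chosen so that u solves the PDE
f = sp.simplify(sp.diff(u, t) - nu*sp.diff(u, x, 2))

# Boundary and initial data obtained by restricting u to the domain boundaries
g1 = u.subs(x, 0)   # u(a1, t) with a1 = 0
g2 = u.subs(x, 5)   # u(b1, t) with b1 = 5
h  = u.subs(t, 0)   # u(x, 0)
```

By construction, the residual $u_t - \nu u_{xx} - f$ vanishes identically, and $g_1$, $g_2$, $h$ are consistent with $u$ at the corners of the spatial-temporal domain.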


\begin{figure}
  \centerline{
    \includegraphics[height=2.3in]{Figures/Diffu1d/diffu1d_soln_dist_10tblock_5elem_colloc30sq_trapar300_randmag_1.0_B.pdf}(a)
    \includegraphics[height=2.3in]{Figures/Diffu1d/diffu1d_error_dist_10tblock_5elem_colloc30sq_trapar300_randmag_1.0_B.pdf}(b)
  }
  \caption{1D diffusion equation: distributions of the solution (a)
    and its absolute error (b) computed using the current locELM method.
    10 time blocks and 5 sub-domains per time block are employed.
    %30x30 collocation points/sub-domain,
    %and 300 training parameters/sub-domain, and rand-mag=1.0.
  }
  \label{fg_diffu_1}
\end{figure}


% how to simulate the problem?

We employ the locELM method together with block time marching from Section
\ref{sec:unsteady} to solve this initial/boundary value problem, by restricting
the method to one spatial dimension.
We partition the spatial-temporal domain $\Omega$ in time into $N_b$ uniform blocks,
and compute these time blocks individually and successively.
Within each time block, we further partition its spatial-temporal domain
into $N_x$ uniform sub-domains along the $x$ direction and $N_t$ uniform
sub-domains in time,
leading to $N_e=N_xN_t$ uniform sub-domains per time block.
We impose $C^1$ continuity conditions on the sub-domain boundaries in the $x$ direction
and $C^0$ continuity on the sub-domain boundaries in the temporal direction.
Within each sub-domain we use $Q_x$ uniform collocation points along the
$x$ direction and $Q_t$ uniform collocation points in time as the input training data,
leading to a total of $Q=Q_xQ_t$ uniform collocation points per sub-domain.
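The hierarchical partitioning just described can be sketched as follows; the function name and data layout are illustrative, not the authors' implementation.

```python
# Sketch: N_b uniform time blocks; within each block, N_x-by-N_t uniform
# sub-domains; within each sub-domain, a Q_x-by-Q_t grid of uniform
# collocation points used as the input training data.
import numpy as np

def collocation_points(a1, b1, tf, Nb, Nx, Nt, Qx, Qt):
    """Return points[block][sub-domain] -> (Qx*Qt, 2) array of (x, t) pairs."""
    t_blocks = np.linspace(0.0, tf, Nb + 1)   # uniform time-block boundaries
    x_edges = np.linspace(a1, b1, Nx + 1)     # sub-domain boundaries in x
    points = []
    for b in range(Nb):
        t_edges = np.linspace(t_blocks[b], t_blocks[b + 1], Nt + 1)
        block = []
        for i in range(Nx):
            for j in range(Nt):
                xs = np.linspace(x_edges[i], x_edges[i + 1], Qx)
                ts = np.linspace(t_edges[j], t_edges[j + 1], Qt)
                X, T = np.meshgrid(xs, ts, indexing="ij")
                block.append(np.stack([X.ravel(), T.ravel()], axis=1))
        points.append(block)
    return points

# e.g. the setup used later in Figure fg_diffu_1: 10 time blocks,
# 5x1 sub-domains per block, 30x30 collocation points per sub-domain
pts = collocation_points(0.0, 5.0, 10.0, Nb=10, Nx=5, Nt=1, Qx=30, Qt=30)
```

Note that the collocation grids of adjacent sub-domains share boundary points, which is where the $C^0$/$C^1$ continuity conditions are imposed.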

We use one local neural network to approximate the solution on each
sub-domain within the time block, thus leading to a total of $N_e$ local
neural networks in the locELM simulation. In the majority of
tests in this subsection, each local neural network
contains a single hidden layer with $M$ nodes and the
$\tanh$ activation function. We also report results
obtained with local neural networks containing more than one hidden layer.
The input layer of the local neural networks consists of two nodes,
representing $x$ and $t$. The output layer consists of a single node,
representing the solution $u$, and has no bias coefficient
and no activation function.
As in previous subsections, we incorporate an additional affine mapping operation
right behind the input layer of each local neural network
to normalize the input $x$ and $t$ data to the domain $[-1,1]\times[-1,1]$
in each sub-domain. The weight/bias coefficients in the hidden layer of
each of the local neural networks are set to uniform random values
generated on the interval $[-R_m,R_m]$.
We use a fixed seed value $22$ for the Tensorflow random number generator
for all the tests in this subsection.
%so that these tests become repeatable.
%Note that the number of training parameters in the local neural network
%for each sub-domain equals the width of its hidden layer ($M$).
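A minimal sketch of one such local network is given below, using plain NumPy in place of Tensorflow for brevity; the class and attribute names are hypothetical. The key point is that the hidden-layer coefficients are fixed random values on $[-R_m,R_m]$ and are never trained, while the $M$ output-layer coefficients are the training parameters.

```python
# Sketch of a single local network: affine map of (x, t) onto [-1,1]^2,
# one tanh hidden layer with M nodes and frozen random coefficients,
# and a linear output layer (no bias, no activation).
import numpy as np

class LocalNetwork:
    def __init__(self, bounds, M=300, Rm=1.0, seed=22):
        (self.xa, self.xb), (self.ta, self.tb) = bounds   # sub-domain extent
        rng = np.random.default_rng(seed)
        # hidden-layer weight/bias: fixed uniform random values on [-Rm, Rm]
        self.W = rng.uniform(-Rm, Rm, size=(2, M))
        self.b = rng.uniform(-Rm, Rm, size=M)
        # output coefficients: the only training parameters of this network
        self.beta = np.zeros(M)

    def features(self, x, t):
        """Outputs of the last hidden layer at the points (x, t)."""
        # affine normalization of the inputs onto [-1, 1] x [-1, 1]
        xn = 2.0*(x - self.xa)/(self.xb - self.xa) - 1.0
        tn = 2.0*(t - self.ta)/(self.tb - self.ta) - 1.0
        X = np.stack([xn, tn], axis=1)
        return np.tanh(X @ self.W + self.b)

    def u(self, x, t):
        """Network output: linear combination of the hidden features."""
        return self.features(x, t) @ self.beta
```

Because the output depends linearly on `beta`, enforcing the equation, boundary/initial conditions, and continuity conditions at the collocation points reduces to a linear least squares problem for the `beta` coefficients of all sub-domains.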

The locELM simulation parameters  include
the number of time blocks ($N_b$), the number of sub-domains per
time block ($N_e, N_x, N_t$), the number of training parameters per
sub-domain ($M$), the number of collocation points per sub-domain ($Q_x$, $Q_t$, $Q$),
and the maximum magnitude of the random coefficients ($R_m$).
In accordance with previous subsections, we use $(Q,M)$ to characterize
the degrees of freedom within a sub-domain, and $(N_eQ,N_eM)$ to
characterize the degrees of freedom within a time block.

Figure \ref{fg_diffu_1} shows distributions of the locELM solution
and its absolute error in the spatial-temporal plane.
%against the exact solution given by \eqref{eq_diffu_3}.
In this test the temporal domain size is set to $t_f=10$.
We have employed $N_b=10$ uniform time blocks in the simulation,
$N_e=5$ uniform sub-domains per time block (with $N_x=5$ and $N_t=1$),
$Q=30\times 30$ uniform collocation points in each sub-domain
(with $Q_x=Q_t=30$), $M=300$ training parameters per sub-domain,
a single hidden layer in the local neural networks,
and $R_m=1.0$ when generating the random weight/bias coefficients for the
hidden layers of the local neural networks.
It is evident that the locELM method has captured the solution
very accurately, with the absolute error on the order of $10^{-9}\sim 10^{-8}$.


\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Diffu1d/diffu1d_error_elem_10tblocks_colloc30sq_trapar300_randmag_1.0_A.pdf}(a)
    \includegraphics[width=2in]{Figures/Diffu1d/diffu1d_error_colloc_10tblocks_trapar300_randmag_1.0_A.pdf}(b)
    \includegraphics[width=2in]{Figures/Diffu1d/diffu1d_error_trapar_10tblocks_5elem_colloc30sq_randmag_1.0_A.pdf}(c)
  }
  \caption{Effect of the degrees of freedom on simulation accuracy
    (1D diffusion equation): the maximum and rms errors in the domain as
    a function of (a) the number of sub-domains in each time block,
    (b) the number of collocation points in each direction
    in each sub-domain, and (c) the number of training parameters in each
    sub-domain.
    Temporal domain size is $t_f=10$ and $10$ uniform time blocks are used.
    %rand-mag=1.0.
    %In (a), 30x30 collocation points/sub-domain, 300 training parameters/sub-domain.
    %In (b) and (c), 5 sub-domains/block.
    %In (b), the number of training parameters/sub-domain is fixed at 300.
    %In (c), the number of collocation points/sub-domain is fixed at 30x30.
  }
  \label{fg_diffu_3}
\end{figure}


The effect of the degrees of freedom on the simulation accuracy
is illustrated by Figure \ref{fg_diffu_3}.
In this group of tests, the temporal domain size is set to $t_f=10$,
and we have employed $N_b=10$ uniform time blocks in the spatial-temporal domain,
a single hidden layer in each local neural network, 
%$N_e=5$ sub-domains (with $N_x=5$ and $N_t=1$) in each time block,
and $R_m=1.0$ when generating the random coefficients for the hidden layers
of the local neural networks.
The number of sub-domains in each time block, or the number of collocation
points per sub-domain, or the number of training parameters
per sub-domain has been varied in the tests.

Figure \ref{fg_diffu_3}(a) illustrates the effect of the number of sub-domains
within each time block, while the degrees of freedom per sub-domain are fixed.
%In this group of tests, the domain size in time is $t_f=10$,
%and we employ $N_b=10$ time blocks in the spatial-temporal domain
Here we fix the number of uniform collocation points per sub-domain at $Q=30\times 30$
($Q_x=Q_t=30$)
and the number of training parameters per sub-domain at $M=300$, and then
vary the number of uniform sub-domains per time block systematically.
%For a given number of sub-domains per time-block,
%we employ $Q=30\times 30$ uniform collocation points in each sub-domain
%($Q_x=Q_t=30$), $M=300$ training parameters per sub-domain,
%and $R_m=1.0$ when generating the random coefficients in the hidden layers of
%the local neural networks.
%Figure \ref{fg_diffu_2}(a)
This plot shows the maximum and rms errors of the locELM
solution in the overall spatial-temporal domain as a function of the
number of sub-domains per time block in the simulations.
%Figure \ref{fg_diffu_2}(b) shows the training time of the neural network
%versus the number of sub-domains per time block.
With increasing number of sub-domains, the numerical errors are observed to decrease
dramatically, from around $10^{-1}$ with one sub-domain/time-block
to around $10^{-8}$ with $5$ sub-domains/time-block.
%The training time also increases with the increase of the sub-domains,
%at a rate that is super-linear.



%The effect of the degrees of freedom in each sub-domain on the accuracy
%of simulation results is illustrated by Figure \ref{fg_diffu_3}.
%In this group of tests, the temporal domain size is $t_f=10$,
%and we have employed $N_b=10$ time blocks in the spatial-temporal domain,
Figure \ref{fg_diffu_3}(b) illustrates the effect of the number of
collocation points per sub-domain on the simulation accuracy.
Here we use $N_e=5$ uniform sub-domains (with $N_x=5$ and $N_t=1$) in each time block,
fix the number of training parameters per sub-domain
at $M=300$,
and vary the number of collocation points per sub-domain while maintaining
$Q_x=Q_t$.
%and $R_m=1.0$ when generating the random coefficients for the hidden layers
%of the local neural networks.
%The number of collocation points or the number of training parameters
%per sub-domain is varied in these simulations.
%Figure \ref{fg_diffu_3}(a)
This plot shows the maximum and rms errors
in the overall spatial-temporal domain as a function of the number of
collocation points in each direction in each sub-domain.
%where
%$Q_x=Q_t$ is maintained and the number of training parameters/sub-domain
%is fixed at $M=300$.
The numerical errors initially
decrease exponentially with increasing
number of collocation points per direction when this number is below about $20$,
and then stagnate at a level around $10^{-8}\sim 10^{-7}$ as the number
of collocation points per direction further increases.


Figure \ref{fg_diffu_3}(c) illustrates the effect of the number of training
parameters on the simulation accuracy.
Here we use $N_e=5$ sub-domains (with $N_x=5$ and $N_t=1$) in each time block,
fix the number of collocation points per sub-domain at
$Q=30\times 30$ ($Q_x=Q_t=30$), and vary the number of training parameters per
sub-domain.
The plot shows the maximum/rms errors in the overall domain
as a function of the number of training parameters per sub-domain.
%where the number of collocation points per sub-domain is fixed at
%$Q=30\times 30$ ($Q_x=Q_t=30$).
One can observe that the errors initially decrease exponentially
with increasing number of training parameters per sub-domain when
this number is below about $250$, and then the error reduction slows down
as the number of training parameters per sub-domain further increases.
These behaviors are consistent with those observed for
other problems in previous subsections.

%% results with 2 hidden layers

\begin{figure}
  \centerline{
    %\includegraphics[height=2.3in]{Figures/Diffu1d/diffu1d_error_dist_10tblock_5elem_colloc30sq_trapar300_randmag_0.5_elm_2hlayers_A.pdf}(a)
    \includegraphics[height=2.4in]{Figures/Diffu1d/diffu1d_error_colloc_10tblocks_trapar300_randmag_0.5_2hlayers_A.pdf}
  }
  \caption{Results obtained with two hidden layers in
    local neural networks (1D diffusion
    equation): %(a) error distribution in the spatial-temporal plane.
    The maximum/rms errors in the domain versus
    the number of collocation points in each direction per sub-domain.
    %$t_f=10$, $10$ time blocks in domain, 5 sub-domains per time block,
    %$300$ training parameters per sub-domain, $R_m=0.5$.
    %Two hidden layers in each local neural network, structure [2, 30, 300, 1].
    %In (a) $Q=30\times 30$ uniform collocation points per sub-domain.
    %In (b) the number of collocation points is varied (maintaining $Q_x=Q_t$).
  }
  \label{fg_diffu_a}
\end{figure}

In Figure \ref{fg_diffu_a} we show results obtained with local neural networks
containing more than one hidden layer.
In this group of simulations, each local neural network contains two
hidden layers, with $30$ and $300$ nodes in these two layers, respectively.
The activation function is $\tanh$ in both hidden layers.
%So the number of training parameters per sub-domain is $M=300$
%(width of the last hidden layer) in these simulations.
The temporal domain size is $t_f=10$, and $10$ uniform time blocks have been
used. We employ $N_e=5$ sub-domains per time block ($N_x=5$ and $N_t=1$),
$M=300$ training parameters per sub-domain (width of the last hidden layer),
and $R_m=0.5$ when generating the random weight/bias coefficients for
the hidden layers of the local neural networks.
The number of collocation points per sub-domain ($Q$) is varied systematically
while $Q_x=Q_t$ is maintained.
Figure \ref{fg_diffu_a} shows the maximum and rms errors in the overall domain
as a function of the number of collocation points in each direction
per sub-domain. This figure can be compared with
Figure \ref{fg_diffu_3}(b), which corresponds to a single hidden layer
in the local neural networks.
It is evident that the solution has been captured accurately by the current locELM
method using two hidden layers in the local neural networks.
The results shown here and those in Section \ref{sec:helm1d}
(see Figures \ref{fig:helm1d_4} and \ref{fig:helm1d_5})
demonstrate that the current locELM method,
using local neural networks with a small number of (more than one) hidden layers,
is able to produce accurate simulation results.



%% effect of rand-mag

\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Diffu1d/diffu1d_error_randmag_10tblocks_5elem_colloc30sq_trapar300_A.pdf}
  }
  \caption{Effect of the random coefficients in local
    neural networks
    (1D diffusion equation): the maximum and rms errors in the domain as a
    function of $R_m$, the maximum magnitude of the random coefficients.
    %10 time blocks, 5 sub-domains/block, 30x30 collocation points/sub-domain,
    %300 training parameters/sub-domain, 1 hidden layer in locELM.
  }
  \label{fg_diffu_4}
\end{figure}

Figure \ref{fg_diffu_4} illustrates  the effect of the random weight/bias
coefficients in the local neural networks on
the simulation accuracy.
It shows the maximum and rms errors of the locELM solution in
the overall domain as a function of $R_m$, the maximum magnitude of the
random coefficients.
In this group of tests, the temporal domain size is $t_f=10$,
and we have employed $N_b=10$ uniform time blocks in the domain,
$N_e=5$ uniform sub-domains per time block ($N_x=5$, $N_t=1$),
$Q=30\times 30$ uniform collocation points per sub-domain,
$M=300$ training parameters per sub-domain, and a single hidden layer
in the local neural networks.
The random coefficients in the hidden layers are generated on $[-R_m,R_m]$,
and $R_m$ is varied systematically in these tests.
We observe behavior similar to that in previous subsections.
Better accuracy is attained with a range of moderate $R_m$ values,
while very large or very small $R_m$ values tend to produce less accurate
results.



\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Diffu1d/diffu1d_error_elem_fixedTotDOF.pdf}(a)
    \includegraphics[width=2in]{Figures/Diffu1d/diffu1d_traintime_elem_fixedTotDOF.pdf}(b)
  }
  \caption{Effect of the number of sub-domains, with fixed total degrees of freedom in
    the domain (1D diffusion equation): (a) the maximum and rms errors in the domain, and
    (b) the training time, as a function of the number of sub-domains per time
    block.
    %The case with one sub-domain per time block is equivalent to
    %that of a global extreme learning machine.
    %In the tests 10 time blocks are used in the domain,
    %the total number of
    %training parameters per time block is fixed at 1500, and
    %the total number of collocation points per time block
    %is approximately fixed at 2500.
    %1 sub-domain/block: rand-mag=3.0;
    %2 to 5 sub-domains/block: rand-mag = 1.0.
  }
  \label{fg_diffu_5}
\end{figure}

Figure \ref{fg_diffu_5} depicts a study of the effect of the number of sub-domains
on the simulation accuracy and the network training time,
while the total number of degrees of freedom
in the domain is fixed.
In this group of tests, the temporal domain size is $t_f=10$,
and we have used $N_b=10$ time blocks in the overall spatial-temporal domain.
The number of uniform sub-domains per time block is varied systematically
between $N_e=1$ and $N_e=5$ in the simulations, implemented by fixing
$N_t=1$ and varying $N_x$ between $1$ and $5$.
We set the number of training parameters per sub-domain ($M$), and the number of
uniform collocation points per sub-domain ($Q$, with $Q_x=Q_t$),
in a way such that the total number of training parameters per time block is
fixed at $N_eM = 1500$ and the total number of collocation points per
time block is approximately fixed at $N_eQ\approx 2500$.
Specifically, $M$ and $Q$ in different cases  are:
$M=1500$ and $Q=50\times 50$ for $1$ sub-domain per time block,
$M=750$ and $Q=35\times 35$ for $2$ uniform sub-domains per time block,
$M=500$ and $Q=29\times 29$ for $3$ uniform sub-domains per time block,
$M=375$ and $Q=25\times 25$ for $4$ uniform sub-domains per time block,
and $M=300$ and $Q=22\times 22$ for $5$ uniform sub-domains per time block.
We employ $R_m=3.0$ when generating the random coefficients
for the case with one sub-domain per time block,
which lies approximately in the optimal range of $R_m$ values for this case,
and $R_m=1.0$ for the rest of the cases with $N_e=2\sim 5$ sub-domains
per time block.
Note that the case
with one sub-domain per time block is equivalent to the configuration of
a global ELM in the simulation.
Figure \ref{fg_diffu_5}(a) shows a comparison of the maximum and rms errors
in the overall spatial-temporal domain as a function of the number of sub-domains
per time block in the simulations.
It can be observed that the numerical errors with $2$ or more sub-domains
are comparable to or smaller than the errors corresponding to one sub-domain
in the simulations.
Figure \ref{fg_diffu_5}(b) shows the neural-network training time
as a function of the number of sub-domains per time block.
One can observe that the training time decreases significantly
with increasing number of
sub-domains. Compared with the case of one sub-domain per time block,
the training time with $2$ or more sub-domains
has been considerably reduced, e.g.~$277$ seconds with one sub-domain versus $79$
seconds with $2$ sub-domains.
These results confirm and reinforce our observations with the other problems
that, compared with global ELM, the use of domain decomposition and locELM
with multiple sub-domains can significantly reduce the network training time, and
hence the computational cost,
while attaining the same or sometimes
even better accuracy in the simulation results.
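As a quick arithmetic check, the per-sub-domain settings listed above indeed keep the per-block totals fixed (exactly for the training parameters, approximately for the collocation points); the values below are taken directly from the text.

```python
# Per-sub-domain settings (M, Q_x with Q = Q_x*Q_x) for N_e = 1..5 sub-domains
# per time block, from the fixed-total-DOF study of Figure fg_diffu_5.
cases = {1: (1500, 50), 2: (750, 35), 3: (500, 29), 4: (375, 25), 5: (300, 22)}

for Ne, (M, q) in cases.items():
    assert Ne * M == 1500                    # training parameters per block: exact
    total_Q = Ne * q * q                     # collocation points per block
    assert abs(total_Q - 2500) / 2500 < 0.1  # approximately 2500 (within 10%)
```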


\begin{figure}
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Diffu1d/diffu1d_soln_dist_dnn_adam_4hlayers_40width.pdf}(a)
    \includegraphics[width=2.5in]{Figures/Diffu1d/diffu1d_error_dist_dnn_adam_4hlayers_40width_A.pdf}(a)
  }
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Diffu1d/diffu1d_soln_dist_dnn_lbfgs_4hlayers_40with.pdf}(c)
    \includegraphics[width=2.5in]{Figures/Diffu1d/diffu1d_error_dist_dnn_lbfgs_4hlayers_40with_A.pdf}(b)
  }
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Diffu1d/diffu1d_soln_dist_locelm_compare_dnn_1tblock_5elem_colloc30sq_trapar300_randmag_1.0_A.pdf}(e)
    \includegraphics[width=2.5in]{Figures/Diffu1d/diffu1d_error_dist_locelm_compare_dnn_1tblock_5elem_colloc30sq_trapar300_randmag_1.0_B.pdf}(c)
  }
  \caption{Comparison between locELM and DGM
    (1D diffusion equation): Distributions of the
     absolute errors  computed
    using DGM with the Adam optimizer (a) and the L-BFGS
    optimizer (b) and using the current locELM method (c).
    %With DGM, the DNN structure is [2, 40, 40, 40, 40, 1], with
    %tanh for all hidden layers;
    %Adam: total 135,000 epochs; learning rates: 1.0*default-lr for
    %the first 5000 epochs, 0.5*default-lr for next 5000 epochs,
    %0.25*default-lr for next 5000 epochs, 0.15*default-lr for next 10000
    %epochs, 0.1*default-lr for next 10000 epochs, 0.08*default-lr for
    %next 10000 epochs, 0.05*default-lr for next 10000 epochs, 0.025*default-lr
    %for next 10000 epochs, 0.01*default-lr for next 10000 epochs,
    %0.0075*default-lr for next 20000 epochs, 0.005*default-lr for next
    %20000 epochs, 0.0025*default-lr for next 20000 epochs.
    %L-BFGS: total 36000 iterations.
    %locELM: 1 time block, 5 sub-domains/block, 30x30 collocation
    %points/sub-domain, 300 training parameters/sub-domain, rand-mag=1.0,
    %1 hidden layer in locELM.
  }
  \label{fg_diffu_6}
\end{figure}


\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Diffu1d/diffu1d_soln_prof_comp_dnn_t1.0_line.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Diffu1d/diffu1d_error_prof_comp_dnn_t1.0_line.pdf}(b)
  }
  \caption{Comparison between locELM and DGM
    (1D diffusion equation): Profiles of the solutions (a)
    and their absolute errors (b) at $t=1.0$
    obtained using DGM (Adam/L-BFGS optimizers)
    and using the current locELM method.
    The problem settings and the simulation parameters
    correspond to those of Figure \ref{fg_diffu_6}.
  }
  \label{fg_diffu_7}
\end{figure}

Let us now compare the current locELM method and the deep Galerkin method (DGM)
for solving the 1D diffusion equation. 
Figure \ref{fg_diffu_6} compares distributions of the solutions and their
absolute errors obtained using DGM with the Adam optimizer and the L-BFGS
optimizer and using the current locELM method.
The temporal domain size is set to $t_f=1$ in these tests.
With DGM, the neural network consists of 4 hidden layers, with a width of
$40$ nodes and the $\tanh$ activation function in each layer.
When computing the loss function of the network,
we have divided the domain  into
$5$ uniform sub-regions along the $x$ direction, and computed the
residual norm integral by the Gaussian quadrature rule
on $20\times 20$ Gauss-Lobatto-Legendre quadrature points in each sub-region.
With the Adam optimizer, the neural network has been trained for
$135,000$ epochs, with the learning rate decreasing gradually from $0.001$
at the beginning to $2.5\times 10^{-6}$ at the end of the training.
With the L-BFGS optimizer, the neural network has been trained for
$36,000$ L-BFGS iterations.
In the simulation with the current locELM method, we
employ $N_b=1$ time block in the spatial-temporal domain,
$N_e=5$ sub-domains (with $N_x=5$, $N_t=1$) per time block,
$Q=30\times 30$ uniform collocation points per sub-domain,
$M=300$ training parameters per sub-domain, $1$ hidden layer in
each of the local neural networks, and
$R_m=1.0$ when generating the random weight/bias coefficients.
%for the hidden layer of the local neural networks.
The DGM has captured the solution reasonably well, but its error levels
are considerably higher, by about five orders of magnitude,
than those of the current locELM method ($10^{-3}$ versus $10^{-8}$).

A comparison of the solution and the error profiles between locELM and DGM
is provided in Figure \ref{fg_diffu_7}.
Figure \ref{fg_diffu_7}(a) compares the solution profiles at $t=1.0$
obtained using DGM (Adam/L-BFGS optimizers) and using the
current locELM method, together with that of the exact solution.
The settings and the parameters correspond to those of Figure \ref{fg_diffu_6}.
The computed profiles all agree with the exact solution quite well.
Figure \ref{fg_diffu_7}(b) compares profiles of the absolute errors at $t=1.0$
obtained with DGM and the current method.
The numerical error of the current method, which is at a level around $10^{-9}$,
is significantly smaller than those from DGM, which are at
a level around $10^{-4}$.

\begin{table}
  \centering
  \begin{tabular}{lllll}
    \hline
    method & maximum error & rms error & epochs/iterations & training time (seconds)\\
    DGM (Adam) & $2.59e-2$ & $3.84e-3$ & $135,000$ & $4194.5$ \\
    DGM (L-BFGS) & $5.82e-3$ & $8.21e-4$ & $36,000$ & $3201.4$ \\
    locELM & $5.82e-8$ & $6.25e-9$ & $0$ & $28.4$ \\
    \hline
  \end{tabular}
  \caption{1D diffusion equation: comparison between DGM (Adam/L-BFGS
    optimizers) and locELM.
    %in terms of the maximum/rms errors
    %in the domain and the training time.
    The settings and parameters correspond to those of Figure \ref{fg_diffu_6}.
  }
  \label{tab_diffu_8}
\end{table}

In Table \ref{tab_diffu_8} we provide some further comparisons between locELM and
DGM  in terms of the accuracy and the computational cost.
Here we list the maximum and rms errors in the overall spatial-temporal domain,
the number of epochs or iterations in network training, and the training time
of DGM with the Adam and L-BFGS optimizers and of the current locELM
method. The problem settings and the simulation parameters correspond to
those of Figure \ref{fg_diffu_6}.
The data demonstrate the clear superiority of locELM to DGM,
with the locELM errors five orders of magnitude smaller and
the training time over two orders of magnitude less.

% compare with FEM

\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Diffu1d/FEM/diffu1d_compare_FEM_soln_prof.pdf}(a)
    \includegraphics[width=2in]{Figures/Diffu1d/FEM/diffu1d_compare_FEM_error_prof.pdf}(b)
    \includegraphics[width=2in]{Figures/Diffu1d/FEM/diffu1d_fem_conv_temporal.pdf}(c)
  }
  \caption{Comparison between locELM and FEM (1D diffusion equation):
    Profiles of (a) the solutions and (b) their absolute errors at $t=1.0$, computed
    using the current locELM method and using the finite element
    method (FEM).
    (c) The FEM maximum and rms
    errors at $t=0.5$ versus $\Delta t$, showing the temporal second-order
    convergence rate of FEM.
    %In (a) and (b), $10,000$ uniform elements and $\Delta t=0.00025$
    %are used in the FEM simulation; In the locELM simulation we have
    %used one time block, $5$ sub-domains/time-block, $30\times 30$
    %collocation points/sub-domain, $300$ training parameters/sub-domain,
    %and $R_m=1.0$.
    %In (c), $10,000$ uniform elements are used in the FEM tests.
  }
  \label{fg_diffu_8}
\end{figure}

\begin{table}
  \centering
  \begin{tabular}{lllllllll}
    \hline
    method & $\Delta t$ & elements & sub-domains & $Q$ & $M$ & maximum & rms  & wall time\\
     & & & & & & error & error & (seconds) \\
    \hline
    locELM & -- & -- & $5$ & $20\times 20$ & $200$ & $2.48e-6$ & $2.23e-7$ & $7.9$ \\
     & -- & -- & $5$ & $20\times 20$ & $250$ & $8.97e-8$ & $2.25e-8$ & $11.3$ \\
     & -- & -- & $5$ & $30\times 30$ & $300$ & $5.82e-8$ & $6.25e-9$ & $28.4$ \\
    \hline
    FEM
    & $0.002$ & $2000$ & -- & -- & -- & $2.42e-4$ & $4.40e-5$ & $5.9$ \\
    & $0.001$ & $2000$ & -- & -- & -- & $9.82e-5$ & $2.01e-5$ & $12.0$ \\
    & $0.0005$ & $2000$ & -- & -- & -- & $1.54e-4$ & $2.61e-5$ & $24.0$ \\
    & $0.00025$ & $2000$ & -- & -- & -- & $1.72e-4$ & $2.85e-5$ & $48.3$ \\
    \cline{2-9}
    & $0.002$ & $5000$ & -- & -- & -- & $3.63e-4$ & $5.98e-5$ & $12.3$ \\
    & $0.001$ & $5000$ & -- & -- & -- & $6.99e-5$ & $1.22e-5$ & $24.6$ \\
    & $0.0005$ & $5000$ & -- & -- & -- & $1.69e-5$ & $3.43e-6$ & $48.8$ \\
    & $0.00025$ & $5000$ & -- & -- & -- & $2.26e-5$ & $3.91e-6$ & $97.9$ \\
    \cline{2-9}
     & $0.002$ & $10000$ & -- & -- & -- & $3.85e-4$ & $6.32e-5$ & $22.2$ \\
     & $0.001$ & $10000$ & -- & -- & -- & $9.11e-5$ & $1.49e-5$ &  $43.9$ \\
     & $0.0005$ & $10000$ & -- & -- & -- & $1.75e-5$ & $3.05e-6$ & $86.9$ \\
     & $0.00025$ & $10000$ & -- & -- & -- & $4.24e-6$ & $8.58e-7$ & $179.0$ \\
    \hline
  \end{tabular}
  \caption{1D diffusion equation: comparison between FEM
    and the current locELM method, in terms of the maximum/rms errors
    in the overall domain and the training/computation time.
    The temporal domain size is $t_f=1$. $R_m=1.0$ in locELM simulations. 
    %The domain setting corresponds to those of Figure \ref{fg_diffu_6}.
  }
  \label{tab_diffu_9}
\end{table}

Let us next compare the current
locELM method with the classical finite element method
for solving the 1D diffusion equation.
In Figures \ref{fg_diffu_8}(a) and (b) we compare profiles of
the solutions and their absolute errors at $t=1.0$,
obtained using the current locELM method and the finite element
method. The domain and problem settings in these tests correspond to those of
Figures \ref{fg_diffu_6}(e,f), with a temporal domain size $t_f=1$.
The simulation parameters for the locELM computation
also correspond to those of Figures \ref{fg_diffu_6}(e,f).
For the FEM simulation, the diffusion equation~\eqref{eq_diffu_1} is
discretized in time by the second-order backward differentiation formula (BDF2), and
the diffusion term is treated implicitly. We have employed a
time step size $\Delta t=0.00025$ and
$10,000$ uniform linear elements to discretize the spatial domain.  
It is evident from these data that both the FEM and the current method
have produced accurate results.
Figure \ref{fg_diffu_8}(c) shows the maximum and rms errors at $t=0.5$
versus the time step size $\Delta t$ with FEM, confirming its
second-order temporal convergence rate. In these tests a fixed mesh
of $10,000$ uniform linear elements has been used, which accounts for
the observed error saturation in Figure \ref{fg_diffu_8}(c) when $\Delta t$
becomes sufficiently small.

Table \ref{tab_diffu_9} provides a comparison of the accuracy and
the computational cost of the locELM method and the finite element method.
In these tests the temporal domain size is set to $t_f=1$.
In the locELM simulations we employ a single time block in the spatial-temporal
domain, $5$ uniform sub-domains in the time block, and several sets of collocation
points/sub-domain and training parameters/sub-domain, a single hidden layer
in the local neural networks, and
$R_m=1.0$ when generating the random coefficients.
%for local neural networks.
In the FEM simulations, we employ several sets of elements and $\Delta t$
values. The maximum error and the rms error in the overall spatial-temporal domain
have been computed, and the wall time for the computation or network training
has been recorded.
In Table \ref{tab_diffu_9} we list these errors and the wall time numbers
corresponding to the different simulation cases with locELM and FEM.
We observe that the current method performs markedly better than FEM:
at a comparable computational cost it achieves considerably better accuracy,
and at a comparable accuracy it incurs a lower computational cost.
For example, the locELM case with $(Q,M)=(20\times 20,250)$ has a computational
cost comparable to the FEM cases with $2000$ elements and $\Delta t=0.001$
and with $5000$ elements and $\Delta t=0.002$. But the numerical errors of
locELM are considerably smaller, by around three orders of magnitude,
than those of the FEM cases.
The locELM case with $(Q,M)=(30\times 30,300)$ has a lower computational cost,
by a factor of about three,
and a considerably better accuracy, by a factor of nearly three orders of magnitude,
than the FEM case with $10,000$ elements and $\Delta t=0.0005$.

% what else to discuss here?


\subsection{Nonlinear Examples}

\subsubsection{Nonlinear Helmholtz Equation}

As the first nonlinear example,
we test the locELM method using the boundary value problem
with the nonlinear
Helmholtz equation in one dimension.
Consider the domain $[a,b]$ and the following boundary value problem
on this domain,
\begin{subequations}
  \begin{align}
    &
    \frac{d^2u}{d x^2} - \lambda u + \beta\sin(u) = f(x),
    \label{eq_nl_hm_1} \\
    &
    u(a) = h_1, \\
    &
    u(b) = h_2, \label{eq_nl_hm_2}
  \end{align}
\end{subequations}
where $u(x)$ is the function to be solved for, $f(x)$ is a prescribed
source term, $\lambda$ and $\beta$ are constant parameters, 
and $h_1$ and $h_2$ are the boundary values.
We assume the following values for the constant parameters
involved in these equations and domain specification,
\begin{equation*}
  a = 0, \quad
  b = 8, \quad
  \lambda = 50, \quad
  \beta = 10.
\end{equation*}
%
We choose the source term $f(x)$ and the boundary values $h_1$ and $h_2$
such that the following function satisfies the
equations \eqref{eq_nl_hm_1}--\eqref{eq_nl_hm_2},
\begin{equation}\label{eq_nhm_3}
  u(x) = \sin\left(3\pi x + \frac{3\pi}{20} \right)
  \cos\left(4\pi x - \frac{2\pi}{5} \right) + \frac32 + \frac{x}{10}.
\end{equation}
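Since the exact solution is prescribed, the source term and boundary values can be generated symbolically rather than derived by hand; a brief sympy sketch (the helper below is our own, not part of the locELM code):

```python
import sympy as sp

x = sp.symbols('x')
lam, beta = 50, 10   # the parameters lambda and beta above
# the prescribed exact solution
u = (sp.sin(3*sp.pi*x + 3*sp.pi/20) * sp.cos(4*sp.pi*x - 2*sp.pi/5)
     + sp.Rational(3, 2) + x/10)
# source term and boundary values that make u satisfy the BVP
f = sp.diff(u, x, 2) - lam*u + beta*sp.sin(u)
h1 = u.subs(x, 0)    # boundary value at a = 0
h2 = u.subs(x, 8)    # boundary value at b = 8
```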


\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_soln_dist_locelm_nonlstsq_4elem_colloc100_trapar200_randmag_5.0_A.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_error_dist_locelm_nonlstsq_4elem_colloc100_trapar200_randmag_5.0_A.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_soln_dist_locelm_newton_4elem_colloc100_trapar200_randmag_5.0_A.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_error_dist_locelm_newton_4elem_colloc100_trapar200_randmag_5.0_A.pdf}(d)
  }
  \caption{Nonlinear Helmholtz equation: profiles of the locELM
    solutions (a,c) and their absolute errors (b,d), computed
    using NLSQ-perturb (a,b) and Newton-LLSQ (c,d).
    %4 uniform sub-domains, 100 collocation points/sub-domain, 200 training
    %parameters/sub-domain, rand-mag=5.0, 1 hidden layer in locELM.
  }
  \label{fg_nhm_1}
\end{figure}



% how to simulate problem?

We employ the locELM method discussed in Section \ref{sec:nonl_steady}
for solving this problem, by restricting the method to one dimension.
We partition the domain $[a,b]$ into $N_e$ uniform sub-domains (sub-intervals),
and impose the $C^1$ continuity conditions across the sub-domain boundaries.
We employ $Q$ uniform collocation points within each sub-interval.

The local neural network for each sub-domain consists of
an input layer with one node (representing $x$), a single hidden layer
with $M$ nodes and the $\tanh$ activation function, and
an output layer with one node (representing the solution $u$),
which has no activation function and no bias.
An additional affine mapping, which normalizes the input $x$ data to
the interval $[-1,1]$ for each sub-domain, is incorporated into the local
neural networks immediately behind the input layer.
The weight and bias coefficients in the hidden layer of the local
neural networks are set to uniform random values generated on the interval
$[-R_m,R_m]$.
%In order to make the tests repeatable,
We employ a fixed seed value $12$ for the TensorFlow random generator
for all the tests reported in this subsection.
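The per-sub-domain setup just described can be sketched in NumPy (an illustrative sketch with our own helper names; the values of $Q$, $M$ and $R_m$ match typical choices used below):

```python
import numpy as np

def hidden_features(x, w, b, a_e, b_e):
    """Output functions V_j(x) of a single tanh hidden layer on the
    sub-domain [a_e, b_e], together with their first and second
    x-derivatives.  The input is first normalized to [-1, 1] by an
    affine map, as described in the text (names are our own)."""
    s = 2.0 / (b_e - a_e)                 # slope of the affine map
    xi = s * (x - a_e) - 1.0              # normalized input in [-1, 1]
    V = np.tanh(np.outer(xi, w) + b)      # shape (Q, M)
    dV = (1.0 - V**2) * (w * s)           # chain rule: dV/dx
    d2V = -2.0 * V * dV * (w * s)         # d2V/dx^2
    return V, dV, d2V

rng = np.random.default_rng(12)           # fixed seed, as in the tests
M, Rm = 200, 5.0
w = rng.uniform(-Rm, Rm, M)               # random hidden-layer weights
b = rng.uniform(-Rm, Rm, M)               # random hidden-layer biases
x = np.linspace(0.0, 2.0, 100)            # Q = 100 collocation points
V, dV, d2V = hidden_features(x, w, b, 0.0, 2.0)
```

The arrays `V`, `dV` and `d2V` play the role of the output functions of the last hidden layer and their derivatives when assembling the least squares problem for the output-layer coefficients.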

% NLSQ-perturb and Newton-LLSQ parameters

We employ the nonlinear least squares method with perturbations
(NLSQ-perturb) and the combined Newton/linear least squares method (Newton-LLSQ)
from Section \ref{sec:nonl_steady} for solving the resultant
nonlinear problem. The initial guess to the solution
is set to zero in all the tests of this subsection.
In the NLSQ-perturb method (see Algorithm \ref{alg:alg_1}),
we have employed $\delta = 0.2$ and $\xi_2=1$, as discussed in Remark \ref{rem_9},
for generating the random perturbations in the following tests.
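The perturbation logic can be sketched as follows (a simplified schematic of the ideas in Algorithm \ref{alg:alg_1}, applied to a toy residual and using SciPy's \texttt{least\_squares}; the actual algorithm's stopping and perturbation rules are more elaborate):

```python
import numpy as np
from scipy.optimize import least_squares

def nlsq_perturb(residual, c0, delta=0.2, max_restarts=10, tol=1e-12):
    """Nonlinear least squares with random perturbations (schematic):
    minimize ||residual(c)||^2; whenever the converged cost is still
    above tol, restart from the best coefficients plus a uniform
    random perturbation on [-delta, delta], keeping the best result."""
    rng = np.random.default_rng(12)
    best = least_squares(residual, c0)
    for _ in range(max_restarts):
        if best.cost < tol:
            break
        trial = least_squares(
            residual, best.x + rng.uniform(-delta, delta, best.x.size))
        if trial.cost < best.cost:
            best = trial
    return best

# toy residual with exact solutions (1, 2) and (2, 1)
res = lambda c: np.array([c[0] + c[1] - 3.0, c[0] * c[1] - 2.0])
sol = nlsq_perturb(res, np.array([0.5, 0.0]))
```

In the actual method the residual vector collects the equation, boundary and $C^1$ continuity conditions on the collocation points, and the unknowns are the output-layer coefficients of all the local neural networks.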

% summary of simulation parameters

The locELM simulation parameters include
the number of sub-domains ($N_e$), the number of collocation points per
sub-domain ($Q$), the number of training parameters per sub-domain ($M$),
and the maximum magnitude of the random coefficients %of the hidden layers
of the local neural networks ($R_m$).
%Note that the number of training parameters per sub-domain
%corresponds to the number of nodes in the hidden layer of the local
%neural networks for each sub-domain.

% discussion of results

Figure \ref{fg_nhm_1} illustrates the profiles of
the locELM solutions and their absolute errors computed using
the NLSQ-perturb and Newton-LLSQ methods.
%against the exact solution given by \eqref{eq_nhm_3}.
In these simulations, we have employed $N_e=4$ uniform sub-domains,
$Q=100$ uniform collocation points per sub-domain, $M=200$ training
parameters per sub-domain, and $R_m=5.0$ for generating the random
weight/bias coefficients.
%in the hidden layer of the local neural networks.
%Figures \ref{fg_nhm_1}(a) and (b) show the solution and error profiles
%obtained using the nonlinear least squares method with perturbations (NLSQ-perturb).
%Figures \ref{fg_nhm_1}(c) and (d) depict the corresponding profiles
%obtained using the combined Newton/linear least squares method (Newton-LLSQ).
The profile of the exact solution given by \eqref{eq_nhm_3}
is also included in these plots.
The solution profiles obtained with the current method are visually
indistinguishable from that of the exact solution.
%incidating that the locELM method with
%both NLSQ-perturb and Newton-LLSQ produces accurate results.
The error profiles indicate that the NLSQ-perturb method produces
more accurate results than Newton-LLSQ, with error levels
on the order $10^{-12}\sim 10^{-9}$ for NLSQ-perturb
versus $10^{-9}\sim 10^{-5}$ for Newton-LLSQ.


% 1D and 2D nonlinear helmholtz equations


\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Nonl_Helm/nonl_helm_error_colloc_compare_method_4elem_colloc100_trapar200_randmag_5.0_A.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Nonl_Helm/nonl_helm_traintime_colloc_compare_method_4elem_colloc100_trapar200_randmag_5.0_A.pdf}(b)
  }
  \caption{Effect of collocation points (nonlinear Helmholtz equation):
    (a) the maximum and rms errors in the domain,
    and (b) the network training time,
    as a function of the number of collocation points per
    sub-domain, computed using the locELM method
    with NLSQ-perturb and Newton-LLSQ.
    %4 sub-domains, rand-mag=5.0, the number of training parameters
    %per sub-domain is 200.
  }
  \label{fg_nhm_2}
\end{figure}

Figure \ref{fg_nhm_2} demonstrates the effect of the number of collocation
points per sub-domain on the simulation accuracy and
the computational cost.
In this group of tests, we have employed $N_e=4$ sub-domains,
$M=200$ training parameters per sub-domain, and $R_m=5.0$ when
generating the random coefficients.
%for the hidden layers of the local neural networks.
The number of uniform collocation points per sub-domain is varied systematically
between $Q=25$ and $Q=200$.
Figure \ref{fg_nhm_2}(a) shows the maximum and rms errors in the domain
as a function of the number of collocation points per sub-domain,
obtained with NLSQ-perturb and Newton-LLSQ.
Figure \ref{fg_nhm_2}(b) shows the corresponding
training time of the overall neural network
versus the number of collocation points per sub-domain.
%associated with these two methods.
With the Newton-LLSQ method, the errors are observed to
decrease gradually with increasing
number of collocation points, and appear to stagnate at a level around
$10^{-6}$ when the number of collocation points/sub-domain is
beyond $150$.
With the NLSQ-perturb method, the errors initially decrease exponentially
with increasing number of collocation points (when below $125$),
and then stagnate at a level around $10^{-11}$ when the number of collocation
points/sub-domain increases to $150$ and beyond.
The  NLSQ-perturb results are in general considerably more accurate
than those obtained with Newton-LLSQ.
%
% training time
In terms of the training time, the Newton-LLSQ method is consistently faster than
NLSQ-perturb, and the gap widens as the number of
collocation points increases.
With Newton-LLSQ, the training time is insensitive to the number of
collocation points, remaining nearly constant across the range tested
here (Figure \ref{fg_nhm_2}(b)).
With NLSQ-perturb, the training time increases approximately
linearly with the number of collocation points per sub-domain,
so that NLSQ-perturb becomes substantially slower than Newton-LLSQ when
the number of collocation points is large.


\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Nonl_Helm/nonl_helm_error_trapar_compare_method_4elem_colloc100_randmag_5.0_A.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Nonl_Helm/nonl_helm_traintime_trapar_compare_method_4elem_colloc100_randmag_5.0_A.pdf}(b)
  }
  \caption{Effect of the number of training parameters (nonlinear Helmholtz equation):
    (a) the maximum and rms errors in the domain,
    and (b) the network training time,
     versus the number of
    training parameters per sub-domain, computed using the locELM method
    with NLSQ-perturb and Newton-LLSQ.
    %4 sub-domains, rand-mag=5.0,
    %the number of collocation points/sub-domain is 100.
  }
  \label{fg_nhm_3}
\end{figure}

Figure \ref{fg_nhm_3} demonstrates the effect of the number of training parameters
per sub-domain on the simulation accuracy and the computational cost.
In this group of tests, we have employed $N_e=4$ sub-domains,
$Q=100$ uniform collocation points per sub-domain, and $R_m=5.0$
when generating the random coefficients in the hidden layers of the local
neural networks. The number of training parameters per sub-domain
%i.e.~the number of nodes in the hidden layer of the local neural networks,
is varied systematically between $50$ and $350$.
Figure \ref{fg_nhm_3}(a) shows the maximum and rms errors of the solutions
as a function of the number of training parameters per sub-domain
obtained with  NLSQ-perturb and Newton-LLSQ.
With NLSQ-perturb, the numerical errors decrease substantially as
the number of training parameters per sub-domain increases,
reaching  a level around
$10^{-10}$ when the number of training parameters increases beyond $200$.
With Newton-LLSQ, the errors also decrease as the number of
training parameters increases, but the error reduction is much slower.
When the number of training parameters per sub-domain exceeds $200$,
the errors with Newton-LLSQ no longer seem to decrease further and remain
at a level around $10^{-5}$. It is evident that
the results from the Newton-LLSQ method are generally much less accurate
than those from the NLSQ-perturb method.
Figure \ref{fg_nhm_3}(b) shows the corresponding network training time
as a function of the number of training parameters per sub-domain.
In the range of training parameters tested here, the training times of both
methods appear to fluctuate around a certain level.
But the training time with the Newton-LLSQ method is generally notably
smaller than that with the NLSQ-perturb method, except for the outlier point
corresponding to $100$ training parameters per sub-domain.
These data suggest
that Newton-LLSQ is generally faster than NLSQ-perturb.

%% effect of multiple hidden layers on results

\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Nonl_Helm/nonl_helm_soln_dist_locelm_nonlstsq_4elem_colloc175_trapar250_randmag_2.0_2hlayers_A.pdf}(a)
    \includegraphics[width=2in]{Figures/Nonl_Helm/nonl_helm_error_dist_locelm_nonlstsq_4elem_colloc175_trapar250_randmag_2.0_2hlayers_A.pdf}(b)
    \includegraphics[width=2in]{Figures/Nonl_Helm/nonl_helm_2hlayers_error_colloc_4elem_trapar250_randmag_2.0_A.pdf}(c)
  }
  \caption{Results obtained with $2$ hidden layers in local neural networks
    (nonlinear Helmholtz equation): profiles of (a) the locELM (NLSQ-perturb) solution
    and (b) its absolute error. (c) The maximum/rms errors in the domain versus
    the number of collocation points per sub-domain.
    %4 uniform sub-domains, $250$ training parameters per sub-domain,
    %$R_m=2.0$. 2 hidden layers in local NN, structure: [1, 25, 250, 1].
    %In (a,b) $175$ uniform collocation points per sub-domain are used.
  }
  \label{fg_nhm_a}
\end{figure}

With the current locELM method, the local neural network
%for each sub-domain
can contain more than one hidden layer. As shown in previous subsections,
local neural networks with a small number of hidden layers (more than one)
can also deliver accurate results with the current method.
Figure \ref{fg_nhm_a} demonstrates again this point with the nonlinear
Helmholtz equation. In this group of tests, we employ $N_e=4$ uniform sub-domains,
$M=250$ training parameters per sub-domain, $2$ hidden layers (with widths
$25$ and $250$, respectively, and the $\tanh$ activation function) in
each local neural network, and $R_m=2.0$ when generating the
random weight/bias coefficients for these hidden layers.
The number of uniform collocation points per sub-domain is varied
systematically in these tests.
Figures \ref{fg_nhm_a}(a) and (b) show the locELM solution and error profiles
obtained with $Q=175$ uniform collocation points per sub-domain
using the NLSQ-perturb method.
Figure  \ref{fg_nhm_a}(c) shows the maximum and rms errors in the domain
as a function of the number of uniform collocation points per sub-domain.
We observe an essentially exponential decrease in the numerical errors
with increasing number of collocation points per sub-domain.


\begin{figure}
  \centerline{
    \includegraphics[width=2.in]{Figures/Nonl_Helm/nonl_helm_error_method_compare_4elem_colloc100_trapar200_randmag_5.0_B.pdf}
    %\includegraphics[width=3in]{Figures/Nonl_Helm/nonl_helm_traintime_method_compare_4elem_colloc100_trapar200_randmag_5.0_B.eps}(b)
  }
  \caption{Effect of the random coefficients in %hidden layers of
    local neural networks
    (nonlinear Helmholtz equation): the maximum  error in the domain
    %and (b) the training time,
    versus $R_m$, 
    %of the maximum magnitude of the random coefficients ($R_m$),
    obtained with the NLSQ-perturb and Newton-LLSQ methods. 
    %4 sub-domains, 100 collocation points/sub-domain, 200 training
    %parameters/sub-domain, 1 hidden layer in locELM.
    %NLSQ: nonlinear least squares method;
    %NLSQ-perturb: nonlinear least squares method, with perturbation/sub-iteration;
    %Newton-LLSQ: combined Newton/linear least squares method.
  }
  \label{fg_nhm_4}
\end{figure}

Figure \ref{fg_nhm_4} illustrates the effect of the random coefficients in
the hidden layers of the local neural networks.
In this group of tests we employ $N_e=4$ sub-domains, $Q=100$ uniform collocation
points per sub-domain, $200$ training parameters per sub-domain, and
a single hidden layer in the local neural networks.
As discussed before, the weight/bias coefficients in the hidden layer of
each local neural network are set to uniform random values generated
on $[-R_m,R_m]$. In these tests, we vary $R_m$ systematically and study its
effect. Figure \ref{fg_nhm_4} shows the maximum  error in the overall
domain as a function of $R_m$, % the maximum magnitude of the random coefficients,
obtained with the NLSQ-perturb and the Newton-LLSQ methods.
%Figure \ref{fg_nhm_4}(b) shows the corresponding training time of
%the overall neural network versus $R_m$.
The error exhibits a behavior similar to what has been observed
with the linear problems: the methods attain better accuracy for a range
of moderate $R_m$ values, and the results are less accurate
when $R_m$ is very large or very small.
We again observe that the NLSQ-perturb result is significantly more accurate 
than that of Newton-LLSQ, except for a range of small $R_m$ values.
%In terms of the training time, the Newton-LLSQ method can again
%be observed to be markedly faster than NLSQ-perturb.


\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Nonl_Helm/nonl_helm_error_elem_fixedTOF_colloc400_trapar800_A.pdf}(a)
    \includegraphics[width=2in]{Figures/Nonl_Helm/nonl_helm_traintime_elem_fixedTOF_colloc400_trapar800_A.pdf}(b)
  }
  \caption{Effect of the number of sub-domains, with fixed total degrees of freedom
    in the domain (nonlinear Helmholtz equation): (a) the maximum and
    rms errors in the domain, and
    (b) the training time, as a function of the number of sub-domains
    in the locELM simulation
    with NLSQ-perturb. 
    %The total number of collocation points and the total number of training
    %parameters in the domain are fixed at $400$ and $800$, respectively,
    %while the number of sub-domains is varied.
    %1 sub-domain: rand-mag=20.0,
    %2 sub-domains: rand-mag = 10.0,
    %4 sub-domains: rand-mag = 4.5.
    %locELM-NLSQ-perturb has been used in the tests.
  }
  \label{fg_nhm_5}
\end{figure}


In Figure \ref{fg_nhm_5} we study the effect of the number of sub-domains
%in the locELM simulation
on the simulation accuracy and the computational cost, while the total degrees of
freedom %(number of collocation points/training parameters)
in the domain are fixed.
In these tests we vary the number of uniform sub-domains
($N_e$). We choose the number of uniform collocation
points per sub-domain ($Q$) and the training parameters per sub-domain ($M$)
such that the total number of collocation points in the domain
is fixed at $N_eQ=400$ and the total number of training parameters in
the domain is fixed at $N_eM=800$.
We have tested three cases, corresponding to $N_e=1$, $2$ and $4$.
As in the previous sections, the case with one sub-domain ($N_e=1$)
corresponds to use of a global ELM.
Figure \ref{fg_nhm_5}(a) shows the maximum and rms errors in the overall domain
as a function of the number of sub-domains.
%in the locELM simulations.
Figure \ref{fg_nhm_5}(b) shows the corresponding training time
%of the overall neural network
versus the number of sub-domains.
These results are obtained with the NLSQ-perturb method.
We have employed $R_m=20.0$ when generating the random coefficients with
one sub-domain ($N_e=1$), $R_m=10.0$ with two sub-domains ($N_e=2$),
and $R_m=4.5$ with four sub-domains ($N_e=4$).
These $R_m$ values approximately reside in the optimal range of $R_m$
values for these cases.
One can observe that the numerical errors obtained with
different number of sub-domains are comparable, with the errors obtained on
four sub-domains a little worse than those of the other cases.
On the other hand, the network training time decreases significantly with
increasing number of sub-domains.


\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_soln_dist_dnn_adam_7hlayers_50width.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_error_dist_dnn_adam_7hlayers_50width.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_soln_dist_dnn_lbfgs_4hlayer_50width.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Nonl_Helm/nonl_helm_error_dist_dnn_lbfgs_4hlayer_50width.pdf}(d)
  }
  \caption{Nonlinear Helmholtz equation:
    Distributions of the solutions (a,c)
    and their absolute errors (b,d) computed using PINN~\cite{RaissiPK2019}
    with the Adam optimizer
    (a,b) and the L-BFGS optimizer (c,d).
    These can be compared with those in Figure \ref{fg_nhm_1}
    computed using locELM.
    %Adam: DNN structure [1, 50, 50, 50, 50, 50, 50, 50, 1],
    %total 45,000 epochs; input data size: 400 uniform points;
    %learning rates: 1.0*default-lr for first 5000 epochs, 0.5*default-lr for next 5000
    %epochs, 0.25*default-lr for next 5000 epochs, 0.1*default-lr for next 5000 epochs,
    %0.05*default-lr for next 5000 epochs, 0.025*default-lr for next 5000 epochs,
    %0.0125*default-lr for next 5000 epochs, 0.01*default-lr for next 5000 epochs,
    %0.005*default-lr for next 5000 epochs, where default-lr = 0.001.
    %L-BFGS: DNN structure: [1, 50, 50, 50, 50, 1]; input data: 400 uniform points;
    %total 22,000 epochs.
  }
  \label{fg_nhm_6}
\end{figure}

We next compare the current locELM method with the PINN method~\cite{RaissiPK2019}
for solving the nonlinear Helmholtz equation.
Figure \ref{fg_nhm_6} shows distributions of
the PINN solutions and their absolute errors against the exact solution
given in equation~\eqref{eq_nhm_3}, computed using the Adam optimizer
(Figures \ref{fg_nhm_6}(a,b)) and the L-BFGS optimizer (Figures \ref{fg_nhm_6}(c,d)).
With the Adam optimizer, the neural network consists of $7$ hidden layers,
with a width of $50$ nodes in each layer and the $\tanh$ activation function,
in addition to the input layer of one node (representing $x$) and the output
layer of one node (representing the solution $u$).
%The input data consists of
%the $x$ data on $400$ uniform collocation points in the domain.
The network has been trained on the input data of $400$ uniform collocation points
for $45,000$ epochs, with the learning rate gradually decreasing from $0.001$
at the beginning to $5\times 10^{-6}$ at the end of the training.
With the L-BFGS optimizer, the neural network consists of $4$ hidden layers,
with a width of $50$ nodes in each layer and the $\tanh$ activation function,
apart from the input layer of one node and the output layer of one node.
The network has been trained on the input data of $400$ uniform collocation
points in the domain for $22,000$ L-BFGS iterations.
The results indicate that the PINN method has captured the solution quite accurately,
with the errors on the order $10^{-5}\sim 10^{-3}$ with the Adam optimizer
and on the order $10^{-5}\sim 10^{-4}$ with the L-BFGS optimizer.
Comparing the PINN results in this figure and the locELM  results in Figure~\ref{fg_nhm_1},
we can observe that the locELM method is considerably more accurate than
PINN.


\begin{table}
  \centering
  \begin{tabular}{lllll}
    \hline
    method & maximum error & rms error & epochs/iterations & training time (seconds)\\
    PINN (Adam) & $4.56e-3$ & $5.04e-4$ & $45,000$ & $578.2$ \\
    PINN (L-BFGS) & $1.69e-3$ & $1.69e-4$ & $22,000$ & $806.4$ \\
    locELM (NLSQ-perturb) & $1.45e-9$ & $2.34e-10$ & $71$ & $7.7$ \\
    locELM (Newton-LLSQ) & $1.28e-5$ & $1.75e-6$ & $5$ & $2.7$ \\
    \hline
  \end{tabular}
  \caption{Nonlinear Helmholtz equation: comparison between locELM and PINN
     in terms of the maximum/rms errors in the domain,
    the number of epochs or nonlinear iterations, and the network training time.
    The problem settings and simulation parameters correspond to those of Figures
    \ref{fg_nhm_1} and \ref{fg_nhm_6}.
  }
  \label{tab_nhm_7}
\end{table}

Table~\ref{tab_nhm_7} provides further comparisons between locELM
and PINN in terms of the accuracy and the computational cost.
Here we have listed the maximum and rms errors in the domain, the number
of epochs or nonlinear iterations in the training,
and the network training time, associated with
the PINN (with Adam/L-BFGS optimizers) simulations and the current locELM
simulations. The problem settings and the simulation parameters here
correspond to those in Figure \ref{fg_nhm_1} with locELM
and those in Figure \ref{fg_nhm_6} with PINN.
It is evident that the current locELM method is much more accurate than PINN.
For example, the errors obtained using locELM/NLSQ-perturb
are about six orders of magnitude smaller than those obtained by
PINN/L-BFGS. The errors obtained by locELM/Newton-LLSQ
are about two orders of magnitude smaller than those of PINN/L-BFGS.
Furthermore, the current method is computationally much cheaper than PINN,
with the training time approximately two orders of magnitude smaller
(e.g.~about $8$ seconds with locELM/NLSQ-perturb versus around $806$ seconds
with PINN/L-BFGS).

% compare with FEM

\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Nonl_Helm/FEM/nonl_helm1d_fem_soln_prof_200kelem.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Nonl_Helm/FEM/nonl_helm1d_fem_error_prof_200kelem.pdf}(b)
  }
  \caption{Nonlinear Helmholtz equation: profiles of the solution (a) and its
    absolute error (b) computed using the finite element method (FEM) with
    $200,000$ uniform elements.
  }
  \label{fg_nhm_8}
\end{figure}


\begin{table}
  \centering
  \begin{tabular}{l|lllllll}
    \hline
    method & elements & sub-domains & $Q$ & $M$ & maximum & rms
    & wall-time \\
    & & & & & error & error & (seconds) \\
    \hline
    locELM (NLSQ-perturb) & -- & 4 & $100$ & $200$ & $1.45e-9$ & $2.34e-10$  & $7.7$ \\
    & -- & 4 & $125$ & $200$ & $3.96e-11$ & $7.02e-12$ & $10.6$ \\
    %& -- & 4 & $150$ & $200$ & $1.90e-11$ & $4.25e-12$ & $17.4$ \\
    \hline
    FEM & $200,000$ & -- & -- & -- & $5.26e-9$ & $1.37e-9$ & $4.7$ \\
    & $400,000$ & -- & -- & -- & $1.31e-9$ & $3.43e-10$ & $8.8$ \\
    & $800,000$ & -- & -- & -- & $3.29e-10$ & $8.57e-11$ & $18.1$ \\
    \hline
  \end{tabular}
  \caption{Nonlinear Helmholtz equation: comparison between 
    locELM  and FEM,
    %the classical finite element method (FEM)
    in terms of the maximum/rms errors in the domain 
    and the training/computation time.
    The problem settings  correspond to those of Figures
    \ref{fg_nhm_1}(a,b) and \ref{fg_nhm_8}.
  }
  \label{tab_nhm_9}
\end{table}

Let us now compare the current locELM method with the finite element method for
solving the nonlinear Helmholtz equation.
Figure \ref{fg_nhm_8} shows the profiles of the finite element
solution and its absolute error
against the analytic solution, computed on a mesh of $200,000$ uniform elements.
The finite element method is again
implemented using the FEniCS library in Python, and the resultant nonlinear
algebraic system is solved with a Newton iteration.
The FEM result is observed to be accurate, with an error level
on the order $10^{-9}$.
%
In Table \ref{tab_nhm_9} we compare the locELM method and the finite element method
with regard to the accuracy and the computational cost.
The table lists the maximum and rms errors in the domain and the wall time
of the training or computation, obtained using locELM
and FEM on several sets of parameters corresponding to different
simulation resolutions.
One can observe that locELM exhibits a performance comparable to, and generally
better than, that of FEM. For example, the locELM case with $(Q,M)=(100,200)$
has a computational cost comparable to the FEM case with $400,000$ elements, and
their error levels are also comparable.
The locELM case with $(Q,M)=(125,200)$ has a lower cost ($\sim 10$ seconds)
than the FEM case with $800,000$ elements ($\sim 18$ seconds), and its errors
are smaller than those of the latter by about an order of magnitude.


% what else to discuss here?


% nonlinear spring

\input NonlSpr


\subsubsection{Viscous Burgers' Equation}

% 1D+t Burger's equation

In this subsection we further test the locELM method using the
viscous Burgers' equation.
Consider the spatial-temporal domain
$\Omega = \{(x,t)\ |\ x\in[a,b],\ t\in[0,t_f] \}$,
and the following initial/boundary value problem for
the Burgers' equation,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
    =\nu\frac{\partial^2 u}{\partial x^2} + f(x,t),
    \label{eq_bg_1} \\
    &
    u(a,t) = g_1(t), \\
    &
    u(b,t) = g_2(t), \\
    &
    u(x,0) = h(x), \label{eq_bg_2} 
  \end{align}
\end{subequations}
where $u(x,t)$ is the field to be solved for, the constant $\nu$
denotes the viscosity, $f(x,t)$ is a
prescribed source term, $g_1(t)$ and $g_2(t)$ denote the boundary
distributions, and $h(x)$ is the initial distribution.
We employ the following values for the constant parameters,
%The values for the constant parameters in the domain and problem specifications are
%given by
\begin{equation*}
  \nu = 0.01, \quad
  a = 0, \quad
  b = 5, \quad
  t_f = 10,\ \text{or}\ 2.5,\ \text{or}\ 0.25.
\end{equation*}
We choose the source term $f(x,t)$ and the boundary/initial distributions
($g_1$, $g_2$ and $h$)
such that the following function
\begin{equation}\label{eq_bg_3}
  \begin{split}
  u(x,t) =& \left(1+\frac{x}{10} \right)\left(1+\frac{t}{10} \right)
  \left[2\cos\left(\pi x+\frac{2\pi}{5} \right)
    + \frac32\cos\left(2\pi x - \frac{3\pi}{5}  \right) \right] 
  \left[2\cos\left(\pi t+\frac{2\pi}{5} \right) \right. \\
    &
    \left.
    + \frac32\cos\left(2\pi t - \frac{3\pi}{5}  \right) \right]
  \end{split}
\end{equation}
satisfies this initial/boundary value problem.
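The source term and the boundary/initial data can be generated symbolically from this manufactured solution. The following sketch (using sympy; the variable names are ours, not those of the actual implementation) illustrates the construction:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
nu = sp.Rational(1, 100)  # viscosity

# manufactured solution of Eq. (eq_bg_3)
u = (1 + x/10) * (1 + t/10) \
    * (2*sp.cos(sp.pi*x + 2*sp.pi/5)
       + sp.Rational(3, 2)*sp.cos(2*sp.pi*x - 3*sp.pi/5)) \
    * (2*sp.cos(sp.pi*t + 2*sp.pi/5)
       + sp.Rational(3, 2)*sp.cos(2*sp.pi*t - 3*sp.pi/5))

# source term chosen so that u satisfies the viscous Burgers equation
f = sp.diff(u, t) + u*sp.diff(u, x) - nu*sp.diff(u, x, 2)

# boundary/initial data are restrictions of u (here a=0, b=5)
g1 = u.subs(x, 0)   # u(a, t)
g2 = u.subs(x, 5)   # u(b, t)
h = u.subs(t, 0)    # u(x, 0)
```

With $f$, $g_1$, $g_2$ and $h$ so constructed, the function \eqref{eq_bg_3} satisfies the problem by construction.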

% how to simulate problem?

We employ the method presented in Section \ref{sec:tnleq} with block time
marching to solve this problem, by restricting the method to one spatial
dimension. The spatial-temporal domain $\Omega$ is partitioned into
$N_b$ uniform blocks in time, and each time block is computed separately
and successively. Within each time block, we further partition its
spatial-temporal domain into $N_x$ uniform sub-domains in $x$
and $N_t$ uniform sub-domains in time, resulting in $N_e=N_xN_t$ uniform sub-domains
per time block. $C^1$ continuity is imposed on the sub-domain boundaries
in the $x$ direction, and $C^0$ continuity is imposed on the sub-domain
boundaries in the temporal direction. Within each sub-domain
we employ a total of $Q=Q_xQ_t$ uniform collocation points, with $Q_x$
uniform collocation points in the $x$ direction and $Q_t$ uniform collocation
points in time.
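As a minimal illustration of this partitioning (with our own function and variable names, not those of the actual implementation), the uniform collocation points of one time block can be generated as follows:

```python
import numpy as np

def collocation_points(a, b, t0, t1, N_x, N_t, Q_x, Q_t):
    """Uniform collocation points for the sub-domains of one time block.

    The block [a,b] x [t0,t1] is split into N_x * N_t uniform sub-domains;
    each sub-domain receives a Q_x x Q_t uniform grid (end points included).
    Returns a list of (X, T) meshgrid pairs, one per sub-domain.
    """
    xb = np.linspace(a, b, N_x + 1)    # sub-domain boundaries in x
    tb = np.linspace(t0, t1, N_t + 1)  # sub-domain boundaries in t
    points = []
    for m in range(N_x):
        for n in range(N_t):
            xs = np.linspace(xb[m], xb[m + 1], Q_x)
            ts = np.linspace(tb[n], tb[n + 1], Q_t)
            X, T = np.meshgrid(xs, ts, indexing='ij')
            points.append((X, T))
    return points
```

The shared end points on adjacent sub-domain boundaries are the points on which the $C^1$ (in $x$) and $C^0$ (in $t$) continuity conditions are imposed.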


\begin{figure}
  \centerline{
    \includegraphics[height=2.3in]{Figures/Burgers/burgers_soln_dist_nlsq_40tblocks_5elem_colloc20_trapar200_randmag_0.75_B.pdf}(a)
    \includegraphics[height=2.3in]{Figures/Burgers/burgers_error_dist_nlsq_40tblocks_5elem_colloc20_trapar200_randmag_0.75_B.pdf}(b)
  }
  \caption{Burgers' equation: distributions of (a) the solution, and (b) its
    absolute error in the spatial-temporal plane,
     computed using the current locELM (NLSQ-perturb)
    method.
    %40 time blocks, 5 sub-domains/block,
    %20x20 uniform collocation points/sub-domain,
    %200 training parameters/sub-domain, rand-mag=0.75. 1 hidden layer/blocELM.
  }
  \label{fig:burger}
\end{figure}



% NN structure

We employ a local neural network for each sub-domain, leading to a
total of $N_e$ local neural networks in the simulations.
Each local neural network consists of an input layer of two nodes,
representing $x$ and $t$,
a single hidden layer
with a width of $M$ nodes and the $\tanh$ activation function,
and an output layer of a single node, representing the field solution $u$.
The output layer has no bias and no activation function.
Additionally, an affine mapping operation is incorporated into the network
right behind the input layer to normalize the input $x$ and $t$ data of
each sub-domain to the domain $[-1,1]\times[-1,1]$.
The weight/bias coefficients in the hidden layer of the local neural networks
are pre-set to uniform random values generated on
the interval $[-R_m,R_m]$, and are fixed during the simulation.
A fixed seed value of $22$ is used for the TensorFlow random number generator
for all the tests in this sub-section.
%so that the numerical tests are repeatable.
%With the above settings, the number of training parameters in the local
%neural network for each sub-domain equals the number of nodes in the
%hidden layer ($M$).
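A single local network of this form amounts to a random feature map, and can be sketched in a few lines (a minimal numpy illustration under our own naming; the actual implementation is based on TensorFlow):

```python
import numpy as np

rng = np.random.default_rng(22)  # fixed seed, as in the tests

def make_local_elm(x_lim, t_lim, M, R_m):
    """One local ELM for a sub-domain: random single hidden layer (tanh),
    preceded by an affine map of the sub-domain onto [-1,1] x [-1,1].

    Hidden-layer weights/biases are drawn uniformly from [-R_m, R_m] and
    then frozen; only the M output coefficients beta are trained, so the
    sub-domain solution is V(x, t) @ beta.
    """
    W = rng.uniform(-R_m, R_m, size=(2, M))  # hidden weights (for x and t)
    b = rng.uniform(-R_m, R_m, size=M)       # hidden biases

    def V(x, t):
        # normalize inputs of this sub-domain to [-1,1] x [-1,1]
        xn = 2*(x - x_lim[0])/(x_lim[1] - x_lim[0]) - 1
        tn = 2*(t - t_lim[0])/(t_lim[1] - t_lim[0]) - 1
        # M hidden-layer outputs at each of the given points
        return np.tanh(np.outer(xn, W[0]) + np.outer(tn, W[1]) + b)

    return V
```

Derivatives such as $\partial V/\partial x$ follow analytically from the $\tanh$ form and enter the residuals of the least squares problem.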

We employ the NLSQ-perturb method from Section \ref{sec:nonl_steady}
for solving the resultant nonlinear algebraic problem in the majority of tests
presented below. The results computed using Newton-LLSQ
are also provided for comparison in some cases. The initial guess
of the solution  is set to zero.
With the NLSQ-perturb method, we employ $\delta=0.5$ and $\xi_2=0$
(see Algorithm \ref{alg:alg_1}
and Remark \ref{rem_9})
for generating the random perturbations in the following tests.
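In outline, this strategy can be sketched as follows (a loose illustration built on scipy.optimize.least_squares, with our own names and a simplified acceptance rule; the actual Algorithm \ref{alg:alg_1} is more elaborate):

```python
import numpy as np
from scipy.optimize import least_squares

def nlsq_perturb(residual, n_params, delta=0.5, max_restarts=5,
                 tol=1e-10, seed=22):
    """Sketch of the NLSQ-perturb idea: solve the nonlinear least squares
    problem from a zero initial guess; if the converged cost is not small
    enough, restart from a random perturbation drawn from [-delta, delta]
    and keep the best solution found."""
    rng = np.random.default_rng(seed)
    best = least_squares(residual, np.zeros(n_params))
    for _ in range(max_restarts):
        if best.cost < tol:
            break
        x0 = rng.uniform(-delta, delta, size=n_params)
        trial = least_squares(residual, x0)
        if trial.cost < best.cost:
            best = trial
    return best

# toy usage: fit beta so that V @ beta matches data in the least squares sense
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 2.0, 3.0])
sol = nlsq_perturb(lambda beta: V @ beta - d, 2)
```

In the actual method, `residual` would assemble the equation, boundary/initial condition and continuity residuals on all collocation points as functions of the training parameters.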

The locELM simulation parameters include
the number of time blocks ($N_b$), the number of sub-domains per time
block ($N_e$, $N_x$, $N_t$), the number of collocation points per
sub-domain ($Q$, $Q_x$, $Q_t$), the number of training parameters
per sub-domain ($M$), and the maximum magnitude of the random coefficients ($R_m$).
%in the hidden layers of the local neural networks ($R_m$).



\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Burgers/burgers_soln_prof_line_t8.75_40tblocks_5elem_colloc20sq_trapar200_randmag_0.75_A.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Burgers/burgers_error_prof_line_t8.75_40tblocks_5elem_colloc20sq_trapar200_randmag_0.75_A.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Burgers/burgers_soln_prof_line_x2.75_40tblocks_5elem_colloc20sq_trapar200_randmag_0.75_A.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Burgers/burgers_error_prof_line_x2.75_40tblocks_5elem_colloc20sq_trapar200_randmag_0.75_A.pdf}(d)
  }
  \caption{Burgers' equation: Profiles of the locELM (NLSQ-perturb) solution (a) and
    its absolute error (b) at $t=8.75$.
    Time histories of the locELM (NLSQ-perturb) solution
    (c) and its absolute error (d) at the
    point $x=2.75$.
    The settings and simulation parameters correspond to those
    of Figure \ref{fig:burger}.
  }
  \label{fg_bg_2}
\end{figure}

% discussion of results

Figure \ref{fig:burger} shows distributions of
the solution and its absolute error
%against the exact solution \eqref{eq_bg_3}
in the spatial-temporal plane,
computed using the current locELM method (with NLSQ-perturb).
Here the temporal domain size is set to be $t_f=10$,
and $40$ uniform time blocks ($N_b=40$) are used 
in the spatial-temporal domain.
We have employed  $N_e=5$ uniform sub-domains
with $N_x=5$ and $N_t=1$ within each time block,
$Q=20\times 20$ uniform collocation points per sub-domain
($Q_x=Q_t=20$), $M=200$ training parameters per sub-domain,
and $R_m=0.75$ when generating the random coefficients.
%for the hidden layers of the local neural networks.
The current method has captured the solution accurately,
with the absolute error on the order of $10^{-8}\sim 10^{-7}$
in the overall domain.


Figure \ref{fg_bg_2} further examines  the accuracy of the locELM solution.
The problem settings and the simulation parameters here correspond to those
of Figure \ref{fig:burger}.
Figures \ref{fg_bg_2}(a) and (b) depict the profiles of the locELM (NLSQ-perturb) solution
and its absolute error at the time $t=8.75$. The exact solution profile
at this time instant is also shown in Figure \ref{fg_bg_2}(a).
The locELM solution profile exactly overlaps with
that of the exact solution, and the absolute error is on the level
$10^{-10}\sim 10^{-7}$.
Figures \ref{fg_bg_2}(c) and (d) show the time histories of the locELM (NLSQ-perturb)
solution and its absolute error at the point $x=2.75$.
The time history of the exact solution at this point is also
shown in Figure \ref{fg_bg_2}(c).
The simulated signal overlaps with the exact
signal, and the absolute error can be observed to fluctuate around the level
$10^{-10}\sim 10^{-8}$.



\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Burgers/burgers_error_colloc_10tblocks_5elem_trapar200_randmag0.5_A.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Burgers/burgers_error_trapar_10tblocks_5elem_colloc20sq_randmag_0.5_A.pdf}(b)
  }
  \caption{Effect of the degrees of freedom on the accuracy (Burgers' equation):
    the maximum and rms errors in the domain as a
    function of (a) the number of collocation points in each direction per
    sub-domain, and
    (b) the number of training parameters per sub-domain.
    %In (a) the number of training parameters per sub-domain is fixed at $M=200$.
    %In (b) the number of collocation points per sub-domain is fixed at
    %$Q=20\times 20$.
    %10 time blocks, 5 sub-domains/block, rand-mag = 0.5.
  }
  \label{fg_bg_3}
\end{figure}

Figure \ref{fg_bg_3} demonstrates the effect of the degrees of freedom
on the simulation accuracy.
In this group of tests the temporal domain size is set to
$t_f=2.5$. We have employed $N_b=10$ uniform time blocks in the overall
spatial-temporal domain, $N_e=5$ uniform sub-domains per time block
(with $N_x=5$ and $N_t=1$), and $R_m=0.5$ when generating random coefficients
for the hidden layers of the local neural networks.
First, we fix the number of training parameters per sub-domain
to $M=200$, and vary the number of (uniform) collocation points
per sub-domain systematically while maintaining $Q_x=Q_t$.
Figure \ref{fg_bg_3}(a) shows the maximum and rms errors in the overall domain
versus the number of collocation points in each direction
per sub-domain.
It is observed that the errors decrease essentially exponentially
with increasing number of collocation points per direction (up to
around $Q_x=Q_t=15$). The errors then stagnate as the number of
collocation points per direction increases beyond $15$, due to
the saturation associated with the fixed number of training parameters
in this test.
Then, we fix the number of uniform collocation points
to $Q=20\times 20$ per sub-domain, and vary the number of training parameters
per sub-domain systematically in a range of values.
Figure \ref{fg_bg_3}(b) shows the resultant maximum/rms errors in the overall domain
versus the number of training parameters per sub-domain.
As the number of training parameters per sub-domain increases,
the locELM errors can be observed to
decrease substantially.


\begin{figure}
  \centerline{
    \includegraphics[width=2.0in]{Figures/Burgers/burgers_error_elem_fixedTOF_colloc2000_trapar1000_1tblock.pdf}(a)
    \includegraphics[width=2.0in]{Figures/Burgers/burgers_traintime_elem_fixedTOF_colloc2000_trapar1000_1tblock.pdf}(b)
  }
  \caption{Effect of the number of sub-domains,
    with fixed total degrees of freedom in the domain
    (Burgers' equation): (a) the maximum and rms errors in the domain, and (b) the training
    time, as a function of the number of uniform sub-domains per time block.
    Temporal domain size is $t_f=0.25$, and a single time block is used in
    the spatial-temporal domain.
    %The total number of collocation points in domain is approximately kept at $2000$,
    %and the total number of training parameters in domain is approximately kept
    %at $1000$.
    %Total degrees of freedom in the domain is approximately fixed.
    %total collocation points in domain is about 2000, total training parameters in
    %domain is about 1000.
    %1 sub-domain/block, collocation points/sub-domain 45x45, training parameters/sub-domain
    %1000, rand-mag = 2.0.
    %2 sub-domains/block, collocation points/sub-domain 32x32, training parameters/sub-domain
    %500, rand-mag = 1.0.
    %3 sub-domains/block, collocation points/sub-domain 26x26, training parameters/sub-domain
    %333, rand-mag = 1.0.
    %4 sub-domains/block, collocation points/sub-domain 22x22, training parameters/sub-domain
    %250, rand-mag = 0.75.
    %5 sub-domains/block, collocation points/sub-domain 20x20, training parameters/sub-domain
    %200, rand-mag = 0.75.
  }
  \label{fg_bg_4}
\end{figure}

Figure \ref{fg_bg_4} demonstrates the effect of the number of sub-domains, with the
total number of degrees of freedom in the domain (approximately) fixed.
In this group of tests, the temporal domain size is set to $t_f=0.25$,
and we employ a single time block in the spatial-temporal domain.
We employ uniform sub-domains, and vary the number of
sub-domains within the time block systematically between $N_e=1$ and $N_e=5$
(with fixed $N_t=1$ and various $N_x$).
The number of (uniform) collocation points per sub-domain and
the number of training parameters per sub-domain are both varied, but
the total number of collocation points and the total number of
training parameters in the time block are fixed
approximately at $N_eQ\approx 2000$ and $N_eM\approx 1000$, respectively.
More specifically, we employ $Q=45\times 45$ collocation points/sub-domain
and $M=1000$ training parameters/sub-domain with $N_e=1$ sub-domain within
the time block, $Q=32\times 32$ collocation points/sub-domain and
$M=500$ training parameters/sub-domain with $N_e=2$ sub-domains,
$Q=26\times 26$ collocation points/sub-domain and $M=333$ training parameters/sub-domain
with $N_e=3$ sub-domains, $Q=22\times 22$ collocation points/sub-domain
and $M=250$ training parameters/sub-domain with $N_e=4$ sub-domains,
and $Q=20\times 20$ collocation points/sub-domain and $M=200$ training parameters/sub-domain
with $N_e=5$ sub-domains within the time block.
When generating the random weight/bias coefficients,
%for the hidden layers of the local neural networks,
we have employed
$R_m=2.0$ with $N_e=1$ sub-domain in the time block, $R_m=1.0$
with $N_e=2$ and $3$ sub-domains, and $R_m=0.75$ with
$N_e=4$ and $5$ sub-domains within the time block.
These values are approximately in the optimal range of $R_m$ values
for these cases.
Figure \ref{fg_bg_4}(a) shows the maximum and rms errors of the locELM (NLSQ-perturb)
solution in the domain as a function of the number of sub-domains within
the time block.
We observe that the errors decrease quite significantly, by nearly
two orders of magnitude, as the number of sub-domains increases from $N_e=1$
to $N_e=3$. The errors remain approximately at the same level
with three or more sub-domains.
Note that the case with one sub-domain % within the time block
corresponds to the global ELM computation.
These results suggest that the locELM simulation with multiple sub-domains
achieves better accuracy than the global ELM simulation
for this problem.
Figure \ref{fg_bg_4}(b) shows the training time of the neural network
as a function of the number of sub-domains.
%in the time block.
The training time decreases substantially as
the number of sub-domains increases from one to three
(from around $110$ seconds to about $40$ seconds),
and remains approximately the same with three or more sub-domains.
These results show that,
compared with the global ELM, the use of
domain decomposition and multiple sub-domains in locELM
can significantly reduce the computational cost
for the Burgers' equation.
%when compared with the global ELM computation,
This is consistent with the observations for
the other problems in previous sections.
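As a quick bookkeeping check, the fixed-budget combinations above can be tallied directly:

```python
# (N_e, Q_x, M): sub-domains per time block, collocation points per
# direction per sub-domain, and training parameters per sub-domain,
# for the fixed-budget tests described above
cases = [(1, 45, 1000), (2, 32, 500), (3, 26, 333), (4, 22, 250), (5, 20, 200)]

for N_e, Q_x, M in cases:
    total_colloc = N_e * Q_x**2  # total collocation points in the time block
    total_params = N_e * M       # total training parameters in the time block
    print(f"N_e={N_e}: {total_colloc} collocation points, "
          f"{total_params} training parameters")
```

All five configurations keep the totals close to $N_eQ\approx 2000$ collocation points and $N_eM\approx 1000$ training parameters.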


\begin{figure}
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Burgers/burgers_soln_dist_dnn_adam_10elem_5hlayers_40width.pdf}(a)
    \includegraphics[width=2.5in]{Figures/Burgers/burgers_error_dist_dnn_adam_10elem_5hlayers_40width_A.pdf}(a)
  %}
  %\centerline{
    %\includegraphics[width=2.5in]{Figures/Burgers/burgers_soln_dist_dnn_lbfgs_10elem_5hlayers_40width.pdf}(c)
    \includegraphics[width=2.5in]{Figures/Burgers/burgers_error_dist_dnn_lbfgs_10elem_5hlayers_40width_A.pdf}(b)
  }
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Burgers/burgers_soln_dist_locelm_nlsq_1tblock_5elem_colloc20sq_trapar200_randmag0.75_A.pdf}(e)
    \includegraphics[width=2.5in]{Figures/Burgers/burgers_error_dist_locelm_nlsq_1tblock_5elem_colloc20sq_trapar200_randmag0.75_B.pdf}(c)
  %}
  %\centerline{
    %\includegraphics[width=2.5in]{Figures/Burgers/burgers_soln_dist_locelm_newton_llsq_1tblock_5elem_colloc20sq_trapar150_randmag1.0_A.pdf}(g)
    \includegraphics[width=2.5in]{Figures/Burgers/burgers_error_dist_locelm_newton_llsq_1tblock_5elem_colloc20sq_trapar150_randmag1.0_B.pdf}(d)
  }
  \caption{Comparison between locELM and DGM (Burgers' equation):
    distributions of the
    absolute errors  computed using DGM with the Adam optimizer
    (a) and L-BFGS optimizer (b), and using locELM
    with NLSQ-perturb (c) and with Newton-LLSQ (d).
    %10 elements (10x1) when computing loss, Gaussian quadrature points.
    %10x10 quadrature points/element. 5 hidden layers, 40 width, tanh activations
    %for hidden layers.
    %Adam: total $128,000$ epochs; learning rates:
    %1.0*default-lr for first 3000 epochs, 0.5*default-lr for next 5000 epochs,
    %0.25*default-lr for next 5000 epochs, 0.125*default-lr for next 5000 epochs,
    %0.1*default-lr for next 5000 epochs, 0.075*default-lr for next 5000 epochs,
    %0.05*default-lr for next 20000 epochs, 0.025*default-lr next 40000 epochs,
    %0.0125*default-lr for next 20000 epochs, 0.01*default-lr for next
    %20000 epochs.
    %L-BFGS: total 28000 epochs.
    %locELM NLSQ-perturb: 1 time block, 5 sub-domains/block,
    %20x20 collocation points/sub-domain, 200 training parameters/sub-domain,
    %rand-mag = 0.75.
    %locELM Newton-LLSQ: 1 time block, 5 sub-domains/block, 20x20 collocation
    %points/sub-domain, 150 training parameters/sub-domain, rand-mag=1.0.
  }
  \label{fg_bg_5}
\end{figure}


\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Burgers/burgers_soln_prof_compare_dnn_locelm_line_t0.2_A.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Burgers/burgers_error_prof_compare_dnn_locelm_line_t0.2_A.pdf}(b)
  }
  \caption{Burgers' equation: Profiles of (a) the solutions and (b) their absolute errors
    at $t=0.2$ computed using DGM and locELM.
    The settings and simulation parameters correspond to those of Figure \ref{fg_bg_5}.
  }
  \label{fg_bg_6}
\end{figure}


\begin{table}
  \centering
  \begin{tabular}{lllll}
    \hline
    method & maximum error & rms error & epochs/iterations & training time (seconds)\\
    DGM (Adam) & $4.57e-2$ & $5.76e-3$ & $128,000$ & $1797.8$ \\
    DGM (L-BFGS) & $7.50e-3$ & $1.55e-3$ & $28,000$ & $1813.5$ \\
    locELM (NLSQ-perturb) & $1.85e-8$ & $4.44e-9$ & $27$ & $27.6$ \\
    locELM (Newton-LLSQ) & $1.62e-5$ & $3.11e-6$ & $15$ & $9.1$ \\
    \hline
  \end{tabular}
  \caption{Burgers' equation: comparison between locELM and DGM 
    in terms of the maximum/rms errors in the domain, the number of
    epochs or nonlinear iterations, and the network training time.
    The problem settings and simulation parameters correspond to those of
    Figure \ref{fg_bg_5}.
  }
  \label{tab_bg_7}
\end{table}


We next compare the current locELM method with the deep Galerkin method (DGM)
for the Burgers' equation.
Figure \ref{fg_bg_5} is a comparison of distributions of the solutions (left column)
and their absolute errors (right column) in the spatial-temporal plane,
obtained using DGM with the Adam and L-BFGS optimizers (top two rows)
and using the current locELM method with NLSQ-perturb and Newton-LLSQ (bottom
two rows). In these tests the temporal domain size is set to $t_f=0.25$.
For DGM, the neural network consists of an input layer of two nodes (representing
$x$ and $t$), $5$ hidden layers with a width of $40$ nodes in each layer
and the $\tanh$ activation function, and an output layer of a single node
(representing $u$) with no bias and no activation function.
When computing the loss function, the spatial-temporal domain has been divided into
$10$ uniform sub-domains along the $x$ direction, and we have used $10\times 10$
Gauss-Lobatto-Legendre quadrature points in each sub-domain
for computing the residual norms.
With the Adam optimizer, the neural network has been trained for $128,000$
epochs, with the learning rate gradually decreasing from $0.001$ at the beginning
to $10^{-5}$ at the end of the training.
With the L-BFGS optimizer, the neural network has been trained for $28,000$ iterations.
For the current locELM method, we have employed a single time block
in the spatial-temporal domain and $N_e=5$ uniform sub-domains along the $x$ direction
within this time block.
With NLSQ-perturb, we have employed $Q=20\times 20$ uniform collocation points
per sub-domain, $M=200$ training parameters per sub-domain, and $R_m=0.75$
when generating the random coefficients.
%for the hidden layer of the local neural networks.
With Newton-LLSQ, we have employed $Q=20\times 20$ uniform collocation points
per sub-domain, $M=150$ training parameters per sub-domain,
and $R_m=1.0$ when generating the random coefficients.
%for the hidden layer of the local neural networks.
The results in Figure \ref{fg_bg_5} indicate that the current locELM method is
considerably more accurate than DGM for the Burgers' equation.
The errors of the current method are generally several orders of magnitude smaller
than those of DGM. The locELM method with NLSQ-perturb provides the best
accuracy, with errors on the order of $10^{-9}\sim 10^{-8}$,
followed by the locELM method with Newton-LLSQ, with errors on the level
$10^{-6}\sim 10^{-5}$. In contrast, the errors of DGM with Adam and L-BFGS
are generally on the levels $10^{-3}\sim 10^{-2}$ and $10^{-3}$, respectively.

Figure \ref{fg_bg_6} compares the profiles of the DGM and locELM solutions
(plot (a))
and their errors (plot (b)) at the time instant $t=0.2$.
The profile of the exact solution at this instant is also included
in Figure \ref{fg_bg_6}(a) for comparison.
The problem settings and the simulation parameters here correspond to
those of Figure \ref{fg_bg_5}.
The solution profiles from DGM and locELM simulations are in good agreement
with that of the exact solution.
The error profiles, on the other hand,
reveal disparate accuracies in the results obtained using
these methods. They confirm the ordering of these methods,
from the most to the least accurate, to be locELM/NLSQ-perturb,
locELM/Newton-LLSQ, DGM/L-BFGS, and DGM/Adam.

Table \ref{tab_bg_7} provides a further comparison between locELM and DGM
for the Burgers' equation,
in terms of their accuracy and computational cost.
We have listed the maximum and rms errors in the overall spatial-temporal domain,
the number of epochs or nonlinear iterations in the training,
and the training time of the neural network corresponding to DGM with
the Adam/L-BFGS optimizers and the current locELM method with NLSQ-perturb
and Newton-LLSQ.
The observations here are consistent with those of previous sections.
The locELM method is orders of magnitude more accurate than DGM
(e.g.~$10^{-8}$ with locELM/NLSQ-perturb versus $10^{-3}$ with DGM/L-BFGS),
and its training time is orders of magnitude smaller than that of
DGM (e.g.~around $28$ seconds with locELM/NLSQ-perturb versus
around $1800$ seconds with DGM/L-BFGS).

% compare with FEM

\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Burgers/FEM/burger_compare_FEM_soln_prof.pdf}(a)
    \includegraphics[width=2in]{Figures/Burgers/FEM/burger_compare_FEM_error_prof.pdf}(b)
    \includegraphics[width=2in]{Figures/Burgers/FEM/burger_FEM_conv_temporal.pdf}(c)
  }
  \caption{Comparison between locELM and FEM (Burgers' equation): Profiles
    of (a) the solutions and (b) their absolute errors at $t=0.2$,
    computed using locELM (with NLSQ-perturb) and using FEM.
    %the finite element method.
    (c) The maximum and rms errors at $t=0.25$ versus
    $\Delta t$ computed using FEM (with a mesh of $10,000$ uniform elements),
    showing its second-order convergence rate in time.
    %In (a,b), the FEM simulation is on a mesh of $10,000$ uniform elements
    %with $\Delta t=0.000125$; The locELM simulation is conducted with a single
    %time block, $5$ sub-domains in the block,
    %$20\times 20$ collocation points/sub-domain, $200$ training parameters/sub-domain,
    %and $R_m=0.75$.
  }
  \label{fg_bg_7}
\end{figure}


\begin{table}
  \centering
  \begin{tabular}{l|llllllll}
    \hline
    method & elements  & $\Delta t$ & sub- & $Q$ & $M$ & maximum & rms & wall-time \\
      & & & domains & & & error & error & (seconds) \\
    \hline
    locELM 
     & -- & -- & $5$ & $15\times 15$ & $150$ & $2.10e-6$ & $4.35e-7$ & $14.7$ \\
    (NLSQ-perturb)  & -- & -- & $5$ & $20\times 20$ & $200$ & $1.85e-8$ & $4.44e-9$  & $27.6$ \\
    \hline
    locELM 
     & -- & -- &  $5$ & $15\times 15$ & $150$ & $1.25e-5$ & $2.71e-6$ & $6.8$ \\
    (Newton-LLSQ) &  -- & -- & $5$ & $20\times 20$ & $150$  & $1.62e-5$ & $3.11e-6$  & $9.1$ \\
    \hline
    FEM
    & 2000 & $0.001$ & -- & -- & -- & $2.64e-5$ & $5.15e-6$ & $12.5$ \\
    & 2000 & $0.0005$ & -- & -- & -- & $3.07e-5$ & $5.76e-6$ & $25.4$ \\ \cline{2-9}
    & 5000 & $0.001$ & -- & -- & -- & $1.89e-5$ & $1.74e-6$ & $26.0$ \\
    & 5000 & $0.0005$ & -- & -- & -- & $4.13e-6$ & $7.90e-7$ & $50.8$ \\ \cline{2-9}
    & 10000 & $0.001$ & -- & -- & -- & $2.22e-5$ & $1.99e-6$ & $47.7$ \\
    & 10000 & $0.0005$ & -- & -- & -- & $4.74e-6$ & $4.36e-7$ & $92.6$ \\
    \hline
  \end{tabular}
  \caption{Burgers' equation: comparison between locELM and FEM
    in terms of the maximum/rms errors in the domain and the training/computation time.
    %Temporal domain size is $t_f=0.25$.
    %$R_m=0.75$ for locELM (NLSQ-perturb)
    %and $R_m=1.0$ for locELM (Newton-LLSQ).
    %The maximum/rms errors refer to
    %the $L^{\infty}$ and $L^2$ errors in the entire spatial-temporal domain.
    $Q$ and $M$ denote the number of collocation points per sub-domain and
    the number of training parameters per sub-domain, respectively.
    %Temporal domain size is $t_f=0.25$.
    %The problem settings correspond to those of
    %Figure \ref{fg_bg_7}.
  }
  \label{tab_bg_8}
\end{table}

Finally, we compare the current locELM method with the classical finite element
method for solving the Burgers' equation.
In the FEM simulation, we discretize the Burgers' equation~\eqref{eq_bg_1}
in time using a semi-implicit scheme.
We treat the nonlinear term explicitly and the viscous term implicitly,
and discretize the time derivative by the second-order backward differentiation
formula (BDF2). The method is again implemented using the FEniCS library
in Python.
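For reference, one common realization of this semi-implicit scheme (our reading;
the precise treatment of $f$ and of the nonlinear term in the actual
implementation may differ in detail) is
\begin{equation*}
  \frac{3u^{n+1}-4u^{n}+u^{n-1}}{2\Delta t}
  + \left(2u^{n}\frac{\partial u^{n}}{\partial x}
  - u^{n-1}\frac{\partial u^{n-1}}{\partial x}\right)
  = \nu\frac{\partial^{2} u^{n+1}}{\partial x^{2}} + f^{n+1},
\end{equation*}
where the superscripts denote the time step levels. The nonlinear term is
extrapolated to second-order accuracy at step $n+1$, so each time step only
requires the solution of a linear system for $u^{n+1}$.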
Figures \ref{fg_bg_7}(a) and (b) show a comparison of the solution and error profiles
at $t=0.2$ obtained using the current locELM (NLSQ-perturb)
method and using the finite element method.
%(plots (a) and (b)),
Figure \ref{fg_bg_7}(c) shows the numerical errors
at $t=0.25$ as a function of the time step
size $\Delta t$ computed using the finite element method.
%(plot (c)).
In these simulations the temporal domain size is $t_f=0.25$.
In Figures \ref{fg_bg_7}(a,b),
the FEM simulation is conducted with $\Delta t=1.25\times 10^{-4}$ on a mesh of
$10,000$ uniform elements, and the locELM simulation is conducted with
a single time block in the domain and $N_e=5$ sub-domains in the time block, with
$(Q,M)=(20\times 20, 200)$ and $R_m=0.75$.
In Figure \ref{fg_bg_7}(c), the simulations are performed
with a fixed mesh of $10,000$ uniform elements.
It can be observed that both locELM and FEM have produced accurate solutions,
and that the FEM exhibits a second-order convergence rate in time
before the error saturation when $\Delta t$ becomes very small.

Table \ref{tab_bg_8} provides a comparison between locELM and FEM
in terms of their accuracy and computational cost for the Burgers' equation.
The temporal domain size is $t_f=0.25$ in these tests.
We solve the problem using locELM (with NLSQ-perturb and with Newton-LLSQ)
and FEM on several sets of simulation parameters corresponding to different
numerical resolutions, and the table lists the maximum and rms errors in
the spatial-temporal domain, together with the training time of locELM or
the computation time of FEM, for each set of parameters.
A single time block has been used in the spatial-temporal domain 
for the locELM simulations, and we employ $R_m=0.75$ with
locELM/NLSQ-perturb and $R_m=1.0$ with locELM/Newton-LLSQ for generating
the random coefficients.
% in the local neural networks.
%
% observations
It is observed that the current locELM method, with both NLSQ-perturb and
Newton-LLSQ, shows superior performance to the FEM.
For example,
the two cases with locELM/Newton-LLSQ have numerical errors comparable to
the FEM cases with $2000$ elements (for both $\Delta t$), $5000$ elements ($\Delta t=0.001$)
and $10000$ elements ($\Delta t=0.001$), but the computational cost
of locELM/Newton-LLSQ is notably smaller than the cost of these FEM cases.
The locELM/NLSQ-perturb case with $(Q,M)=(15\times 15, 150)$ has numerical
errors comparable to the FEM cases with $5000$ elements ($\Delta t=0.0005$)
and $10000$ elements ($\Delta t=0.0005$), but the computational cost of 
this locELM/NLSQ-perturb case is only a fraction of those of these two FEM cases.
The locELM/NLSQ-perturb case with $(Q,M)=(20\times 20, 200)$ has a computational
cost comparable to the FEM cases with $2000$ elements ($\Delta t=0.0005$)
and $5000$ elements ($\Delta t=0.001$), but the errors of this
locELM/NLSQ-perturb case are nearly three orders of magnitude smaller than
those of these two FEM cases.



% what else to discuss here?



