
\subsection{One-Dimensional Helmholtz Equation}
\label{sec:helm1d}

% what is the goal of this test?
% how is the equation solved?
% what are the simulation parameters?
% what is the locELM structure?
% are the details sufficient for others to reproduce the test results?
% what are the results?
% what do the results mean?

In the first test we consider the boundary value problem with
the one-dimensional (1D) Helmholtz equation on the domain $x\in[a,b]$,
\begin{subequations}
\begin{align}
  &
  \frac{d^2u}{dx^2} - \lambda u = f(x), \label{equ_t1} \\
  &
  u(a) = h_1, \\
  &
  u(b) = h_2, \label{equ_t3}
\end{align}
\end{subequations}
where $u(x)$ is the field function to be solved for,
$f(x)$ is a prescribed source term, $h_1$ and $h_2$ are the boundary
values, and the other constants in the above equations and the domain
specification are
\begin{equation*}
  \lambda = 10, \quad
  a = 0, \quad
  b = 8.
\end{equation*}
We choose the source term $f(x)$ such that equation~\eqref{equ_t1}
has the following solution,
\begin{equation}\label{equ_t2}
  u(x) = \sin \left(3\pi x+\frac{3\pi}{20}\right) \cos \left(2\pi x+\frac{\pi}{10}\right) + 2.
\end{equation}
We choose $h_1$ and $h_2$ according to this
analytic solution by setting $x=a$ and $x=b$ in \eqref{equ_t2},
respectively.
Under these settings
the boundary value problem 
\eqref{equ_t1}--\eqref{equ_t3} has the analytic solution \eqref{equ_t2}.
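
For reference, writing the product in \eqref{equ_t2} in sum form via
$\sin\alpha\cos\beta=\frac12\left[\sin(\alpha+\beta)+\sin(\alpha-\beta)\right]$ gives
\begin{equation*}
  u(x) = \frac12\sin\left(5\pi x+\frac{\pi}{4}\right)
  + \frac12\sin\left(\pi x+\frac{\pi}{20}\right) + 2,
\end{equation*}
so the source term follows in closed form,
\begin{equation*}
  f(x) = \frac{d^2u}{dx^2} - \lambda u
  = -\left(\frac{25\pi^2}{2}+5\right)\sin\left(5\pi x+\frac{\pi}{4}\right)
  -\left(\frac{\pi^2}{2}+5\right)\sin\left(\pi x+\frac{\pi}{20}\right) - 20.
\end{equation*}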



\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_soln_fixedParamPerElem.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_dist_fixedParamPerElem.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_elem_fixedParamPerElem.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_traintime_elem_fixedParamPerElem.pdf}(d)
  }
  \caption{
    Effect of the number of sub-domains, with fixed degrees of freedom
    per sub-domain (1D Helmholtz equation): Profiles of
    (a) the locELM solutions
    and (b) their absolute errors, computed using one sub-domain and four sub-domains.
    (c) The maximum and rms errors in the domain, and 
    (d) the neural-network training time,
    as a function of the number of sub-domains.
    %In these tests, $Q=50$, $M=50$, $R_m=3.0$,
    %uniform collocation points, one hidden layer.
  }
  \label{fig:helm1d_1}
\end{figure}


% how to solve the problem?

We solve this problem using the locELM method
presented in Section \ref{sec:steady}, by restricting the scheme to
one spatial dimension. We partition $[a,b]$ into $N_e$ uniform
sub-domains (sub-intervals), and impose the $C^1$
continuity conditions across the sub-domain boundaries.
Let $Q$ denote the number of
collocation points within each sub-domain, and consider three types
of collocation points: uniform grid points, the Gauss-Lobatto-Legendre
quadrature points, and random points. The majority of tests reported below
are performed with uniform collocation points in each sub-domain.

For the majority of tests in this subsection,
each local neural network consists of
an input layer with one node (representing $x$),
an output layer with one node
(representing the solution $u$), and one hidden layer in between.
We have also considered local neural networks with two or three
hidden layers between the input
and the output layers.
We employ $\tanh$ as the activation function for all the hidden layers.
The output layer contains no bias and no activation function, as
discussed in Section \ref{sec:loc_elm}.
Additionally, an affine mapping  operation
that normalizes the input $x$ data on each sub-domain to the interval $[-1,1]$ is
incorporated into the local neural networks right behind the input layer.
This operation is
implemented using the ``lambda'' layer in Keras; it contains
no adjustable parameters and is not counted
toward the number of hidden layers.
Following Section \ref{sec:method}, let $M$  denote the number of
nodes in the last hidden layer, which is also the number of training parameters
for each sub-domain.
As discussed in Section \ref{sec:loc_elm},
the weight and bias coefficients in the hidden layers are pre-set
to uniform random values generated on the interval $[-R_m,R_m]$ and
are fixed in the computation.

The main simulation parameters with locELM include
the number of sub-domains ($N_e$), the number of collocation points
per sub-domain ($Q$), the number of training parameters per
sub-domain ($M$), the maximum magnitude of the random coefficients ($R_m$),
the number of hidden layers in the local neural network,
and the type of collocation points in
each sub-domain. We will use the total number of
collocation points ($N_eQ$) and the total number of training parameters ($N_eM$)
to characterize the total degrees of freedom in the simulation.
The effects of the above  parameters on
the simulation results will be investigated.
To make the numerical tests repeatable, all the random numbers are
generated by the TensorFlow library, and we employ a fixed seed value of $1$ for
the random number generator in all the tests of this subsection.
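
For illustration, the overall procedure can be sketched in a stand-alone NumPy implementation (a simplified sketch with illustrative variable names, separate from the TensorFlow/Keras implementation used for the timings reported below); it assembles the PDE, boundary, and $C^1$ continuity conditions into one linear least-squares system, here with $N_e=4$, $Q=M=100$, and $R_m=3$:

```python
import numpy as np

# Illustrative stand-alone sketch of the locELM scheme for the 1D
# Helmholtz problem above: N_e = 4 uniform sub-domains, Q = 100 uniform
# collocation points and M = 100 random tanh features per sub-domain.
lam, a, b = 10.0, 0.0, 8.0
Ne, Q, M, Rm = 4, 100, 100, 3.0

def u_exact(x):
    return np.sin(3*np.pi*x + 0.15*np.pi)*np.cos(2*np.pi*x + 0.1*np.pi) + 2.0

def f_src(x):
    # f = u'' - lam*u, with u'' from the product-to-sum form of u
    upp = (-0.5*(5*np.pi)**2*np.sin(5*np.pi*x + 0.25*np.pi)
           - 0.5*np.pi**2*np.sin(np.pi*x + 0.05*np.pi))
    return upp - lam*u_exact(x)

rng = np.random.default_rng(1)
xb = np.linspace(a, b, Ne + 1)                # sub-domain boundaries
W = rng.uniform(-Rm, Rm, (Ne, M))             # fixed (untrained) hidden weights
Bh = rng.uniform(-Rm, Rm, (Ne, M))            # fixed (untrained) hidden biases

def feats(s, x):
    # Hidden-layer outputs V and their x-derivatives on sub-domain s,
    # with x first normalized to [-1, 1] (the affine "lambda" mapping).
    g = 2.0/(xb[s+1] - xb[s])
    t = g*(x - xb[s]) - 1.0
    V = np.tanh(np.outer(t, W[s]) + Bh[s])
    Vp = (1 - V**2)*(W[s]*g)
    Vpp = -2*V*(1 - V**2)*(W[s]*g)**2
    return V, Vp, Vpp

rows, rhs = [], []
def add_row(blocks, val):                      # blocks: {sub-domain: block}
    row = np.zeros((val.shape[0], Ne*M))
    for s, blk in blocks.items():
        row[:, s*M:(s+1)*M] = blk
    rows.append(row); rhs.append(val)

for s in range(Ne):                            # PDE residual rows
    xq = np.linspace(xb[s], xb[s+1], Q)
    V, _, Vpp = feats(s, xq)
    add_row({s: Vpp - lam*V}, f_src(xq))
V0, _, _ = feats(0, np.array([a]))             # Dirichlet boundary rows
add_row({0: V0}, np.array([u_exact(a)]))
V1, _, _ = feats(Ne-1, np.array([b]))
add_row({Ne-1: V1}, np.array([u_exact(b)]))
for s in range(Ne-1):                          # C^1 continuity rows
    xi = np.array([xb[s+1]])
    Vl, Vpl, _ = feats(s, xi)
    Vr, Vpr, _ = feats(s+1, xi)
    add_row({s: Vl, s+1: -Vr}, np.zeros(1))
    add_row({s: Vpl, s+1: -Vpr}, np.zeros(1))

A = np.vstack(rows)                            # (Ne*Q + 2 + 2*(Ne-1), Ne*M)
coef, *_ = np.linalg.lstsq(A, np.concatenate(rhs), rcond=None)

max_err = 0.0                                  # error on a dense grid
for s in range(Ne):
    xe = np.linspace(xb[s], xb[s+1], 500)
    V, _, _ = feats(s, xe)
    max_err = max(max_err, np.abs(V @ coef[s*M:(s+1)*M] - u_exact(xe)).max())
```

The linear least-squares solve uses a truncated-SVD solver (\texttt{numpy.linalg.lstsq}), which copes with the ill-conditioned feature matrix; only the output-layer coefficients \texttt{coef} are computed, so no iterative training is involved.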


Figure \ref{fig:helm1d_1} illustrates the effect of the number of
sub-domains in the locELM simulation, with the degrees
of freedom per sub-domain (i.e.~the number of collocation points
and the number of training parameters per sub-domain) fixed.
Figures \ref{fig:helm1d_1}(a) and (b) show the solution and error profiles
obtained with one sub-domain and $4$ sub-domains in the locELM simulation.
Figure \ref{fig:helm1d_1}(c) shows the maximum ($L^{\infty}$)
and the rms ($L^2$) errors of the locELM solution in the overall domain
as a function of the number of sub-domains.
Figure \ref{fig:helm1d_1}(d) shows the training time of the overall neural network
as a function of the number of sub-domains.
Here the error refers to the absolute value of the difference between
the locELM solution and the exact solution given by equation \eqref{equ_t2}.
As discussed before,
the training time refers to the total computation time of the locELM method,
and includes the time for computing the output of the last hidden layer
$V_j^{s}(x)$ ($1\leqslant s\leqslant N_e$, $1\leqslant j\leqslant M$)
and its derivatives, the coefficient matrix and the right hand side,
and for solving the linear least squares problem.
In this set of tests, we have employed $Q=50$ uniform collocation points per sub-domain
and $M=50$ training parameters per sub-domain.
Each local neural network contains a single hidden layer,
and we have employed $R_m=3.0$ when generating the random weight/bias coefficients
for the hidden layers of the local neural networks.
%the maximum magnitude of the random weight/bias coefficients in
%the hidden layer is set to $R_m=3.0$.
It can be observed that the locELM method produces dramatically (nearly exponentially)
more accurate results as the number of sub-domains increases, with
the maximum error in the domain reduced from around $10^{1}$ for a single sub-domain
to about $10^{-7}$ for $8$ sub-domains.
The training time for the neural network, on the other hand,
increases approximately linearly with the number of sub-domains,
from about $0.1$ seconds for a single sub-domain
to about $0.8$ seconds for $8$ sub-domains.




\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Helm1d/errror_colloc_point_2elem_trainparam_200_perElem.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Helm1d/helm1d_error_trainparam_2elem_collocPoints_100_perElem.pdf}(b)
  }
  \caption{Effect of the number of collocation points and training parameters
    (1D Helmholtz equation):
    the maximum and rms errors as a function of (a) the number of collocation
    points/sub-domain, and (b) the number of training parameters/sub-domain.
    Two uniform sub-domains  are used.
    %$R_m=3.0$, uniform collocation points.
    %In (a) the number of training parameters/sub-domain is fixed at $M=200$.
    %In (b) the number of collocation points/sub-domain is fixed at $Q=100$.
    %1 hidden layer in local neural networks, $R_m=3.0$.
  }
  \label{fig:helm1d_2}
\end{figure}

% what is it?
% what are the parameters?
% what is the result?
% what does it mean?


Figure \ref{fig:helm1d_2} illustrates the effects of the number of
collocation points and the number of training parameters per sub-domain
on the simulation accuracy.
Figure \ref{fig:helm1d_2}(a) depicts the maximum and rms errors in the domain
versus the number of collocation points/sub-domain.
Figure \ref{fig:helm1d_2}(b) depicts the maximum and rms errors in the domain
versus the number of training parameters/sub-domain.
In these tests we have employed $N_e=2$ uniform sub-domains,
uniform collocation points in each sub-domain,
one hidden layer in each local neural network,
and $R_m=3.0$ when generating the random weight/bias coefficients
for the hidden layer.
For the tests in plot (a) the number of training parameters/sub-domain
is fixed at $M=200$, and for the tests in plot (b) the number of
collocation points/sub-domain is fixed at $Q=100$.
%
Increasing the collocation points per sub-domain
causes an exponential decrease in the numerical errors initially.
The errors then stagnate as the number of collocation points/sub-domain
exceeds a certain point ($Q\sim 100$ in this case).
The error stagnation is due to the fixed number of training
parameters/sub-domain ($M=200$) here.
%When the number of collocation
%points/sub-domain is sufficiently large, the error component associated with
%the number of training parameters will likely become dominant.
The number of training parameters/sub-domain appears to have
a similar effect on the errors.
Increasing the training parameters per sub-domain also causes
a nearly exponential decrease in the errors initially.
The errors then stagnate as the number of training parameters increases
beyond a certain point ($M\sim 175$ in this case).

The results in Figures \ref{fig:helm1d_1} and \ref{fig:helm1d_2} show
that the current locELM method exhibits a clear sense of convergence
with respect to the degrees of freedom.
%in the simulation.
The numerical errors decrease exponentially or nearly exponentially,
as the number of sub-domains, or the number of collocation points
per sub-domain, or the number of training parameters per sub-domain
increases.



\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Helm1d/helm1d_error_type_colloc.pdf}
  }
  \caption{
    Effect of the collocation-point distribution (1D Helmholtz equation):
    the maximum error in the domain versus the number of
    collocation points/sub-domain, obtained with three 
    collocation-point distributions: uniform points, quadrature points,
    and random points.
    %$N_e=2$ sub-domains, $M=200$ training parameters per sub-domain,
    %$R_m = 3.0$, 1 hidden layer in local neural network.
  }
  \label{fig:helm1d_3}
\end{figure}

Figure \ref{fig:helm1d_3} illustrates the effect of the collocation-point
distribution on the simulation accuracy.
It shows the maximum error in the domain versus
the number of collocation points/sub-domain in
the locELM simulation using three types of
% distributions for
collocation points:
uniform regular points, Gauss-Lobatto-Legendre quadrature points,
and random points (see Remark~\ref{rem_cc}).
In this group of tests we have employed two
sub-domains ($N_e=2$) with $M=200$ training parameters/sub-domain,
and the local neural networks each contain a single hidden layer,
with $R_m=3.0$ employed when generating the random weight/bias coefficients.
%for the hidden layer.
With the same number of collocation points, we observe that
the results corresponding to
the random collocation points are the least accurate.
The results obtained with the quadrature points
are the most accurate among the three, whose errors can be orders
of magnitude smaller than those with the random collocation points.
The accuracy corresponding to the uniform regular collocation points
lies between the other two.
With the quadrature points, however, we have encountered a practical
difficulty: the library our implementation relies on is unable to
compute the quadrature points accurately when their number exceeds $100$.
Consequently, we are unable to obtain results with more than $100$ collocation
points/sub-domain when quadrature points are used, which hampers
our ability to perform certain types of tests.
Therefore, the majority of locELM  simulations in
the current work are conducted with uniform
collocation points.
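
For reference, the Gauss-Lobatto-Legendre points of a given order are the endpoints $\pm 1$ together with the roots of the derivative of the corresponding Legendre polynomial, and a small set of them can be generated with NumPy alone (an illustrative helper, not the library used in our implementation):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_points(Q):
    """Q Gauss-Lobatto-Legendre points on [-1, 1]: the endpoints plus
    the Q-2 roots of the derivative of the Legendre polynomial of
    degree Q-1."""
    interior = legendre.Legendre.basis(Q - 1).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

# For Q = 5 the points are -1, -sqrt(3/7), 0, sqrt(3/7), 1.
pts = gll_points(5)
```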


% 2 layers, 2 elements

\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_soln_2elem_2layers_trapar300perelem.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_2elem_2layers_trapar300perelem.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_colloc_2elem_2layers_trapar300perelem.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_traintime_colloc_2elem_2layers_trapar300perelem.pdf}(d)
  }
  \caption{locELM simulations with 2 hidden layers in local neural networks
    (1D Helmholtz equation): profiles of (a) the locELM solutions
    and (b) their absolute errors, computed
    with $30$ and $200$ uniform collocation points per sub-domain.
    (c) the maximum and rms errors in the domain, and (d) the training time,
    as a function of the number of uniform collocation points per sub-domain.
    %In these tests, two uniform sub-domains are used,
    %two hidden layers in local network, with fixed [1, 20, 300, 1],
    %tanh activations,
    %300 training parameters per sub-domain,
    %fixed rand-mag = 3.0.
  }
  \label{fig:helm1d_4}
\end{figure}

\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_soln_2elem_3layers_trapar300perelem_randmag1.0_A.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_2elem_3layers_trapar300perelem_randmag1.0_A.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_colloc_2elem_3layers_trapar300perelem.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_traintime_colloc_2elem_3layers_trapar300perelem.pdf}(d)
  }
  \caption{locELM simulations with 3 hidden layers in local neural networks
    (1D Helmholtz equation): profiles of (a) the locELM solutions
    and (b) their absolute errors, computed
    with $30$ and $200$ uniform collocation points per sub-domain.
    (c) the maximum and rms errors in the domain, and (d) the training time,
    as a function of the number of uniform collocation points per sub-domain.
    %In these tests, two uniform sub-domains are used,
    %3 hidden layers in local network, with fixed [1, 20, 20, 300, 1],
    %tanh activations,
    %300 training parameters per sub-domain,
    %fixed rand-mag = 1.0.
  }
  \label{fig:helm1d_5}
\end{figure}

The test results discussed so far are obtained using a single hidden layer
in the local neural networks.
Traditional studies of global extreme learning machines
are confined to such a configuration, with a single hidden
layer in the neural network~\cite{HuangZS2006}.
With the current locELM method,
we observe that accurate results can also be obtained with
more than one hidden layer in the local neural networks.
This is demonstrated by the results in Figures \ref{fig:helm1d_4}
and \ref{fig:helm1d_5}.
Figure \ref{fig:helm1d_4} shows locELM simulation results obtained with
2 hidden layers in each of the local neural networks, and
Figure \ref{fig:helm1d_5} shows locELM results obtained with 3 hidden layers
in the local neural networks.
In these tests two uniform sub-domains ($N_e=2$) have been used.
The local neural networks corresponding to Figure \ref{fig:helm1d_4} each
contain two hidden layers with $20$ and $300$ nodes, respectively,
and $R_m=3.0$ is employed when generating the random weight/bias coefficients
for the hidden layers.
The local neural networks corresponding to Figure \ref{fig:helm1d_5} each
contain three hidden layers with $20$, $20$ and $300$ nodes, respectively,
and $R_m=1.0$ is employed when generating the random weight/bias coefficients
for the hidden layers.
The number of training parameters per sub-domain in these tests
is therefore fixed at $M=300$, which corresponds to the number of nodes
in the last hidden layer.
We have used $\tanh$ as the activation function for all the hidden layers.
%The activation functions for all the hidden layers are the ``tanh'' function
%in these tests.
Uniform collocation points have been used in each sub-domain, and
the number of collocation points is varied in the tests.
In each of these two figures, the plots (a) and (b) are profiles of
the locELM solutions and their absolute errors
computed with $30$ and $200$ uniform collocation
points per sub-domain, respectively.
The plots (c) and (d) show the maximum/rms errors in the domain and the training time
as a function of the number of collocation points per sub-domain, respectively.
It is evident that the numerical errors decrease exponentially
with increasing collocation points/sub-domain, similar to what has been observed
with a single hidden layer from Figure \ref{fig:helm1d_2}(a),
until the errors saturate as the number of collocation points increases beyond
a certain point.
With more than one hidden layer, the locELM method can similarly produce
accurate results with a sufficient number of collocation points per sub-domain. 
The training time is also observed to increase essentially linearly
with respect to the number of collocation points per sub-domain.
Numerical experiments with more than three hidden layers in the local neural networks
suggest that the simulations tend to be less accurate than those
with one, two or three hidden layers; it appears harder to obtain
results of comparable accuracy as the number of hidden layers grows further.
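
For illustration, with more than one hidden layer the output of the last hidden layer and its derivatives, needed in the least-squares assembly, remain available in closed form via the chain rule. The following stand-alone NumPy sketch (illustrative names; layer widths $20$ and $300$ as above) computes them for two hidden layers, and can be checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
Rm, n1, n2 = 3.0, 20, 300           # widths of the two hidden layers
W1 = rng.uniform(-Rm, Rm, (1, n1)); b1 = rng.uniform(-Rm, Rm, n1)
W2 = rng.uniform(-Rm, Rm, (n1, n2)); b2 = rng.uniform(-Rm, Rm, n2)

def last_hidden(t):
    """Outputs V of the last hidden layer and dV/dt, d2V/dt2 for the
    normalized input t (shape (n, 1)); tanh activations throughout."""
    h = np.tanh(t @ W1 + b1)                  # first hidden layer
    hp = (1 - h**2) * W1                      # dh/dt
    hpp = -2*h*(1 - h**2) * W1**2             # d2h/dt2
    V = np.tanh(h @ W2 + b2)                  # last hidden layer
    s = hp @ W2                               # d(pre-activation)/dt
    Vp = (1 - V**2) * s
    Vpp = -2*V*(1 - V**2)*s**2 + (1 - V**2)*(hpp @ W2)
    return V, Vp, Vpp
```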




\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Helm1d/helm1d_error_rangmag_4elem.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Helm1d/helm1d_maxerror_randmag_2elem_200trapar_varcolloc.pdf}(b)
    %\includegraphics[width=3in]{Figures/Helm1d/helm1d_error_randmag.eps}
  }
  \caption{Effect of random weight/bias coefficients in hidden layers
    (1D Helmholtz equation): (a) The maximum error in the domain versus $R_m$,
    for several cases with the number of collocation points/sub-domain ($Q$)
    and the number of training parameters/sub-domain ($M$) kept identical.
    (b) The maximum error in the domain versus $R_m$,
    for several cases with the number of training parameters/sub-domain fixed
    %at $M=200$
    and the number of collocation points/sub-domain varied.
    Four uniform sub-domains are used in (a), and two uniform sub-domains
    are used in (b).
    %$\lambda=10$.
    %In (a), 4 uniform sub-domains are used.
    %In (b), 2 uniform sub-domains are used. 
    %1 hidden layer in local networks.
  }
  \label{fig:helm1d_6}
\end{figure}



Apart from the number of collocation points and the number of training parameters
in each sub-domain,
we observe that the random weight/bias coefficients in the hidden layers
%are another factor that has a crucial
can influence the accuracy of the locELM simulation results.
As discussed in Section \ref{sec:loc_elm}, the weight/bias coefficients
in the hidden layers of the local neural networks are pre-set to uniform random values
generated on the interval $[-R_m,R_m]$, and they are
fixed throughout the computation. It is observed that $R_m$,
the maximum magnitude of the random coefficients, can influence significantly
the simulation accuracy.
Figure \ref{fig:helm1d_6} demonstrates this effect with two groups of tests.
In the first group, four uniform sub-domains ($N_e=4$) are used.
The number of (uniform) collocation points per sub-domain ($Q$)
and the number of training parameters per sub-domain ($M$) are kept equal,
and several such values have been considered ($Q=M=50$, $100$, $300$).
Then for each of these
cases we vary $R_m$ systematically and record the errors of the simulation
results. Figure \ref{fig:helm1d_6}(a) shows the maximum error in the domain
as a function of $R_m$ for this group of tests.
In the second group of tests, two uniform sub-domains ($N_e=2$) are used.
The number of training parameters per sub-domain is fixed
at $M=200$, and several values for the number of (uniform) collocation
points are considered ($Q=50$, $100$, $200$, $300$).
For each of these cases, $R_m$ is varied systematically and the corresponding
errors of the simulation results are recorded.
Figure \ref{fig:helm1d_6}(b) shows the maximum error in the domain as a function
of $R_m$ for this group of tests.
In both groups of tests, the local neural networks each contain a single
hidden layer.
These results indicate that, for a fixed simulation resolution (i.e.~fixed
$Q$ and $M$),
the error tends to be worse as $R_m$ becomes very large
or very small. The simulation tends to produce more accurate results for a range
of moderate $R_m$ values, typically around $R_m\approx 1\sim 10$.
As the simulation resolution increases, the optimal range of $R_m$
values tends to expand and shift rightward (toward larger values) on the $R_m$ axis.
Further tests also suggest that with increasing number of sub-domains
the optimal range of $R_m$ values tends to shift leftward (toward smaller values)
along the $R_m$ axis.



\begin{figure}
  \centerline{
    %\includegraphics[width=3in]{Figures/Helm1d/helm1d_soln_4elem.eps}(a)
    %\includegraphics[width=3in]{Figures/Helm1d/helm1d_error_4elem.eps}(b)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_soln_prof_comp_fixedTotDOF.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_prof_comp_fixedTotDOF.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_error_elem.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Helm1d/helm1d_traintime_elem.pdf}(d)
  }
  \caption{Effect of the number of sub-domains,
    with fixed total degrees of freedom in the domain
    (1D Helmholtz equation):
    profiles of (a) the locELM solutions and (b) their
    absolute errors, computed using
    one and four uniform sub-domains in the simulation.
    (c) The maximum and rms errors
    in the domain, and (d) the training time,
    as a function of the number of uniform sub-domains.
    %The total
    %number of collocation points in the domain is fixed at $200$,
    %and the total
    %number of training parameters in the overall network is fixed at $400$.
    %Uniform sub-domains are used.
    %In (a) and (b),
    %4 uniform sub-domains, 100 training parameters in each local ELM,
    %50 uniform collocation points per sub-domain, rand-mag = 3.0.
    %In (c) and (d), rand-mag = 6.0 for n-elem=1, rand-mag=3.0 for
    %n-elem=2, 3 and 4, rand-mag = 2.0 for n-elem=5, and
    %rand-mag = 1.0 for n-elem=8.
  }
  \label{fig:helm1d_7}
\end{figure}

% comparison between local and global ELMs?

We observe that
the use of multiple sub-domains and local extreme learning machines
can significantly accelerate the computation and reduce the network training time,
without seriously compromising the accuracy, when compared with 
global extreme learning machines.
This point is demonstrated by Figure \ref{fig:helm1d_7}.
Here we fix the total degrees of freedom in the domain,
i.e.~the total number of collocation points and the total number of training
parameters in the domain, and vary the number of sub-domains in
the locELM simulation. The locELM case with a single sub-domain is equivalent to
a global ELM.
% extreme learning machine.
The total number of collocation points in the domain is fixed at $N_eQ=200$, and
the total number of training parameters is fixed at $N_eM=400$.
Uniform sub-domains are employed in these tests, with uniform collocation points
in each sub-domain.
With multiple sub-domains, the total degrees of freedom are thus
distributed evenly among the sub-domains and local
neural networks. The local neural networks each contain a single
hidden layer, and the maximum magnitudes of
the random coefficients ($R_m$) employed in the tests here
are approximately in their optimal range of values.
Figures \ref{fig:helm1d_7}(a) and (b) illustrate profiles of the locELM
solutions and their absolute errors obtained using a single sub-domain
($Q=200$, $M=400$, $R_m=6.0$)
and using four sub-domains ($Q=50$, $M=100$, $R_m=3.0$) in the locELM simulations.
Both simulations have produced accurate results, with comparable error levels.
Figure \ref{fig:helm1d_7}(c) shows the maximum and rms errors in the domain
versus the number of sub-domains in the simulations,
and Figure \ref{fig:helm1d_7}(d) shows the training time as a function
of the number of  sub-domains.
It can be observed that the error levels with multiple sub-domains
are comparable to those with a single sub-domain, in some cases slightly
better and in others slightly worse.
But the training time of the neural network is dramatically reduced
with multiple sub-domains, when compared with a single sub-domain.
The reduction in the training time is due to the fact that,
with multiple sub-domains, the coefficient matrix in the linear
least squares problem becomes very sparse, because only neighboring sub-domains
are coupled through the $C^k$ continuity conditions while those sub-domains
that are not adjacent to each other are not coupled.
On the other hand, with a single sub-domain, all the degrees of freedom
in the domain are coupled with one another, leading to a dense coefficient
matrix in the linear least squares problem and larger computation time.
These results suggest that,
when compared with global ELM,
%extreme learning machines,
the use of domain decomposition and local
neural networks can reduce the coupling
among the degrees of freedom
in different sub-domains without seriously compromising the accuracy, and this can
significantly reduce the computation time for the least squares
problem, and hence the network training time.
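
The sparsity can be quantified by a simple structural count: a PDE row involves only the $M$ training parameters of its own sub-domain, and a continuity row involves the $2M$ parameters of two adjacent sub-domains, so the filled fraction of the coefficient matrix is roughly $1/N_e$. A small sketch of this count (illustrative helper name):

```python
# Structural nonzero count of the 1D locELM least-squares matrix
# (illustrative): Ne sub-domains, Q collocation points and M training
# parameters per sub-domain, C^1 continuity across interfaces.
def fill_fraction(Ne, Q, M):
    rows = Ne*Q + 2 + 2*(Ne - 1)        # PDE + boundary + continuity rows
    nnz = Ne*Q*M + 2*M + 2*(Ne - 1)*2*M # nonzeros in those rows
    return nnz / (rows * Ne*M)

dense = fill_fraction(1, 200, 400)      # single sub-domain (global ELM)
sparse = fill_fraction(4, 50, 100)      # four sub-domains, same totals
```

With the fixed totals $N_eQ=200$ and $N_eM=400$ used here, the single sub-domain gives a fully dense matrix, while four sub-domains fill only about a quarter of it.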



% compare with DNN

\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Helm1d/helm1d_compare_dnn_soln_4elem.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Helm1d/helm1d_compare_dnn_error_4elem.pdf}(b)
  }
  \caption{Comparison between locELM and PINN
    (1D Helmholtz equation):
    profiles of (a) the solutions and (b) their absolute errors, obtained using
    %the deep Galerkin method (DGM)~\cite{SirignanoS2018}
    PINN~\cite{RaissiPK2019} 
    with the Adam and L-BFGS optimizers,
    and using the current locELM method.
    %with $300$ collocation points in the domain.
    %In the locELM simulations, we have used four uniforms sub-domains ($N_e=4$),
    %$100$ training parameters/sub-domain ($M=100$), $100$ uniform collocation
    %points/sub-domain,
    %one hidden layer
    %in the local neural networks, and $R_m=3.0$ when
    %generating the random coefficients.
    %PINN-Adam: network structure [1, 50, 50, 50, 50, 50, 50, 1], tanh activations,
    %uniform collocation points as input data;
    %45000 (300 colloc points).
    %PINN-LBFGS: network structure [1, 50, 50, 50, 50, 50, 50, 1],
    %tanh activations, uniform collocation
    %points as input data;
    %total epochs taken: 
    %22500 (300 colloc points).
  }
  \label{fig:helm1d_8}
\end{figure}

\begin{table}[tb]
  \centering
  \begin{tabular}{lllll}
    \hline
    method & maximum error & rms error & epochs/iterations & training
    time (seconds)\\ \hline
    PINN (Adam) & $1.06e-3$ & $1.57e-4$ & $45,000$ & $507.7$ \\
    PINN (L-BFGS) & $1.98e-4$ & $3.15e-5$ & $22,500$ & $1035.8$ \\
    locELM & $1.56e-9$ & $2.25e-10$ & $0$ & $1.1$ \\
    \hline
  \end{tabular}
  \caption{1D Helmholtz equation: Comparison between the current locELM method and
    PINN,
    in terms of the maximum/rms errors in the domain, the number of
    epochs or iterations in the training of neural networks, and
    the training time.
    The problem settings correspond to those of Figure \ref{fig:helm1d_8}.
  }
  \label{tab:tab_1}
\end{table}


We next compare the current locELM method with
%the deep Galerkin method (DGM)~\cite{SirignanoS2018}
the physics-informed neural network (PINN)~\cite{RaissiPK2019} method,
an often-used PDE solver based on deep neural networks.
Figure \ref{fig:helm1d_8} compares profiles of the solutions (plot (a))
and their absolute errors (plot (b)) obtained using PINN
%the deep Galerkin method
with the Adam and the L-BFGS optimizers, and using the current locELM method.
%The PINN method is implemented using Tensorflow and Keras.
In the PINN simulations, the neural network contains $6$ hidden layers
with $50$ nodes and the $\tanh$
activation function in each layer, and
the output layer contains no activation function.
The input data consist of $300$ uniform
collocation points in the domain.
In the PINN/Adam simulation, the network has been trained on the input data
for $45,000$ epochs, with the learning rate gradually decreasing from $0.001$
at the beginning to $2.5\times 10^{-5}$ at the end.
In the PINN/L-BFGS simulation,
%the neural network consists of $2$ hidden layers
%with $100$ modes and the $\tanh$ activation function in each layer.
%Similar to the case with the Adam optimizer, the input data consist of
%$300$ uniform points, and the output layer is linear.
the network has been trained for $22,500$ L-BFGS iterations.
In the locELM simulation, four uniform sub-domains ($N_e=4$) have been used,
with $M=100$ training parameters per sub-domain and $Q=100$ uniform collocation
points per sub-domain.
The four local neural networks each consist of one hidden layer
with $M=100$ nodes and the $\tanh$ activation function,
and we have employed $R_m=3.0$ for generating
random weight/bias coefficients in the hidden layer.
Figure \ref{fig:helm1d_8} shows that both PINN and the current
locELM method have captured the solution quite accurately. But the current
method is considerably more accurate than PINN, by a factor of nearly five
orders of magnitude in terms of the errors. 

Table \ref{tab:tab_1} provides a further comparison of PINN and locELM
in terms of the maximum/rms errors in the domain, and the computational
cost (the network training time and the number of epochs or iterations).
The problem setting corresponds to that of Figure \ref{fig:helm1d_8}.
The current method is not only  much more
accurate than PINN, but also considerably cheaper in terms of the
computational cost. The training time with the current locELM method
is on the order of a second.
In contrast, it takes
over $500$ seconds to train PINN with Adam
and over $1000$ seconds to train it with L-BFGS.
We observe a clear superiority of the current locELM
method over the PINN solver in terms of both accuracy and
computational cost.
These observations will be confirmed and reinforced with other problems
in subsequent sections.


% compare with FEM

\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Helm1d/FEM/helm1d_compare_locELM_FEM_soln_prof_fem100kElem_A.pdf}(a)
    \includegraphics[width=2in]{Figures/Helm1d/FEM/helm1d_compare_locELM_FEM_error_prof_fem100kElem_A.pdf}(b)
    \includegraphics[width=2in]{Figures/Helm1d/FEM/helm1d_fenics_FEM_error_elem.pdf}(c)
  }
  \caption{Comparison between locELM and FEM (1D Helmholtz equation):
    Profiles of (a) the solutions and (b) their absolute errors, computed using
    the finite element method (FEM) and the current locELM method.
    (c) The maximum and rms errors in the domain versus 
    the number of elements from the FEM simulations, showing its second-order
    convergence rate.
    %FEM is implemented using the FEniCS library in python.
    %In (a) and (b), $100,000$ uniform linear elements have been employed with FEM.
    %With locELM: 4 uniform sub-domains, 100 uniform collocation points/sub-domain,
    %100 training parameters/sub-domain, rand-mag=3.0, 1 hidden layer.
  }
  \label{fg_helm1d_9}
\end{figure}


\begin{table}[tb]
  \centering
  \begin{tabular}{l|lllllll}
    \hline
    method & elements & sub-domains & $Q$ & $M$ & maximum error & rms error  & wall
    time (seconds)\\ \hline
    locELM
    & -- & $4$ & $100$ & $75$ & $4.02e-8$ & $5.71e-9$ & $0.67$ \\
    & -- & $4$ & $100$ & $100$ & $1.56e-9$ & $2.25e-10$  & $1.1$ \\
    & -- & $4$ & $100$ & $125$ & $1.42e-10$ & $2.55e-11$ & $1.3$ \\
    \hline
    FEM
    %& $20,000$ & -- & -- & -- & $1.06e-7$ & $2.73e-8$ & $0.24$ \\
    & $25,000$ & -- & -- & -- & $6.82e-8$ & $1.74e-8$ & $0.32$ \\
    & $50,000$ & -- & -- & -- & $1.67e-8$ & $4.35e-9$ & $0.62$ \\
    & $100,000$ & -- & -- & -- & $1.33e-8$ & $3.30e-9$ & $1.24$ \\
    %& $200,000$ & -- & -- & -- & $1.56e-8$ & $3.23e-9$ & $2.4$ \\
    \hline
  \end{tabular}
  \caption{1D Helmholtz equation: Comparison between the current locELM
    method and the finite element method (FEM),
    in terms of the maximum/rms errors in the domain and
    the training or computation
    time. The problem settings correspond to those of Figure
    \ref{fg_helm1d_9}.
    %FEM corresponds to the case of $200000$ elements, which by extrapolation
    %will achieve similar to accuracy to locELM if without saturation.
  }
  \label{tb_helm1d_10}
\end{table}


Finally we compare the current locELM method with the classical
finite element method (FEM).
We observe that the computational
performance of locELM is comparable to, and oftentimes surpasses,
that of FEM in terms of accuracy and
computational cost.
Figures \ref{fg_helm1d_9}(a) and (b) compare the solution profiles
and the error profiles obtained using locELM and FEM.
Figure \ref{fg_helm1d_9}(c) shows the maximum and rms errors of the FEM
solutions as a function of the number of elements, demonstrating the
second-order convergence rate of the method.
As mentioned before,
%at the beginning of Section \ref{sec:tests},
the finite element method is implemented in Python
using the FEniCS library. In these
tests uniform linear elements have been used.
For the plots (a) and (b), $100,000$ elements are used in the FEM simulation.
In the locELM simulation, we have employed $N_e=4$ uniform sub-domains,
$Q=100$ uniform collocation points per sub-domain, $M=100$ training parameters
per sub-domain, a single hidden layer in the local neural networks,
and $R_m=3.0$ when generating the random coefficients.
%for the hidden layers of the local neural networks.
It is evident that both FEM and locELM produce accurate
solutions.

Table \ref{tb_helm1d_10} provides a more comprehensive comparison
between locELM and FEM for the 1D Helmholtz equation,
with regard to the accuracy and computational cost.
Here we list the maximum and rms errors in the domain,
and the training or computation time, obtained using
locELM and FEM at several numerical resolutions.
The data show that the current locELM method is very
competitive compared with FEM. For example, the locELM case with
$M=75$ training parameters/sub-domain is similar in performance
to the FEM case with $50,000$ elements, with comparable
values for the numerical errors and
the wall time. The locELM cases with $M=100$ and $M=125$ training
parameters/sub-domain have wall time values comparable to the FEM case
with $100,000$ elements, but the numerical errors of these locELM
cases are considerably smaller than those of the FEM case.


% what else to discuss here?
%
% what are the errors? are they absolute/relative errors? how are they defined?
% need to define these errors somewhere.

% if paper is too long, can we put some of the sections into an appendix?

