
\subsection{Two-Dimensional Helmholtz Equation}

We consider the boundary value problem with
the two-dimensional (2D) Helmholtz equation on
a rectangular domain, $\Omega=[a_1,b_1]\times [a_2,b_2]$, as follows,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2}
    - \lambda u = f(x,y), \label{equ_hm2_1} \\
    &
    u(a_1,y) = h_1(y), \\
    & u(b_1,y) = h_2(y), \\
    & u(x,a_2) = h_3(x), \\
    & u(x,b_2) = h_4(x),
  \end{align}
\end{subequations}
where $u(x,y)$ is the field function to be solved for, $f(x,y)$ is
a prescribed source term, and $h_i$ ($1\leqslant i\leqslant 4$) denote
the boundary distributions. The constant parameters
%in the above equations and the domain specification
are given by
\begin{equation*}
  \lambda = 10, \quad
  a_1 = a_2 = 0, \quad
  b_1 = b_2 = 3.6.
\end{equation*}
We choose the source term $f(x,y)$ such that the following function
satisfies equation \eqref{equ_hm2_1},
\begin{equation}\label{equ_hm2_2}
  u = -\left[\frac32\cos\left(\pi x+\frac{2\pi}{5}\right)
    +2\cos\left(2\pi x-\frac{\pi}{5}\right) \right]
  \left[\frac32\cos\left(\pi y+\frac{2\pi}{5}\right)
    +2\cos\left(2\pi y-\frac{\pi}{5}\right) \right].
\end{equation}
We choose the boundary distributions $h_1(y)$, $h_2(y)$, $h_3(x)$
and $h_4(x)$ in accordance with \eqref{equ_hm2_2}, by evaluating
\eqref{equ_hm2_2} on the
corresponding boundaries.
Consequently, \eqref{equ_hm2_2} provides the solution to
this boundary value problem.
%under these settings.
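This manufactured-solution construction can be verified symbolically. The following sketch (assuming SymPy is available; the names are ours, not taken from any code accompanying the paper) builds the source term $f$ from \eqref{equ_hm2_2} and checks that the residual of \eqref{equ_hm2_1} vanishes identically:

```python
import sympy as sp

x, y = sp.symbols("x y")
lam = 10  # lambda in the Helmholtz equation

# One factor of the manufactured solution
def g(t):
    return sp.Rational(3, 2) * sp.cos(sp.pi * t + 2 * sp.pi / 5) \
        + 2 * sp.cos(2 * sp.pi * t - sp.pi / 5)

u = -g(x) * g(y)                      # exact solution u(x, y)

# Source term chosen so that u satisfies the Helmholtz equation exactly
f = sp.diff(u, x, 2) + sp.diff(u, y, 2) - lam * u

# Residual of the PDE; the identical terms cancel, leaving zero
residual = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) - lam * u - f)
```

The boundary distributions $h_i$ are then simply `u` restricted to the four edges of the domain.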


We employ the locELM method from Section \ref{sec:steady} to solve this problem.
$\Omega$ is partitioned into $N_x$ and $N_y$ uniform sub-domains along
the $x$ and $y$ directions, respectively, leading to a total of
$N_e=N_xN_y$ uniform sub-domains. We impose $C^1$
continuity conditions on the sub-domain boundaries.
%Uniform regular collocation points are used in all sub-domains.
In each sub-domain, we employ $Q_x$
and $Q_y$ uniform collocation points  in
the $x$ and $y$ directions, respectively, 
leading to a total of $Q=Q_xQ_y$ collocation points per sub-domain.
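As a concrete illustration of this partitioning (a minimal sketch with hypothetical names, not the authors' implementation), the uniform sub-domains and their per-sub-domain collocation grids can be generated as follows:

```python
import numpy as np

def collocation_points(a1, b1, a2, b2, Nx, Ny, Qx, Qy):
    """Uniform Qx-by-Qy collocation grid on each of the Nx*Ny sub-domains."""
    xb = np.linspace(a1, b1, Nx + 1)        # sub-domain boundaries in x
    yb = np.linspace(a2, b2, Ny + 1)        # sub-domain boundaries in y
    pts = {}
    for i in range(Nx):
        for j in range(Ny):
            xs = np.linspace(xb[i], xb[i + 1], Qx)
            ys = np.linspace(yb[j], yb[j + 1], Qy)
            X, Y = np.meshgrid(xs, ys, indexing="ij")
            pts[(i, j)] = np.stack([X.ravel(), Y.ravel()], axis=-1)
    return pts

# e.g. the four-sub-domain configuration used below: Nx = Ny = 2, Qx = Qy = 25
pts = collocation_points(0.0, 3.6, 0.0, 3.6, 2, 2, 25, 25)
```

The points shared by adjacent sub-domain edges are where the $C^1$ continuity conditions are imposed.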



\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_soln_dist_1elem_randmag_2.0_A.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_error_dist_1elem_randmag_2.0_A.pdf}(b)
  %}
  %\centerline{
    %\includegraphics[width=1.5in]{Figures/Helm2d/helm2d_soln_dist_4elem.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_error_dist_4elem.pdf}(c)
  }
  \caption{
    2D Helmholtz equation: (a) distribution of the locELM solution.
    Distributions of the absolute error obtained using one sub-domain (b)
    and using $4$ sub-domains (c) in the locELM simulation.
    %Field distributions of the locELM
    %solutions (a,c) and their absolute errors (b,d),
    %computed using one sub-domain (a,b)
    %and using $4$ sub-domains (c,d) in the locELM simulation.
    %In (a, b), $Q_x=Q_y=50$,
    %1600 training parameters/sub-domain, rand-mag = 2.0.
    %In (c, d), 25 uniform collocation points in x/y directions
    %per sub-domain ($Q_x=Q_y=25$), $400$ training parameters/sub-domain,
    %rand-mag = 1.5.
  }
  \label{fig:helm2d_1}
\end{figure}

On each sub-domain, we employ a local neural network consisting of
an input layer with two nodes (representing
$x$ and $y$),
a single hidden layer with $M$ nodes and the $\tanh$ activation
function, and an output layer with one node (representing $u$).
The output layer is linear,
with no bias and no activation function.
An additional affine mapping normalizing the
input $x$ and $y$ data to the interval $[-1,1]\times[-1,1]$
has been incorporated
right behind the input layer for all sub-domains.
%The number of training parameters per sub-domain
%is $M$, the width of the hidden layer of the local neural networks.
The weight and bias coefficients in the hidden layers
are set to uniform random values generated
on the interval $[-R_m,R_m]$.
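In code, one local network of this form reduces to a random feature map. The sketch below (a NumPy stand-in with hypothetical names; the paper's implementation uses TensorFlow) applies the affine normalization to $[-1,1]^2$ followed by a $\tanh$ hidden layer whose weights and biases are drawn uniformly from $[-R_m,R_m]$:

```python
import numpy as np

def make_local_network(a1, b1, a2, b2, M, Rm, rng):
    """One local network: affine normalization of (x, y) to [-1, 1]^2,
    a tanh hidden layer of width M with fixed random coefficients,
    and a linear output (no bias) with trainable coefficients beta."""
    W = rng.uniform(-Rm, Rm, size=(2, M))   # hidden weights (fixed)
    bvec = rng.uniform(-Rm, Rm, size=(M,))  # hidden biases (fixed)

    def features(xy):
        z = np.empty_like(xy, dtype=float)
        z[:, 0] = 2 * (xy[:, 0] - a1) / (b1 - a1) - 1
        z[:, 1] = 2 * (xy[:, 1] - a2) / (b2 - a2) - 1
        return np.tanh(z @ W + bvec)        # shape (n_points, M)

    return features

rng = np.random.default_rng(1)  # fixed seed (the text fixes the TF seed to 1)
phi = make_local_network(0.0, 1.8, 0.0, 1.8, 400, 1.5, rng)
# the local solution is then u(x, y) = features(xy) @ beta
```

Only the output coefficients `beta` are trained, which is what reduces the training to a linear least-squares problem.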


The simulation parameters  include the number of sub-domains
($N_x$, $N_y$, $N_e$), the number of collocation points per sub-domain
($Q_x$, $Q_y$, $Q$), the number of training parameters per sub-domain ($M$),
and the maximum magnitude of the random coefficients ($R_m$).
%We will use the total number of
%collocation points ($N_eQ$) and the total number of training parameters ($N_eM$)
%to characterize the total degrees of freedom in the locELM simulation.
We employ a fixed seed value $1$ for the Tensorflow random number generator
for all the tests in this subsection.

Figure \ref{fig:helm2d_1} depicts the distribution of the
locELM solution (a), and the absolute errors of the solutions computed
using one sub-domain (b) and $4$ uniform sub-domains (c).
%in the locELM simulation.
%Here the absolute error is defined as the absolute value of the difference between
%the locELM solution and the exact solution given by \eqref{equ_hm2_2}.
The case with one sub-domain in locELM computation
is equivalent to the configuration
of a global ELM.
In this case, we have employed a total of $Q=50\times 50$
uniform collocation points in the domain, $M=1600$ training parameters
in the neural network, and $R_m=2.0$ when generating the random
weight/bias coefficients for the hidden layers.
For the case with $4$ sub-domains, % in the locELM simulation,
we have partitioned the domain into $2$ sub-domains in each direction
($N_x=N_y=2$), and 
employed $Q=25\times 25$ uniform collocation points in each sub-domain
(i.e.~$Q_x=Q_y=25$), $M=400$ training parameters per sub-domain,
and $R_m=1.5$ when generating the random weight/bias coefficients.
%for the hidden layers of local neural networks.
Therefore the total degrees of freedom for these two cases,
i.e.~the total numbers of collocation points and of training parameters
in the domain, are the same.
Both simulations have captured the solution accurately.
The resultant errors are also
comparable, with the case
of $4$ sub-domains slightly more accurate.
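For the one-sub-domain (global ELM) configuration, the entire computation amounts to a single linear least-squares solve for the output-layer coefficients. The following sketch (NumPy, with hypothetical names; analytic $\tanh$ derivatives stand in for the automatic differentiation of the actual implementation) assembles PDE rows at interior collocation points and Dirichlet rows at boundary points:

```python
import numpy as np

# Global-ELM sketch: one sub-domain, M random tanh features
a, b, lam = 0.0, 3.6, 10.0
M, Q, Rm = 1600, 50, 2.0
rng = np.random.default_rng(1)
W = rng.uniform(-Rm, Rm, size=(2, M))       # fixed hidden weights
bias = rng.uniform(-Rm, Rm, size=(M,))      # fixed hidden biases
c = 2.0 / (b - a)                           # d(normalized z)/d(physical x)

def hidden(xy):
    z = 2.0 * (xy - a) / (b - a) - 1.0      # map to [-1, 1]^2
    return np.tanh(z @ W + bias)

def pde_rows(xy):
    """(d2/dx2 + d2/dy2 - lam) applied to each feature phi = tanh(w.z + b),
    using phi'' = -2 phi (1 - phi^2) w^2."""
    phi = hidden(xy)
    return -2.0 * phi * (1.0 - phi**2) * ((W[0]**2 + W[1]**2) * c**2) - lam * phi

# exact solution and matching source term (method of manufactured solutions)
g  = lambda t: 1.5*np.cos(np.pi*t + 0.4*np.pi) + 2*np.cos(2*np.pi*t - 0.2*np.pi)
g2 = lambda t: -1.5*np.pi**2*np.cos(np.pi*t + 0.4*np.pi) \
               - 8*np.pi**2*np.cos(2*np.pi*t - 0.2*np.pi)
u_ex  = lambda p: -g(p[:, 0]) * g(p[:, 1])
f_rhs = lambda p: -(g2(p[:, 0])*g(p[:, 1]) + g(p[:, 0])*g2(p[:, 1])) - lam * u_ex(p)

# Q x Q uniform collocation points; boundary points get Dirichlet rows
xs = np.linspace(a, b, Q)
X, Y = np.meshgrid(xs, xs, indexing="ij")
pts = np.stack([X.ravel(), Y.ravel()], axis=-1)
bnd = (np.isclose(pts[:, 0], a) | np.isclose(pts[:, 0], b)
       | np.isclose(pts[:, 1], a) | np.isclose(pts[:, 1], b))

A   = np.vstack([pde_rows(pts[~bnd]), hidden(pts[bnd])])
rhs = np.concatenate([f_rhs(pts[~bnd]), u_ex(pts[bnd])])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # output-layer coefficients
err = np.max(np.abs(hidden(pts) @ beta - u_ex(pts)))
```

The multi-sub-domain case follows the same pattern, with one block of unknowns per local network and additional rows enforcing the $C^1$ continuity conditions at shared interface points.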


Figure \ref{fig:helm2d_3} shows the effect of the degrees of freedom on
the locELM accuracy.
The effect of varying the number of sub-domains,
with the degrees of freedom per
sub-domain fixed, is illustrated in Figure \ref{fig:helm2d_3}(a).
In this group of tests, we employ $Q=25\times 25$ uniform collocation
points/sub-domain ($Q_x=Q_y=25$),
$M=400$ training parameters/sub-domain,
and $R_m=1.5$ for generating the random weight/bias coefficients.
%for the hidden layers of the local neural networks.
The domain is partitioned into one to nine sub-domains.
Figure \ref{fig:helm2d_3}(a) shows the maximum and rms errors
in the domain versus the number of sub-domains in the locELM simulation.
%Figure \ref{fig:helm2d_2}(b) shows the corresponding
%training time of the neural network.
We can observe that the errors decrease
essentially exponentially
as the number of sub-domains increases.
%and the training time 
%increases approximately linearly as the number of sub-domains increases.


\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Helm2d/helm2d_error_num_elem_colloc_25sq_trapar_400_perelem_rangmag_1.5_aa.pdf}(a)
    \includegraphics[width=2in]{Figures/Helm2d/helm2d_error_colloc_trapar400_randmag1.5_4elem.pdf}(b)
    \includegraphics[width=2in]{Figures/Helm2d/helm2d_error_trapar_colloc30sq_randmag_1.5_4elem.pdf}(c)
  }
  \caption{Effect of degrees of freedom
    (2D Helmholtz equation): the maximum/rms errors in the domain as a function
    of (a) the number of sub-domains,
    (b) the number of collocation points per direction per sub-domain,
    and (c) the number of
    training parameters per sub-domain.
    In (b,c) four sub-domains are used.
    %and $R_m=1.5$ when generating the random coefficients.
    %In (a), the number of training parameters/sub-domain is fixed at 400.
    %In (b), the number of collocation points/sub-domain is fixed
    %at $30\times 30$.
  }
  \label{fig:helm2d_3}
\end{figure}

The effect of varying the number of collocation points or
the number of training parameters per sub-domain is illustrated in
Figures \ref{fig:helm2d_3}(b) and (c).
In this group of tests we have employed four uniform sub-domains ($N_x=N_y=2$),
and $R_m=1.5$ when generating the random weight/bias
coefficients,
%for the hidden layers of the local neural networks,
while the number of collocation points/sub-domain or the number of
training parameters/sub-domain is varied systematically.
Figure \ref{fig:helm2d_3}(b) depicts the maximum and rms errors in the domain
as a function of the number of uniform collocation points in each direction ($Q_x=Q_y$)
per sub-domain, where the number of training parameters per sub-domain
is fixed at $M=400$.
It can be observed that the errors initially decrease exponentially with
an increasing number of collocation points, until the number of collocation points reaches
a certain level. Beyond that point the errors stagnate, remaining essentially
the same as the number of collocation points
further increases.
%The observed error saturation  is due to the fixed
%number of training parameters in these tests.
%The error associated
%with the training parameters becomes dominant when the number of collocation
%points is sufficiently large.
Figure \ref{fig:helm2d_3}(c) shows the maximum and rms errors in the domain
as a function of the training parameters per sub-domain, where the number of
uniform collocation points per sub-domain is fixed at $30\times 30$
($Q_x=Q_y=30$). It is observed that the errors decrease dramatically
with an increasing number of training parameters per sub-domain. The error reduction
is nearly exponential initially, and slows down
as the number of training parameters/sub-domain becomes large.
These results show that
%here and those in the previous sub-section demonstrate that
the locELM method exhibits a clear sense of convergence with increasing
degrees of freedom.
%(the number of sub-domains, collocation points, or training parameters).
The locELM errors decrease
essentially exponentially as the number of sub-domains, the number of
collocation points in each direction
per sub-domain, or the number of training parameters per sub-domain increases.



\begin{figure}
  \centerline{
    %\includegraphics[width=1.5in]{Figures/Helm2d/helm2d_soln_dnn_adam_1elem_colloc50sq_5hlay40width.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_error_dnn_adam_1elem_colloc50sq_5hlay40width_A.pdf}(a)
  %}
  %\centerline{
    %\includegraphics[width=1.5in]{Figures/Helm2d/helm2d_soln_dnn_lbfgs_1elem_colloc50sq_4hlay50width.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_error_dnn_lbfgs_1elem_colloc50sq_4hlay50width_A.pdf}(b)
  }
  \caption{PINN solution of the
    2D Helmholtz equation: distributions of the absolute error
    of the solutions obtained using
    PINN~\cite{RaissiPK2019} with the Adam optimizer (a)
    and the L-BFGS optimizer (b).
    This figure can be compared with Figure \ref{fig:helm2d_1}, which
    is obtained using the current locELM method.
    %Results in (a) and (b) are obtained using the Adam optimizer,
    %and the DNN contains 5 hidden layers, each with a width of 40 neurons,
    %with ``tanh'' activation functions.
    %DNN structure: [2, 40, 40, 40, 40, 40, 1]. The last layer is a linear
    %activation. Training data: 50x50 uniform points in domain.
    %Adam optimizer, with total 107,000 epochs. Learning rates:
    %first 7000 epochs, 1.0*default-lr; next 5000 epochs, 0.5*default-lr;
    %next 5000 epochs, 0.25*default-lr; next 5000 epochs, 0.15*default-lr;
    %next 15000 epochs, 0.1*default-lr; next 10000 epochs, 0.075*default-lr;
    %next 20000 epochs, 0.05*default-lr; next 20000 epochs, 0.03*default-lr;
    %next 20000 epochs, 0.01*default-lr, where default-lr=0.001.
    %Results in (c) and (d) are obtained using the L-BFGS optimizer, and
    %the DNN contains 4 hidden layers, each with a width of 50 neurons,
    %with ``tanh'' activation functions. The last layer is linear activation.
    %Structure: [2, 50, 50, 50, 50, 1].
    %Training data: 50x50 uniform points in domain.
    %L-BFGS optimizer. Total 24500 L-BFGS iterations.
    %BC penalty coefficient: 0.98; equation penalty coefficient: 0.02.
  }
  \label{fig:helm2d_5}
\end{figure}



\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_soln_profile_y1.0_dnn_elm_compare.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_error_profile_y1.0_dnn_elm_compare.pdf}(b)
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_soln_profile_y2.6_dnn_elm_compare.pdf}(c)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Helm2d/helm2d_error_profile_y2.6_dnn_elm_compare.pdf}(d)
  }
  \caption{Comparison between locELM and PINN 
    (2D Helmholtz equation): profiles of the solutions (a,c)
    and their absolute errors (b,d) along two horizontal lines across
    the domain at $y=1.0$ (a,b) and $y=2.6$ (c,d),
    obtained using PINN~\cite{RaissiPK2019} with the Adam/L-BFGS optimizers and
    using locELM with 4 sub-domains.
    The PINN results correspond to Figure \ref{fig:helm2d_5}.
    The locELM results correspond to Figure \ref{fig:helm2d_1}(c).
  }
  \label{fig:helm2d_6}
\end{figure}


We next compare the locELM method  with
PINN~\cite{RaissiPK2019}
for solving the 2D Helmholtz equation.
Figure \ref{fig:helm2d_5} shows the distributions of
the absolute errors of the solutions obtained using PINN with
the Adam and L-BFGS optimizers.
The input data consist of $50\times 50$ uniform points in the domain. 
For the Adam optimizer, the neural network contains $5$ hidden layers,
with a width of $40$ nodes for each layer and the $\tanh$ activation function.
The neural network has been trained on the input data for $107,000$ epochs, with
the learning rate gradually decreasing from $0.001$ at the beginning
to $10^{-5}$ at the end of training.
For the L-BFGS optimizer, the neural network contains $4$ hidden layers,
with a width of $50$ nodes for each layer and the $\tanh$ activation function.
The network has been trained on the input data for $24,500$ L-BFGS
iterations.
The L-BFGS result is observed to be generally more accurate
than the Adam result.
The results in this figure can be compared with those of Figure \ref{fig:helm2d_1},
which are obtained using the current locELM method.
The field distributions indicate that the locELM results 
are considerably more accurate than the PINN results.

Figure \ref{fig:helm2d_6} shows a comparison of the solution and 
error profiles along two horizontal lines across the domain,
at $y=1.0$ and $y=2.6$, obtained using PINN and using the current locELM
method (with $4$ sub-domains).
The superior accuracy of the current method is evident.


\begin{table}
  \centering
  \begin{tabular}{lllll}
    \hline
    method  & maximum error & rms error & epochs/iterations & training time (seconds) \\ \hline
    PINN (Adam)  & $4.13e-2$ & $3.91e-3$ & $107,000$ & $7213.7$ \\
    PINN (L-BFGS)  & $1.27e-2$ & $9.25e-4$ & $24,500$ & $5051.2$ \\
    global ELM  & $4.17e-5$ & $4.54e-6$ & $0$ & $410.6$ \\
    locELM (4 sub-domains)  & $2.01e-5$ & $1.41e-6$ & $0$ & $33.6$ \\
    \hline
  \end{tabular}
  \caption{2D Helmholtz equation: comparison between locELM and PINN 
    in terms of the accuracy (maximum/rms errors in domain) and the
    computational cost (epochs/iterations and the training time).
    The PINN results correspond to those in Figure \ref{fig:helm2d_5}.
    The locELM results correspond to those in Figure
    \ref{fig:helm2d_1}.
  }
  \label{tab:helm2d_1}
\end{table}

Table \ref{tab:helm2d_1} provides a further comparison between the current locELM method
and PINN in terms of the maximum and rms errors in the domain,
the number of epochs or iterations during the training, and the network training time.
The PINN results with the Adam/L-BFGS optimizers correspond to those
from Figure \ref{fig:helm2d_5}.
The global ELM results correspond to those in Figure \ref{fig:helm2d_1}(a,b), and
the locELM results with
$4$ sub-domains correspond to those in Figure \ref{fig:helm2d_1}(c).
Note that the input data for all these cases (PINN and locELM)
are the same (total $50\times 50$ uniform collocation points in the domain).
It can be observed that the locELM results are two to
three orders of magnitude more accurate than the PINN results.
In terms of the training time, the current locELM method is much faster than
PINN (by over two orders of magnitude).
The global ELM method, which is equivalent to the locELM method
with one sub-domain, is more than an order of magnitude faster than
PINN (around $410$ seconds
versus over $5000$ seconds). With $4$ sub-domains, the current locELM method
is also much faster than the global ELM (around $33$ seconds versus $410$ seconds).
These results clearly signify the superiority of the current
method over PINN, in terms of both accuracy and computational cost.
%based on deep neural networks.

% compare with FEM

\begin{figure}
  \centerline{
    \includegraphics[height=2.1in]{Figures/Helm2d/FEM/helm2d_fem_error_dist_A.pdf}(a)
    \includegraphics[height=2.2in]{Figures/Helm2d/FEM/helm2d_FEM_error_elem.pdf}(b)
    %\includegraphics[width=2in]{Figures/Helm2d/FEM/helm2d_FEM_solvetime_elem.eps}(c)
  }
  \caption{FEM solution of the 2D Helmholtz equation: (a) FEM error distribution
    computed on a $590\times 590$ mesh.
    (b) The FEM maximum/rms errors in the domain 
    versus the number of elements in each direction,
    showing the second-order convergence rate.
    On an $N\times N$ mesh, the number of triangular elements is $2N^2$.
  }
  \label{fg_helm2d_7}
\end{figure}

\begin{table}
  \centering
  \begin{tabular}{l|lllllll}
    \hline
    method & mesh & sub-domains & $Q$ & $M$  & max-error & rms-error &  wall-time (seconds) \\ \hline
    FEM & $500\times 500$ & -- & -- & -- & $8.20e-4$ & $2.00e-4$ & $18.5$ \\
    & $590\times 590$ & -- & -- & -- & $5.89e-4$ & $1.51e-4$ & $35.4$ \\
    \hline
    locELM & --  & $4$ & $20\times 20$ & $300$ & $7.28e-4$ & $5.28e-5$ & $17.1$ \\
     & --  & $4$ & $25\times 25$ & $400$ & $2.01e-5$ & $1.41e-6$ & $33.6$ \\
    \hline
  \end{tabular}
  \caption{
    2D Helmholtz equation: comparison between locELM and
    the finite element method (FEM)
    in terms of the maximum/rms errors in the domain and the
    training or computation time.
    %$Q$ and $M$ denote the number of collocation points/sub-domain and
    %the number of training parameters/sub-domain,
    %respectively, in the locELM simulation.
    The FEM results correspond to those in Figure \ref{fg_helm2d_7}.
    The locELM results correspond to those in Figure
    \ref{fig:helm2d_1}.
  }
  \label{tb_helm2d_2}
\end{table}

Finally we compare the performance of the current locELM method with
the classical finite element method for the 2D Helmholtz equation.
Figure \ref{fg_helm2d_7} illustrates the FEM solution and its
second-order convergence rate. 
Figure \ref{fg_helm2d_7}(a) shows the distribution of
the absolute error of
the FEM solution computed on a $590\times 590$ uniform rectangular mesh.
Note that
each rectangle in the mesh is further divided along its diagonal into two triangular
linear elements, as stipulated by the FEniCS library.
Therefore, in the current work,
an $N_1\times N_2$ rectangular mesh
contains a total of $2N_1N_2$ triangular elements
for the FEM simulations.
Figure \ref{fg_helm2d_7}(b) shows the maximum/rms errors
of the FEM solution in the domain
versus the number of rectangles in each direction
in the rectangular mesh, demonstrating the second-order convergence
rate of the method.
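The second-order rate in Figure \ref{fg_helm2d_7}(b) is straightforward to reproduce. As a lightweight stand-in for the FEM (a five-point finite-difference scheme, not the FEniCS setup used in the paper, but likewise second-order accurate for this smooth solution), one can check that halving the grid spacing reduces the error by roughly a factor of four:

```python
import numpy as np

# Five-point finite-difference stand-in (second order, like the linear FEM)
lam, a, b = 10.0, 0.0, 3.6
g  = lambda t: 1.5*np.cos(np.pi*t + 0.4*np.pi) + 2*np.cos(2*np.pi*t - 0.2*np.pi)
g2 = lambda t: -1.5*np.pi**2*np.cos(np.pi*t + 0.4*np.pi) \
               - 8*np.pi**2*np.cos(2*np.pi*t - 0.2*np.pi)
u_ex  = lambda x, y: -g(x) * g(y)
f_rhs = lambda x, y: -(g2(x)*g(y) + g(x)*g2(y)) - lam * u_ex(x, y)

def solve_fd(N):
    """Solve the Helmholtz problem on an N x N grid with exact Dirichlet
    data; return the maximum error at the interior grid points."""
    h = (b - a) / N
    xs = np.linspace(a, b, N + 1)
    n = N - 1                               # interior points per direction
    idx = lambda i, j: (i - 1) * n + (j - 1)
    A = np.zeros((n * n, n * n))
    rhs = np.zeros(n * n)
    for i in range(1, N):
        for j in range(1, N):
            k = idx(i, j)
            A[k, k] = -4.0 / h**2 - lam
            rhs[k] = f_rhs(xs[i], xs[j])
            for ii, jj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if 1 <= ii <= n and 1 <= jj <= n:
                    A[k, idx(ii, jj)] = 1.0 / h**2
                else:                       # neighbor lies on the boundary
                    rhs[k] -= u_ex(xs[ii], xs[jj]) / h**2
    U = np.linalg.solve(A, rhs).reshape(n, n)
    Xi, Yi = np.meshgrid(xs[1:N], xs[1:N], indexing="ij")
    return np.max(np.abs(U - u_ex(Xi, Yi)))

e_coarse, e_fine = solve_fd(30), solve_fd(60)   # h halved: error down ~4x
```

The modest grid sizes keep the dense solve cheap; a production run would of course use a sparse solver, as FEniCS does internally.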

In Table \ref{tb_helm2d_2} we compare the locELM method
and the finite element method with regard to their
accuracy and the computational cost.
Here we list the maximum and rms errors in the domain,
and the wall time for the training or computation, corresponding to a set of
different meshes or simulation parameters obtained using locELM and
FEM. We observe that the locELM performance is on par with or better than
that of the FEM. For example,
the FEM case with the $500\times 500$ mesh and the locELM
case with $Q=20\times 20$ and $M=300$
have a comparable computational cost and also a similar accuracy.
The FEM case with the $590\times 590$ mesh has a comparable computational
cost to the locELM case with $Q=25\times 25$ and $M=400$,
but its errors are  over an order of
magnitude larger than those of the latter.




