\subsection{Second-Order Wave Equation}


We next test the locELM method
using the second-order wave equation in one spatial dimension (plus time).
Consider a rectangular domain in the spatial-temporal plane,
$\Omega=\{(x,t)\ |\ x\in[a_1,b_1], \ t\in[0,t_f]  \}$, and
the initial/boundary-value problem with
the second-order wave equation on this domain,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial^2 u}{\partial t^2} - c^2\frac{\partial^2 u}{\partial x^2}
    = 0, \label{eq_wav2_1} \\
    &
    u(a_1,t) = u(b_1,t), \\
    &
    \frac{\partial }{\partial x}u(a_1,t) = \frac{\partial }{\partial x}u(b_1,t), \\
    &
    u(x,0) = h(x), \\
    &
    \left.\frac{\partial u}{\partial t}\right|_{(x,0)} = 0,
  \end{align}
\end{subequations}
where $u(x,t)$ is the field solution to be solved for, periodic conditions are
imposed at $x=a_1$ and $x=b_1$,
the constant $c$
is the wave speed, and the initial wave profile $h(x)$ is given by
\eqref{eq_wav1_ic}.
The values for the constant parameters involved in these equations and
the domain specification are given by
\begin{equation*}
  a_1 = 0, \quad
  b_1 = 5, \quad
  c = 2, \quad
  \delta_0 = 1, \quad
  x_0 = 3, \quad
  t_f = 10 \ \text{or}\ 1.
\end{equation*}
This problem has the following solution,
\begin{equation}\label{eq_wav2_2}
  \left\{
  \begin{split}
    &
    u(x,t) = \sech\left[\frac{3}{\delta_0}\left(-\frac{L_1}{2}+\xi  \right)  \right]
    + \sech\left[\frac{3}{\delta_0}\left(-\frac{L_1}{2}+\eta  \right)  \right], \\
    &
    \xi = \bmod\left(x-x_0+ct+\frac{L_1}{2}, L_1  \right), \quad
    \eta = \bmod\left(x-x_0-ct+\frac{L_1}{2}, L_1  \right), \quad
    L_1 = b_1-a_1.
  \end{split}
  \right.
\end{equation}
The two terms in this solution represent the leftward- and rightward-traveling
waves, respectively.
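For reference, the exact solution \eqref{eq_wav2_2} can be evaluated in a few lines (a minimal sketch; the function name and code organization are ours, with the parameter values listed above):

```python
import numpy as np

# Domain and physical parameters from the text.
a1, b1, c, delta0, x0 = 0.0, 5.0, 2.0, 1.0, 3.0
L1 = b1 - a1

def u_exact(x, t):
    """Exact solution: superposition of leftward- and rightward-traveling
    sech pulses, wrapped periodically onto [a1, b1]."""
    xi  = np.mod(x - x0 + c * t + L1 / 2, L1)   # leftward-traveling wave
    eta = np.mod(x - x0 - c * t + L1 / 2, L1)   # rightward-traveling wave
    sech = lambda z: 1.0 / np.cosh(z)
    return (sech(3.0 / delta0 * (-L1 / 2 + xi))
            + sech(3.0 / delta0 * (-L1 / 2 + eta)))
```

At $t=0$ the two pulses coincide, so $u(x_0,0)=2$; the solution is periodic in $x$ with period $L_1$ and periodic in $t$ with period $L_1/c$.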



\begin{figure}
  \centerline{
    \includegraphics[height=2.3in]{Figures/Wave_2nd/wave2nd_soln_dist_10tblocks_randmag1.0_colloc25sq_trapar350_8elem.pdf}(a)
    \includegraphics[height=2.3in]{Figures/Wave_2nd/wave2nd_error_dist_10tblocks_randmag1.0_colloc25sq_trapar350_8elem.pdf}(b)
  }
  \caption{Second-order wave equation: distributions of (a) the locELM solution
    and (b) its absolute error.
    %against the exact solution, computed using the current locELM method.
    %In this simulation $10$ time blocks in the domain and $8$ sub-domains
    %per time block have been used.
    %10 time blocks; 8 (4x2) sub-domains/block;
    %25x25 uniform collocation points/sub-domain,
    %350 training parameters/sub-domain; rand-mag=1.0.
  }
  \label{fg_wav2_1}
\end{figure}

% how to solve problem?


We employ the locELM method of Section \ref{sec:unsteady}, restricted to
one spatial dimension, together with block time marching to simulate this
problem. Equation \eqref{eq_wav2_1} involves the second derivative in time,
which is treated in a way analogous to the first temporal derivative
%discussed in Section \ref{sec:unsteady}
and computed by auto-differentiation.
We partition the spatial-temporal domain $\Omega$ along the temporal
direction into $N_b$ uniform blocks (time blocks), with a
block size $\Gamma = \frac{t_f}{N_b}$. The time blocks are computed separately
and successively. We further partition the spatial-temporal domain
of each time block into $N_x$ uniform sub-domains along the $x$ direction and
$N_t$ uniform sub-domains in time, resulting in $N_e=N_xN_t$ uniform sub-domains
per time block. We impose the $C^1$ continuity conditions
on the sub-domain boundaries in both the $x$ and $t$ directions.
Within each sub-domain we use $Q_x$ uniform collocation points
along the $x$ direction and $Q_t$ uniform collocation points in time,
leading to  $Q=Q_xQ_t$ uniform collocation points in each
sub-domain.
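The partitioning just described can be sketched as follows (schematic only; the variable names are ours, and the $k$-th time block is taken to cover $[k\Gamma,(k+1)\Gamma]$ in time):

```python
import numpy as np

# Illustrative parameters matching the simulations below.
a1, b1, t_f = 0.0, 5.0, 10.0
Nb, Nx, Nt, Qx, Qt = 10, 4, 2, 25, 25
Gamma = t_f / Nb                      # time-block size

# Sub-domain boundaries within the k-th time block.
k = 0
x_edges = np.linspace(a1, b1, Nx + 1)
t_edges = np.linspace(k * Gamma, (k + 1) * Gamma, Nt + 1)

def colloc_points(i, j):
    """Uniform collocation points inside sub-domain (i, j)."""
    xs = np.linspace(x_edges[i], x_edges[i + 1], Qx)
    ts = np.linspace(t_edges[j], t_edges[j + 1], Qt)
    X, T = np.meshgrid(xs, ts, indexing="ij")
    return X, T                       # each of shape (Qx, Qt)
```

Each time block thus contains $N_e=N_xN_t$ sub-domains with $Q=Q_xQ_t$ collocation points apiece.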


We use a local neural network for each sub-domain within the time block, leading
to a total of $N_e$ local neural networks in the simulation.
We employ a single hidden layer in each local neural network, with a width of
$M$ nodes and $\tanh$ as the activation function.
The input layer consists of two nodes, representing $x$ and $t$, and
the output layer consists of a single node, representing the field solution $u$.
The output layer is linear and has no bias.
%No activation function is applied to the output layer.
An additional affine mapping that normalizes the input $x$ and $t$ data to
the interval $[-1,1]\times[-1,1]$ is incorporated into each local neural
network immediately following the input layer.
%The number of training parameters in each sub-domain corresponds to
%the width of the hidden layer ($M$) of the local neural network.
The weight/bias coefficients in the hidden layer
are pre-set to uniform random values generated on the interval $[-R_m,R_m]$.
%similar to what is done in previous sections.
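Since the hidden-layer coefficients are fixed, each local network represents $u$ as a linear combination of $M$ random features $\phi_j(x,t)=\tanh(w_{x,j}x+w_{t,j}t+b_j)$. The sketch below forms these features and the second derivatives appearing in \eqref{eq_wav2_1} for one sub-domain. Note the paper computes derivatives by auto-differentiation; the closed-form $\tanh$ derivatives here are an equivalent shortcut for a single hidden layer, the input normalization is omitted for brevity, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(12)          # fixed seed, as in the text
M, Rm, c = 350, 1.0, 2.0

# Fixed random hidden-layer coefficients on [-Rm, Rm].
W = rng.uniform(-Rm, Rm, size=(2, M))    # weights for (x, t)
b = rng.uniform(-Rm, Rm, size=(1, M))    # biases

def features(x, t):
    """tanh features and their second derivatives at N points.

    With phi = tanh(z) one has phi'' = -2 tanh(z) (1 - tanh(z)^2),
    so d2 phi/dx2 = w_x^2 phi'' and d2 phi/dt2 = w_t^2 phi''.
    """
    z = np.column_stack([x, t]) @ W + b       # shape (N, M)
    th = np.tanh(z)
    d2 = -2.0 * th * (1.0 - th**2)
    return th, W[0]**2 * d2, W[1]**2 * d2

# Residual rows of the wave equation, u_tt - c^2 u_xx, at two points.
x = np.array([0.1, -0.5]); t = np.array([0.2, 0.9])
f, f_xx, f_tt = features(x, t)
A_pde = f_tt - c**2 * f_xx
```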

The simulation parameters include the number of time blocks ($N_b$),
the number of sub-domains per time block ($N_x$, $N_t$, $N_e$),
the number of training parameters per sub-domain ($M$),
the number of collocation points per sub-domain ($Q_x$, $Q_t$, $Q$),
and the maximum magnitude of the random coefficients ($R_m$).
%We again use ($Q,M$) to characterize the degrees of freedom within
%a sub-domain, and ($N_eQ,N_eM$) to characterize the degrees of freedom
%within a time block.
A fixed seed value $12$ is employed for the TensorFlow random
number generator for all the tests in this subsection.
%and all the numerical tests reported here are repeatable.
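With the hidden layers frozen, training reduces to a linear least-squares solve for the output-layer coefficients. The toy global-ELM sketch below (one sub-domain, small $M$ and $Q$, our own variable names; the input normalization is folded away, and the initial profile is inferred from the exact solution at $t=0$ since $h(x)$ is defined elsewhere) assembles the PDE, initial-condition, and periodicity rows and solves them in one shot:

```python
import numpy as np

rng = np.random.default_rng(12)               # fixed seed, as in the text
a1, b1, c, delta0, x0 = 0.0, 5.0, 2.0, 1.0, 3.0
T, M = 0.25, 200                              # short time block, M features

# Fixed random hidden layer acting on the raw (x, t) coordinates here.
W = rng.uniform(-1.0, 1.0, size=(2, M))
b = rng.uniform(-1.0, 1.0, size=(1, M))

def phi(x, t):
    """Features and first/second derivatives at the points (x, t)."""
    z = np.column_stack([x, t]) @ W + b
    th = np.tanh(z); s = 1.0 - th**2          # tanh(z), tanh'(z)
    d2 = -2.0 * th * s                        # tanh''(z)
    return {"f": th, "fx": W[0] * s, "ft": W[1] * s,
            "fxx": W[0]**2 * d2, "ftt": W[1]**2 * d2}

# Initial profile, read off from the exact solution at t = 0 (assumption).
h = lambda x: 2.0 / np.cosh(3.0 / delta0 * (x - x0))

xq, tq = np.linspace(a1, b1, 20), np.linspace(0.0, T, 20)
X, Tq = np.meshgrid(xq, tq, indexing="ij")
P = phi(X.ravel(), Tq.ravel())                         # interior points
IC = phi(xq, np.zeros_like(xq))                        # t = 0
Bl, Br = phi(np.full_like(tq, a1), tq), phi(np.full_like(tq, b1), tq)

A = np.vstack([P["ftt"] - c**2 * P["fxx"],   # wave-equation residual
               IC["f"], IC["ft"],            # u(x,0) = h, u_t(x,0) = 0
               Bl["f"] - Br["f"],            # periodic u
               Bl["fx"] - Br["fx"]])         # periodic u_x
rhs = np.concatenate([np.zeros(X.size), h(xq), np.zeros(3 * tq.size)])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)
```

In the actual locELM method each sub-domain contributes its own block of unknowns, coupled through the $C^1$ continuity rows on the shared sub-domain boundaries, and the time blocks are solved one after another.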

% results discussion

Figure \ref{fg_wav2_1} depicts the distributions of the locELM solution
and its absolute error in the spatial-temporal plane.
%computed using the current locELM method.
The temporal domain size is set to $t_f=10$.
We have used $10$ uniform time blocks ($N_b=10$)
in the spatial-temporal domain, $8$ sub-domains ($N_x=4$, $N_t=2$)
per time block, $25\times 25$ uniform collocation points per
sub-domain ($Q_x=Q_t=25$), $350$ training parameters per sub-domain ($M=350$),
and $R_m=1.0$ when generating the random weight/bias coefficients.
%in the hidden layers of the local neural networks.
One can observe the wave propagation pattern in the spatial-temporal
plane, and the peaks formed periodically by the superposition
of the leftward- and rightward-traveling waves.
The error distribution shows that locELM has captured
the solution quite accurately.



\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Wave_2nd/wave2nd_error_elem_10tblocks_colloc25sq_trapar350_randmag1.0_A.pdf}(a)
    %\includegraphics[width=3in]{Figures/Wave_2nd/wave2nd_error_colloc_10tblocks_8elem_trapar350_randmag_1.0_A.eps}(a)
    \includegraphics[width=2in]{Figures/Wave_2nd/wave2nd_error_trapar_10tblocks_8elem_colloc25sq_randmag_1.0_A.pdf}
  }
  \caption{Second-order wave equation: the maximum and rms errors in the domain
    %as a function of the collocation points in each direction
    %in each sub-domain (a), and
    as a function of
    (a) the number of sub-domains per time block, and (b)
    the number of training parameters per sub-domain.
    %In (a) the degrees of freedom in each sub-domain is fixed.
    %In (b) the number of sub-domains per time block is fixed at $8$.
    %The temporal domain size is $t_f=10$, and we have employed
    %$10$ time blocks in the domain, $8$ sub-domains per time block,
    %and $25\times 25$ uniform collocation points per sub-domain,
    %and $R_m=1.0$ when generating random coefficients.
    %In (a) the number of training parameters/sub-domain is 350.
    %In (b) the number of collocation points/sub-domain is 25x25.
  }
  \label{fg_wav2_5}
\end{figure}

Figure \ref{fg_wav2_5} demonstrates the effects of the number of sub-domains
and the number of training parameters on the simulation accuracy.
%when the  degrees of freedom in each sub-domain are fixed.
In this group of tests, the temporal domain size is set to $t_f=10$, and
$N_b=10$ time blocks are used in the spatial-temporal domain.
%vary the number of uniform sub-domains per time block.
Figure \ref{fg_wav2_5}(a) shows the maximum and rms errors in the
overall domain as a function of the number of sub-domains per time block.
In these simulations we have employed $Q=25\times 25$ uniform collocation points per
sub-domain ($Q_x=Q_t=25$), $M=350$ training parameters per sub-domain,
and $R_m=1.0$ when generating the random coefficients for the hidden layers,
while the number of sub-domains per time block is varied systematically.
The  errors can be
observed to decrease nearly exponentially
with increasing number of sub-domains.
%Figure \ref{fg_wav2_4}(b) shows the training time of the entire
%neural network as a function of the number of sub-domains per time block.
%The increase in training time is linear initially from one to two sub-domains,
%and is nearly quadratic from $4$ to $8$ sub-domains.
%
Figure \ref{fg_wav2_5}(b) shows the maximum and rms errors in the overall
domain as a function of the number of training parameters per
sub-domain. In these simulations,
%the temporal domain size is
%$t_f=10$. We have employed $10$ time blocks in the spatial-temporal domain,
we have employed $N_e=8$ uniform sub-domains per time block (with $N_x=4$, $N_t=2$),
$Q=25\times 25$ uniform collocation points per sub-domain ($Q_x=Q_t=25$),
and $R_m=1.0$ when generating the random coefficients,
%for the hidden layers of local neural networks,
while the number of training parameters per sub-domain is varied systematically.
We observe an exponential decrease in the numerical errors
with increasing number of training parameters per sub-domain (when below $300$).
%training parameters/sub-domain is below $300$.
The error reduction appears to slow down as the
number of training parameters per sub-domain increases beyond $300$.


\begin{figure}
  \centerline{
    \includegraphics[width=2in]{Figures/Wave_2nd/wave2nd_error_elem_fixedTotDOF_10tblocks.pdf}(a)
    \includegraphics[width=2in]{Figures/Wave_2nd/wave2nd_traintime_elem_fixedTotDOF_10tblocks.pdf}(b)
  }
  \caption{Effect of the number of sub-domains, with fixed total degrees of
    freedom in the domain (2nd-order wave equation):
    (a) the maximum/rms errors in the domain, and (b) the
    training time, as a function of the number of sub-domains in each time block.
    %The temporal domain size is $t_f=10$, and $10$ time blocks has been used.
    %In all cases, the total number of training parameters per time block is $1600$,
    %and the total number of collocation points per time block is approximately $2500$.
    %1 sub-domain/block: 50x50 collocation points, 1600 training parameters, rand-mag=2.5.
    %2 sub-domains/block: 35x35 collocation points/sub-domain, 800 training
    %parameters/sub-domain, rand-mag=1.0.
    %4 sub-domains/block: 25x25 collocation points/sub-domain, 400 training
    %parameters/sub-domain, rand-mag=1.0.
    %8 sub-domains/block: 18x18 collocation points/sub-domain, 200 training
    %parameters/sub-domain, rand-mag=1.0.
  }
  \label{fg_wav2_6}
\end{figure}

In Figure \ref{fg_wav2_6} we look into the effect of the number of sub-domains,
with the total degrees of freedom in the domain fixed,
and  compare results of the locELM and global ELM simulations.
In these tests, the temporal domain size is set to $t_f=10$.
We employ $10$ uniform time blocks in the spatial-temporal domain,
and vary the number of sub-domains per time block, with
the total number of training parameters per time block fixed at $N_eM=1600$
and the total number of collocation points per time block approximately
fixed at $N_eQ=2500$. We have considered $N_e=1$ ($N_x=N_t=1$), $2$ ($N_x=2$, $N_t=1$),
$4$ ($N_x=4$, $N_t=1$), and $8$ ($N_x=4$, $N_t=2$) sub-domains
per time block. The collocation points and training parameters per sub-domain for
these cases are respectively: $(Q,M)=(50\times 50,1600)$
%uniform collocation points/sub-domain and $1600$ training parameters/sub-domain
for $1$ sub-domain/time-block,
$(Q,M)=(35\times 35,800)$
%uniform collocation points/sub-domain and $800$ training parameters/sub-domain
for $2$ sub-domains/time-block,
$(Q,M)=(25\times 25,400)$
%uniform collocation points/sub-domain and $400$ training parameters/sub-domain
for $4$ sub-domains/time-block, and
$(Q,M)=(18\times 18,200)$
%uniform collocation points/sub-domain and $200$ training parameters/sub-domain
for $8$ sub-domains/time-block.
For the case of $1$ sub-domain/time-block, we employ $R_m=2.5$ when generating
the random weight/bias coefficients.
%for the hidden layer of the local neural network.
For the cases of $2$ to $8$ sub-domains/time-block, we employ
$R_m=1.0$ when generating the random coefficients.
%for the hidden layers of the local neural networks.
These $R_m$ values lie approximately within their respective optimal ranges.
Notice that the case with one sub-domain per time block
is equivalent to the configuration of a global ELM.
Therefore, comparison of these results sheds light on the
performance of locELM (multiple
sub-domains) versus the global ELM method.
Figure \ref{fg_wav2_6}(a) shows the maximum and rms errors in the overall
domain as a function of the number of sub-domains,
and Figure \ref{fg_wav2_6}(b) shows the training time of the neural network
versus the number of sub-domains from these tests.
Similar to what has been observed in previous sections,
the locELM errors with multiple sub-domains
are comparable to those obtained with one sub-domain (i.e.~global ELM).
However, the training time  has been dramatically
reduced by locELM with multiple sub-domains, when compared with that of
the global ELM.


\begin{figure}
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Wave_2nd/wave2nd_soln_dist_dnn_adam_5hlayers_40width.pdf}(a)
    \includegraphics[width=2.5in]{Figures/Wave_2nd/wave2nd_error_dist_dnn_adam_5hlayers_40width.pdf}(a)
  }
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Wave_2nd/wave2nd_soln_dist_dnn_lbfgs_5hlayers_40width.pdf}(c)
    \includegraphics[width=2.5in]{Figures/Wave_2nd/wave2nd_error_dist_dnn_lbfgs_5hlayers_40width.pdf}(b)
  }
  \centerline{
    %\includegraphics[width=2.5in]{Figures/Wave_2nd/wave2nd_soln_dist_locelm_1tblock_8elem_colloc25sq_trapar350_randmag1.0_A.pdf}(e)
    \includegraphics[width=2.5in]{Figures/Wave_2nd/wave2nd_error_dist_locelm_1tblock_8elem_colloc25sq_trapar350_randmag1.0_A.pdf}(c)
  }
  \caption{Comparison between locELM and DGM (2nd-order wave equation):
    Distributions of %the solutions    (left column) and their
    the absolute errors %(right column)
    computed using DGM with
    the Adam optimizer (a) and with the L-BFGS optimizer (b), and computed
    using the current locELM method (c).
    %In DNN-DGM, the DNN structure is [2, 40, 40, 40, 40, 40, 1], $C^{\infty}$
    %periodic layer in first hidden layer, and the other hidden layers have tanh
    %activation function, and no activation with last layer; Region partitioned into
    %4 elements (4x1) when computing loss, and 20x20 quadrature points in each element;
    %equation-penalty-coefficient = 0.9, ic-penalty-coefficient = 0.1.
    %With Adam optimizer, total 77,000 epochs; learning rate: 1.0*default-lr for
    %first 700 epochs, 0.5*default-lr for next 5000 epochs, 0.25*default-lr for
    %next 5000 epochs, 0.125*default-lr for next 5000 epochs, 0.1*default-lr for next
    %5000 epochs, 0.075*default-lr for next 5000 epochs, 0.05*default-lr for next
    %5000 epochs, 0.025*default-lr for next 10000 epochs, 0.0125*default-lr for next
    %10000 epochs, 0.01*default-lr for next 10000 epochs, 0.0075*default-lr for next
    %10000 epochs.
    %With L-BFGS optimizer, total 17000 iterations.
    %LocELM: 1 time block, 8 sub-domains/block, 25x25 collocation points/sub-domain,
    %350 training parameters/sub-domain, rand-mag=1.0.
  }
  \label{fg_wav2_7}
\end{figure}

% compare with DNN-DGM

We next compare the current locELM method and the deep Galerkin method (DGM)
for solving the second-order wave equation.
Figure \ref{fg_wav2_7} shows distributions of the solutions and their absolute
errors in the spatial-temporal plane, computed using DGM with the Adam and L-BFGS
optimizers and using the current locELM method.
In these tests the temporal domain size is set to $t_f=1$.
With DGM, the neural network consists of $5$ hidden layers, with a
width of $40$ nodes in each layer and the $\tanh$ activation function,
in addition to the input layer of two nodes and the output layer
of a single node (with no bias and no activation function).
When computing the loss function, the spatial-temporal domain is
divided into $4$ uniform sub-domains along the $x$ direction, and
$20\times 20$ Gauss-Lobatto-Legendre quadrature points are used
in each sub-domain for computing the residual norm.
With the Adam optimizer, the neural network has been trained for
$77,000$ epochs, with the learning rate gradually decreasing from
$0.001$ at the beginning to $7.5\times 10^{-6}$ at the end of the training.
With the L-BFGS optimizer, the network has been trained
for $17,000$ iterations.
With locELM, we have used one time block in
the spatial-temporal domain ($N_b=1$), $8$ uniform sub-domains per time block
(with $N_x=4$, $N_t=2$), $25\times 25$ uniform collocation points/sub-domain
($Q_x=Q_t=25$), $350$ training parameters/sub-domain ($M=350$),
one hidden layer in each local neural network ($\tanh$ activation function),
and $R_m=1.0$ when generating the random coefficients.
%for the hidden layer of the local neural networks.
Both DGM and locELM have captured the characteristics
of the solution well. But DGM is considerably less accurate than locELM.




% table comparing errors/training time for locELM and DNN

\begin{table}
  \centering
  \begin{tabular}{lllll}
    \hline
    method & maximum error & rms error & epochs/iterations & training time (seconds) \\
    DGM (Adam) & $3.27\times 10^{-2}$ & $4.68\times 10^{-3}$ & $77,000$ & $3384.3$ \\
    DGM (L-BFGS) & $1.44\times 10^{-2}$ & $2.03\times 10^{-3}$ & $17,000$ & $2375.6$ \\
    locELM & $2.21\times 10^{-4}$ & $2.96\times 10^{-5}$ & $0$ & $67.1$ \\
    \hline
  \end{tabular}
  \caption{Second-order wave equation:
    comparison between DGM
    (Adam/L-BFGS optimizers) and  locELM in terms of
    the maximum/rms errors in the domain, the number of epochs or iterations,
    and the training time of the neural network.
    The problem settings correspond to those of Figure \ref{fg_wav2_7}.
  }
  \label{tab_wav2_9}
\end{table}

As a further comparison, in Table \ref{tab_wav2_9} we list the
maximum and rms errors in the domain, the number of epochs or iterations
in the training, and the network training time, corresponding to the DGM
and locELM simulations.
The problem setting and  the simulation parameters here correspond to
those of Figure \ref{fg_wav2_7}.
The data show that the locELM method is about two orders of magnitude
more accurate than DGM, and is about $40$ to $50$ times faster than
DGM. These results are consistent with the observations in
previous sections and confirm the superiority of the current method
in terms of both accuracy and computational cost.

% what else to discuss here?


