
\subsubsection{Nonlinear Spring Equation}
% Initial Value Problems
%with Ordinary Differential Equations

% ODE, time-stepping schemes, Newton-like iterations

\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Nonl_Spring_Sin/nonl_spring_soln_hist_long.pdf}(a)
    \includegraphics[width=2.2in]{Figures/Nonl_Spring_Sin/nonl_spring_error_hist_long.pdf}(b)
  }
  \caption{Nonlinear spring: time histories of (a) the locELM solution
  and (b) its absolute error against the exact solution.
  $40$ uniform time blocks are used.
    %computed with $40$ uniform time blocks, a single
    %sub-domain in each time block, $60$ uniform
    %collocation points per time block, and $100$ training parameters per time block.
    %$R_m=5.0$, temporal domain size $t_f=100$.
  }
  \label{fg_ns_1}
\end{figure}


In the next example we test the locELM method on an initial value problem,
the nonlinear spring equation. The goal is to assess the performance
of the locELM method combined with the block time marching scheme,
especially for long-time dynamic simulations.

Consider the temporal domain, $\Omega=[0, t_f]$, and the following
initial value problem on this domain,
\begin{subequations}
\begin{align}
&
\frac{d^2u}{dt^2} + \omega^2 u + \alpha\sin(u) = f(t), \label{eq_ns_1} \\
&
u(0) = u_0, \\
& \left.\frac{du}{dt}\right|_{t=0} = v_0, \label{eq_ns_2} 
\end{align}
\end{subequations}
where $u(t)$ is the displacement, $f(t)$ is an imposed external force,
$\omega$ and $\alpha$ are constant parameters, $u_0$ is the initial
displacement, and $v_0$ is the initial velocity.
The parameters in the above domain and problem specifications assume
the following values in this subsection,
\begin{equation*}
\omega = 2, \quad
\alpha = 0.5, \quad
t_f = 100,\ \text{or}\ 15,\ \text{or}\ 2.5.
\end{equation*}
We choose the external force $f(t)$ such that the following function
satisfies the equation \eqref{eq_ns_1},
\begin{equation}\label{eq_ns_3}
u(t) = t\sin(t).
\end{equation}
We set the initial displacement and the initial velocity both to zero,
i.e.~$u_0=0$ and $v_0=0$.
Under these settings, the initial value problem
consisting of equations \eqref{eq_ns_1}--\eqref{eq_ns_2} has
the solution given by \eqref{eq_ns_3}.
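Concretely, substituting \eqref{eq_ns_3} into \eqref{eq_ns_1} with
$u''=2\cos t-t\sin t$ gives
$f(t)=2\cos t - t\sin t + \omega^2 t\sin t + \alpha\sin(t\sin t)$.
The following short Python check (our illustration, not part of the original
simulations) verifies this forcing by approximating $u''$ with a central
finite difference:

```python
import numpy as np

omega, alpha = 2.0, 0.5

def u_exact(t):
    # manufactured solution u(t) = t*sin(t)
    return t * np.sin(t)

def forcing(t):
    # f = u'' + omega^2*u + alpha*sin(u), with u'' = 2*cos(t) - t*sin(t)
    return (2.0 * np.cos(t) - t * np.sin(t)
            + omega**2 * t * np.sin(t)
            + alpha * np.sin(t * np.sin(t)))

# residual of the ODE at sample points, with u'' from central differences
t = np.linspace(0.1, 15.0, 2000)
h = 1e-4
u_tt = (u_exact(t + h) - 2.0 * u_exact(t) + u_exact(t - h)) / h**2
res = u_tt + omega**2 * u_exact(t) + alpha * np.sin(u_exact(t)) - forcing(t)
print(np.max(np.abs(res)))  # small: limited only by the finite-difference error
```

Note also that $u(0)=0$ and $u'(0)=\sin 0 + 0\cdot\cos 0=0$, consistent with
the zero initial data.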

% how to simulate the problem?

We employ  the locELM method and the block time marching scheme
from Section \ref{sec:tnleq} to solve this initial value problem.
We partition the domain $[0,t_f]$ into $N_b$ uniform
time blocks, and solve this initial value problem on
each time block individually and successively.
For the computation within each time block we use a single sub-domain
in the simulation, since the amount of data involved is quite small because
the solution does not depend on space.
%is not a spatial distribution.
We enforce the equations on $Q$ uniform collocation points
within each time block.
%and we use $Q$ to denote the number of
%these collocation points in each time block.
Accordingly, we employ a single neural network within each
time block for this problem.
%since a single sub-domain has been used therein.
The neural network consists of an input layer of one
node (representing the time $t$),
a single hidden layer with a width of $M$ nodes and the $\tanh$ activation
function, and an output layer of one node (representing the
solution $u$).
The output layer is assumed to be linear (no activation function) and
contains no bias.
As in previous sections, we incorporate an affine mapping operation
immediately after the input layer to normalize the input $t$ data
to the interval $[-1,1]$ for each time block.
The weight and bias coefficients in the hidden layer of the neural network
are pre-set to uniform random values generated on the interval
$[-R_m,R_m]$.
A fixed seed value $1234$ is used for the random number generator.
We employ the NLSQ-perturb method from Section~\ref{sec:nonl_steady}
for solving the resultant nonlinear algebraic problem.
The initial guess of the solution is set to zero.
In the event the random perturbation is triggered, we employ $\delta=1.0$ and
$\xi_2=1$ (see Algorithm~\ref{alg:alg_1} and Remark~\ref{rem_9})
for generating the random perturbations in the tests of
this subsection.
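To make the per-block computation concrete, the following is a minimal
single-block sketch of this construction (our illustration, not the authors'
implementation): a random-feature $\tanh$ basis with frozen hidden-layer
coefficients, the equation enforced at uniform collocation points together
with the two initial conditions, and the output-layer coefficients found by
nonlinear least squares. Here scipy.optimize.least_squares with an analytic
Jacobian stands in for the NLSQ-perturb algorithm (the perturbation step is
omitted), and we take a smaller $M=40$ than in the text so that the system
stays overdetermined.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1234)      # fixed seed, as in the text
omega, alpha = 2.0, 0.5
t0, t1 = 0.0, 2.5                      # a single time block
Q, M, Rm = 60, 40, 5.0                 # M kept below Q here (overdetermined)

# frozen random hidden-layer coefficients, uniform on [-Rm, Rm]
w = rng.uniform(-Rm, Rm, M)
b = rng.uniform(-Rm, Rm, M)

tc = np.linspace(t0, t1, Q)            # uniform collocation points
s = 2.0 * (tc - t0) / (t1 - t0) - 1.0  # affine map of t onto [-1, 1]
c = 2.0 / (t1 - t0)                    # ds/dt

phi = np.tanh(np.outer(s, w) + b)                 # hidden outputs, (Q, M)
dphi = (1.0 - phi**2) * (w * c)                   # d(phi)/dt
d2phi = -2.0 * phi * (1.0 - phi**2) * (w * c)**2  # d^2(phi)/dt^2

def forcing(t):
    # f(t) chosen so that u = t*sin(t) solves the ODE
    return (2.0 * np.cos(t) - t * np.sin(t)
            + omega**2 * t * np.sin(t) + alpha * np.sin(t * np.sin(t)))

def residual(beta):
    u, ut, utt = phi @ beta, dphi @ beta, d2phi @ beta
    r_ode = utt + omega**2 * u + alpha * np.sin(u) - forcing(tc)
    r_ic = np.array([u[0], ut[0]])     # u(0) = 0, u'(0) = 0
    return np.concatenate([r_ode, r_ic])

def jacobian(beta):
    u = phi @ beta
    j_ode = d2phi + omega**2 * phi + alpha * np.cos(u)[:, None] * phi
    return np.vstack([j_ode, phi[:1], dphi[:1]])

sol = least_squares(residual, np.zeros(M), jac=jacobian)  # Gauss-Newton-type
err = np.abs(phi @ sol.x - tc * np.sin(tc))
print(f"max error on the block: {err.max():.2e}")
```

In the block time marching context, the converged $u$ and $du/dt$ at $t_1$
would then serve as the initial data $u_0$ and $v_0$ for the next block.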

The locELM simulation parameters  include
the number of time blocks $N_b$,
the number of collocation points per time block $Q$,
the number of training parameters per time block $M$ (i.e.~the number of nodes in
the hidden layer of the neural network), and
the maximum magnitude of the random coefficients $R_m$.
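The successive-block structure itself can be sketched as follows (our
sketch: a classical adaptive Runge--Kutta integrator, scipy.integrate.solve_ivp,
stands in for the per-block locELM solve so that only the marching logic is
shown). The $u$ and $du/dt$ computed at the right end of one block become
the initial conditions of the next:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, alpha, tf, Nb = 2.0, 0.5, 15.0, 6   # parameters for the t_f = 15 case

def forcing(t):
    # f(t) chosen so that u = t*sin(t) is the exact solution
    return (2.0 * np.cos(t) - t * np.sin(t)
            + omega**2 * t * np.sin(t) + alpha * np.sin(t * np.sin(t)))

def rhs(t, y):
    # first-order form of the spring equation: y = (u, du/dt)
    return [y[1], forcing(t) - omega**2 * y[0] - alpha * np.sin(y[0])]

edges = np.linspace(0.0, tf, Nb + 1)       # uniform time blocks
state = [0.0, 0.0]                         # u0 = 0, v0 = 0
for ta, tb in zip(edges[:-1], edges[1:]):
    block = solve_ivp(rhs, (ta, tb), state, rtol=1e-10, atol=1e-12)
    state = block.y[:, -1]                 # carry (u, du/dt) into next block

print(abs(state[0] - tf * np.sin(tf)))     # error at t = t_f
```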


\begin{figure}
  \centerline{
    \includegraphics[width=1.5in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_error_hist_6tblocks_trapar100_randmag5.0_A.pdf}(a)
    \includegraphics[width=1.5in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_error_colloc_6tblocks_trapar100_randmag5.0_A.pdf}(b)
  %}
  %\centerline{
    \includegraphics[width=1.5in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_error_hist_6tblocks_60colloc_20n30trapar_randmag_5.0_A.pdf}(c)
    \includegraphics[width=1.5in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_error_trapar_6tblocks_60colloc_randmag_5.0_A.pdf}(d)
  }
  \caption{Nonlinear spring: (a) Error histories obtained with $20$ and $40$
    collocation points per time block.
    (b) The maximum/rms errors in the domain versus the number of
    collocation points per time block.
    In (a) and (b), the number of training parameters per time block is fixed at $100$.
    (c) Error histories obtained with $20$ and $30$ training parameters
    per time block.
    (d) The maximum/rms errors versus the number of training parameters per time block.
    In (c) and (d) the number of the collocation points per time block is fixed at $60$.
    %The temporal domain size is $t_f=15$, and $6$ time blocks are used all simulations,
    %and $R_m=5.0$.
    }
    \label{fg_ns_2}
\end{figure}

% discussion of results

Figure \ref{fg_ns_1} shows the time histories of the displacement
and its absolute error obtained using locELM in a fairly long-time simulation.
%against the exact solution given in~\eqref{eq_ns_2}.
The time history of the exact solution given by~\eqref{eq_ns_3} is also
shown in Figure \ref{fg_ns_1}(a) for comparison.
In this test the domain size is set to $t_f=100$. We have employed
$N_b=40$ uniform time blocks within the domain, $Q=60$ uniform collocation
points per time block, $M=100$ training parameters per time block,
and $R_m=5.0$ when generating the random weight/bias coefficients
for the hidden layer of the neural network.
It is evident that the current locELM method has captured the solution
very accurately, with the maximum absolute error on
the order of $10^{-8}$ over the entire domain.

Figure \ref{fg_ns_2} illustrates the effect of the number of
degrees of freedom (collocation points, training parameters)
on the simulation accuracy.
%of the locELM results.
In this group of tests the temporal domain size is set to $t_f=15$, and
we employ $N_b=6$ time blocks within the domain.
Figure \ref{fg_ns_2}(a) shows the absolute-error histories
of the locELM solution against the exact solution, obtained using
$20$ and $40$ collocation points per time block.
Figure \ref{fg_ns_2}(b) shows the maximum and rms errors in
the overall domain obtained with different numbers of collocation
points in the locELM simulation.
The number of training parameters per time block is fixed at $M=100$
in the tests of these two plots.
The errors can be observed to decrease exponentially
as the number of collocation points per time block increases
(up to around $60$), and then stagnate
as the number of collocation points increases further.
Figure \ref{fg_ns_2}(c) shows time histories of the absolute
errors corresponding to $20$ and $30$ training parameters per time block.
Figure \ref{fg_ns_2}(d) shows the maximum/rms errors in the overall
domain, obtained with different numbers of training parameters per
time block.
In the tests of these two plots, the number of collocation points
per time block has been fixed at $Q=60$.
The convergence with respect to the training parameters
is not as regular as that for the collocation points.
Nonetheless, one can see that the errors decrease approximately exponentially
with increasing number of training parameters (up to about $50$),
and then essentially stagnate as the number of training parameters
increases further.



\begin{figure}
  \centerline{
    \includegraphics[width=2.2in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_error_randmag_6tblocks_60colloc_100trapar.pdf}
  }
  \caption{Nonlinear spring: The maximum and rms errors in the overall domain
    as a function of $R_m$, the maximum magnitude of the random coefficients.
    %Temporal domain size $t_f=15$,
    %$6$ time blocks in domain, 1 sub-domain per time block, $60$ collocation
    %per sub-domain, $100$ training parameters per sub-domain.
  }
  \label{fg_ns_3}
\end{figure}

% effect of R_m

Figure \ref{fg_ns_3} demonstrates the effect of $R_m$, the maximum magnitude
of the random coefficients, on the simulation accuracy.
In this set of tests, the temporal domain size is $t_f=15$.
We have employed $N_b=6$ uniform time blocks in the domain,
$Q=60$ uniform collocation points in each time block, and
$M=100$ training parameters per time block.
The value of $R_m$ is varied systematically in the tests.
In this figure we plot the maximum and rms errors in the
overall domain corresponding to different $R_m$ values.
The characteristics observed here are consistent with those from previous subsections.
%We can observe the characteristics similar to those in previous sections.
The locELM method attains better accuracy when $R_m$ lies in a range of
moderate values, approximately $R_m=1 \sim 9$ in this case, and the results
are less accurate when $R_m$ is very large or very small.



\begin{figure}
\centerline{
\includegraphics[width=2.2in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_compare_dnn_elm_soln_hist.pdf}(a)
\includegraphics[width=2.2in]{Figures/Nonl_Spring_Sin/nonl_spring_sin_compare_dnn_elm_error_hist.pdf}(b)
}
\caption{Comparison between locELM and PINN (nonlinear spring):
Time histories of (a) the solutions and (b) their absolute errors,
computed using PINN~\cite{RaissiPK2019} with the Adam optimizer
and using locELM with the NLSQ-perturb method.
The temporal domain size is $t_f=2.5$.
%PINN architecture: [1, 10, 10, 10, 1], tanh activation, 500 uniform
%collocation points as input, 20000 epochs.
%Learning rates: 0.01 for first 2000 epochs, 0.001 for next 4000 epochs,
%0.0001 for next 9000 epochs, $1e-5$ for next 5000 epochs.
%locELM: architecture [1, 100, 1], tanh activation, 1 time block, 1 sub-domain/block,
%60 uniform collocation points/sub-domain, rand-mag = 5.0.
}
\label{fg_ns_4}
\end{figure}

\begin{table}
\centering
\begin{tabular}{lllll}
\hline
method & maximum error & rms error & epochs/iterations & training time (seconds) \\
PINN (Adam) & $1.21e-4$ & $6.71e-5$ & $20,000$ & $26.3$ \\
locELM (NLSQ-perturb) & $2.82e-11$ & $1.12e-11$ & $48$ & $0.34$ \\
\hline
\end{tabular}
\caption{Nonlinear spring: Comparison between locELM and PINN in terms of
the maximum/rms errors in the domain, the number of epochs or nonlinear iterations in
the training, and the network training time.
The problem settings and the simulation parameters correspond to
those of Figure \ref{fg_ns_4}.
}
\label{tab_ns_5}
\end{table}


Let us next compare the current locELM method with PINN~\cite{RaissiPK2019}
for solving the nonlinear spring equation.
Figure \ref{fg_ns_4} shows a comparison of the time histories of
the solutions and their absolute errors obtained using
PINN with the Adam optimizer and using the current locELM method
with NLSQ-perturb.
In this group of tests, the temporal domain size is set to $t_f=2.5$.
In the PINN simulation, the neural network consists of
an input layer of one node (representing $t$),
three hidden layers with a width of $10$ nodes and
the $\tanh$ activation function in each layer,
and an output layer of one node (representing the solution $u$).
The input data consists of $500$ uniform collocation points
from the domain $[0, t_f]$. The neural network has been
trained using the Adam optimizer for $20,000$ epochs,
with the learning rate decreasing from $0.01$ at the beginning
to $1e-5$ at the end of the training.
In the locELM simulation, we employ a single time block ($N_b=1$)
in the domain, $Q=60$ uniform collocation points within the
time block, $M=100$ training parameters in the time block,
a single hidden layer in the neural network, and $R_m=5.0$
for generating the random weight/bias coefficients.
%in the hidden layer of the neural network.
Figure \ref{fg_ns_4} demonstrates that both PINN and locELM
have captured the solution accurately, but
the error of the locELM result is considerably
smaller than that of PINN.

Table \ref{tab_ns_5} provides a further comparison
between locELM and PINN in terms of their accuracy and
computational cost.
The problem settings and the simulation parameters here correspond to
those of Figure \ref{fg_ns_4}.
We have listed the maximum and rms errors of the PINN and locELM
results in the overall domain, the number of epochs or nonlinear
iterations in the training, and the network training time.
The data demonstrate that the current locELM method is more accurate
than PINN by about six orders of magnitude, while its network training
time is smaller than that of PINN by nearly two orders of magnitude.





