
  
\subsubsection{Time-Stepping Scheme Based on Local Extreme Learning Machines}
\label{sec:stepping}

For certain linear differential operators $L$, in order to
conduct longer-time simulations,
one can also discretize the equation \eqref{equ_10a} in time first,
and then solve the resultant semi-discretized equation based
on local extreme learning machines. This leads to locELM-based
time-stepping schemes analogous to those in the context of
classical numerical methods.

To illustrate this idea, in this subsection
we specifically consider the diffusion
equation in two spatial dimensions (plus time) and discuss a
locELM-based time-stepping scheme for the corresponding
initial/boundary value problem.
Consider the diffusion equation
\begin{equation}\label{equ_22}
  \frac{\partial u}{\partial t} = \nu \left(
  \frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2}
  \right)
  + f(x,y,t),
\end{equation}
where $u(x,y,t)$ is the unknown field function to be solved for,
the constant $\nu$ denotes the diffusion coefficient, and
$f(x,y,t)$ is a prescribed source term.
It is supplemented by the following Dirichlet boundary condition and
initial condition,
\begin{align}
  &
  u(x,y,t) = g(x,y,t), \quad
  \text{for} \ (x,y) \ \text{on spatial boundaries}, \\
  &
  u(x,y,0) = h(x,y), \label{equ_24}
\end{align}
where $g(x,y,t)$ denotes the Dirichlet data on the spatial boundary,
and $h(x,y)$ denotes the initial field distribution.

% temporal discretization

Let $k\geqslant 0$ denote the time step index, $(\cdot)^k$
denote the variable $(\cdot)$ at time step $k$, and
$\Delta t$ denote the time step size.
We discretize the equations \eqref{equ_22}--\eqref{equ_24}
in time as follows:
\begin{subequations}
  \begin{align}
    &
    \frac{
    \gamma_0 u^{k+1} - \hat{u} }{\Delta t}
    = \nu \left(
    \frac{\partial^2 u^{k+1}}{\partial x^2}
    + \frac{\partial^2 u^{k+1}}{\partial y^2}
    \right)
    + f^{k+1}(x,y), \label{equ_25a}
    \\
    &
    u^{k+1}(x,y) = g^{k+1}(x,y), \quad
    \text{on spatial boundary}, \label{equ_25b}
    \\
    &
    u^0(x,y) = h(x,y),
  \end{align}
\end{subequations}
where $g^{k+1} = g(x,y,t^{k+1})$ with $t^{k+1}=(k+1)\Delta t$,
$f^{k+1} = f(x,y,t^{k+1})$, and
\begin{equation}
  \gamma_0 = \left\{
  \begin{array}{ll}
    1, & \text{if}\ k=0, \\
    3/2, & \text{if}\ k\geqslant 1;
  \end{array}
  \right. \qquad
  \hat{u} = \left\{
  \begin{array}{ll}
    u^k, & \text{if}\ k=0, \\
    2u^k-\frac12 u^{k-1}, & \text{if}\ k\geqslant 1.
  \end{array}
  \right.
\end{equation}
Note that the time derivative is discretized
by the second-order backward differentiation formula (BDF2),
except in the first time step ($k=0$), where the first-order
backward differentiation formula (BDF1) is used.
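As a quick numerical check of this BDF1/BDF2 combination, the following Python sketch (illustrative only; the function names are hypothetical, not from any accompanying code) evaluates the difference quotient $(\gamma_0 u^{k+1}-\hat{u})/\Delta t$ for the test function $u(t)=e^t$ and confirms the expected first- and second-order truncation errors.

```python
import numpy as np

def bdf_quotient(u, k, dt):
    """(gamma0 * u[k+1] - uhat) / dt, matching the scheme above:
    BDF1 at the first step (k = 0), BDF2 afterwards (k >= 1)."""
    if k == 0:
        gamma0, uhat = 1.0, u[k]
    else:
        gamma0, uhat = 1.5, 2.0 * u[k] - 0.5 * u[k - 1]
    return (gamma0 * u[k + 1] - uhat) / dt

def truncation_error(dt, k):
    # exact solution u(t) = exp(t); the quotient approximates u'(t^{k+1})
    t = np.arange(k + 2) * dt
    u = np.exp(t)
    return abs(bdf_quotient(u, k, dt) - np.exp(t[k + 1]))

# halving dt halves the BDF1 error (1st order)
# and quarters the BDF2 error (2nd order)
assert 1.8 < truncation_error(1e-2, 0) / truncation_error(5e-3, 0) < 2.2
assert 3.7 < truncation_error(1e-2, 1) / truncation_error(5e-3, 1) < 4.3
```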
Equation \eqref{equ_25a} can be written as
\begin{equation}\label{equ_26}
  \gamma_0 u^{k+1} - \nu\Delta t\left(
  \frac{\partial^2 u^{k+1}}{\partial x^2}
  + \frac{\partial^2 u^{k+1}}{\partial y^2}
  \right)
  = f^{k+1}\Delta t + \hat{u}
  = R(x,y).
\end{equation}
This equation, together with the boundary condition \eqref{equ_25b},
constitutes a linear boundary value problem for $u^{k+1}(x,y)$,
which has the same form as \eqref{equ_1}--\eqref{equ_2}.
It can therefore be solved with the method based on local extreme learning
machines from Section~\ref{sec:steady}.

More specifically,
we consider the spatial domain
$
\Omega = \{
(x,y)\ |\ x\in[a_1,b_1], \ y\in[a_2,b_2]
\}
$
for this problem.
Following the notation and settings of Section \ref{sec:steady},
we use $N_x$ and $N_y$ to denote the numbers of
sub-domains along the $x$ and $y$ directions, respectively,
with the sub-domain boundary coordinates
given by $X_m$ ($0\leqslant m\leqslant N_x$)
and $Y_n$ ($0\leqslant n\leqslant N_y$).
On each sub-domain $e_{mn}$ ($0\leqslant m\leqslant N_x-1$,
$0\leqslant n\leqslant N_y-1$),
$(x_p^{e_{mn}},y_q^{e_{mn}})$ ($0\leqslant p\leqslant Q_x-1$,
$0\leqslant q\leqslant Q_y-1$) denote the $Q_xQ_y$ collocation
points within. Again let $M$ denote the number of
nodes in the last hidden layer of each local neural network,
$V_j^{e_{mn}}(x,y)$ ($1\leqslant j\leqslant M$)
denote the output of the last hidden layer of the local network
for sub-domain $e_{mn}$, and $w_j^{e_{mn}}$ denote the training parameters
of the corresponding output layer.
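To make the role of $V_j^{e_{mn}}$ and its derivatives concrete, the following Python sketch evaluates the hidden-layer outputs of a minimal single-hidden-layer local network with $\tanh$ activation, together with the closed-form second derivatives needed to assemble the discrete equations. This is an illustrative toy (the variable names and the single-layer structure are assumptions, not the paper's implementation); the key point is that, with the hidden-layer weights fixed at random values, the derivatives of $V_j$ are available analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                          # nodes in the (last) hidden layer
W = rng.normal(size=(2, M))    # fixed random hidden weights (not trained)
b = rng.normal(size=M)         # fixed random hidden biases

def hidden(x, y):
    """V_j(x, y) = tanh(w1_j*x + w2_j*y + b_j), shape (M,)."""
    return np.tanh(W[0] * x + W[1] * y + b)

def hidden_xx(x, y):
    """Analytic second x-derivative of V_j:
    d2/dz2 tanh(z) = -2 tanh(z) (1 - tanh(z)^2), chained with w1_j^2."""
    t = hidden(x, y)
    return -2.0 * t * (1.0 - t**2) * W[0] ** 2

def hidden_yy(x, y):
    t = hidden(x, y)
    return -2.0 * t * (1.0 - t**2) * W[1] ** 2

# cross-check the analytic d2/dx2 against a central finite difference
x0, y0, h = 0.3, -0.2, 1e-4
fd = (hidden(x0 + h, y0) - 2 * hidden(x0, y0) + hidden(x0 - h, y0)) / h**2
assert np.allclose(hidden_xx(x0, y0), fd, atol=1e-5)
```

With such features in hand, the terms $\gamma_0 V_j - \nu\Delta t\,(\partial^2 V_j/\partial x^2 + \partial^2 V_j/\partial y^2)$ at the collocation points become the entries of the coefficient matrix, and only the output-layer weights $w_j^{e_{mn}}$ remain unknown.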

%To simplify the presentation, let
%\begin{equation}
%  \phi(x,y) = u^{k+1}(x,y), \quad
%  R(x,y) = \Delta t f^{k+1}(x,y) + \left(2u^{k} -\frac12 u^{k-1}  \right).
%\end{equation}
Then on sub-domain $e_{mn}$, equation \eqref{equ_26} becomes
\begin{equation}\label{equ_28}
  \begin{split}
    &
  \sum_{j=1}^M \left[
    \gamma_0 V_j^{e_{mn}}(x_p^{e_{mn}},y_q^{e_{mn}})
    - \nu\Delta t\left.
    \left(
    \frac{\partial^2 V_j^{e_{mn}}}{\partial x^2}
    + \frac{\partial^2 V_j^{e_{mn}}}{\partial y^2}
    \right)
    \right|_{(x_p^{e_{mn}},y_q^{e_{mn}})}
    \right] w_j^{e_{mn}}
  = R^{e_{mn}}(x_p^{e_{mn}},y_q^{e_{mn}}), \\
  & \qquad \text{for} \
  0\leqslant m\leqslant N_x-1, \
  0\leqslant n\leqslant N_y-1, \
  0\leqslant p\leqslant Q_x-1, \
  0\leqslant q\leqslant Q_y-1, \
  \end{split}
\end{equation}
where $R^{e_{mn}}(x,y)$ and $u^{k+1,e_{mn}}$ represent $R(x,y)$ and $u^{k+1}(x,y)$
restricted to
the sub-domain $e_{mn}$, respectively, and
\begin{equation}
  u^{k+1,e_{mn}}(x,y) = \sum_{j=1}^M V_{j}^{e_{mn}}(x,y) w_j^{e_{mn}}.
\end{equation}
%
% BCs
On the domain boundaries $x=a_1$ or $b_1$ and $y=a_2$ or $b_2$,
the boundary condition \eqref{equ_25b} reduces to
\begin{subequations}
  \begin{align}
    &
    \sum_{j=1}^M V_{j}^{e_{0n}}\left(a_1,y_q^{e_{0n}} \right)w_j^{e_{0n}} =
    g\left(a_1,y_q^{e_{0n}},t^{k+1} \right),
    \ \ 0\leqslant n\leqslant N_y-1, \ 0\leqslant q\leqslant Q_y-1;
    \\
    &
    \sum_{j=1}^M V_{j}^{e_{mn}}\left(b_1,y_q^{e_{mn}} \right)w_j^{e_{mn}} =
    g\left(b_1,y_q^{e_{mn}}, t^{k+1} \right),
    \ \ m=N_x-1, \ 0\leqslant n\leqslant N_y-1, \ 0\leqslant q\leqslant Q_y-1;
    \\
    &
    \sum_{j=1}^M V_{j}^{e_{m0}}\left(x_p^{e_{m0}},a_2 \right)w_j^{e_{m0}} =
    g\left(x_p^{e_{m0}},a_2, t^{k+1} \right),
    \ \ 0\leqslant m\leqslant N_x-1, \ 0\leqslant p\leqslant Q_x-1;
    \\
    &
    \sum_{j=1}^M V_{j}^{e_{mn}}\left(x_p^{e_{mn}},b_2 \right)w_j^{e_{mn}} =
    g\left(x_p^{e_{mn}},b_2, t^{k+1} \right),
    \ \ n=N_y-1, \ 0\leqslant m\leqslant N_x-1, \ 0\leqslant p\leqslant Q_x-1.
    \label{equ_30d}
  \end{align}
\end{subequations}
Lastly, we impose $C^1$ continuity conditions for $u^{k+1}(x,y)$
across the sub-domain boundaries, and these conditions are
given by the equations \eqref{equ_8a}--\eqref{equ_9b}.
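To illustrate how such continuity conditions enter the global linear system, the sketch below builds the two extra rows generated by one shared boundary point in a simplified one-dimensional setting (hypothetical code, not from the paper): the values and first derivatives of the two neighboring local representations are forced to match, with a zero right-hand side.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 6
W1, b1 = rng.normal(size=M), rng.normal(size=M)  # hidden weights, sub-domain e1
W2, b2 = rng.normal(size=M), rng.normal(size=M)  # hidden weights, sub-domain e2

def V(W, b, x):
    """Hidden-layer outputs of a 1D tanh local network."""
    return np.tanh(W * x + b)

def dV(W, b, x):
    """Their first derivatives: d/dx tanh(Wx+b) = (1 - tanh^2) * W."""
    return (1.0 - np.tanh(W * x + b) ** 2) * W

X = 0.5  # shared sub-domain boundary point
# unknown vector is [w^{e1}; w^{e2}]; continuity contributes two rows:
row_c0 = np.concatenate([V(W1, b1, X), -V(W2, b2, X)])    # C^0: values match
row_c1 = np.concatenate([dV(W1, b1, X), -dV(W2, b2, X)])  # C^1: slopes match
C = np.vstack([row_c0, row_c1])  # appended to the global matrix, rhs = 0
assert C.shape == (2, 2 * M)
```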

We march in time step by step.
Within each time step, the training parameters $w_j^{e_{mn}}$
of the local neural networks are determined by solving the
linear system consisting of equations \eqref{equ_28}--\eqref{equ_30d}
and \eqref{equ_8a}--\eqref{equ_9b} for its least squares
solution. This is a linear least squares problem and can be solved
efficiently. It should be noted that the coefficient matrix
of the system is time-independent, so the matrix, as well as
its pseudo-inverse, can be pre-computed before the time marching starts.
When the pseudo-inverse is used to compute the least squares solution,
the main operation within each time step is a single matrix-vector
multiplication, apart from the computation of the right-hand side.
The computational cost per time step of this scheme is therefore very low.
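The precompute-then-march structure can be sketched as follows (illustrative Python; a random overdetermined system stands in for the assembled coefficient matrix of equations \eqref{equ_28}--\eqref{equ_30d} and the continuity conditions, since only the solve structure matters here). The pseudo-inverse is formed once before time marching, and each step reduces to a matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(1)
n_eqs, n_params = 60, 20  # rows: collocation + boundary + continuity equations
A = rng.normal(size=(n_eqs, n_params))  # time-independent coefficient matrix

A_pinv = np.linalg.pinv(A)  # pre-computed once, before the time loop

def step(rhs):
    """One time step: least squares solution via the cached pseudo-inverse."""
    return A_pinv @ rhs

# the per-step solution matches a full least squares solve of A w = rhs
rhs = rng.normal(size=n_eqs)
w_direct = np.linalg.lstsq(A, rhs, rcond=None)[0]
assert np.allclose(step(rhs), w_direct)
```

In an actual time loop only `rhs` changes from step to step (it carries $f^{k+1}\Delta t + \hat{u}$ and the boundary data at $t^{k+1}$), so each step costs one matrix-vector multiplication.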


% comment on form of semi-discretized equation, scaling of equation
%   on least squares solution, instability if use \frac{\gamma_0}/dt u^{k+1}+...

\begin{remark}
  \label{rem_3}
  The form of the semi-discretized equation~\eqref{equ_26}, and hence
  the fully discretized form~\eqref{equ_28}, is crucial to the stability
  of the current scheme. We observe that the following equivalent form,
  \begin{equation}\label{equ_31}
    \frac{\gamma_0}{\Delta t} u^{k+1} - \nu\left(
    \frac{\partial^2 u^{k+1}}{\partial x^2}
    + \frac{\partial^2 u^{k+1}}{\partial y^2}
    \right)
    = f^{k+1} + \frac{\hat{u}}{\Delta t},
  \end{equation}
  when combined with the boundary/initial conditions and $C^1$ continuity
  conditions, is unstable. Equation \eqref{equ_31} is equation \eqref{equ_26}
  scaled by the factor $\frac{1}{\Delta t}$, which can be large
  for a typical $\Delta t$ value.
  Even though such a scaling does not alter the solution theoretically,
  re-scaling a subset of the equations in the system does affect
  the least squares solution.
  The scaling in equation \eqref{equ_31} appears to cause
  the coefficients from the equation and those resulting from the boundary/initial
  conditions and the $C^1$ continuity conditions to become ``imbalanced'',
  leading to an instability in the resultant time-stepping scheme.

\end{remark}
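The sensitivity described in Remark~\ref{rem_3} can be reproduced in isolation: re-scaling a subset of the rows of an inconsistent overdetermined system re-weights those equations in the least squares sense and shifts the minimizer. The following toy example (hypothetical, not from the paper's code) scales half of the rows by $1/\Delta t$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 10))  # stand-in for the assembled system matrix
b = rng.normal(size=40)

# baseline least squares solution of A w = b
w0 = np.linalg.lstsq(A, b, rcond=None)[0]

# scale the first 20 equations by 1/dt, as in passing from (26) to (31)
dt = 1e-3
A2, b2 = A.copy(), b.copy()
A2[:20] /= dt
b2[:20] /= dt
w1 = np.linalg.lstsq(A2, b2, rcond=None)[0]

# each scaled equation is algebraically equivalent to the original one,
# yet the least squares minimizer changes: the scaled rows dominate the
# residual, so the remaining equations are effectively ignored
diff = np.linalg.norm(w1 - w0)
```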

% cost of time-stepping and time-marching

\begin{remark}
  \label{rem_4}
  The time-stepping scheme and the block time-marching scheme with
  the current method are feasible for long-time simulations,
  because the computational cost involved in each time step or
  each time block is low. The training of the local neural networks
  within each time step or each time block
  is essentially a linear least squares solution process, which
  can be performed very efficiently.
  A comparison between the time-stepping scheme and the block time-marching
  scheme for the same problems indicates that the block time-marching
  scheme is usually more efficient.
  This is because the time step size $\Delta t$ in the time-stepping scheme
  cannot be very large in typical simulations, owing to accuracy and
  stability constraints.
  On the other hand, the time block size in the block time-marching scheme
  can be large, e.g.~several orders of magnitude larger than
  the typical time step size.
  We observe that, to reach the same physical time in simulations,
  the block time-marching scheme can take significantly less time
  than the time-stepping scheme, with comparable accuracy in the simulation
  results.

\end{remark}


% comment on: time-stepping scheme may not work for all equations
%   e.g. wave equation
%   time-stepping is feasible because of the low cost of each time step
%   compare cost with block time-marching


\begin{remark}
  \label{rem_5}
  The time-stepping strategy does not seem to work with equations
  involving certain linear differential operators. The first- and
  second-order wave equations are such examples.
  The time-stepping scheme (with e.g.~Newmark and BDF-type methods)
  for these equations, together with
  the least squares solution within a time step, is observed to be
  unstable.
  On the other hand, the block time-marching scheme does not have such
  an issue. It can produce stable and accurate results for the wave
  equations in long-time simulations, which will be demonstrated
  by numerical experiments in Section \ref{sec:tests}. 
  

\end{remark}

% what else to discuss here?


%%%%%%%%%%%%%%%%%%%%%%
\begin{comment}
  
We have implemented two methods for solving this system of
equations. The first method is
%We seek the least squares solution for this system. We employ
the same nonlinear least squares
method~\cite{BranchCL1999} as in Section \ref{sec:nonl_steady},
available as the ``least\_squares'' routine in
the scipy.optimize package.
This method typically works very well, and exhibits a smooth convergence
toward the solution. 
However, we have observed that
in certain cases, especially in longer-time simulations
(see the block time-marching below),
this method at times tends to be attracted to
and trapped in a local
minimum solution. While the method indicates that the nonlinear iterations
have converged (based on any of the several stopping criteria within), 
the norm of the converged equation residuals turns out to be not quite small.
In the event this takes place, the obtained
result is observed to contain pronounced errors and the simulation completely
loses accuracy from that point onward.
This has been a major issue with the nonlinear least squares
computations using this method.
To alleviate this problem and make the method more robust,
we have incorporated a sub-iteration procedure when invoking the nonlinear
least squares routine in our implementation.
In case the nonlinear least squares routine converges with
the converged equation-residual norm exceeding a threshold,
the sub-iteration procedure will be triggered. Within
each sub-iteration a random initial guess for the solution
is generated
and fed into the nonlinear least squares routine.
This strategy turns out to be very effective in practice.
The true solution can typically be obtained with only
a few (e.g.~$4$ or $5$) sub-iterations in such situations.
This sub-iteration procedure is critical to this method
for long-time simulations.

The second method for solving the nonlinear system
is a combination of Newton-like iterations
with linear least squares solutions.
The convergence of this method is not
as smooth as that of the first method, but it is less likely
to be trapped in local minimum solutions in longer-time nonlinear
simulations.
To outline the idea of the method, let
\begin{equation}
  {\mbs G}(\mbs W) = 0, \quad
  \text{where}\ \mbs G=(G_1,G_2,\dots,G_m), \
  \mbs W=(w_1,w_2,\dots,w_n)
\end{equation}
denote a system of $m$ nonlinear algebraic equations about
$n$ variables $\mbs W$. Let the superscript in ${\mbs W}^{(k)}$ denote
the approximation of the solution at the $k$-th iteration, and
$\Delta \mbs W$ denote the solution increment.
We update the solution iteratively as follows in a way similar to the
Newton's method,
\begin{align}
  &
  \mbs J(\mbs W^{(k)})\Delta \mbs W = -\mbs G(\mbs W^{(k)}), \label{equ_39}
  \\
  &
  \mbs W^{(k+1)} = \mbs W^{(k)} + \Delta \mbs W,
\end{align}
where $\mbs J(\mbs W^{(k)})$ is the Jacobian matrix given by
$
J_{ij} = \frac{\partial G_i}{\partial w_j}
$
($1\leqslant i\leqslant m$, $1\leqslant j\leqslant n$)
and evaluated at $\mbs W^{(k)}$.
The departure point from the standard Newton method lies in
that the linear algebraic system
represented by \eqref{equ_39} involves a non-square coefficient
matrix (Jacobian matrix).
We seek a least squares solution to the linear system \eqref{equ_39},
and solve this system for
the increment $\Delta \mbs W$ using the linear least squares routine
from LAPACK.

\end{comment}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

