\section{Domain Decomposition and Local Extreme Learning Machines}
\label{sec:method}

\subsection{Local Extreme Learning Machines (locELM)
  for Representing Functions }
\label{sec:loc_elm}

% ideas for representing functions with local extreme learning machines
% C^k continuity,
% NN representation with one or a few hidden layers,
%   wide second to last layer
%   random coefficients for hidden layers
%   least squares solve for coefficients of last layer
%
% emphasis:
%  locality, local representations
%  different domains logically coupled
%  accuracy, more accurate
%  lower computational cost, fast
%

Consider a domain $\Omega$ in $d$ ($d=1$, $2$ or $3$) dimensions,
where one of the dimensions may denote time, so that $\Omega$ in general
can be a spatial-temporal domain.
We consider a function $f(\mbs x)$ ($\mbs x\in\Omega$) defined on this domain,
and would like to represent this function using neural networks.

We partition $\Omega$ into $N_e$ ($N_e\geqslant 1$) non-overlapping
sub-domains, % (or elements),
\begin{equation*}
  \Omega = \Omega_1\cup\Omega_2\cup \dots \cup\Omega_{N_e},
\end{equation*}
where $\Omega_i$ denotes the $i$-th sub-domain.
If $\Omega_i$ and $\Omega_j$ ($1\leqslant i,j\leqslant N_e$)
share a common boundary, we will denote this common boundary
by $\Gamma_{ij}$.

We will represent $f(\mbs x)$, in a spirit analogous
to the finite elements or
spectral elements~\cite{KarniadakisS2005,ZhengD2011,DongY2009,DongS2012},
locally on the sub-domains by local neural networks.
More specifically, on each sub-domain $\Omega_i$ ($1\leqslant i\leqslant N_e$)
we represent $f(\mbs x)$ by a shallow feed-forward neural network~\cite{GoodfellowBC2016}.
Here ``shallow'' means that
each local neural network contains only a small number
(e.g.~one, two or perhaps three) of hidden layers,
apart from the input layer (representing $\mbs x$) and the
output layer (representing $f(\mbs x)$, restricted to $\Omega_i$).
%Figure \ref{fig:loc_elm} illustrates the configuration with a cartoon.

Let $f_i(\mbs x)$ ($1\leqslant i\leqslant N_e$) denote the function
$f(\mbs x)$ restricted to $\Omega_i$.
On any common boundary $\Gamma_{ij}$ between $\Omega_i$
and $\Omega_j$ (for all $1\leqslant i,j\leqslant N_e$),
we impose the requirement that $f_i(\mbs x)$ and
$f_j(\mbs x)$ satisfy the $C^{\mbs k}$ continuity conditions
with an appropriate $\mbs k=(k_1,k_2,\dots,k_d)$.
%
In other words, their function values and partial derivatives
up to the order $k_s$ ($1\leqslant s\leqslant d$) should be continuous across the
sub-domain boundary in the $s$-th direction.
The order $\mbs k$ in the $C^{\mbs k}$ continuity is a user-defined parameter.
%and in general it can assume different values in different
%coordinate directions in $d$ dimensions.
When solving differential equations, one can determine
$\mbs k$ for a specific coordinate direction
based on the order of the differential equation along that
direction. For example, if the highest derivative with
respect to the coordinate $x_s$ ($1\leqslant s\leqslant d$)
involved in the equation is $m$, one would typically
impose $C^{m-1}$ continuity to the solution %with $k=m-1$ (or higher)
on the sub-domain boundary along the $s$-th direction.
Thanks to these $C^{\mbs k}$ continuity conditions,
the local  neural networks for the sub-domains,
while physically separated, are coupled with one another
logically, and need to be trained together in a coupled fashion.
The local neural networks collectively constitute the representation
of the function $f(\mbs x)$ on the overall domain $\Omega$.


% network coefficients, training parameters

We impose further requirements on 
the local neural networks.
%for each sub-domain.
Suppose a particular layer in the local neural network
contains $n$ nodes, and the previous layer contains $m$ nodes.
Let $\phi_i(\mbs x)$ ($1\leqslant i\leqslant m$) denote the output
of the previous layer,
and $\varphi_i(\mbs x)$ ($1\leqslant i\leqslant n$)
denote the output of this layer.
Then the logic of this layer is represented 
by~\cite{GoodfellowBC2016},
\begin{equation}
  \varphi_i(\mbs x) = \sigma\left(\sum_{j=1}^m \phi_j(\mbs x)w_{ji} + b_i\right),
  \quad 1\leqslant i\leqslant n,
\end{equation}
where the constants $w_{ji}$ and $b_i$ ($1\leqslant i\leqslant n$,
$1\leqslant j\leqslant m$) are the weight and bias coefficients
associated with this layer, and $\sigma(\cdot)$ is
the activation function of this layer and is in general
nonlinear.
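As a concrete illustration, the layer logic above can be written in a few lines of NumPy (a sketch with illustrative shapes and $\tanh$ as the activation; the names are not part of the actual implementation):

```python
# The layer logic above: phi holds the m outputs of the previous layer at
# each evaluation point, w and b are the weight and bias coefficients of
# this layer of n nodes, and sigma is the activation function.
import numpy as np

def dense_layer(phi, w, b, sigma=np.tanh):
    # phi: (num_points, m), w: (m, n), b: (n,)  ->  (num_points, n)
    return sigma(phi @ w + b)

# Illustrative shapes only: m = 3 inputs, n = 5 nodes, 10 evaluation points.
rng = np.random.default_rng(0)
phi = rng.uniform(-1.0, 1.0, (10, 3))
w = rng.uniform(-1.0, 1.0, (3, 5))
b = rng.uniform(-1.0, 1.0, (5,))
varphi = dense_layer(phi, w, b)
```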
We assume the following for the local neural networks:
\begin{itemize}

\item
  The weight and bias coefficients for all the hidden layers are pre-set
  to uniform random values generated on the interval $[-R_m,R_m]$,
  where $R_m>0$ is a user-defined constant parameter.
  Once these coefficients are set randomly,
  they are fixed throughout the training and computation. These weight/bias
  coefficients are not adjustable, and they are not
  training parameters of the neural network.
  We hereafter refer to $R_m$ as the maximum magnitude of the
  random coefficients of the neural network.

\item
  The last hidden layer, i.e.~the layer before the output layer, can be wide.
  In other words, this layer may contain a large number of nodes.
  We use $M$ to denote the number of nodes in the last hidden layer
  of each local neural network.

\item
  The output layer contains no bias (i.e.~$b_i=0$)
  and no activation function. In other words, the output layer
  is linear, i.e.~$\sigma(x)=x$.
  The weight coefficients in the output layers of the local neural networks
  are adjustable.
  The collection of these weight coefficients
  constitutes the  training parameters of the overall neural network.
  Therefore, the number of training parameters in each local neural network
  equals $M$, the number of nodes in the last hidden layer of
  the local neural network.

\item
  The set of training parameters for the overall neural network
  is to be determined and set by a linear or nonlinear least squares
  computation, not by back-propagation-type algorithms.
  %by some flavor of the stochastic gradient descent-based approach
  %or its variants as
  %commonly-used in the literature for training deep neural networks.
  
\end{itemize}
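These assumptions can be summarized in a short sketch for a hypothetical local network with a single hidden layer and one input dimension (the names and sizes below are illustrative, not the paper's implementation):

```python
# Hidden weight/bias coefficients are drawn uniformly from [-R_m, R_m] and
# then frozen; the M output-layer weights are the only training parameters,
# to be determined later by a least squares computation.
import numpy as np

R_m = 1.0    # maximum magnitude of the random hidden-layer coefficients
M = 50       # width of the last hidden layer = number of training parameters

rng = np.random.default_rng(1)
W_hidden = rng.uniform(-R_m, R_m, (1, M))   # fixed, not trainable
b_hidden = rng.uniform(-R_m, R_m, (M,))     # fixed, not trainable

def last_hidden_output(x):
    # V_j(x), j = 1..M: known functions once the random coefficients are set
    return np.tanh(x @ W_hidden + b_hidden)

x = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
V = last_hidden_output(x)                   # shape (20, M)
# Linear output layer, no bias: u(x) = sum_j V_j(x) * w_j
w_out = np.zeros(M)                         # the trainable parameters
u = V @ w_out
```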

\begin{remark}\label{rem_aa}
When a subset of the above requirements is imposed on a
single global neural network,
containing a single hidden layer, for the entire domain, the resultant
network, when trained with a linear least squares method,
is known as an extreme learning
machine (ELM)~\cite{HuangZS2006}.
In the current work we follow this terminology, and will refer to the local
neural networks presented here as local
extreme learning machines (or locELM).

\end{remark}


Let $N$ ($N\geqslant 1$) denote the number of nodes in
the output layer of the local neural networks.
Based on the above
assumptions, on the sub-domain $\Omega_s$ ($1\leqslant s\leqslant N_e$)
we have the relation,
\begin{equation}
  u_{i}^{s}(\mbs x) = \sum_{j=1}^M V_j^{s}(\mbs x) w^{s}_{ji},
  \quad \mbs x \in \Omega_s, \ \
  1\leqslant i\leqslant N,
  \label{equ_a}
\end{equation}
where $V_j^s(\mbs x)$ ($1\leqslant j\leqslant M$) denote
the output of the last hidden layer, $u_i^{s}(\mbs x)$
denote the components of the output function
of the network, $w_{ji}^s$
are the training parameters on $\Omega_s$, and $M$ denotes the number of
nodes in the last hidden layer.
The function
\begin{equation}
f_s(\mbs x) = (u_1^s, u_2^s, \dots, u_N^s)
\end{equation}
is the local representation of $f(\mbs x)$ on the sub-domain $\Omega_s$.

It should be noted that the output
functions of the last hidden layer, $V_j^s(\mbs x)$ ($1\leqslant j\leqslant M$),
are known and are fixed throughout the computation.
%
%%%%%%%%%%%%%%%%%%%%%%
\begin{comment}
Let $R_m>0$ denote a user-defined constant parameter.
For each local neural network,
in the pre-processing stage
we generate a set of random numbers on $[-R_m,R_m]$, and
assign these random values to the weight and bias coefficients
in the hidden layers of local neural network.
Once the weight/bias coefficients in the hidden layers
of the local neural networks
have been randomly set, they will be fixed throughout the computation.
So the parameter $R_m$ represents the maximum magnitude
of the random weight/bias coefficients in the hidden layers of
the local neural networks.
\end{comment}
%%%%%%%%%%%%%%%%%%%%%
%
Since the weight/bias coefficients in the hidden layers are pre-set
to random values on $[-R_m,R_m]$ and are fixed,
$V_j^s(\mbs x)$ can be pre-computed
by a forward evaluation of the local neural network (up to
the last hidden layer)
against the input $\mbs x$ data.
The first, second, and higher-order derivatives of $V_j^s(\mbs x)$
with respect to the input $\mbs x$ can then be computed
by auto-differentiations.
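For a single $\tanh$ hidden layer, the derivative of $V_j$ even has the closed form $(1-\tanh^2(w_jx+b_j))\,w_j$, which offers a simple check on the auto-differentiation. The sketch below (illustrative sizes, not the actual implementation) verifies the closed form against a central finite difference:

```python
# Closed-form derivative of V_j(x) = tanh(w_j x + b_j) versus a central
# finite difference; in the actual implementation the derivatives are
# computed by auto-differentiation instead.
import numpy as np

rng = np.random.default_rng(2)
R_m, M = 1.0, 10
w = rng.uniform(-R_m, R_m, (1, M))
b = rng.uniform(-R_m, R_m, (M,))

def V(x):
    return np.tanh(x @ w + b)

def dV_dx(x):
    return (1.0 - np.tanh(x @ w + b) ** 2) * w

x = np.array([[0.3]])
eps = 1e-6
fd = (V(x + eps) - V(x - eps)) / (2.0 * eps)   # central finite difference
```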


The collection of local representations $f_s(\mbs x)$ ($1\leqslant s\leqslant N_e$),
with $C^{\mbs k}$ continuity imposed on the sub-domain boundaries and
with $w_{ji}^s$ ($1\leqslant j\leqslant M$, $1\leqslant i\leqslant N$,
$1\leqslant s\leqslant N_e$) as the training parameters,
form the set of trial functions for representing the function
$f(\mbs x)$. Hereafter, we will refer to this representation as
the locELM representation of a function. 
Once the data for $f(\mbs x)$ or the data
for the governing equations that describe $f(\mbs x)$ are given,
the adjustable parameters $w_{ji}^s$ can be trained and determined by a
linear or nonlinear least squares computation.

\begin{remark}
\label{rem:rem_1}
In the locELM representation, the hyper-parameters for the local
neural networks associated with different sub-domains
(e.g.~depths, widths and activation
functions of the hidden layers) can in principle assume different values.
This can allow one to place more degrees of freedom locally in
regions where the field function may be more complicated and thus require
more resolution.
For simplicity of implementation, however, in the current work
we will employ the same hyper-parameters for all the local
neural networks for different sub-domains.

\end{remark}


In the following sub-sections we focus on how to use local
extreme learning machines to represent the solutions to
ordinary or partial differential equations (ODE/PDE), and discuss how to train
the overall neural network by least squares computations.
We consider two cases: (i) linear differential equations,
%time-independent linear differential equations,
%(ii) time-dependent linear differential equations,
and (ii) nonlinear differential equations, and
discuss how to treat them individually.
%For linear differential equations,
Apart from the basic algorithm,
we develop a block time-marching scheme
%together with the locELM method,
for long-time simulations of time-dependent linear/nonlinear PDEs.
%
In the presentations we use two spatial dimensions,
plus time if the problem is time-dependent, as examples.
The formulations can be reduced to one spatial dimension
or extended to higher spatial dimensions in a straightforward fashion.
For simplicity we concentrate on rectangular spatial-temporal domains
in the current work.

% why local representation?
% what is the benefit or advantage of local ELM?

%\subsection{Function Approximation}

\subsection{Linear Differential Equations}

\subsubsection{Time-Independent Linear Differential Equations}
\label{sec:steady}

% Lu = f, in domain
% Bu = g, on boundary
%   L and B are linear operators
% too broad,
% restrict to second-order for all directions, with Dirichlet BCs
% restrict to rectangular domains, for more concrete discussions
% partition of domain into regular sub-domains

Let us first consider
the boundary value problem involving
linear partial differential equations together
with Dirichlet boundary conditions, and discuss how to
solve the problem by using the locELM representation for
the solution.
To make the discussion concrete, we concentrate on
two dimensions ($d=2$, with the coordinates $x$ and $y$),
%in the following discussions,
and consider second-order partial differential equations with respect to
both $x$ and $y$ (i.e.~highest partial derivatives with respect to
$x$ and to $y$ are both two).
The procedure outlined below can be extended to higher dimensions
or to higher-order differential equations, with appropriate boundary
conditions and $C^{\mbs k}$ continuity conditions taken into account.

Let us consider the following generic
second-order linear partial differential equation
\begin{subequations}
  \begin{align}
    &
    L u = f(x,y), \quad \text{in} \ \Omega, \label{equ_1} \\
    &
    u(x,y) = g(x,y), \quad \text{on} \ \partial\Omega,
    \label{equ_2}
  \end{align}
\end{subequations}
where $L$ is a  linear second-order operator with respect to
both $x$ and $y$, 
$u(x,y)$ is the scalar unknown field function to be
solved for, $f(x,y)$ and $g(x,y)$ are prescribed source
terms for the equation and the Dirichlet boundary condition,
and $\partial\Omega$ denotes the boundary of $\Omega$.
%The linear operator $L$ is  given by
%\begin{equation}
%  L = A(x,y)\frac{\partial^2}{\partial x^2}
%  + B(x,y)\frac{\partial^2}{\partial y^2}
%  + C(x,y)\frac{\partial^2}{\partial x\partial y}
%  + D(x,y)\frac{\partial}{\partial x}
%  + E(x,y) \frac{\partial}{\partial y}
%  + F(x,y)
%\end{equation}
We assume that
%the PDE and the boundary condition are such that
this boundary value problem is well-posed.
Our goal here is to illustrate the procedure for
numerically solving this problem
by approximating its solution using local extreme learning
machines.

% outline the general idea here first, least squares etc.

Here is the general idea for the solution process.
We partition the overall domain
into a number of sub-domains, and represent the field solution
using the locELM representation described in Section \ref{sec:loc_elm}.
We next choose a set of
points (collocation points) within each sub-domain,
which can have a regular or
random distribution.
We enforce the governing equations on the collocation points
within each sub-domain, and enforce the boundary conditions
on those collocation points in those sub-domains that reside
on $\partial\Omega$. % the overall domain boundaries.
We further enforce the $C^{\mbs k}$ continuity conditions on
those collocation points that reside on the sub-domain
boundaries.
Auto-differentiations are employed to compute
the first or higher-order derivatives involved in the above operations.
These operations result in a system of algebraic equations,
which may be linear or nonlinear depending on the boundary value problem,
about the training parameters in the locELM representation.
We seek a least squares solution to this algebraic system,
and compute the solution by either a linear least squares method
or a nonlinear least squares method.
The training parameters of the local neural networks are then
determined by the least squares computation.
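To make this process concrete, the following self-contained sketch applies the same steps to a 1D toy problem, $u''=f$ on $[0,1]$ with $u(0)=u(1)=0$ and exact solution $u=\sin(\pi x)$, using two sub-domains, one random hidden layer per sub-domain, per-sub-domain input normalization, and one linear least squares solve. All sizes and the random seed are illustrative; this is an assumption-laden toy, not the paper's implementation:

```python
# 1D locELM-style toy: enforce u'' = f at collocation points, Dirichlet
# boundary conditions at x = 0, 1, and C^1 continuity at x = 0.5, then
# solve one linear least squares problem for the output weights.
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(3)
M, Q, R_m = 50, 40, 2.0                   # illustrative sizes
subdomains = [(0.0, 0.5), (0.5, 1.0)]
W = [rng.uniform(-R_m, R_m, (1, M)) for _ in subdomains]
B = [rng.uniform(-R_m, R_m, (M,)) for _ in subdomains]

def V(s, x):                              # last-hidden-layer outputs V_j
    a, b = subdomains[s]
    xi = 2.0 * (x - a) / (b - a) - 1.0    # per-sub-domain normalization
    return np.tanh(xi @ W[s] + B[s])

def Vx(s, x):                             # dV/dx, closed form for tanh
    a, b = subdomains[s]
    return (1.0 - V(s, x) ** 2) * W[s] * (2.0 / (b - a))

def Vxx(s, x):                            # d2V/dx2
    a, b = subdomains[s]
    return -2.0 * V(s, x) * Vx(s, x) * W[s] * (2.0 / (b - a))

rows, rhs = [], []
for s, (a, b) in enumerate(subdomains):   # enforce u'' = f at collocation pts
    x = np.linspace(a, b, Q).reshape(-1, 1)
    blk = np.zeros((Q, 2 * M))
    blk[:, s * M:(s + 1) * M] = Vxx(s, x)
    rows.append(blk)
    rhs.append(-np.pi ** 2 * np.sin(np.pi * x).ravel())
for s, xb in [(0, 0.0), (1, 1.0)]:        # Dirichlet boundary conditions
    blk = np.zeros((1, 2 * M))
    blk[:, s * M:(s + 1) * M] = V(s, np.array([[xb]]))
    rows.append(blk)
    rhs.append(np.array([0.0]))
for D in (V, Vx):                         # C^1 continuity at x = 0.5
    blk = np.zeros((1, 2 * M))
    blk[:, :M] = D(0, np.array([[0.5]]))
    blk[:, M:] = -D(1, np.array([[0.5]]))
    rows.append(blk)
    rhs.append(np.array([0.0]))

A = np.vstack(rows)
f = np.concatenate(rhs)
coeffs, _, _, _ = lstsq(A, f)             # minimum-norm least squares
u_at = lambda s, x: (V(s, np.array([[x]])) @ coeffs[s * M:(s + 1) * M]).item()
```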



% restrict to rectangular domains, regular partitions

For simplicity of implementation, we concentrate on the case
with $\Omega$ being a rectangular domain,
i.e.~$\Omega=[a_1,b_1]\times [a_2,b_2]$.
Let $N_x$ ($N_x\geqslant 1$) and $N_y$ ($N_y\geqslant 1$)
denote the number of sub-domains along the $x$ and $y$ directions,
respectively, with a total number of $N_e = N_xN_y$ sub-domains
in $\Omega$.
Let the two vectors $[X_0, X_1, \dots, X_{N_x}]$ and
$[Y_0, Y_1, \dots, Y_{N_y}]$ denote the coordinates of the sub-domain
boundaries along the $x$ and $y$ directions, where
$(X_0,Y_0)=(a_1,a_2)$ and $(X_{N_x},Y_{N_y})=(b_1,b_2)$.
Let $\Omega_{e_{mn}}=[X_m,X_{m+1}]\times [Y_n,Y_{n+1}]$  denote
the region occupied by the sub-domain $e_{mn}$, for
$0\leqslant m\leqslant N_x-1$ and 
$0\leqslant n\leqslant N_y-1$.
Here $e_{mn}$ represents the linear index of the sub-domain associated
with the 2D index $(m,n)$, with $e_{mn}=mN_y+n+1$,
and so $1\leqslant e_{mn}\leqslant N_e$.
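The linear index $e_{mn}=mN_y+n+1$ can be enumerated for a small partition (a trivial check, with $N_x=2$ and $N_y=3$ as illustrative values):

```python
# Linear sub-domain index e_mn = m*N_y + n + 1 for a 2 x 3 partition;
# the indices run from 1 to N_e = N_x * N_y = 6.
N_x, N_y = 2, 3
e = {(m, n): m * N_y + n + 1 for m in range(N_x) for n in range(N_y)}
```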

% discuss locELM representation here

We approximate the unknown field function $u(x,y)$ using
the locELM representation as discussed in Section \ref{sec:loc_elm}.
On each sub-domain $e_{mn}$
%($0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$)
we represent the solution by a shallow neural network,
which consists of an input layer with two nodes (representing
the coordinates $x$ and $y$), one or a small number of hidden layers,
and an output layer with one node (representing the solution $u^{e_{mn}}$).
Let $V_{j}^{e_{mn}}(x,y)$ ($1\leqslant j\leqslant M$) denote the output
of the last hidden layer, where $M$ is the number of nodes in this layer.
Then equation \eqref{equ_a} becomes
\begin{equation}\label{equ_b}
  u^{e_{mn}}(x,y) = \sum_{j=1}^M V_j^{e_{mn}}(x,y) w_{j}^{e_{mn}}, \quad
  (x,y)\in\Omega_{e_{mn}}, \quad
  0\leqslant m\leqslant N_x-1, \ \ 0\leqslant n\leqslant N_y-1,
\end{equation}
where $w_j^{e_{mn}}$ ($1\leqslant j\leqslant M$) are the training parameters
in the sub-domain $e_{mn}$.
Again note that $V_j^{e_{mn}}(x,y)$ is known, once the weight/bias coefficients
in the hidden layers have been pre-set to random values on $[-R_m,R_m]$.

\begin{remark}\label{rem_bb}
Apart from the above logical operations,
in the implementation we incorporate an additional normalization
layer immediately behind the input layer in each of the local
neural networks.
For each sub-domain $e_{mn}$,
the normalization layer performs an affine mapping
and normalizes the input data,
$(x,y)\in\Omega_{e_{mn}}= [X_m,X_{m+1}]\times [Y_n, Y_{n+1}]$, such that
the output data of the normalization layer fall into
the domain $[-1,1] \times [-1,1]$.
This extra normalization layer contains
no adjustable (training) parameters.
  
\end{remark}
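The affine mapping of this remark amounts to the following (a minimal sketch; the function name is illustrative):

```python
# Map (x, y) in [X_m, X_{m+1}] x [Y_n, Y_{n+1}] onto [-1, 1] x [-1, 1].
# This normalization contains no adjustable (training) parameters.
def normalize(x, y, Xm, Xm1, Yn, Yn1):
    xi = 2.0 * (x - Xm) / (Xm1 - Xm) - 1.0
    eta = 2.0 * (y - Yn) / (Yn1 - Yn) - 1.0
    return xi, eta
```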


On the sub-domain $e_{mn}$ ($0\leqslant m\leqslant N_x-1$,
$0\leqslant n\leqslant N_y-1$), let $(x_{p}^{e_{mn}},y_q^{e_{mn}})$
($0\leqslant p\leqslant Q_x-1$,
$0\leqslant q\leqslant Q_y-1$)
denote a set of distinct collocation points,
where $x_p^{e_{mn}}$ ($0\leqslant p\leqslant Q_x-1$) denote a set of $Q_x$
collocation points on the interval $[X_m, X_{m+1}]$
and $y_q^{e_{mn}}$ denote a set of $Q_y$ collocation points on
the interval $[Y_n, Y_{n+1}]$.
The total number of collocation points is $Q=Q_xQ_y$
within each sub-domain $e_{mn}$. % for all sub-domains.
In the current work we primarily consider the following uniform distribution
for the collocation points:
\begin{itemize}
\item
  Uniform distribution: $x_{p}^{e_{mn}}$ forms a set of $Q_x$ uniform
  grid points on $[X_m,X_{m+1}]$, with both end points included,
  i.e.~$x_0^{e_{mn}}=X_m$ and $x_{Q_x-1}^{e_{mn}}=X_{m+1}$.
  $y_q^{e_{mn}}$ forms a set of $Q_y$ uniform grid points on $[Y_n,Y_{n+1}]$,
  with both end points included,
  i.e.~$y_0^{e_{mn}}=Y_n$ and $y_{Q_y-1}^{e_{mn}}=Y_{n+1}$.

%\item
%  Quadrature points: $x_{p}^{e_{mn}}$ forms a set of $Q_x$ Gauss-Lobatto-Legendre
%  quadrature points on the interval $[X_m,X_{m+1}]$.
%  $y_q^{e_{mn}}$ forms a set of $Q_y$ Gauss-Lobatto-Legendre quadrature points
%  on the interval $[Y_n,Y_{n+1}]$.
  
\end{itemize}
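The uniform distribution above is a tensor-product grid with both endpoints included in each direction, e.g.~(a NumPy sketch with illustrative sub-domain bounds):

```python
# Q_x x Q_y uniform collocation points on one sub-domain, with
# x_0 = X_m, x_{Qx-1} = X_{m+1} and y_0 = Y_n, y_{Qy-1} = Y_{n+1}.
import numpy as np

def uniform_collocation(Xm, Xm1, Yn, Yn1, Qx, Qy):
    x = np.linspace(Xm, Xm1, Qx)   # endpoints included
    y = np.linspace(Yn, Yn1, Qy)
    return np.meshgrid(x, y, indexing="ij")

X, Y = uniform_collocation(0.0, 1.0, 0.0, 2.0, 5, 4)
```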

\begin{remark}\label{rem_cc}
  Besides the uniform distribution, we also consider a quadrature-point distribution
  and a random distribution for the collocation points.
  With the quadrature-point distribution,
  $x_{p}^{e_{mn}}$ are taken to be a set of $Q_x$ Gauss-Lobatto-Legendre
  quadrature points on the interval $[X_m,X_{m+1}]$, and
  $y_q^{e_{mn}}$ are taken to be a set of $Q_y$ Gauss-Lobatto-Legendre quadrature points
  on the interval $[Y_n,Y_{n+1}]$.
  With the random distribution, the collocation points in the sub-domain $e_{mn}$
  are taken to be uniformly generated
  random points $(x_l^{e_{mn}},y_l^{e_{mn}})\in \Omega_{e_{mn}}$ ($0\leqslant l\leqslant Q-1$),
  where $Q$ is the total number of collocation points in the sub-domain,
  among which a certain number of points are generated 
  on the sub-domain boundaries and the rest are located inside the sub-domain.  
  Numerical experiments indicate that, with the same number of collocation points,
  the result with the quadrature-point distribution is generally more accurate than that with
  the uniform distribution, which in turn is more accurate than that with
  the random distribution of collocation points.
  The quadrature-point distribution however poses some practical issues in
  the current implementation. When the number of quadrature points exceeds $100$,
  the library on which the current implementation is based cannot compute the
  Gaussian quadrature points accurately. This is the reason why in the current work
  we predominantly employ the uniform distribution
  of collocation points in the numerical tests of
  Section \ref{sec:tests}.
  
\end{remark}


% with these settings, how to solve the equations?

With the above setup, we solve the boundary value problem consisting of
equations \eqref{equ_1} and \eqref{equ_2} as follows.
On each sub-domain $e_{mn}$
%($0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$)
we enforce the equation \eqref{equ_1} on
all the collocation points $(x_p^{e_{mn}},y_q^{e_{mn}})$,
\begin{equation}
  \begin{split}
    &
    \sum_{j=1}^M \left[LV_j^{e_{mn}}\left(x_p^{e_{mn}},y_q^{e_{mn}} \right) \right] w_j^{e_{mn}}
    = f(x_p^{e_{mn}}, y_q^{e_{mn}}), \\
    &
    \text{for} \ 0\leqslant m\leqslant N_x-1, \ 0\leqslant n\leqslant N_y-1,
    \ 0\leqslant p\leqslant Q_x-1, \ 0\leqslant q\leqslant Q_y-1,
    %&
    %\sum_{j=1}^{M} V_j^{e_{mn}}\left(x_p^{e_{mn}},y_q^{e_{mn}} \right) w_j^{e_{mn}} =
    %g\left(x_p^{e_{mn}},y_q^{e_{mn}} \right),
    %\label{equ_4}
  \end{split}
  \label{equ_3}
\end{equation}
%for $0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$,
%$0\leqslant p\leqslant Q_x-1$ and $0\leqslant q\leqslant Q_y-1$,
where we have used equation \eqref{equ_b}.
We enforce equation \eqref{equ_2} on the four boundaries of
the domain $\Omega$,
\begin{subequations}
  \begin{align}
    &
    \sum_{j=1}^M V_{j}^{e_{0n}}\left(a_1,y_q^{e_{0n}} \right)w_j^{e_{0n}} =
    g\left(a_1,y_q^{e_{0n}} \right),
    \ \ 0\leqslant n\leqslant N_y-1, \ 0\leqslant q\leqslant Q_y-1;
    \label{equ_4} \\
    &
    \sum_{j=1}^M V_{j}^{e_{mn}}\left(b_1,y_q^{e_{mn}} \right)w_j^{e_{mn}} =
    g\left(b_1,y_q^{e_{mn}} \right),
    \ \ m=N_x-1, \ 0\leqslant n\leqslant N_y-1, \ 0\leqslant q\leqslant Q_y-1;
    \label{equ_5} \\
    &
    \sum_{j=1}^M V_{j}^{e_{m0}}\left(x_p^{e_{m0}},a_2 \right)w_j^{e_{m0}} =
    g\left(x_p^{e_{m0}},a_2 \right),
    \ \ 0\leqslant m\leqslant N_x-1, \ 0\leqslant p\leqslant Q_x-1;
    \label{equ_6} \\
    &
    \sum_{j=1}^M V_{j}^{e_{mn}}\left(x_p^{e_{mn}},b_2 \right)w_j^{e_{mn}} =
    g\left(x_p^{e_{mn}},b_2 \right),
    \ \ n=N_y-1, \ 0\leqslant m\leqslant N_x-1, \ 0\leqslant p\leqslant Q_x-1,
    \label{equ_7}
  \end{align}
\end{subequations}
where equation \eqref{equ_b} has again been used.

% C^k conditions

The local representations of the field solution are coupled together
by the $C^{\mbs k}$ continuity conditions. Since the equation \eqref{equ_1}
is assumed to be of second order with respect to both $x$ and $y$,
we impose $C^1$ continuity conditions across the sub-domain
boundaries in both the $x$ and $y$ directions.
%as discussed in Section \ref{sec:loc_elm}.
On the vertical sub-domain boundaries $x=X_{m+1}$ ($0\leqslant m\leqslant N_x-2$),
the $C^1$ conditions are reduced to,
%these continuity conditions are reduced to the following equations
%on the collocation points,
\begin{subequations}
  \begin{align}
    &
    \sum_{j=1}^M V_j^{e_{mn}}\left(X_{m+1},y_q^{e_{mn}} \right) w_j^{e_{mn}}
    - \sum_{j=1}^M V_j^{e_{m+1,n}}\left(X_{m+1},y_q^{e_{m+1,n}} \right) w_j^{e_{m+1,n}}
    = 0, \label{equ_8a} \\
    &
    \sum_{j=1}^M \left.\frac{\partial V_j^{e_{mn}}}{\partial x}\right|_{\left(X_{m+1},y_q^{e_{mn}} \right)} w_j^{e_{mn}}
    - \sum_{j=1}^M \left.\frac{\partial V_j^{e_{m+1,n}}}{\partial x}\right|_{\left(X_{m+1},y_q^{e_{m+1,n}} \right)} w_j^{e_{m+1,n}}
    = 0, \label{equ_8b} \\
    &
    \text{for}\ 0\leqslant m\leqslant N_x-2, \
    0\leqslant n\leqslant N_y-1, \ 0\leqslant q\leqslant Q_y-1, \nonumber
  \end{align}
\end{subequations}
%for $0\leqslant m\leqslant N_x-2$, $0\leqslant n\leqslant N_y-1$
%and $0\leqslant q\leqslant Q_y-1$,
where it should be noted that $y_q^{e_{mn}}=y_q^{e_{m+1,n}}$.
On the horizontal sub-domain boundaries $y=Y_{n+1}$ ($0\leqslant n\leqslant N_y-2$),
the $C^1$ continuity conditions are reduced to,
%the following equations on the collocation points,
\begin{subequations}
  \begin{align}
    &
    \sum_{j=1}^M V_j^{e_{mn}}\left(x_p^{e_{mn}},Y_{n+1} \right) w_j^{e_{mn}}
    - \sum_{j=1}^M V_j^{e_{m,n+1}}\left(x_p^{e_{m,n+1}},Y_{n+1} \right) w_j^{e_{m,n+1}}
    = 0, \label{equ_9a} \\
    &
    \sum_{j=1}^M \left.\frac{\partial V_j^{e_{mn}}}{\partial y}\right|_{\left(x_p^{e_{mn}},Y_{n+1} \right)} w_j^{e_{mn}}
    - \sum_{j=1}^M \left.\frac{\partial V_j^{e_{m,n+1}}}{\partial y}\right|_{\left(x_p^{e_{m,n+1}},Y_{n+1} \right)} w_j^{e_{m,n+1}}
    = 0, \label{equ_9b} \\
    &
    \text{for} \ 0\leqslant m\leqslant N_x-1, \
    0\leqslant n\leqslant N_y-2, \ 0\leqslant p\leqslant Q_x-1, \nonumber
  \end{align}
\end{subequations}
%for $0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-2$
%and $0\leqslant p\leqslant Q_x-1$,
where it should be noted that
$x_p^{e_{mn}} = x_p^{e_{m,n+1}}$.

The set of equations consisting of \eqref{equ_3}--\eqref{equ_9b}
is a system of linear algebraic equations
about the training parameters
$w_j^{e_{mn}}$ ($0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$,
$1\leqslant j\leqslant M$).
%
% how to compute V, dV/dx, dV/dy, LV etc
In these equations, $V_j^{e_{mn}}(x,y)$, $LV_j^{e_{mn}}(x,y)$,
$\frac{\partial V_j^{e_{mn}}}{\partial x}$ and
$\frac{\partial V_j^{e_{mn}}}{\partial y}$
are all known functions, 
once the weight/bias coefficients
in the hidden layers are randomly set.
These functions can be evaluated on the collocation
points, including those on the domain boundaries and
the sub-domain boundaries.
The derivatives involved in these functions can be computed
by auto-differentiation.

This linear algebraic system consists of
$N_xN_y(Q_xQ_y+2Q_x+2Q_y)$
equations and $N_xN_yM$ unknowns $w_j^{e_{mn}}$.
We seek the least squares solution to this system
with the minimum norm. Linear least squares routines are available
in a number of scientific libraries,
%linear algebra packages,
and we take advantage of these numerical libraries in our implementation.
In the current
work we employ the linear least squares routine
from LAPACK, available through wrapper functions
in the scipy package in Python.
%and the pseudo-inverse (or the Moore-Penrose inverse, also available in scipy).
Therefore, the adjustable parameters $w_j^{e_{mn}}$
in the neural network
are trained by this linear least squares computation.
%not by gradient descent-type routines.
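The minimum-norm behavior of the LAPACK-backed least squares routine in scipy can be seen on a small stand-in system (not the actual locELM system; for an underdetermined full-rank system the routine returns the solution of smallest Euclidean norm):

```python
# scipy.linalg.lstsq on an underdetermined system: 2 equations, 3 unknowns.
# Among all exact solutions, the minimum-norm one is returned.
import numpy as np
from scipy.linalg import lstsq

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
w, residues, rank, sv = lstsq(A, b)   # minimum-norm least squares solution
```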


% implementation details

In the current work we have employed Tensorflow and Keras to implement
the neural network architecture as outlined above.
Each local neural network consists of several ``dense'' Keras layers.
The set of $N_e=N_xN_y$ local neural networks collectively forms
an overall logical neural network, in the form of
a multi-input multi-output Keras model. 
The input data to the model consist of the coordinates of
the collocation points for all sub-domains,
$(x_p^{e_{mn}},y_q^{e_{mn}})$, for $0\leqslant m\leqslant N_x-1$,
$0\leqslant n\leqslant N_y-1$, $0\leqslant p\leqslant Q_x-1$
and $0\leqslant q\leqslant Q_y-1$.
The output of the Keras model consists
of the solution $u^{e_{mn}}(x,y)$ on the collocation points
for all the sub-domains.
The outputs of the last hidden layer of each sub-domain, $V_j^{e_{mn}}(x,y)$,
are obtained by creating a Keras sub-model using the Keras functional
API (application programming interface). The derivatives
of $V_j^{e_{mn}}(x,y)$, and those involved in $LV_j^{e_{mn}}(x,y)$,
are computed using auto-differentiation with
these Keras sub-models.
After the parameters $w_j^{e_{mn}}$ are obtained by
the linear least squares computation, the weight coefficients in
the output layer of the Keras model are then set based on
these parameter values.

% comment on accuracy, cost

\begin{remark}
  \label{rem_2}
  %It is worth emphasizing that
  %with the current method the adjustable parameters in the neural networks
  %are trained by a least squares computation.
  We observe from numerical experiments that
  the simulation results obtained using the current method are considerably
  more accurate, typically by orders of magnitude,
  than those obtained using DNN-based PDE solvers
  trained using gradient descent-type algorithms.
  Furthermore, the current method is computationally fast. Its computational
  cost is essentially the cost of the linear least squares computation.
  We observe that
  the network training time of the current method
  is considerably lower, typically by orders of magnitude,
  than that of the DNN-based PDE solvers trained with gradient descent-type
  algorithms.
  These points will be demonstrated by extensive numerical experiments 
  in Section \ref{sec:tests}, in which we compare the current method with
  %the DNN-based PDE solvers such as
  the deep Galerkin method~\cite{SirignanoS2018}
  and the Physics-Informed Neural Network~\cite{RaissiPK2019}.
  
\end{remark}

\begin{remark} \label{rem_2a}
  The computational performance of the current locELM method,
  in terms of the accuracy and the computational cost,
  is comparable to, and oftentimes exceeds, that of the classical
  finite element method.
  These points will be demonstrated by extensive numerical experiments in
  Section \ref{sec:tests} with time-independent and
  time-dependent problems.
  We observe that, with the same training/computation time, the accuracy of
  the current method is comparable to, and oftentimes considerably superior to,
  that of the finite element method. To achieve the same accuracy,
  the training time of the current method is comparable to,
  and oftentimes markedly smaller than, the computation time
  of the classical finite element method.
  
\end{remark}

% what else to discuss here?

% comment on how to do complex domains using random collocation points
% discuss how to deal with three or higher/lower dimensions



\subsubsection{Time-Dependent Linear Differential Equations}
\label{sec:unsteady}


We next consider initial-boundary value problems involving time-dependent
linear differential equations together with Dirichlet boundary conditions,
and discuss how to solve such problems using the locELM method.
%In the following discussions,
We again concentrate on
two spatial dimensions (with coordinates $x$ and $y$) plus time ($t$), 
and assume second spatial orders
in the differential equation with respect to both $x$ and $y$.


\paragraph{Basic Method}
\label{sec:basic}

We consider the following generic
time-dependent second-order linear PDE, together with
the Dirichlet boundary condition and the initial condition,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial u}{\partial t} = Lu + f(x,y,t),
    \label{equ_10a} \\
    &
    u(x,y,t) = g(x,y,t), \quad \text{for} \ (x,y) \
    \text{on spatial domain boundary},
    \label{equ_10b} \\
    &
    u(x,y,0) = h(x,y), \label{equ_10c}
  \end{align}
\end{subequations}
where %$t$ denotes time,
$u(x,y,t)$ is the unknown field function to
be solved for, $L$ is a second-order linear differential operator
with respect to both $x$ and $y$, $f(x,y,t)$ is a prescribed source
term, $g(x,y,t)$ is the Dirichlet boundary data, and $h(x,y)$
denotes the initial field distribution.
We assume that
%the differential equation, the boundary condition
%and the initial condition are such that
this initial-boundary
value problem is well posed, and would like to solve this problem
by approximating $u(x,y,t)$ using the locELM representation.


We seek the solution on a rectangular spatial-temporal
domain,
$
\Omega = \{
(x,y,t)\ |\ x\in[a_1,b_1], \ y\in [a_2,b_2], \
t\in [0, \Gamma]
\},
$
where $a_i$, $b_i$ ($i=1,2$) and $\Gamma$ are prescribed constants.
The solution procedure is analogous to that discussed in Section \ref{sec:steady}.
We partition $\Omega$ into $N_x$ ($N_x\geqslant 1$) sub-domains along the
$x$ direction, $N_y$ ($N_y\geqslant 1$) sub-domains along the $y$ direction,
and $N_t$ ($N_t\geqslant 1$) sub-domains in time, leading to a total of
$N_{e}=N_xN_yN_t$  sub-domains in $\Omega$.
Let the vectors $[X_0, X_1, \dots, X_{N_x}]$,
$[Y_0, Y_1, \dots, Y_{N_y}]$
and $[T_0, T_1, \dots, T_{N_t}]$ denote the coordinates of
the sub-domain boundaries along the $x$, $y$ and temporal directions,
respectively,
where $(X_0,Y_0,T_0)=(a_1,a_2,0)$
and $(X_{N_x},Y_{N_y},T_{N_t})=(b_1,b_2,\Gamma)$.
We use
$
\Omega_{e_{mnl}} = [X_m,X_{m+1}]\times[Y_n,Y_{n+1}]\times[T_l,T_{l+1}]
$
to denote the spatial-temporal region
occupied by the sub-domain with the index
$
e_{mnl} = mN_yN_t+nN_t+l+1,
$
for $0\leqslant m\leqslant N_x-1$,
$0\leqslant n\leqslant N_y-1$ and $0\leqslant l\leqslant N_t-1$.
%and so $1\leqslant e_{mnl}\leqslant N_e$.
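As a concrete check of this indexing, the map from the triple $(m,n,l)$ to the flattened index $e_{mnl}$ can be sketched as follows (the function name is ours, not from the paper):

```python
def subdomain_index(m, n, l, N_y, N_t):
    """Flattened 1-based sub-domain index e_{mnl} = m*N_y*N_t + n*N_t + l + 1."""
    return m * N_y * N_t + n * N_t + l + 1
```

With, say, $N_x=N_y=2$ and $N_t=3$, the indices run from $1$ through $N_xN_yN_t=12$ as $(m,n,l)$ sweeps its ranges.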


% network structure and parameters

We approximate $u(x,y,t)$ using the locELM representation
from Section \ref{sec:loc_elm}. More specifically,
we employ a local shallow feed-forward neural network for
the solution on each sub-domain $e_{mnl}$.
The local neural network consists of an input layer with
three nodes, representing
the coordinates $x$, $y$ and $t$, respectively,
a small number of hidden layers, and
an output layer consisting of one node, representing
the solution $u^{e_{mnl}}(x,y,t)$ on this sub-domain.
The output layer is linear and contains no bias.
The weight/bias coefficients in all the hidden layers are pre-set
to uniform random values generated on $[-R_m,R_m]$ and are fixed,
as discussed in Section \ref{sec:loc_elm}.
Additionally, in the implementation, we incorporate an affine mapping
immediately after the input layer to normalize the
input data, $(x,y,t)\in\Omega_{e_{mnl}}$, to
the interval $[-1,1]\times[-1,1]\times[-1,1]$.
Let $V_j^{e_{mnl}}$ ($1\leqslant j\leqslant M$) denote the
output  of the last hidden layer,
where $M$ denotes the number of nodes in this layer.
Then we have, in accordance with equation \eqref{equ_b},
\begin{equation}
  \begin{split}
    &
    u^{e_{mnl}}(x,y,t) = \sum_{j=1}^M V_j^{e_{mnl}}(x,y,t) w_j^{e_{mnl}}, \\
    & 
    \text{for}\ 0\leqslant m\leqslant N_x-1, \
    0\leqslant n\leqslant N_y-1, \ 0\leqslant l\leqslant N_t-1,
  \end{split}
\end{equation}
where the coefficients $w_j^{e_{mnl}}$ ($1\leqslant j\leqslant M$) are the training
parameters of the local neural network.
Note that $V_j^{e_{mnl}}(x,y,t)$ and its
derivatives are all known functions,
since the weight/bias coefficients of all the hidden layers
are pre-set and fixed. 
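A minimal sketch of one such local representation (a single $\tanh$ hidden layer with fixed random coefficients; inputs assumed already normalized to $[-1,1]^3$; all names and parameter values below are ours) could look like:

```python
import numpy as np

rng = np.random.default_rng(1)
M, R_m = 8, 1.0                      # hidden-layer width, random-coefficient range

# Hidden-layer weights/biases: pre-set to uniform random values on
# [-R_m, R_m] and kept fixed; they are never trained.
W_h = rng.uniform(-R_m, R_m, size=(3, M))
b_h = rng.uniform(-R_m, R_m, size=M)

def V(x, y, t):
    """Output of the last hidden layer, V_j(x, y, t) for j = 1..M."""
    return np.tanh(np.array([x, y, t]) @ W_h + b_h)

def u_local(x, y, t, w):
    """Local solution on one sub-domain: linear output layer, no bias."""
    return V(x, y, t) @ w
```

Only the output-layer coefficients `w` are trainable; `V` and its derivatives are fully determined once the random hidden coefficients are drawn.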

% collocation points

On each sub-domain $e_{mnl}$,
let $(x_p^{e_{mnl}}, y_q^{e_{mnl}},t_r^{e_{mnl}})$
($0\leqslant p\leqslant Q_x-1$, $0\leqslant q\leqslant Q_y-1$,
and $0\leqslant r\leqslant Q_t-1$) denote a set of
distinct collocation points,
where $x_p^{e_{mnl}}$ ($0\leqslant p\leqslant Q_x-1$) denotes a set of
$Q_x$ collocation points on $[X_m,X_{m+1}]$
with $x_0^{e_{mnl}}=X_m$ and $x_{Q_x-1}^{e_{mnl}}=X_{m+1}$,
$y_q^{e_{mnl}}$ ($0\leqslant q\leqslant Q_y-1$) denotes a set of
$Q_y$ collocation points on $[Y_n,Y_{n+1}]$ with
$y_0^{e_{mnl}}=Y_n$ and $y_{Q_y-1}^{e_{mnl}}=Y_{n+1}$,
and $t_{r}^{e_{mnl}}$ ($0\leqslant r\leqslant Q_t-1$) denotes
a set of $Q_t$ collocation points on $[T_l,T_{l+1}]$
with $t_0^{e_{mnl}}=T_l$ and $t_{Q_t-1}^{e_{mnl}}=T_{l+1}$.
We primarily consider uniform grid points
as the collocation points, analogous to Section \ref{sec:steady}.
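The uniform collocation points on one sub-domain, with the first and last points lying on the sub-domain boundaries, can be generated as in the following sketch (names are ours):

```python
import numpy as np

def collocation_points(Xm, Xm1, Yn, Yn1, Tl, Tl1, Qx, Qy, Qt):
    """Uniform collocation points on [Xm,Xm1] x [Yn,Yn1] x [Tl,Tl1];
    the endpoints coincide with the sub-domain boundaries."""
    x = np.linspace(Xm, Xm1, Qx)
    y = np.linspace(Yn, Yn1, Qy)
    t = np.linspace(Tl, Tl1, Qt)
    return x, y, t
```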
%two types of distributions for
%the collocation points, a uniform distribution
%and a distribution with the Gauss-Lobatto-Legendre quadrature points, 
%similar to in Section \ref{sec:steady}.

% discretization of equations

With this setup,
%for the domain and the local neural networks,
we next enforce the equations \eqref{equ_10a}--\eqref{equ_10c}
on the collocation points inside each sub-domain and on the domain
boundaries.
On the sub-domain $e_{mnl}$,
%($0\leqslant m\leqslant N_x-1$,
%$0\leqslant n\leqslant N_y-1$,
%$0\leqslant l\leqslant N_t-1$),
equation \eqref{equ_10a} is reduced to
\begin{equation}\label{equ_11}
  \begin{split}
    &
  \sum_{j=1}^M \left.\left(
  \frac{\partial V_j^{e_{mnl}}}{\partial t} - LV_j^{e_{mnl}}
  \right)\right|_{(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}})} w_j^{e_{mnl}}
  = f\left(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}}\right), \\
  & \quad
  \text{for}\ 0\leqslant m\leqslant N_x-1, \
  0\leqslant n\leqslant N_y-1, \ 0\leqslant l\leqslant N_t-1, \\
  & \quad\quad\ \
  0\leqslant p\leqslant Q_x-1, \ 0\leqslant q\leqslant Q_y-1, \
  0\leqslant r\leqslant Q_t-1,
  \end{split}
\end{equation}
where $\left(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}}\right)$
are the collocation points.
The boundary condition \eqref{equ_10b},
when enforced on the spatial domain boundaries corresponding to
$x=a_1$ or $b_1$ and $y=a_2$ or $b_2$, is reduced to
\begin{subequations}
  \begin{align}
    \begin{split}
    &
    \sum_{j=1}^M V_j^{e_{0nl}}(a_1,y_q^{e_{0nl}},t_r^{e_{0nl}}) w_j^{e_{0nl}}
    - g(a_1,y_q^{e_{0nl}},t_r^{e_{0nl}})=0, \label{equ_13a} \\
    & \qquad\qquad \text{for} \
    0\leqslant n\leqslant N_y-1, \ 0\leqslant l\leqslant N_t-1, \
    0\leqslant q\leqslant Q_y-1, \ 0\leqslant r\leqslant Q_t-1;
    \end{split} \\
    \begin{split}
    &
    \sum_{j=1}^M V_j^{e_{mnl}}(b_1,y_q^{e_{mnl}},t_r^{e_{mnl}}) w_j^{e_{mnl}}
    - g(b_1,y_q^{e_{mnl}},t_r^{e_{mnl}})=0, \\
    & \qquad\qquad \text{for} \
    m = N_x-1, \ 
    0\leqslant n\leqslant N_y-1, \ 0\leqslant l\leqslant N_t-1, \
    0\leqslant q\leqslant Q_y-1, \ 0\leqslant r\leqslant Q_t-1;
    \end{split} \\
    \begin{split}
      &
      \sum_{j=1}^M V_j^{e_{m0l}}(x_p^{e_{m0l}},a_2,t_r^{e_{m0l}})w_j^{e_{m0l}}
      - g(x_p^{e_{m0l}},a_2,t_r^{e_{m0l}})=0, \\
      & \qquad\qquad \text{for} \
      0\leqslant m\leqslant N_x-1, \ 0\leqslant l\leqslant N_t-1, \
      0\leqslant p\leqslant Q_x-1, \ 0\leqslant r\leqslant Q_t-1;
    \end{split} \\
    \begin{split}
      &
      \sum_{j=1}^M V_j^{e_{mnl}}(x_p^{e_{mnl}},b_2,t_r^{e_{mnl}})w_j^{e_{mnl}}
      - g(x_p^{e_{mnl}},b_2,t_r^{e_{mnl}})=0, \\
      & \qquad\qquad \text{for} \
      n = N_y-1, \
      0\leqslant m\leqslant N_x-1, \ 0\leqslant l\leqslant N_t-1, \
      0\leqslant p\leqslant Q_x-1, \ 0\leqslant r\leqslant Q_t-1. \label{equ_13d}
    \end{split}
  \end{align}
\end{subequations}
%
% initial condition
On the boundary $t=0$ of the spatial-temporal domain,
the initial condition \eqref{equ_10c} is reduced to 
\begin{equation}
  \begin{split}
    &
    \sum_{j=1}^M V_j^{e_{mn0}}(x_p^{e_{mn0}},y_q^{e_{mn0}},0) w_j^{e_{mn0}}
    - h(x_p^{e_{mn0}},y_q^{e_{mn0}}) = 0, \\
    & \qquad\qquad \text{for} \
    0\leqslant m\leqslant N_x-1, \ 0\leqslant n\leqslant N_y-1, \
    0\leqslant p\leqslant Q_x-1, \ 0\leqslant q\leqslant Q_y-1. \label{equ_14}
  \end{split}
\end{equation}


% C^k continuity

Since $L$ is assumed to be a second-order operator with respect to
both $x$ and $y$, we impose $C^1$ continuity conditions across the
sub-domain boundaries in both the $x$ and $y$ directions.
Because equation \eqref{equ_10a} is of first order in time, we impose the
$C^0$ continuity condition across the sub-domain boundaries along
the temporal direction.
On the sub-domain boundaries $x=X_{m+1}$ ($0\leqslant m\leqslant N_x-2$),
the $C^1$ conditions become,
\begin{subequations}
  \begin{align}
    \begin{split}
      &
      \sum_{j=1}^M V_j^{e_{mnl}}(X_{m+1},y_q^{e_{mnl}},t_r^{e_{mnl}}) w_j^{e_{mnl}}
      - \sum_{j=1}^M V_j^{e_{m+1,nl}}(X_{m+1},y_q^{e_{m+1,nl}},t_r^{e_{m+1,nl}}) w_j^{e_{m+1,nl}}
      = 0, \label{equ_15a} \\
      & \qquad
      0\leqslant m\leqslant N_x-2, \ 0\leqslant n\leqslant N_y-1, \
      0\leqslant l\leqslant N_t-1, \ 0\leqslant q\leqslant Q_y-1, \
      0\leqslant r\leqslant Q_t-1; 
    \end{split} \\
    \begin{split}
      &
      \sum_{j=1}^M \left.\frac{\partial V_j^{e_{mnl}}}{\partial x}\right|_{(X_{m+1},y_q^{e_{mnl}},t_r^{e_{mnl}})} w_j^{e_{mnl}}
      - \sum_{j=1}^M \left.\frac{\partial V_j^{e_{m+1,nl}}}{\partial x}\right|_{(X_{m+1},y_q^{e_{m+1,nl}},t_r^{e_{m+1,nl}})} w_j^{e_{m+1,nl}}
      = 0, \\
      & \qquad
      0\leqslant m\leqslant N_x-2, \ 0\leqslant n\leqslant N_y-1, \
      0\leqslant l\leqslant N_t-1, \ 0\leqslant q\leqslant Q_y-1, \
      0\leqslant r\leqslant Q_t-1.
    \end{split}
  \end{align}
\end{subequations}
On the sub-domain boundaries $y=Y_{n+1}$ ($0\leqslant n\leqslant N_y-2$)
the $C^1$ continuity conditions become,
\begin{subequations}
  \begin{align}
    \begin{split}
      &
      \sum_{j=1}^M V_j^{e_{mnl}}(x_p^{e_{mnl}},Y_{n+1},t_r^{e_{mnl}}) w_j^{e_{mnl}}
      - \sum_{j=1}^M V_j^{e_{m,n+1,l}}(x_p^{e_{m,n+1,l}},Y_{n+1},t_r^{e_{m,n+1,l}}) w_j^{e_{m,n+1,l}}
      = 0, \\
      & \qquad
      0\leqslant m\leqslant N_x-1, \ 0\leqslant n\leqslant N_y-2, \
      0\leqslant l\leqslant N_t-1, \ 0\leqslant p\leqslant Q_x-1, \
      0\leqslant r\leqslant Q_t-1; 
    \end{split} \\
    \begin{split}
      &
      \sum_{j=1}^M \left.\frac{\partial V_j^{e_{mnl}}}{\partial y}\right|_{(x_p^{e_{mnl}},Y_{n+1},t_r^{e_{mnl}})} w_j^{e_{mnl}}
      - \sum_{j=1}^M \left.\frac{\partial V_j^{e_{m,n+1,l}}}{\partial y}\right|_{(x_p^{e_{m,n+1,l}},Y_{n+1},t_r^{e_{m,n+1,l}})} w_j^{e_{m,n+1,l}}
      = 0, \\
      & \qquad
      0\leqslant m\leqslant N_x-1, \ 0\leqslant n\leqslant N_y-2, \
      0\leqslant l\leqslant N_t-1, \ 0\leqslant p\leqslant Q_x-1, \
      0\leqslant r\leqslant Q_t-1. \label{equ_16b}
    \end{split}
  \end{align}
\end{subequations}
On the sub-domain boundaries $t=T_{l+1}$ ($0\leqslant l\leqslant N_t-2$),
the $C^0$ continuity conditions become,
\begin{equation}\label{equ_17}
  \begin{split}
    &
    \sum_{j=1}^M V_j^{e_{mnl}}(x_p^{e_{mnl}},y_q^{e_{mnl}},T_{l+1}) w_j^{e_{mnl}}
      - \sum_{j=1}^M V_j^{e_{mn,l+1}}(x_p^{e_{mn,l+1}},y_q^{e_{mn,l+1}},T_{l+1}) w_j^{e_{mn,l+1}}
      = 0, \\
      & \qquad
      0\leqslant m\leqslant N_x-1, \ 0\leqslant n\leqslant N_y-1, \
      0\leqslant l\leqslant N_t-2, \ 0\leqslant p\leqslant Q_x-1, \
      0\leqslant q\leqslant Q_y-1.
  \end{split}
\end{equation}

% comment on least squares solve

The equations consisting of \eqref{equ_11}--\eqref{equ_17}
form a system of linear algebraic equations about
the training parameters $w_j^{e_{mnl}}$ ($1\leqslant j\leqslant M$,
$0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$
and $0\leqslant l\leqslant N_t-1$).
In these equations, $V_j^{e_{mnl}}$, $\frac{\partial V_j^{e_{mnl}}}{\partial t}$,
$\frac{\partial V_j^{e_{mnl}}}{\partial x}$,
$\frac{\partial V_j^{e_{mnl}}}{\partial y}$ and
$LV_j^{e_{mnl}}$ are all known functions and can be evaluated on the collocation
points by the local neural networks.
In particular, the partial derivatives therein
can be computed by auto-differentiation.

This linear system consists of
$ %\begin{equation*}
  N_{equ} = N_xN_yN_t\left[
    Q_xQ_yQ_t + 2(Q_x+Q_y)Q_t + Q_xQ_y
    \right]
$ %\end{equation*}
equations in the $N_xN_yN_tM$ unknown variables $w_j^{e_{mnl}}$.
We seek a least squares solution to this system with minimum norm, and
compute this solution by the linear least squares method.
In the implementation we employ the linear least squares routine from LAPACK
%and also employ the pseudo-inverse (Moore-Penrose inverse)
to compute the least squares solution.
%Both methods provide the minimal norm solution when the solution to the system
%is not unique.
The weight coefficients in the output layers of the local neural networks
are then determined by the least squares solution to the above system.
Training the neural network basically consists of
computing the least squares solution.
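The final solve can be sketched with SciPy's LAPACK-backed least squares routine, which returns the minimum-norm least squares solution for rank-deficient systems; the small random matrix below merely stands in for the assembled system:

```python
import numpy as np
from scipy.linalg import lstsq

# Toy stand-in for the assembled linear system: A is N_equ x N_unknowns
# (rows: collocation equations; columns: output-layer weights w_j).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)

# Least squares solution via LAPACK's gelsd driver (gives the
# minimum-norm solution when the system is rank deficient).
w, _, rank, _ = lstsq(A, b)
```

Training the network then amounts to assigning `w` to the output-layer coefficients of the local networks.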

\paragraph{Block Time-Marching for Long-Time Simulations}
\label{sec:block}

% block time-marching

Since the linear least squares computation, and hence
the neural network training, %with the current method
is computationally fast,
longer-time dynamic simulations of time-dependent PDEs
become feasible using
the current method.
With the basic method,
we observe that 
% from Section \ref{sec:basic},
as the temporal dimension
of the spatial-temporal domain (i.e.~$\Gamma$)
increases, the network training 
generally becomes more difficult, in the sense that
the obtained solution tends to become less accurate
at the later time instants in the domain.
When $\Gamma$ is large, the solution can contain pronounced errors.
Therefore, using a large dimension in time (i.e.~large $\Gamma$)
 with the basic method is generally not advisable.

To perform long-time simulations,
we will employ the following block time-marching strategy.
Given a spatial-temporal domain with a large dimension in time,
we divide the domain into a number of windows, referred to
as time blocks, along the temporal direction, so that
the temporal dimension of each time block has a moderate size.
We then solve the initial-boundary value problem using
the basic method as discussed above
%from Section \ref{sec:basic}
on the spatial-temporal domain of each time block,
individually and successively.
We use the solution from the previous time block evaluated at
the last time instant as the initial condition for the
computations of the current time block.
We start with the first time block, and march
in time block by block, until the last time block is completed.
%We refer to this strategy
%as the block time-marching scheme.

Specifically, let
$
\Omega = \{
(x,y,t)\ |\ x\in[a_1,b_1], \ y\in [a_2,b_2], \
t\in [0, t_f]
\}
$
 denote the spatial-temporal domain on which the
initial-boundary value problem \eqref{equ_10a}--\eqref{equ_10c}
is to be solved, where $t_f$ can be large.
%let $[0, t_f]$ denote the domain in time
%we would like to solve the equation \eqref{equ_10a} on,
%where $t_f$ can be large, and let
%$[a_1,b_1]\times [a_2,b_2]$ denote the spatial domain.
We divide the domain into $N_b$ ($N_b\geqslant 1$)
uniform blocks in time, with each block the size of
$\Gamma = \frac{t_f}{N_b}$. We choose
$N_b$ such that the block size $\Gamma$
is a moderate value.
%(e.g.~on the order $\Gamma\sim 1$).

On the $k$-th ($0\leqslant k\leqslant N_b-1$) time block,
we introduce a time shift and a new dependent
variable as a function of the shifted time
based on the following transform:
%\begin{subequations}
\begin{align}
  &
  \xi = t - k\Gamma, \ \ U(x,y,\xi) = u(x,y,t),
  \quad t\in [k\Gamma, (k+1)\Gamma], \ \
  \xi \in [0, \Gamma], \label{equ_18}
\end{align}
%\end{subequations}
where $\xi$ denotes the shifted time and
$U(x,y,\xi)$ denotes the new dependent variable.
The equations \eqref{equ_10a} and \eqref{equ_10b}
are then transformed into,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial U}{\partial \xi} = LU + f(x,y,\xi+k\Gamma), \label{equ_19a}
    \\
    &
    U(x,y,\xi) = g(x,y,\xi+k\Gamma),
    \quad \text{for} \ (x,y) \ \text{on spatial domain boundary}.
    \label{equ_19b}
  \end{align}
\end{subequations}
This is supplemented by the initial condition,
\begin{equation}\label{equ_20}
  U(x,y,0) = U_0(x,y),
\end{equation}
where $U_0(x,y)$ denotes the initial distribution on the time block $k$,
given by
\begin{equation}\label{equ_21}
  U_0(x,y) = \left\{
  \begin{array}{ll}
    u(x,y,0) = h(x,y), & \text{if} \ k=0, \\
    u(x,y,k\Gamma) \ \text{computed on time block} \ (k-1), & \text{if} \ k>0.
  \end{array}
  \right.
\end{equation}
Note that $h(x,y)$ is the initial condition for the problem.

The initial-boundary value problem on time block $k$ now
consists of equations \eqref{equ_19a}, \eqref{equ_19b} and
\eqref{equ_20}, to be solved on the
spatial-temporal domain
$ %\begin{equation*}
  \Omega^{st} = \{
  (x,y,\xi) \ |\ x\in[a_1,b_1], \ y\in[a_2,b_2],\
  \xi \in [0,\Gamma]
  \}
$ %\end{equation*}
for the function $U(x,y,\xi)$.
This is the same problem we have considered previously,
% in Section \ref{sec:basic},
and it can be solved using the basic method discussed before. 
With $U(x,y,\xi)$ obtained, the function $u(x,y,t)$ on time block $k$
is recovered by the transform \eqref{equ_18}.
By solving the initial-boundary value problem on successive
time blocks, we can attain the solution $u(x,y,t)$
on the entire spatial-temporal domain $\Omega$.
This is the block time-marching scheme %with the current method
for potentially long-time simulations of time-dependent linear
PDEs.
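The block time-marching loop can be sketched as follows, where `solve_block` is a hypothetical stand-in for the basic locELM solve on one time block, returning the field evaluated at the last time instant $\xi=\Gamma$ of that block:

```python
import numpy as np

def block_time_march(t_f, N_b, h, solve_block):
    """March over N_b uniform time blocks of size Gamma = t_f/N_b,
    feeding the solution at the end of each block to the next block
    as its initial condition."""
    Gamma = t_f / N_b
    U0 = h                            # initial condition of the problem
    block_end_values = []
    for k in range(N_b):
        U0 = solve_block(U0, Gamma)   # solution at t = (k + 1) * Gamma
        block_end_values.append(U0)
    return block_end_values
```

As a toy check, a scalar "solver" for $\partial u/\partial t=-u$ marched over four blocks reproduces $u(t_f)=h\,e^{-t_f}$.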



\subsection{Nonlinear Differential Equations}
\label{sec:nonlinear}

% nonlinear least squares method
% Newton-like iterations with linear least squares method
%
% Q: do we actually need to separate nonlinear ODEs from nonlinear PDEs?
%    probably not
%    will discuss PDE directly, time-independent and unsteady PDEs
%    this will include the case of nonlinear ODEs, with maybe a slight
%       modification

%\subsubsection{Initial Value Problems}
% initial value problem
% dy/dt = f(t,y), y(t0) = y0
% y'' = f(t,y,y'), y(t0) = y0, y'(t0)=y1

%\subsubsection{Initial-Boundary Value Problems}
% BVP and I/B-VP problems


In this section
%we extend the locELM method
%to nonlinear differential equations.
we consider how to solve initial/boundary value problems
involving nonlinear differential equations using domain
decomposition and the locELM representation
of the solutions.
The overall procedure is analogous to that for linear
differential equations. The main difference is that
here the set of local neural networks
must be trained by a nonlinear least squares computation.
%rather than a linear one.

\subsubsection{Time-Independent Nonlinear Differential Equations}
\label{sec:nonl_steady}

% assume highest spatial order term is linear, nonlinear term
%   involves only low-order terms
%
% Lu + F(u, u_x, u_y) = f(x,y)
% Dirichlet BC
% assumed to be well-posed

We first consider the boundary value problems involving nonlinear
differential equations together with Dirichlet boundary conditions, and
discuss how to solve such problems using the locELM method.
We assume that the highest-order terms in the equation are linear,
and that the nonlinear terms involve
the unknown function and also possibly its derivatives of lower orders.
To make the discussions more concrete, we again
focus on two dimensions (with coordinates
$x$ and $y$), and assume that the highest partial derivatives
with respect to both $x$ and $y$ are of second order in the equation.

Let us consider the following generic second-order nonlinear differential
equation of such a form on domain $\Omega$,
together with the Dirichlet boundary condition on $\partial\Omega$,
\begin{subequations}
  \begin{align}
    &
    Lu + F\left(u,u_x,u_y\right)
    = f(x,y), \label{equ_32a}
    \\
    &
    u(x,y) = g(x,y), \quad \text{on}\ \partial\Omega,
     \label{equ_32b}
  \end{align}
\end{subequations}
where $u(x,y)$ is the field function to be solved for,
$u_x=\frac{\partial u}{\partial x}$,
$u_y=\frac{\partial u}{\partial y}$,
$L$ is a second-order linear differential operator with respect to
both $x$ and $y$, $F$ denotes the nonlinear term,
$f(x,y)$ is a prescribed source term, and
$g(x,y)$ denotes the Dirichlet boundary data.

The overall procedure for 
solving equations~\eqref{equ_32a}--\eqref{equ_32b} using the locELM method
%we seek a locELM representation of the field function $u(x,y)$
is analogous to that in Section \ref{sec:steady}.
We focus on a rectangular domain,
$
\Omega = \{
(x,y)\ |\ x\in[a_1,b_1], \ y\in[a_2,b_2]
\},
$
and partition this domain into $N_x$ and $N_y$ sub-domains along the
$x$ and $y$ directions, respectively, thus leading to a total of
$N_e=N_xN_y$ sub-domains in $\Omega$.
Following the notation of Section \ref{sec:steady},
we denote the sub-domain boundary coordinates along the $x$ and $y$ directions
by two vectors $[X_0,X_1,\dots,X_{N_x}]$ and $[Y_0,Y_1,\dots,Y_{N_y}]$,
respectively.
Let $\Omega_{e_{mn}}=[X_m,X_{m+1}]\times[Y_n,Y_{n+1}]$ denote
the sub-domain with index $e_{mn}$ for $0\leqslant m\leqslant N_x-1$
and $0\leqslant n\leqslant N_y-1$.
We use $(x_p^{e_{mn}},y_q^{e_{mn}})$ ($0\leqslant p\leqslant Q_x-1$,
$0\leqslant q\leqslant Q_y-1$)
to denote a set of uniform collocation points
in the sub-domain $e_{mn}$, where $Q_x$ and $Q_y$ denote
the numbers of collocation points in the $x$ and $y$ directions
on the sub-domain, respectively.
%
% local NN
The input layer of the local neural network consists of two nodes ($x$ and $y$),
and the output layer consists of one node (representing $u$).
Let $u^{e_{mn}}(x,y)$ denote the output of the local neural network
on the sub-domain $e_{mn}$, and $V_j^{e_{mn}}(x,y)$ ($1\leqslant j\leqslant M$)
denote the output of the last hidden layer of the local neural network,
where $M$ is the number of nodes in the last hidden layer.
We have the following relations,
\begin{equation}\label{equ_33}
  \begin{split}
    &
    u^{e_{mn}}(x,y) = \sum_{j=1}^M V_j^{e_{mn}}(x,y) w_j^{e_{mn}}, \quad
    \frac{\partial u^{e_{mn}} }{\partial x}
    = \sum_{j=1}^M \frac{\partial V_j^{e_{mn}}}{\partial x} w_j^{e_{mn}}, \quad
    \frac{\partial u^{e_{mn}} }{\partial y}
    = \sum_{j=1}^M \frac{\partial V_j^{e_{mn}}}{\partial y} w_j^{e_{mn}}, \\
    & \qquad \text{for} \
    0\leqslant m\leqslant N_x-1, \
    0\leqslant n\leqslant N_y-1,
  \end{split}
\end{equation}
where the constants $w_j^{e_{mn}}$ ($1\leqslant j\leqslant M$) denote
the weight coefficients in the output layer of the local neural network
on sub-domain $e_{mn}$, and they constitute the training parameters
of the neural network.

% discretization of equations

Enforcing equation \eqref{equ_32a} on the collocation points
$(x_p^{e_{mn}},y_q^{e_{mn}})$ for each sub-domain leads to
\begin{equation}\label{equ_34}
  \begin{split}
    &
  \sum_{j=1}^M \left[LV_j^{e_{mn}}(x_p^{e_{mn}},y_q^{e_{mn}})\right] w_j^{e_{mn}}
  + F\left.\left(u^{e_{mn}},u_x^{e_{mn}},u_y^{e_{mn}}\right)\right|_{(x_p^{e_{mn}},y_q^{e_{mn}})}
  - f(x_p^{e_{mn}},y_q^{e_{mn}}) = 0,
  \\
  & \qquad \text{for}\
  0\leqslant m\leqslant N_x-1, \
  0\leqslant n\leqslant N_y-1, \
  0\leqslant p\leqslant Q_x-1, \
  0\leqslant q\leqslant Q_y-1,
  \end{split}
\end{equation}
where $u^{e_{mn}}$, $u_x^{e_{mn}}$ and $u_y^{e_{mn}}$
are given by \eqref{equ_33} in terms of
the training parameters $w_j^{e_{mn}}$.
%These are a set of nonlinear algebraic equations about
%the training parameters  $w_j^{e_{mn}}$.
Enforcing the boundary condition~\eqref{equ_32b}
on the collocation points of the
four domain boundaries $x=a_1$ or $b_1$
and $y=a_2$ or $b_2$ leads to the equations
\eqref{equ_4}, \eqref{equ_5}, \eqref{equ_6} and \eqref{equ_7}.
Since equation \eqref{equ_32a} is of second-order 
with respect to both $x$ and $y$, we impose $C^1$
continuity conditions across the sub-domain boundaries
along both the $x$ and $y$ directions.
Enforcing the $C^1$ continuity conditions on
the collocation points of the sub-domain boundaries
$x=X_{m+1}$ ($0\leqslant m\leqslant N_x-2$) and
$y=Y_{n+1}$ ($0\leqslant n\leqslant N_y-2$) leads to
the equations \eqref{equ_8a}--\eqref{equ_8b}
and \eqref{equ_9a}--\eqref{equ_9b}.
%It should be noted that in all the above equations
%$V_j^{e_{mn}}(x,y)$ are known and all its partial derivatives 
%can be computed by auto-differentiation.


The set of equations consisting of \eqref{equ_34},
\eqref{equ_4}--\eqref{equ_7}, \eqref{equ_8a}--\eqref{equ_8b}
and \eqref{equ_9a}--\eqref{equ_9b}
is a system of nonlinear algebraic equations about
the training parameters
$w_j^{e_{mn}}$ ($1\leqslant j\leqslant M$,
$0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$).
In these equations the functions $V_j^{e_{mn}}(x,y)$ are all known
and their partial derivatives can be computed by auto-differentiation.
%
%form the system of algebraic equations that need to be solved for the
%determination of the training parameters
%$w_j^{e_{mn}}$ ($1\leqslant j\leqslant M$,
%$0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$).
This nonlinear algebraic system consists of
$ 
  N_xN_y(Q_xQ_y + 2Q_x + 2Q_y)
$ 
equations with $N_xN_yM$ unknowns.

This system is to be solved to determine the training parameters,
and in this paper we consider two methods for doing so.
In the first method
we seek a least squares solution to this system for
the training parameters $w_j^{e_{mn}}$, thus leading to
a nonlinear least squares problem.
In the second method we adopt a simple Newton's method combined with
a linear least squares computation for solving this system.


\begin{algorithm}[tb]
  \DontPrintSemicolon
  \SetKwInOut{Input}{input}\SetKwInOut{Output}{output}

  \Input{constant $\delta>0$, initial guess  $\mbs x_0$}
  \Output{solution vector $\mbs x$, associated cost $c$}
  \BlankLine\BlankLine
  call scipy.optimize.least\_squares routine using $\mbs x_0$ as the initial guess\;  
  set $\mbs x\leftarrow$ returned solution\;
  set $c\leftarrow$ returned cost\;
  \If{c is below a threshold}{return\;}
  \BlankLine\BlankLine
  \For{$i\leftarrow 0$ \KwTo maximum number of sub-iterations}{
    generate a random number $\xi_1$ on the interval $[0,1]$\;
    set $\delta_1 \leftarrow \xi_1\delta$\;
    generate a uniform random vector $\Delta\mbs x$ of the
    same shape as $\mbs x$ on the interval [$-\delta_1$, $\delta_1$]\;
    \BlankLine\BlankLine
    generate a random number $\xi_2$ on the interval $[0,1]$\;
    set $\mbs y_0 \leftarrow \xi_2\mbs x + \Delta\mbs x$\;
    \BlankLine\BlankLine
    call scipy.optimize.least\_squares routine using $\mbs y_0$ as the initial guess\;
    \If{the returned cost is less than $c$}{
      set $\mbs x\leftarrow$ the returned solution\;
      set $c\leftarrow$ the returned cost\;
    }
    \If{the returned cost is below a threshold}{
      return\;
    }
  }
  \caption{NLSQ-perturb (nonlinear least squares with perturbations)}
  \label{alg:alg_1}
\end{algorithm}



With the first method, to solve the nonlinear least squares problem,
we take advantage of the nonlinear least squares implementations
from the scientific libraries.
%since efficient nonlinear least squares
%implementations are available in scientific libraries,
In the current implementation, we employ the nonlinear least squares
routine ``least\_squares''
from the scipy.optimize package.
%We have implemented two methods for solving this system of
%equations. The first method is
%the same nonlinear least squares
%method~\cite{BranchCL1999} as in Section \ref{sec:nonl_steady},
%available as the ``least\_squares'' routine in
%the scipy.optimize package.
This method typically works quite well, and exhibits a
smooth convergence behavior.
However, we observe that
in certain cases, e.g.~when the simulation resolution is not sufficient or
sometimes in longer-time simulations with time-dependent nonlinear equations,
%(see Section \ref{sec:tnleq} below),
this method can at times be attracted to
and trapped in a local-minimum
solution. While the method indicates that the nonlinear iterations
have converged,
%(based on any of the several stopping criteria within), 
the norm of the converged equation residuals can remain
quite large.
In the event this takes place,  the obtained solution
can contain significant errors and the simulation 
loses accuracy from that point onward. This issue is typically encountered
when the resolution of the computation
(e.g.~the number of collocation points in the domain
or the number of training parameters in the neural network) decreases 
to a certain point.
This has been a main issue with the nonlinear least squares
computation using this method.

To alleviate this problem and make the nonlinear least squares
computation more robust,
we find it necessary to incorporate a sub-iteration procedure with random
perturbations to the
initial guess when invoking the nonlinear
least squares routine.
The basic idea is as follows.
If the nonlinear least squares routine converges with
the converged cost (i.e.~norm of the equation residual) exceeding a threshold,
the sub-iteration procedure will be triggered. Within
each sub-iteration a random initial guess for the solution
is generated, based on e.g.~a perturbation to the current
approximation of the solution vector,
and is fed to the nonlinear least squares routine.

Algorithm \ref{alg:alg_1} illustrates
the nonlinear least squares computation combined with the sub-iteration procedure,
which will be referred to as the NLSQ-perturb
(Nonlinear Least SQuares with perturbations) method hereafter.
In this algorithm the parameter $\delta$ controls the maximum range on
which the random perturbation vector is generated.
Numerical experiments indicate that the method works better
if $\delta$ is not large; a typical value that is observed
to work well in numerical simulations is $\delta=0.5$.
%
Combined with an appropriate resolution (the number of collocation
points in domain, and the number of training parameters in the neural network)
for a given problem,
the NLSQ-perturb method turns out to be very effective.
The solution can typically be attained with only
a few (e.g.~around $4$ or $5$) sub-iterations if such an iteration
is triggered.
For the numerical tests reported in Section \ref{sec:tests},
we employ a threshold value $10^{-3}$ in the lines $4$ and $18$
of Algorithm \ref{alg:alg_1}. The final converged cost value
is typically on the order $10^{-13}$.
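A Python sketch of Algorithm \ref{alg:alg_1}, built on the `least_squares` routine from scipy.optimize (the residual function and parameter values below are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import least_squares

def nlsq_perturb(residual, x0, delta=0.5, threshold=1e-3,
                 max_subit=10, seed=0):
    """Nonlinear least squares with random perturbations of the
    initial guess (sketch of the NLSQ-perturb algorithm)."""
    rng = np.random.default_rng(seed)
    res = least_squares(residual, x0)
    x, c = res.x, res.cost
    if c < threshold:
        return x, c
    # Sub-iterations: retry from randomly perturbed initial guesses.
    for _ in range(max_subit):
        delta1 = rng.uniform(0.0, 1.0) * delta
        dx = rng.uniform(-delta1, delta1, size=x.shape)
        y0 = rng.uniform(0.0, 1.0) * x + dx   # perturbed initial guess
        res = least_squares(residual, y0)
        if res.cost < c:
            x, c = res.x, res.cost
        if res.cost < threshold:
            break
    return x, c
```

Here `res.cost` is scipy's cost ($\frac12\|$residual$\|^2$), playing the role of the cost $c$ in Algorithm \ref{alg:alg_1}.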
%This sub-iteration procedure is critical to this method
%for long-time simulations.

\begin{remark}\label{rem_9}
In Algorithm \ref{alg:alg_1}
the value $\xi_2$ controls around which point the random perturbation
is generated, and it is taken to be a random value from $[0,1]$.
An alternative is
to fix this value at $\xi_2=0$ or $\xi_2=1$, both of which have been observed
to work well in actual simulations.
With $\xi_2=0$, one effectively generates a random perturbation
around the origin and uses it as the initial guess.
With $\xi_2=1$, one effectively sets the initial guess
to a random perturbation
of the best approximation obtained so far.
  
\end{remark}


The second method for solving the nonlinear algebraic system
is a combination of Newton iterations
with linear least squares computations, which we will refer to
as the Newton-LLSQ (Newton-Linear Least SQuares) method hereafter.
The convergence behavior of this method is not
as regular as that of the first method, but it appears less likely
to be trapped in local-minimum solutions.
%in longer-time nonlinear simulations.
To outline the idea of the method, let
\begin{equation}
  {\mbs G}(\mbs W) = 0, \quad
  \text{where}\ \mbs G=(G_1,G_2,\dots,G_m), \
  \mbs W=(w_1,w_2,\dots,w_n)
\end{equation}
denote a system of $m$ nonlinear algebraic equations about
$n$ variables $\mbs W$. Let the superscript in ${\mbs W}^{(k)}$ denote
the approximation of the solution at the $k$-th iteration, and
$\Delta \mbs W$ denote the solution increment.
We update the solution iteratively as follows, in a manner similar to
Newton's method,
\begin{align}
  &
  \mbs J(\mbs W^{(k)})\Delta \mbs W = -\mbs G(\mbs W^{(k)}), \label{equ_39}
  \\
  &
  \mbs W^{(k+1)} = \mbs W^{(k)} + \Delta \mbs W,
\end{align}
where $\mbs J(\mbs W^{(k)})$ is the Jacobian matrix given by
$
J_{ij} = \frac{\partial G_i}{\partial w_j}
$
($1\leqslant i\leqslant m$, $1\leqslant j\leqslant n$)
and evaluated at $\mbs W^{(k)}$.
The departure point from the standard Newton method lies in
that the linear algebraic system \eqref{equ_39}
involves a non-square coefficient
matrix (Jacobian matrix).
We seek a least squares solution to the linear system \eqref{equ_39},
and solve this system for
the increment $\Delta \mbs W$ using the linear least squares routine
from LAPACK.
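A minimal sketch of this Newton-LLSQ iteration, under the assumption that the user supplies the residual $\mbs G$ and the (generally non-square) Jacobian:

```python
import numpy as np
from scipy.linalg import lstsq

def newton_llsq(G, jac, W0, max_iter=20, tol=1e-12):
    """Newton iterations in which the linearized system
    J(W) dW = -G(W), with an m-by-n (non-square) Jacobian J,
    is solved in the least squares sense via LAPACK-backed lstsq."""
    W = np.asarray(W0, dtype=float)
    for _ in range(max_iter):
        r = G(W)
        if np.linalg.norm(r) < tol:
            break
        dW, *_ = lstsq(jac(W), -r)    # least squares solve for the increment
        W = W + dW
    return W
```

For a consistent overdetermined toy system (three equations, two unknowns) the iteration recovers the exact root.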

% comment/comparison between NLSQ-perturb and Newton-LLSQ

\begin{remark}
  \label{rem_3}
  It is observed that the computational cost of the
  Newton-LLSQ method is typically considerably smaller than that of the
  NLSQ-perturb method in training the locELM neural networks. On the other hand,
  the locELM solutions obtained with the Newton-LLSQ method are
  in general markedly less accurate than those obtained using
  the NLSQ-perturb method.

  
\end{remark}

%and can be computed by efficient routines in scientific
%libraries. In the current work we employ the
%``least\_squares'' routine in the scipy.optimize package
%for the nonlinear least squares solution, which implements
%a trust-region reflective Newton method~\cite{BranchCL1999,Teunissen1990}.
%The adjustable parameters $w_j^{e_{mn}}$ of the neural
%network is thus trained by this nonlinear least squares computation.

% how to deal with failure of convergence for nonlinear least squares solve?

% some implementation details

In the current work, we implement the local neural networks
for each sub-domain $e_{mn}$
using one or several dense Keras layers, with the collocation
points $(x_p^{e_{mn}},y_q^{e_{mn}})$ as the input data and
$u^{e_{mn}}$ as the output. In the implementation, an affine mapping is
incorporated into each local neural network,
right after the input layer, to normalize the input
$(x,y)$ data to the domain $[-1,1]\times[-1,1]$ for each sub-domain.
The set of local neural networks logically forms
a multiple-input multiple-output Keras model.
The weight/bias coefficients in all the hidden layers are set to
uniform random values generated on $[-R_m,R_m]$.
The weight coefficients of the output layers ($w_j^{e_{mn}}$) of
the local neural networks are determined by
the solution to the nonlinear algebraic system,
obtained using the NLSQ-perturb or Newton-LLSQ method.
The partial derivatives
involved in the formulation are computed by auto-differentiation
with the TensorFlow package.
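As a concrete illustration of this setup, the following NumPy sketch mimics a single local network: an affine map normalizes the sub-domain to $[-1,1]\times[-1,1]$, one hidden layer carries fixed random coefficients drawn from $[-R_m,R_m]$, and the output is the linear combination of the hidden-layer outputs with the trainable coefficients $w_j$. All names, sizes, and the sub-domain extent are illustrative, not the paper's actual Keras implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50            # nodes in the (last) hidden layer
Rm = 1.0          # hidden coefficients drawn uniformly from [-Rm, Rm]
xa, xb, ya, yb = 0.0, 0.5, 0.0, 0.5   # sub-domain extent (illustrative)

# Fixed (untrained) hidden-layer weights and biases.
Wh = rng.uniform(-Rm, Rm, size=(2, M))
bh = rng.uniform(-Rm, Rm, size=M)

def hidden_output(x, y):
    """V_j(x, y): hidden-layer output after affine normalization to [-1,1]^2."""
    xn = 2.0 * (x - xa) / (xb - xa) - 1.0
    yn = 2.0 * (y - ya) / (yb - ya) - 1.0
    return np.tanh(np.stack([xn, yn], axis=-1) @ Wh + bh)

def u_local(x, y, w):
    """u(x, y) = sum_j V_j(x, y) w_j, with w the trainable output coefficients."""
    return hidden_output(x, y) @ w

w = rng.standard_normal(M)   # would be set by the least squares solve
val = u_local(np.array([0.25]), np.array([0.25]), w)
```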



\subsubsection{Time-Dependent Nonlinear Differential Equations}
\label{sec:tnleq}


We next consider initial-boundary value problems
involving time-dependent nonlinear differential equations together with Dirichlet
boundary conditions, and discuss how to solve such problems using the locELM method.
We make the same assumptions about the differential equation
as in Section \ref{sec:nonl_steady}: The highest-order terms
are assumed to be linear, and the nonlinear terms may involve
the unknown function or its partial derivatives of lower orders.
We again focus on two spatial dimensions, plus time $t$,
%in the following discussions,
and assume that
the equation is of second order with respect to both
spatial coordinates ($x$ and $y$).

Consider the following generic nonlinear partial
differential equation of such a form on a spatial-temporal domain $\Omega$,
supplemented by the Dirichlet boundary condition and an initial condition,
\begin{subequations}
  \begin{align}
    &
    \frac{\partial u}{\partial t} = Lu + F(u,u_x,u_y) + f(x,y,t),
    \label{equ_35a}
    \\
    &
    u(x,y,t) = g(x,y,t), \quad
    \text{for} \ (x,y)\ \text{on the spatial domain boundary}, \label{equ_35b}
    \\
    &
    u(x,y,0) = h(x,y), \label{equ_35c}
  \end{align}
\end{subequations}
where $u(x,y,t)$ is the unknown field function to be solved for,
$L$ is a second-order linear differential operator with
respect to both $x$ and $y$, $F$ denotes the nonlinear term,
$f(x,y,t)$ is a prescribed source term, $g(x,y,t)$ denotes
the Dirichlet boundary data, and $h(x,y)$ is the initial
field distribution.

Our discussion below largely parallels that of Section \ref{sec:unsteady}.
We first discuss the basic method on a spatial-temporal domain,
and then develop the block time-marching idea for longer-time simulations
of the nonlinear partial differential equations.

\paragraph{Basic Method}

We focus on a rectangular spatial-temporal domain
$
\Omega = \{
(x,y,t)\ |\ x\in[a_1,b_1], \ y\in[a_2,b_2], \ t\in[0,\Gamma]
\},
$
and solve the initial-boundary value problem consisting of
equations \eqref{equ_35a}--\eqref{equ_35c} on this domain.

Following the notation of Section~\ref{sec:basic}, we
use $N_x$, $N_y$ and $N_t$ to denote the number of sub-domains
along the $x$, $y$ and $t$ directions,
where the locations of the sub-domain boundaries along the three directions
are given by
the vectors $[X_0,X_1,\dots,X_{N_x}]$, $[Y_0,Y_1,\dots,Y_{N_y}]$
and $[T_0,T_1,\dots,T_{N_t}]$,
respectively. A sub-domain with the index $e_{mnl}$ corresponds to
the spatial-temporal region
$
\Omega_{e_{mnl}}=[X_m,X_{m+1}]\times[Y_n,Y_{n+1}]\times[T_l,T_{l+1}],
$
for $0\leqslant m\leqslant N_x-1$, $0\leqslant n\leqslant N_y-1$
and $0\leqslant l\leqslant N_t-1$.
Let $(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}})$
($0\leqslant p\leqslant Q_x-1$, $0\leqslant q\leqslant Q_y-1$,
$0\leqslant r\leqslant Q_t-1$)
denote the set of
$Q=Q_xQ_yQ_t$ collocation points on each sub-domain $e_{mnl}$.
%
% NN
Let $u^{e_{mnl}}(x,y,t)$ denote the output of the local neural network
corresponding to  the sub-domain $e_{mnl}$,
and $V_j^{e_{mnl}}(x,y,t)$ ($1\leqslant j\leqslant M$)
denote the output of the last hidden layer of the local neural network,
where $M$ is the number of nodes in the last hidden layer.
The following relations hold,
\begin{equation}\label{equ_36}
  \left\{
  \begin{split}
    &
    u^{e_{mnl}}(x,y,t) = \sum_{j=1}^M V_j^{e_{mnl}}(x,y,t) w_j^{e_{mnl}}, \quad
    u_x^{e_{mnl}}(x,y,t) = \sum_{j=1}^M \frac{\partial V_j^{e_{mnl}}}{\partial x} w_j^{e_{mnl}},
    \\
    &
    u_y^{e_{mnl}}(x,y,t) = \sum_{j=1}^M \frac{\partial V_j^{e_{mnl}}}{\partial y} w_j^{e_{mnl}},
    \quad
    \frac{\partial u^{e_{mnl}}}{\partial t}
    = \sum_{j=1}^M \frac{\partial V_j^{e_{mnl}}}{\partial t} w_j^{e_{mnl}},
    \\
    &
    \text{for}\ 0\leqslant m\leqslant N_x-1, \
    0\leqslant n\leqslant N_y-1, \
    0\leqslant l\leqslant N_t-1,
  \end{split}
  \right.
\end{equation}
where $w_j^{e_{mnl}}$ denote the weight coefficients
in the output layers of the local neural networks
and they constitute the training parameters of the network.


% discretization

Enforcing equation \eqref{equ_35a} on the collocation points
$(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}})$ of each sub-domain $e_{mnl}$
leads to
\begin{equation}\label{equ_37}
  \begin{split}
    &
    \sum_{j=1}^M\left.\left[
      \frac{\partial V_j^{e_{mnl}}}{\partial t}
      -L V_j^{e_{mnl}}
      \right]\right|_{(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}})}w_j^{e_{mnl}}
    - \left.F(u^{e_{mnl}},u_x^{e_{mnl}},u_y^{e_{mnl}}) \right|_{(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}})}
    \\
    & \qquad
    - f(x_p^{e_{mnl}},y_q^{e_{mnl}},t_r^{e_{mnl}}) = 0, \\
    &
    \text{for} \
    0\leqslant m\leqslant N_x-1, \
    0\leqslant n\leqslant N_y-1, \
    0\leqslant l\leqslant N_t-1, \
    0\leqslant p\leqslant Q_x-1, \
    0\leqslant q\leqslant Q_y-1, \\
    & \quad \ \
    0\leqslant r\leqslant Q_t-1,
  \end{split}
\end{equation}
where $u^{e_{mnl}}$, $u_x^{e_{mnl}}$ and $u_y^{e_{mnl}}$
are given by \eqref{equ_36} in terms of the known function
$V_j^{e_{mnl}}$ and its partial derivatives.
This is a set of nonlinear algebraic equations about
the training parameters $w_j^{e_{mnl}}$.
Enforcing the boundary condition \eqref{equ_35b}
on the collocation points of the four spatial
boundaries at $x=a_1$ or $b_1$ and $y=a_2$ or $b_2$
leads to the equations \eqref{equ_13a}--\eqref{equ_13d}.
Enforcing the initial condition \eqref{equ_35c}
on the spatial collocation points at $t=0$
results in equation \eqref{equ_14}.
We impose the $C^1$ continuity conditions on the unknown field 
$u(x,y,t)$ across the sub-domain boundaries
along the $x$ and $y$ directions, since $L$ is assumed to be
a second-order operator with respect to both $x$ and $y$.
We impose the $C^0$ continuity condition across the
sub-domain boundaries in
the temporal direction, since equation \eqref{equ_35a} is
first-order with respect to time.
Enforcing the $C^1$ continuity conditions
on the collocation points on the sub-domain boundaries
$x=X_{m+1}$ ($0\leqslant m\leqslant N_x-2$) and
$y=Y_{n+1}$ ($0\leqslant n\leqslant N_y-2$) leads to
the equations \eqref{equ_15a}--\eqref{equ_16b}.
Enforcing the $C^0$ continuity condition on
the collocation points on the sub-domain boundaries
$t=T_{l+1}$ ($0\leqslant l\leqslant N_t-2$) leads
to the equation \eqref{equ_17}.

The set of equations consisting of \eqref{equ_37} and
\eqref{equ_13a}--\eqref{equ_17} is a nonlinear algebraic system
of equations about the training parameters $w_j^{e_{mnl}}$.
%
% comment on the system, how to solve it,
% two methods: scipy routine, and straightforward Newton iteration
%    plus linear least squares solve
%
This system consists of
$
N_xN_yN_t[Q_xQ_yQ_t+2(Q_x+Q_y)Q_t + Q_xQ_y]
$
coupled nonlinear algebraic equations with
$N_xN_yN_tM$ unknowns.
%$w_j^{e_{mnl}}$.
This system can be solved using the NLSQ-perturb or
Newton-LLSQ methods from Section \ref{sec:nonl_steady}
to determine the training parameters $w_j^{e_{mnl}}$.
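For a concrete sense of the size of this system, the equation and unknown counts can be tallied directly from the expressions above; the parameter values below are illustrative only:

```python
# Count the coupled nonlinear equations and the unknowns for the
# spatial-temporal locELM system (illustrative parameter values).
Nx, Ny, Nt = 2, 2, 2      # sub-domains per direction
Qx, Qy, Qt = 10, 10, 10   # collocation points per direction, per sub-domain
M = 300                   # training parameters per local neural network

n_equations = Nx * Ny * Nt * (Qx*Qy*Qt + 2*(Qx + Qy)*Qt + Qx*Qy)
n_unknowns = Nx * Ny * Nt * M
```

With these values the system has $12000$ equations in $2400$ unknowns, i.e.~it is overdetermined, which is why a (nonlinear) least squares solution is sought.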


% what else to discuss for the basic method?


\paragraph{Block Time-Marching}

% for longer-time simulations

For longer-time simulations of time-dependent nonlinear
differential equations, we employ a block time-marching
strategy analogous to that of Section \ref{sec:block}.
Let
$
\Omega = \{
(x,y,t) | x\in[a_1,b_1],\ y\in [a_2,b_2],\ t\in [0,t_f]
\}
$
denote the spatial-temporal domain on which the problem is to be solved,
where $t_f$ can be large.
We divide the temporal dimension into $N_b$ uniform time blocks,
with the block size $\Gamma = \frac{t_f}{N_b}$ being a moderate value,
and solve the problem on each time block separately and successively.
On the $k$-th ($0\leqslant k\leqslant N_b-1$) time block, we introduce
a shifted time $\xi$ and a new dependent variable $U(x,y,\xi)$ as given by
equation \eqref{equ_18}.
Then equation \eqref{equ_35a} is transformed into
  \begin{align}
    &
    \frac{\partial U}{\partial \xi} = LU + F(U, U_x, U_y)+ f(x,y,\xi+k\Gamma),
    \label{equ_41}
  \end{align}
where $U_x=\frac{\partial U}{\partial x}$ and $U_y=\frac{\partial U}{\partial y}$.
Equation~\eqref{equ_35b} is transformed into \eqref{equ_19b}.
The initial condition for time block $k$ is given by \eqref{equ_20},
in which the initial distribution data is given by \eqref{equ_21}.

The initial-boundary value problem consisting of equations
\eqref{equ_41}, \eqref{equ_19b} and \eqref{equ_20},
on the spatial-temporal domain
$
\Omega^{st}=[a_1,b_1]\times[a_2,b_2]\times[0,\Gamma],
$
is the same problem we have considered before,
and can be solved for $U(x,y,\xi)$ using the basic method.
The solution $u(x,y,t)$ on time block $k$
can then be recovered by the transform \eqref{equ_18}.

% block by block marching

Starting with the first time block, we can solve the initial-boundary
value problem on each time block successively.
After the problem on the $k$-th block is solved,
the obtained solution can be evaluated at $t=(k+1)\Gamma$ 
and used as the initial condition for the computation on
the subsequent time block.
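The block-by-block marching loop has a simple structure, sketched below on the scalar model problem $u'=-u^2$, whose per-block solution is known in closed form. The closed-form `solve_block` is a hypothetical stand-in for the locELM solve of \eqref{equ_41}, \eqref{equ_19b} and \eqref{equ_20} on one time block:

```python
def solve_block(u0, Gamma):
    """Solve u' = -u^2 on one time block [0, Gamma] with initial value u0.

    Stand-in for the locELM per-block solve; for this model problem the
    block solution is known in closed form: U(xi) = u0 / (1 + u0*xi).
    """
    return lambda xi: u0 / (1.0 + u0 * xi)

t_f, N_b = 10.0, 5
Gamma = t_f / N_b           # block size
u0 = 1.0                    # initial condition at t = 0

for k in range(N_b):
    U = solve_block(u0, Gamma)
    u0 = U(Gamma)           # solution at the block's end time -> next block's IC

# Exact solution of u' = -u^2, u(0) = 1 is u(t) = 1/(1 + t).
```

Because each block here is solved exactly, the marching loop reproduces the global solution $u(t_f)=1/(1+t_f)$; in the locELM setting, each block instead incurs the (small) error of the per-block least squares solve.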

% local minimum solution when using least_squares routine
% sub-iteration

\begin{remark}
  \label{rem_6}
  We observe from numerical experiments that the time block size $\Gamma$
  can play a crucial role
  in long-time simulations of time-dependent
  nonlinear differential equations.
%  \begin{itemize}
%  \item
  %The time block size $\Gamma$.
  In general, reducing $\Gamma$ can improve the
  convergence of the nonlinear iterations on the time blocks,
  whereas if $\Gamma$ is too large, the nonlinear iterations
  may fail to converge.
  With the other simulation parameters (such as the number of collocation
  points in the time block and the number of training parameters in
  the neural network) fixed, reducing the time block size effectively amounts to
  an increase in the resolution of the data on each time block.

 % \item
 %   When using the nonlinear least squares routine ``least\_squares''
 %   (from the scipy.optimize package) for the computation on
 %   each time block, the sub-iteration procedure
 %   discussed earlier  is critical to the success of long-time
 %   simulations. Without the sub-iteration procedure, we observe that
 %   this method from time to time produces converged
 %   solutions with not so small residual norms, which completely destroys
 %   the simulation accuracy.
  
 % \end{itemize}

\end{remark}

% comment on time-stepping scheme with nonlinear least squares
%   but this only applies to certain PDEs, general PDEs may not apply
%   maybe add this to section on tests with Burgers' equation, and
%   discuss the semi-implicit time-stepping scheme there

% what else to discuss here?

% what is the advantage of the local ELM method? compared with
%   single-domain ELM? is there any advantage?
%   is there any advantage compared with DNN methods?
% is the extension to nonlinear PDE a new aspect of this paper? has it
%   been done before?
%   local refinement can be done easily, better for more complicated function
%   distributions, 

% comment on comparison with DGM/PINN
% comment on comparison with classical FEM

\begin{remark}\label{rem_dd}
  We will present numerical experiments with nonlinear 
  PDEs in Section \ref{sec:tests} to compare
  the current locELM method with the deep Galerkin method (DGM) and
  the physics-informed neural network (PINN), and also compare
  the current method with the classical finite element method (FEM).
  We observe that, for these problems, the locELM method is considerably superior
  to DGM and PINN
  with regard to both accuracy and computational cost.
  %(network training time)
  In terms of computational performance, % (accuracy/cost),
  the locELM method is on par with the finite element method,
  and oftentimes the locELM performance exceeds that of the FEM.
  

\end{remark}


% what else to discuss here?






