\chapter{Temporal Discretization}

The system of equations to be advanced in time is given in \Cref{timeeqn}. Expanding the equation we get:
\begin{subequations}
\begin{align}
\label{temp_eqn}
 \dot{\vz} &= -\Szz \vz -\Szp \vp -\Szs \vs \\
 \dot{\vp} &= -\Spz \vz -\Spp \vp - \Sps \vs + P_s
\end{align}
The equation for $\varphi$ can be further simplified by taking into account the structure of the stencils.
\begin{align}
\dot{\vp} &=  -\Spp \vp -g \vz + P_s
\end{align}
where $g$ is the gravitational acceleration. The above equations, in the form of \Cref{timeeqn}, are solved using the explicit leap-frog scheme in \cite{Jong} and \cite{Wout}. The method is described in \Cref{section_leap_frog}.

The equation for the variable $\psi$, which does not contain a time derivative but still depends on the other variables, is given by
\begin{align}
 \Sss \vs &=  -\Ssp \vp
\end{align}
\end{subequations}



\section{Explicit Time Stepping}
In explicit schemes, the fluxes and the sources are computed at the $n^{th}$ time level and their contribution is added to the current value of the variable. In \cite{Wout} and \cite{Jong}, the explicit leap-frog method is used, which is second order accurate in time and only conditionally stable. Explicit time integration is simple and fast, and is easily parallelized on parallel computers.

The main disadvantage is that the time step is limited by numerical stability requirements. For the equations represented by \Cref{timeeqn}, the stability condition requires that the time step is shorter than the time it takes the fastest wave to cross a grid cell:

\begin{equation}
 \dt < \frac{\Delta x_i}{c^{max}_i}
\end{equation}

for all grid cells and all directions $i \in \{x,y\}$, where $c$ represents the wave velocity. This is the famous Courant–Friedrichs–Lewy (CFL) condition, valid for explicit time integration of arbitrary hyperbolic PDEs. In the current implementation, with a uniform grid, $\Delta x$ is constant for each cell, and the average current velocity ($\vec{U}$) is chosen as the maximum wave velocity to determine a time step satisfying the CFL condition. A safety margin is added to ensure stability without needing to check the stability criterion at each time step.
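As a concrete illustration, the time-step selection just described can be sketched as follows. This is a minimal sketch: the function name, the parameters, and the safety factor of 0.8 are our own illustrative choices, not the values used in the actual implementation.

```python
def cfl_time_step(dx, dy, u_max, safety=0.8):
    """Largest stable explicit time step on a uniform grid.

    dx, dy : cell sizes in the x and y directions
    u_max  : estimate of the fastest wave speed (here the mean
             current speed, as in the text)
    safety : margin < 1 so the criterion need not be re-checked
             at every time step
    """
    if u_max <= 0.0:
        raise ValueError("wave speed must be positive")
    # CFL: dt < min_i(dx_i / c_i); take the most restrictive direction.
    return safety * min(dx, dy) / u_max

# Example: 2 m cells and a 5 m/s wave speed give roughly 0.32 s.
print(cfl_time_step(2.0, 2.0, 5.0))
```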

The intention to use smaller mesh sizes to capture details of wave interaction with ships at finer levels will require a further reduction of the time step, and hence an increase in computation time. It therefore becomes increasingly important to explore time integration methods that are more stable, allowing us to use larger time steps while still giving good accuracy.


\section{Implicit Time Stepping}

\subsection{Fully Implicit Scheme}
Stability of the time integration can be significantly improved by employing fully implicit time integration techniques. The most common and simplest of the fully implicit schemes is the backward Euler scheme. For the set of equations given by \Cref{timeeqn}, we get:
\begin{subequations}
\begin{align}
\dfrac{q_{n+1} -q_n}{\dt} &= L q_{n+1} -\Szs \vs_{n+1}
\end{align}
\end{subequations}
where $q = \begin{bmatrix} \vz \\ \vp \end{bmatrix}$.
The method is first order accurate in time (as can be derived from a Taylor series expansion) and is unconditionally stable.

While the stability of the implicit time discretization is a great improvement over explicit schemes, one has to solve the implicit equations for the unknowns $q_{n+1}$ and $\vs_{n+1}$, which requires solving a system of Differential Algebraic Equations represented by:
\begin{subequations}
\begin{align}
\label{basic}
(I- \dt L)q_{n+1} &=q_n - \dt \Szs \vs_{n+1}\\
\Sss \vs_{n+1} &=  -\Ssp \vp_{n+1}
\end{align}
\end{subequations}
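The stability gain can be seen already on a scalar analogue $w' = \lambda w$ of the system, for which the backward Euler update $(1 - \dt\lambda)w_{n+1} = w_n$ mirrors the linear solve above. The values of $\lambda$ and $\dt$ below are illustrative only:

```python
# Scalar analogue w' = lam * w of the system above: one backward Euler
# step solves (1 - dt*lam) w_{n+1} = w_n, mirroring (I - dt L) q_{n+1} = ...
lam = -1000.0          # stiff decay rate (illustrative value)
dt = 0.01              # far above the explicit limit dt < 2/|lam| = 0.002
w_be, w_fe = 1.0, 1.0
for _ in range(50):
    w_be = w_be / (1.0 - dt * lam)   # implicit: unconditionally stable
    w_fe = w_fe * (1.0 + dt * lam)   # explicit: |1 + dt*lam| = 9 > 1
print(abs(w_be) < 1.0, abs(w_fe) > 1.0)  # prints: True True
```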

As the method is first order accurate in time, while the spatial discretization is second order accurate, the overall accuracy of the method is only first order. Such methods are usually good for achieving steady-state results, but are not very accurate for transients. The disadvantages of using the fully implicit scheme include:

\begin{itemize}
 \item Moving from explicit to implicit schemes incurs extra computational effort, as it requires solving the linear system of equations given by \Cref{basic}. For the system represented above, iterative linear solvers are used, which are described in more detail in Chapter 5.
 \item First order implicit methods simply under-relax more to maintain the stability of the iterative solution. It is this increased damping with increasing time-step size that induces inaccuracies in the transient behavior.
 \item Spatial discretization plays an important role in the stability of numerical solutions of unsteady hyperbolic equations. Generally, for the Euler equations, the central difference scheme is more accurate than first order upwind schemes, and stability is not an issue for central differencing as the diffusive forces are not active. However, when flux limiters, which are capable of capturing shocks, discontinuities or sharp changes in the solution, are used for the spatial discretization, the implicit scheme results in a non-linear system of equations, which requires more computational effort (computation of the Jacobian).
\end{itemize}


\subsection{$\beta$-Implicit Scheme}

Semi-implicit methods try to combine the stability of the implicit methods with the efficiency of the explicit methods. The system of equations is discretized using a one parameter ($\beta$) implicit scheme to advance in time:
\begin{subequations}
\begin{align}
q_{n+1} &= q_n + \dt (\beta ( L q_{n+1} - \Szs \vs_{n+1} ) + (1-\beta) ( L q_n - \Szs \vs_n)) 
\end{align}
\end{subequations}

The parameter $\beta$ can vary between 0 and 1. For $\beta=1$, we get the backward Euler method (fully implicit), and for $\beta=0.5$ the so-called trapezoidal scheme.

For $\beta =0.5$, we can rewrite the equations as:
\begin{subequations}
\begin{align}
q_{n+1} &= q_n +0.5 \dt (  \dot{q_{n+1}} + \dot{q_n}  )\\
	&= q_n + \dt \dot{q_n} + 0.5 \dt ( \dot{q_{n+1}} - \dot{q_n} )\\
	& = q_n + \dt \dot{q_n} + \frac{1}{2} \dt^2 \frac{d\dot{q}}{dt} + O (\dt^3)
\end{align} 
\end{subequations}

which gives the trapezoidal method of second order temporal accuracy.
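These orders of accuracy can be checked numerically on the scalar test problem $w' = -w$, $w(0)=1$, integrated to $t=1$. A minimal sketch (function names and step counts are our own choices); halving the step size should reduce the error by a factor of about 2 for backward Euler and about 4 for the trapezoidal scheme:

```python
import math

def beta_step(w, z, beta):
    """One step of the one-parameter scheme on w' = lam*w, with z = dt*lam."""
    return w * (1.0 + (1.0 - beta) * z) / (1.0 - beta * z)

def error(beta, n):
    """Error at t = 1 for w' = -w, w(0) = 1, using n steps."""
    dt = 1.0 / n
    w = 1.0
    for _ in range(n):
        w = beta_step(w, -dt, beta)
    return abs(w - math.exp(-1.0))

# Halving dt divides the error by ~2 for beta=1 and ~4 for beta=0.5.
print(round(error(1.0, 100) / error(1.0, 200), 1))   # ~2.0 (first order)
print(round(error(0.5, 100) / error(0.5, 200), 1))   # ~4.0 (second order)
```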

\subsection{Stability}

\textbf{The Scalar Test Equation}

To understand the stability of time integration methods, we consider the scalar, complex test equation:
\begin{equation}
 w^{'} (t) = \lambda w (t)
\end{equation}

where $\lambda \in \mathbb{C}$.

Application of the time integration scheme (Explicit or Implicit) gives :
\begin{equation}
 w_{n+1} = R ( \dt \lambda) w_n
\end{equation}

$R (\dt \lambda)$ is called the stability function. Let $z = \dt \lambda$. For explicit schemes with $s$ stages, $R(z)$ is a polynomial of degree $\leq s$. For implicit methods it is a rational function with the degrees of both numerator and denominator $\leq s$. The stability region is defined in terms of $R(z)$ as:
\begin{equation}
S = \{ z \in \mathbb{C} : | R(z)| \leq 1 \}
\end{equation}

A scheme with the property that $S$ contains the entire left half plane $\mathbb{C}^{-} = \{z \in \mathbb{C} : \mathrm{Re} (z) \leq 0\}$ is called A-stable. A scheme is said to be strongly A-stable if it is A-stable with $|R(\infty)| <1$, and it is said to be L-stable if in addition $|R(\infty)| =0$.

For the semi-implicit scheme with parameter $\beta$, the stability function is $R(z) = \dfrac{1+ (1-\beta)z}{1-\beta z}$. The implicit trapezoidal rule given by $\beta =0.5$ is A-stable, whereas the fully implicit Backward Euler method is L-stable with $\beta =1$.
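The difference between A-stability and L-stability is easy to see by evaluating this stability function on the negative real axis (the sample points for $z$ are illustrative):

```python
# Stability function R(z) = (1 + (1-beta) z) / (1 - beta z) evaluated on
# the negative real axis, where z = dt*lam for a decaying mode.
def R(z, beta):
    return (1.0 + (1.0 - beta) * z) / (1.0 - beta * z)

for z in (-1.0, -10.0, -1000.0):
    # Trapezoidal (beta=0.5): |R| <= 1 but -> 1 as z -> -inf (A-stable only),
    # so very stiff modes are barely damped.
    # Backward Euler (beta=1): |R| -> 0 as z -> -inf (L-stable).
    print(abs(R(z, 0.5)), abs(R(z, 1.0)))
```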

\textbf{Stability for Linear Systems}

Let the linear system of equations ($m$ equations, $m$ unknowns) be given as:
\begin{equation}
 w^{'} (t) = A w (t) + g(t)
\end{equation}

with $A \in \RR^{m \times m}$. Application of the semi-implicit scheme with parameter $\beta$ gives:
\begin{equation}
 w_{n+1} = R ( \dt A) w_n + (I - \beta \dt A)^{-1} \dt g_{n+ \beta}
\end{equation}

where 
\begin{equation}
R(\dt A) = (I - \beta \dt A)^{-1} (I + (1- \beta) \dt A)
\end{equation}

and $g_{n+ \beta} =  (1-\beta)g(t_n) + \beta g (t_{n+1})$. Starting from the initial solution $w_0$, we obtain:

\begin{equation}
 w_n = R ( \dt A)^n w_0 +  \dt \sum_{i=0}^{n-1} R(\dt A)^{n-i-1}(I - \beta \dt A)^{-1} g_{i+ \beta}
\end{equation}

If we perturb the initial solution to $\hat w_0$, we get the following formula for the perturbed solution at the $n^{th}$ time step:

\begin{equation}
 \hat w_n - w_n = R(\dt A)^n (\hat w_0-w_0)
\end{equation}

Hence, the powers $R(\dt A)^n$ determine the growth of the initial errors.

Let $\lambda_j$ with $ 1 \leq j \leq m $ denote the eigenvalues of the matrix $A$, and let $A$ be diagonalizable such that $A = U \Lambda U^{-1}$, where $\Lambda =diag(\lambda_j)$ \cite{Hundsdorfer}.
Let $K$ be the condition number of $U$. Then $\dt \lambda_j \in S$ for $1 \leq j \leq m$ implies $||R(\dt A)^n|| \leq K \text{  }\forall n \geq 1$, where $S$ represents the stability region described above for the scalar test equation.

Stability of the semi-implicit methods requires a moderate bound for these powers. If some $\dt \lambda_j$ lies outside the stability region $S$, the powers are not bounded, making the method unstable.
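A small $2 \times 2$ illustration of bounded powers: the matrix below has eigenvalues $\pm i$ (an undamped oscillation), so $\dt \lambda_j$ sits on the imaginary axis, which lies in $S$ for the trapezoidal rule. The matrix and $\dt$ are our own illustrative choices; for this skew-symmetric $A$, $R(\dt A)$ is a rotation, so its powers never grow:

```python
dt = 0.1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 1.0], [-1.0, 0.0]]          # eigenvalues +i and -i
# R(dt A) = (I - 0.5 dt A)^{-1} (I + 0.5 dt A), inverted by hand for 2x2.
a = 0.5 * dt
det = 1.0 + a * a                      # det(I - a*A) for this A
Minv = [[1.0 / det, a / det], [-a / det, 1.0 / det]]
P = [[1.0, a], [-a, 1.0]]              # I + a*A
Rm = matmul(Minv, P)                   # a pure rotation matrix

# Powers R^n stay bounded: the largest entry never exceeds 1 (up to rounding).
X = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(1000):
    X = matmul(X, Rm)
norm = max(abs(X[i][j]) for i in range(2) for j in range(2))
print(norm <= 1.0 + 1e-9)              # prints: True
```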

\subsection{Backward Differentiation Formula}

Another approach to achieve second order temporal accuracy is by using information from multiple previous time steps. This gives rise to a two-parameter three-level time integration scheme:

\begin{align}
\label{BDF2}
q_{n+1} &=  q_n + \dt \left [ \alpha \dfrac{q_n - q_{n-1}}{\dt} - \alpha \dot{q_n} + \beta \dot{q_{n+1}}  + (1- \beta) \dot {q_n} \right ]
\end{align}

where $\dot{q}_n$ represents the derivative of $q$ at the $n^{th}$ time level.

This scheme is three-level whenever the parameter $\alpha \neq 0$. When $\alpha =1/3 $ and $\beta =2/3$, we obtain the second order Backward Differentiation Formula (BDF2) for constant time step $\dt$.

The analysis of the accuracy of a multi-step method is presented below. Let us assume that $q_n$ was computed from $q_{n-1}$ with second order temporal accuracy. This implies

\begin{equation}
  \dfrac{q_n - q_{n-1}}{\dt} = \dot{q_n} - (\dt/2) \dfrac{d\dot q}{dt} + O(\dt^2)
\end{equation}

Substituting this into \Cref{BDF2}
\begin{subequations}
\begin{align}
q_{n+1}& =  q_n + \dt \left [ \dot{q_n} -\alpha \dfrac{\dt}{2} \dfrac{d\dot q}{dt}  + \beta \dt \dfrac{d\dot q}{dt} + O(\dt^2)\right ] \\
& = q_n + \dt \dot{q_n}  + \dt^2 (\beta - \dfrac{\alpha }{2}) \dfrac{d\dot q}{dt} + O(\dt^3)
\end{align}
\end{subequations}
The method has second order temporal accuracy when $2\beta - \alpha =1$. At the start of the simulation, one could use the trapezoidal scheme for second order accuracy, or the more stable backward Euler method (as the value at the $(n-1)^{th}$ time step is not yet available). The BDF2 method has better stability properties than the trapezoidal rule, and is regularly used for stiff problems. It can also be viewed as an implicit counterpart of the explicit leap-frog scheme.
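The second order convergence of BDF2 with a trapezoidal starting step can be verified on the scalar problem $w' = -w$. A minimal sketch (names and step counts are illustrative):

```python
import math

def bdf2_error(n):
    """Error at t=1 for w' = -w, w(0)=1: BDF2 with a trapezoidal start."""
    dt = 1.0 / n
    w0 = 1.0
    # One trapezoidal step supplies the second starting value.
    w1 = w0 * (1.0 - 0.5 * dt) / (1.0 + 0.5 * dt)
    for _ in range(n - 1):
        # BDF2: (3/2 w_{n+1} - 2 w_n + 1/2 w_{n-1}) / dt = -w_{n+1}
        w0, w1 = w1, (2.0 * w1 - 0.5 * w0) / (1.5 + dt)
    return abs(w1 - math.exp(-1.0))

# Error drops by ~4x when dt is halved: second order.
print(round(bdf2_error(100) / bdf2_error(200), 1))   # ~4.0
```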

\subsection{Predictor Corrector Methods}
 Simply stated, a predictor-corrector method is an algorithm that proceeds in two steps. First, the prediction step computes a rough approximation of the desired quantity using an inexpensive algorithm, such as an explicit method. Second, the corrector step refines the initial approximation by other means, for example an implicit method.
 
 There are various variants of predictor-corrector methods, depending upon how the corrector algorithm is applied, and how many times.
 
 \begin{itemize}
  \item A predictor formula is used to get a first estimate of the next value of the dependent variables, and the corrector formula is applied iteratively until convergence is obtained. In this case, the stability properties of the algorithm are completely determined by the corrector formula alone and the predictor formula only influences the number of iterations required.
  \item The values of the dependent variables obtained from one application of the corrector formula are regarded as the final values. The predicted and corrected values are compared to obtain an estimate of the truncation error associated with the integration step. Based on the allowable error limits, the corrected value is either accepted, or the time step reduced starting from the last accepted point. For such method, stability analysis of the corrector equation alone is not sufficient. The analysis must include the predictor equation, the corrector equation and the manner in which they are used.
 \end{itemize}
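The first variant can be sketched on the scalar problem $w' = -w$, with forward Euler as the predictor and the trapezoidal rule as the corrector, iterated to convergence; the fixed point then equals the implicit trapezoidal update, so the stability is that of the corrector alone. All names and tolerances here are our own illustrative choices:

```python
def f(w):
    return -w   # scalar model problem w' = -w

def pc_step(w, dt, tol=1e-12, max_iter=50):
    """One predictor-corrector step: explicit Euler predicts,
    the trapezoidal corrector is iterated until convergence."""
    w_pred = w + dt * f(w)                   # predictor (forward Euler)
    w_new = w_pred
    for _ in range(max_iter):                # corrector, iterated
        w_next = w + 0.5 * dt * (f(w) + f(w_new))
        if abs(w_next - w_new) < tol:
            return w_next
        w_new = w_next
    return w_new

# The fixed point equals the exact trapezoidal update (1-dt/2)/(1+dt/2)*w.
dt = 0.1
print(abs(pc_step(1.0, dt) - (1.0 - 0.5 * dt) / (1.0 + 0.5 * dt)) < 1e-10)
# prints: True
```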
 
 The application of a predictor-corrector method here may require another set of iterations, namely the computation of $\vs$ by solving the linear system of equations given by \Cref{basic}.


\subsection{Minimum Residual Predictor Corrector (MR-PC) Time Stepping \cite{Toth}} 

One time step by the MR-PC scheme consists of an explicit predictor step (with a time step larger than normally allowed by the CFL condition) and a corrector step where the linear system of an implicit scheme is solved by a few minimum residual-type (e.g. GMRES or BiCGSTAB) iterations with the initial guess from the predictor step. A nice aspect of the GMRES method is that it constructs implicitly an integration polynomial of which the coefficients are adjusted to the specific right-hand side of the system of equations (generated from the implicit scheme). If after some time steps, components of high frequencies in the solution have not damped out, then they are present in the right-hand side. As soon as they become large, they are automatically damped out by the GMRES polynomial, provided that the time step is not too large with respect to the number of GMRES iterations. \cite{Botchev}

Any of the implicit schemes described above can be taken for the correction step. To obtain good temporal accuracy, the choice of the predictor step is important. For second order accuracy, an initial guess from an explicit second order scheme (e.g. leap-frog or trapezoidal) can be used. If the corrector step is also second order, the overall MR-PC scheme will also be second order.

The stability region of MR-PC is wider than that of the explicit schemes discussed. For example, even with a single iteration in the corrector step, a second order MR-PC with a BDF2 corrector has an effective time step (the time step corrected for the additional computational effort) three times that of the explicit scheme. It is also possible to control the time step to ensure stability based on information from the GMRES process.

The residual iterations mainly serve to smooth the error generated by the explicit scheme. Also, no preconditioning is required, as we do not intend to solve the implicit set of equations exactly.

We will present this idea for the fully implicit backward Euler method, which is given by

\begin{align}
q_{n+1B} -q_n &= \dt (L q_{n+1B} -\Szs \vs_{n+1B})\\
\Sss \vs_{n+1B} &=  -\Ssp \vp_{n+1B}
\end{align}

In order to solve this equation by MR-PC, we instead apply the forward Euler (explicit) method to get the initial guess.

\begin{subequations}
 \label{forward_MRPC}
\begin{align}
q_{n+1F} -q_n &= \dt (L q_n - \Szs \vs_{n})\\
\Sss \vs_{n+1F} &=  -\Ssp \vp_{n+1F}
\end{align}
\end{subequations}

Let $\Delta q = q_{n+1B}- q_{n+1F}$ and $\Delta \vs = \vs_{n+1B} - \vs_{n+1F}$. Thus we obtain:
\begin{subequations}
\begin{align}
q_{n+1F} + \Delta q &= q_n + \dt (L (q_{n+1F} + \Delta q)  - \Szs (\vs_{n+1F} + \Delta \vs))\\
\Sss \Delta \vs &= -\Ssp \Delta \vp
\end{align}
\end{subequations}

This leads to

\begin{subequations}
\begin{align}
\Delta q &= q_n + \dt (L q_{n+1F} + L \Delta q) - q_{n+1F} - \dt \Szs (\vs_{n+1F} + \Delta \vs)\\
\label{deltaq}
(I - L \dt )\Delta q & =  q_n - (I -L \dt) q_{n+1F} - \dt \Szs (\vs_{n+1F} + \Delta \vs)\\
\label{deltavs}
\Sss \Delta \vs &= -\Ssp \Delta \vp
\end{align}
\end{subequations}

We intend to perform GMRES iterations on \Cref{deltaq}. After performing one explicit step (\Cref{forward_MRPC}) and obtaining most of the terms of the right-hand side, we are still left with $\Delta \vs$, which is linked to $\Delta q$ through \Cref{deltavs}. On the first pass, we could assume $\Delta \vs$ to be zero, perform a few iterations of GMRES, and then solve \Cref{deltavs}. Based on the value of $\Delta \vs$ obtained, we could decide whether another round of correction is required.
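One MR-PC step can be sketched on a small system $q' = Lq$ with the $\psi$ coupling dropped (i.e. $\Delta\vs$ assumed zero on the first pass). The matrix, time step, and iteration count below are illustrative, and simple minimal-residual iterations stand in for GMRES:

```python
dt = 0.5
L = [[-3.0, 1.0], [1.0, -3.0]]         # illustrative stand-in for the stencil
q_n = [1.0, 0.5]

def mv(M, v):                          # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def axpy(a, x, y):                     # a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

Ap = [[1.0 - dt * L[i][j] if i == j else -dt * L[i][j] for j in range(2)]
      for i in range(2)]               # A = I - dt*L

q_pred = axpy(dt, mv(L, q_n), q_n)     # predictor: one forward Euler step
rhs = [qn - v for qn, v in zip(q_n, mv(Ap, q_pred))]   # q_n - A q_pred

dq = [0.0, 0.0]                        # correction, initial guess zero
for _ in range(3):                     # a few minimal-residual iterations
    r = [b - v for b, v in zip(rhs, mv(Ap, dq))]
    Ar = mv(Ap, r)
    alpha = sum(ri * ai for ri, ai in zip(r, Ar)) / sum(ai * ai for ai in Ar)
    dq = axpy(alpha, r, dq)

q_corr = [p + d for p, d in zip(q_pred, dq)]  # near the backward Euler value
print(q_corr)
```

Even though this predictor badly overshoots at this time step, a handful of residual iterations pull the result close to the exact backward Euler solution $(I - \dt L)^{-1} q_n$.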

Along similar lines, a higher order method (e.g. with the second order trapezoidal scheme) can be constructed.
\vspace{0.3cm}

\textbf{GMRES iterations}

The iterations are applied to the linear system with matrix $L^{'}=I-L\dt$ for the backward Euler scheme. The matrix is non-symmetric and general in nature, and thus we cannot apply the Conjugate Gradient method for the iterations. More general Krylov subspace methods (GMRES, BiCGSTAB) can be used to perform the minimal residual iterations.

For $k$ steps of GMRES iterations, $k$ matrix-vector multiplications with $L^{'}$ and $\dfrac{k(k+1)}{2}+k$ inner products will be required. It might be possible to use the existing CUDA or C++ routines for the computation of the matrix-vector product with minimal adaptation.
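The operation count above can be read off directly from the algorithm. Below is a minimal full GMRES sketch (no restarts, dense vectors as plain lists, our own names throughout); per iteration it performs one matrix-vector product and, at step $j$, $j+1$ inner products for the modified Gram-Schmidt coefficients plus one for the norm, giving the $k(k+1)/2 + k$ total:

```python
import math

def gmres(matvec, b, x0, iters):
    """Full GMRES: iters matvecs and ~iters*(iters+1)/2 + iters inner products."""
    n = len(b)
    r0 = [bi - vi for bi, vi in zip(b, matvec(x0))]
    beta = math.sqrt(sum(ri * ri for ri in r0))
    if beta == 0.0:
        return x0
    V = [[ri / beta for ri in r0]]            # orthonormal Krylov basis
    H = []                                    # rotated Hessenberg columns
    cs, sn = [], []                           # Givens rotations
    g = [beta] + [0.0] * iters                # rotated residual vector
    for j in range(iters):
        w = matvec(V[j])                      # one matvec per iteration
        h = []
        for i in range(j + 1):                # modified Gram-Schmidt:
            hij = sum(wi * vi for wi, vi in zip(w, V[i]))  # j+1 inner products
            h.append(hij)
            w = [wi - hij * vi for wi, vi in zip(w, V[i])]
        hlast = math.sqrt(sum(wi * wi for wi in w))        # +1 for the norm
        h.append(hlast)
        if hlast > 1e-14:
            V.append([wi / hlast for wi in w])
        else:                                 # happy breakdown: exact solution
            V.append([0.0] * n)
        for i in range(j):                    # apply previous rotations
            h[i], h[i + 1] = (cs[i] * h[i] + sn[i] * h[i + 1],
                              -sn[i] * h[i] + cs[i] * h[i + 1])
        d = math.hypot(h[j], h[j + 1])        # new rotation zeroing h[j+1]
        cs.append(h[j] / d)
        sn.append(h[j + 1] / d)
        h[j], h[j + 1] = d, 0.0
        g[j], g[j + 1] = cs[j] * g[j], -sn[j] * g[j]
        H.append(h)
    y = [0.0] * iters                         # back-substitution for y
    for j in range(iters - 1, -1, -1):
        s = g[j] - sum(H[i][j] * y[i] for i in range(j + 1, iters))
        y[j] = s / H[j][j]
    return [x0[i] + sum(y[j] * V[j][i] for j in range(iters))
            for i in range(n)]

# Illustrative 3x3 system; 3 iterations recover the exact solution.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
mv = lambda v: [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
x = gmres(mv, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0], 3)
print(x)   # close to the exact solution [2/9, 1/9, 13/9]
```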


\subsection{Semi-Implicit schemes}

Any implicit method requires solving a linear system of equations. In \cite{Jong}, a linear solver has been developed for the case when the system of equations is represented by a pentadiagonal matrix. From \Cref{basic}, the linear system of equations formed for the variable $q_{n+1}$ is block pentadiagonal rather than pentadiagonal, as $q = \begin{bmatrix} \vz \\ \vp\end{bmatrix}$, and each of $\vz$ and $\vp$ is represented by its own five point stencil.

As previously mentioned, it is our intention to use the linear solver of \cite{Jong} with the minimum modifications possible. For this reason, we split \Cref{basic} into \Cref{temp_eqn} and derive the corresponding equations for various implicit time integration procedures.

In the first approach, we advance the variable $\vz$ implicitly, using the previous time step values of the variables $\vp$ and $\vs$. After advancing $\vz$, the equation for $\vp$ is advanced implicitly, and then the linear system of equations for $\vs$ is solved. The corresponding equations are given below:

\begin{subequations}
\label{temp_eqn1}
\begin{align}
 \vz_{n+1}  &=  \vz_n + \dt (-\Szz \vz_{n+1} -\Szp \vp_n -\Szs \vs_n) \\
 (\dfrac{I}{\dt} + \Szz)\vz_{n+1}  &= \dfrac{1}{\dt}\vz_n - (\Szp \vp_n +\Szs \vs_n)
\end{align}
 \end{subequations}
 
Please note that the variables $\varphi$ and $\psi$ are treated explicitly here.
\begin{subequations}
\begin{align}
 \vp_{n+1} &= \vp_n + \dt ( -\Spz \vz_{n+1} -\Spp \vp_{n+1} - \Sps \vs_n)\\
 (\dfrac{I}{\dt} +\Spp)\vp_{n+1} &= \dfrac{1}{\dt}\vp_n - ( \Spz \vz_{n+1} + \Sps \vs_n)
\end{align}
 \end{subequations}
 
Only the variable $\psi$ is treated explicitly here; as the value of $\zeta$ at the $(n+1)^{th}$ time step is already available, we use it. As the method is a combination of implicit and explicit schemes, its stability is not guaranteed.
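The staged update can be sketched on a scalar analogue of the equations above, with the $\psi$ terms dropped for brevity: $\zeta' = -s_{\zeta\zeta}\zeta - s_{\zeta\varphi}\varphi$ and $\varphi' = -s_{\varphi\varphi}\varphi - g\zeta$. The scalar coefficients are purely illustrative stand-ins for the stencil matrices (with $s_{\zeta\varphi} < 0$ here so the pair forms a damped oscillation, as for waves):

```python
szz, szp, spp, g = 0.1, -1.0, 0.1, 9.81   # illustrative scalar "stencils"
dt = 0.1
zeta, phi = 1.0, 0.0
for _ in range(100):
    # Stage 1: implicit in zeta; phi (and psi) frozen at level n.
    zeta = (zeta / dt - szp * phi) / (1.0 / dt + szz)
    # Stage 2: implicit in phi, using the freshly advanced zeta.
    phi = (phi / dt - g * zeta) / (1.0 / dt + spp)
print(zeta, phi)   # the oscillation stays bounded and slowly decays
```

For these coefficients the combined update matrix has spectral radius below one, so the staged scheme is stable here; as noted above, this is not guaranteed in general.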


Another possibility here is to use a predictor-corrector kind of scheme, described below:

\begin{itemize}
 \item Advance $\vp$ using an explicit time integration scheme.
  \item Based on the new value of $\vp$, compute $\vs$ by solving linear system of equations.
  \item Implicitly advance $\vz$ where the values of $\vp$ and $\vs$ on the right-hand side of \Cref{temp_eqn1} are substituted by above computed values.
  \item Perform implicit correction of $\vp$ and compute $\vs$. 
\end{itemize}
This will certainly demand more computational effort, but will be more stable than \Cref{temp_eqn1}. Similar derivations can be made for higher order methods.

\section{Symplectic integration}

Symplectic integrators are designed for the numerical solution of Hamiltonian equations, given by \Cref{Hamiltonian}. Symplectic integrators ensure time-reversibility and preservation of the symplectic structure of the equations.


The so-called symplectic Euler method (first order) can be constructed as follows:

\begin{subequations}
\label{Symplectic_Euler}
\begin{equation}
 \zeta_{n+1} = \zeta_n + \dt \nabla H_{\varphi}(\zeta_{n+1}, \varphi_n)
 \end{equation}
 \begin{equation}
 \varphi_{n+1} = \varphi_n - \dt \nabla H_{\zeta}(\zeta_{n+1}, \varphi_n)
\end{equation}
\end{subequations}

The method is implicit for general Hamiltonian systems. However, if $H(\zeta,\varphi)$ is separable as $H(\zeta,\varphi) =T(\zeta) + U(\varphi)$, it turns out to be explicit.
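The explicit case can be illustrated with a linear-oscillator stand-in, $H = \tfrac{1}{2}\varphi^2 + \tfrac{1}{2}g\zeta^2$ (an assumed separable Hamiltonian, not the model's actual one). The hallmark of the symplectic Euler scheme is that the energy error oscillates but shows no secular drift:

```python
# Symplectic Euler for the separable H = 0.5*phi**2 + 0.5*g*zeta**2:
# since H separates, the update is fully explicit.
g = 9.81
dt = 0.01
zeta, phi = 1.0, 0.0
H0 = 0.5 * phi ** 2 + 0.5 * g * zeta ** 2
max_drift = 0.0
for _ in range(10000):
    zeta = zeta + dt * phi            # grad_phi H, evaluated at phi_n
    phi = phi - dt * g * zeta         # grad_zeta H, at the new zeta
    H = 0.5 * phi ** 2 + 0.5 * g * zeta ** 2
    max_drift = max(max_drift, abs(H - H0))
print(max_drift / H0)                 # stays small: no secular energy growth
```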

The Störmer–Verlet schemes are symplectic methods of order 2. They are composed of two symplectic Euler half-steps of size $\dfrac{\dt}{2}$.

\begin{subequations}
\label{Symplectic_Verlet}
\begin{equation}
  \zeta_{n+1/2} = \zeta_n + \dfrac{\dt}{2} \nabla H_{\varphi}(\zeta_{n+1/2}, \varphi_n)
 \end{equation}
\begin{equation}
  \varphi_{n+1} = \varphi_n - \dfrac{\dt}{2} \left( \nabla H_{\zeta}(\zeta_{n+1/2}, \varphi_n) + \nabla H_{\zeta}(\zeta_{n+1/2}, \varphi_{n+1}) \right)
\end{equation}
\begin{equation}
  \zeta_{n+1} = \zeta_{n+1/2} + \dfrac{\dt}{2} \nabla H_{\varphi}(\zeta_{n+1/2}, \varphi_{n+1})
 \end{equation}
\end{subequations}

\textbf{Implicit mid-point rule:}

For a fully implicit method, the two variables are combined into one vector $q$, and the implicit mid-point rule is given by:

\begin{equation}
  q_{n+1} = q_n + \dt \nabla H(\dfrac{q_{n+1}+q_{n}}{2})
 \end{equation}


