\chapter{Solvers for Non-Symmetric Matrices}

As we have seen, most of the implicit time integration methods described in Chapter 4 (except MR-PC) require solving systems of equations whose matrices have the stencils given in \Cref{section_spatial}. Unlike the matrix given by Stencil $\Sss$, these matrices are not symmetric. For this reason, the PCG method described in Chapter 3 cannot be applied to solve the linear systems of equations arising from the implicit time integration.

The objective is therefore to determine the solution vector of a large, sparse, non-symmetric linear system of equations. An overview of suitable methods can be found in \cite{Trefethen}, which forms the basis for this chapter.

As mentioned in Chapter 3, the idea of some iterative methods is to project an $m$-dimensional problem onto a lower-dimensional Krylov subspace. Given a matrix $A$ of size $m \times m$ and a vector $b$ of size $m \times 1$, the associated Krylov sequence is the set of vectors $b, Ab, A^2b, A^3b, \cdots$, which can be computed by matrix-vector multiplications in the form $b, Ab, A(Ab), A(A(Ab)), \cdots$. The corresponding Krylov subspaces are the spaces spanned by successively larger groups of these vectors.
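As a small illustration (in Python/NumPy, with an arbitrary matrix and vector; the function name is ours), the Krylov vectors are built by repeated matrix-vector products, never by forming the matrix powers $A^k$ explicitly:

```python
import numpy as np

def krylov_basis(A, b, n):
    """Return the m x n Krylov matrix [b | Ab | A^2 b | ... | A^(n-1) b],
    built with n-1 matrix-vector products (A^k is never formed)."""
    K = np.empty((A.shape[0], n))
    K[:, 0] = b
    for j in range(1, n):
        K[:, j] = A @ K[:, j - 1]   # b, Ab, A(Ab), A(A(Ab)), ...
    return K

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
K = krylov_basis(A, b, 4)
```

Note that this raw Krylov matrix is only a conceptual device: its columns become nearly linearly dependent very quickly, which is why the methods below orthogonalize (or biorthogonalize) them on the fly.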

\section{From Symmetric to Non-Symmetric Matrices}

\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{CG1}~\\[1cm]
\caption{Classification of Krylov subspace iterations}
\label{figure_cg}
\end{figure}

\Cref{figure_cg} shows the classification of Krylov subspace methods as we move from symmetric to non-symmetric matrices. For symmetric matrices, the Conjugate Gradient (CG) method implicitly performs a tridiagonal orthogonalization of the original matrix, which can be written as $A = QTQ^T$, where $Q$ is a unitary matrix and $T$ is a tridiagonal matrix. When $A$ is non-symmetric, this result cannot be obtained from a CG iteration. Two approaches can be followed:

\begin{itemize}
\item	Use the so-called Arnoldi iteration, a process of Hessenberg orthogonalization. This results in $A = QHQ^T$, where $Q$ is a unitary matrix and $H$ is an upper Hessenberg matrix (an upper Hessenberg matrix has zero entries below the first sub-diagonal).
\item	Biorthogonalization methods are based on the opposite choice. If we insist on obtaining a tridiagonal result, then we have to give up the unitary transformations, which gives us tridiagonal biorthogonalization: $A = VTV^{-1}$, where $V$ is non-singular but generally not unitary. The term `biorthogonal' refers to the fact that although the columns of $V$ are not orthogonal to each other, they are orthogonal to the columns of $(V^{-1})^T = (V^T)^{-1}$.

Let $V$ be a non-singular matrix such that $A = VTV^{-1}$ with $T$ tridiagonal, and define $W = (V^T)^{-1}$. Let $v_j$ and $w_j$ denote the $j$th columns of $V$ and $W$ respectively. These vectors are biorthogonal in the sense that $w_i^T v_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. For each $n$ with $1 \leq n \leq m$, define the $m \times n$ matrices:

\begin{equation}
 V_n = \begin{bmatrix} v_1 \vline v_2 \vline \cdots \vline v_n \end{bmatrix},  \quad W_n = \begin{bmatrix} w_1 \vline w_2 \vline \cdots \vline w_n \end{bmatrix}
\end{equation}

In matrix form, the biorthogonality can be written as $W_n^TV_n = V_n^TW_n = I_n$, where $I_n$ is the identity matrix of dimension $n$. The iterations in biorthogonalization methods can be summarized as:
\begin{subequations}
 \begin{align}
  AV_n &= V_{n+1} \tilde T_n \\
  A^TW_n &= W_{n+1} \tilde S_n \\
  T_n = S^T_n &= W^T_nAV_n 
 \end{align}
\end{subequations}

Here $V_n$ and $W_n$ have dimensions $m \times n$, $\tilde T_n$ and $\tilde S_n$ are tridiagonal matrices (extended by one row) of dimensions $(n+1) \times n$, and $T_n = S_n^T$ is the $n \times n$ matrix obtained by deleting the last row of $\tilde T_n$. BiConjugate gradient algorithms are described in Section 5.3.

\end{itemize}
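The biorthogonalization recurrences above can be realized by the unsymmetric (two-sided) Lanczos process. The following Python/NumPy sketch is illustrative only: the function name and the normalization choice are ours, and the serious-breakdown case ($\hat w^T \hat v \approx 0$) is not handled.

```python
import numpy as np

def lanczos_biortho(A, b, n):
    """Unsymmetric (two-sided) Lanczos: build V, W with W^T V = I_n
    and T = W^T A V tridiagonal.  Sketch only: serious breakdown
    (w_hat^T v_hat ~ 0) is not handled."""
    m = A.shape[0]
    V = np.zeros((m, n))
    W = np.zeros((m, n))
    V[:, 0] = b / np.linalg.norm(b)
    W[:, 0] = V[:, 0]                      # so that w_1^T v_1 = 1
    beta = gamma = 0.0
    for j in range(n):
        alpha = W[:, j] @ (A @ V[:, j])
        v_hat = A @ V[:, j] - alpha * V[:, j]
        w_hat = A.T @ W[:, j] - alpha * W[:, j]
        if j > 0:
            v_hat -= beta * V[:, j - 1]
            w_hat -= gamma * W[:, j - 1]
        if j + 1 == n:
            break
        d = w_hat @ v_hat
        gamma = np.sqrt(abs(d))            # one of several possible scalings
        beta = d / gamma
        V[:, j + 1] = v_hat / gamma        # new v; gamma goes on one diagonal
        W[:, j + 1] = w_hat / beta         # new w; beta on the other
    return V, W
```

In exact arithmetic $W_n^T V_n = I_n$ and $W_n^T A V_n$ is tridiagonal; in floating point the biorthogonality degrades gradually, which is one reason these methods behave more erratically than their symmetric counterparts.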

\section{Arnoldi Iteration and GMRES}

Arnoldi iteration can be understood as the analogue of Gram-Schmidt type iteration for similarity transformations to upper Hessenberg form. It has the advantage that it can be stopped part-way, leaving one with a partial reduction to Hessenberg form that is exploited when only dimensions up to $n$ (the dimension of the Krylov subspace) are considered. A simple algorithm for the Arnoldi iteration is given below:

\begin{algorithm}                      % enter the algorithm environment
\caption{Arnoldi Iteration}          % give the algorithm a caption
\label{alg3}                           % and a label for \ref{} commands later in the document
\begin{algorithmic}                    % enter the algorithmic environment
    \STATE $b =$ arbitrary, $q_1 = b/||b||$ 
    \FOR {$n= 1,2,3, \cdots$}    	
	    \STATE $v = A q_n$
			 \FOR {$j = 1$ \TO $n$}
			 	\STATE $h_{jn} = q^*_jv$
	      \STATE $v = v - h_{jn}q_j$
       \ENDFOR
      \STATE $h_{n+1,n}=||v||$
      \STATE $q_{n+1} = v / h_{n+1,n}$
      \ENDFOR
\end{algorithmic}
\end{algorithm}

The above algorithm can be summarized as follows:

\begin{itemize}
\item The matrices $Q_n = \begin{bmatrix} q_1 \vline q_2 \vline \cdots \vline q_n \end{bmatrix}$ generated by the Arnoldi iteration are reduced QR factors of the Krylov matrix: \begin{equation} K_n = Q_n R_n \end{equation} where $K_n$ is the $m \times n$ Krylov matrix.
\item The Hessenberg matrices $H_n$ are the corresponding projections: $H_n = Q^T_n A Q_n$
\item The successive iterates are related by the formula: $A Q_n = Q_{n+1} H^{'}_n$, where $H^{'}_n$ is the $(n+1) \times n$ upper-left section of $H$.
\end{itemize}
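The iteration of \Cref{alg3} can be translated almost line by line into Python/NumPy (the function name is ours; breakdown, i.e. $h_{n+1,n} = 0$, is not handled):

```python
import numpy as np

def arnoldi(A, b, n):
    """n steps of the Arnoldi iteration: returns Q (m x (n+1)) with
    orthonormal columns spanning the Krylov subspaces, and H, the
    (n+1) x n upper Hessenberg matrix with A Q[:, :n] = Q H."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):             # (modified) Gram-Schmidt step
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)    # breakdown if this is zero
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H
```

The returned pair satisfies exactly the relation $A Q_n = Q_{n+1} H^{'}_n$ stated above, which is the identity GMRES exploits.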

The idea of GMRES is straightforward. At step $n$, the exact solution ($x^o = A^{-1}b$) is approximated by the vector $x_n \in K_n$ that minimizes the norm of the residual $r_n = b - Ax_n$, hence the name: Generalized Minimal Residuals (GMRES). The method was proposed by Saad and Schultz in 1986 and is applicable to systems of equations with a general (non-singular) square matrix. The Arnoldi iteration is used to construct the sequence of matrices $Q_n$ whose columns $q_1, q_2, \cdots$ successively span the Krylov subspaces $K_n$. Thus we can write $x_n = Q_ny$, where $y$ is the vector such that

\begin{equation}
||AQ_ny - b || = \text{minimum}
\end{equation}
Using the relation $AQ_n = Q_{n+1}H^{'}_n$, this equation can be written as:

\begin{equation}
||Q_{n+1}H^{'}_ny - b || = \text{minimum}
\end{equation}

Multiplication by a unitary matrix does not change the 2-norm, so we can rewrite the equation above as:
\begin{align}
||Q^{*}_{n+1}Q_{n+1}H^{'}_ny - Q^{*}_{n+1}b || &= \text{minimum} \nonumber \\
||H^{'}_ny - Q^{*}_{n+1} b || &= \text{minimum}
\end{align}

Finally, by construction of the Krylov matrices $Q_n$, $Q^{*}_{n+1} b = ||b||e_1$ where $e_1 = (1,0,0,\cdots)^T$. Thus we obtain:

\begin{equation}
||H^{'}_ny - ||b||e_1 || = \text{minimum}
\end{equation}

The (unpreconditioned) GMRES algorithm can be written as:

\begin{algorithm}                      % enter the algorithm environment
\caption{GMRES}          % give the algorithm a caption
\label{alg4}                           % and a label for \ref{} commands later in the document
\begin{algorithmic}                    % enter the algorithmic environment
    \STATE $q_1 = b/||b||$ 
    \FOR {$n= 1,2,3, \cdots$}    	
	    \STATE $<$step n of Arnoldi iteration, \Cref{alg3}$>$
			\STATE Find $y$ to minimize $||H^{'}_ny - ||b||e_1 || (= ||r_n||)$
	     \STATE $x_n = Q_n y$
      \ENDFOR
\end{algorithmic}
\end{algorithm}

In order to find $y$, a QR factorization can be used, which requires only $O(n^2)$ flops because of the Hessenberg structure of $H^{'}_n$; here $n$ is the dimension of the Krylov subspace. It is also possible to obtain the QR factorization of $H^{'}_n$ from that of $H^{'}_{n-1}$ by using Givens rotations.
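Putting the pieces together, an unpreconditioned GMRES sketch in Python/NumPy might look as follows. This is an illustrative sketch (the function name is ours): the small least-squares problem is solved here with a dense \texttt{numpy.linalg.lstsq} for clarity, whereas a production code would update a QR factorization with Givens rotations, and the happy-breakdown case $h_{n+1,n}=0$ is not handled.

```python
import numpy as np

def gmres(A, b, n):
    """n steps of (unrestarted, unpreconditioned) GMRES.
    Minimizes || H'_n y - ||b|| e1 || over y and returns x_n = Q_n y."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(n):
        v = A @ Q[:, k]                    # one Arnoldi step
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
    # Solve the (n+1) x n least-squares problem  min || H y - ||b|| e1 ||
    e1 = np.zeros(n + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :n] @ y                    # x_n = Q_n y
```

A restarted variant would simply call this routine repeatedly, each time with the current residual as the new right-hand side, which caps the number of Krylov vectors that must be stored.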

One of the disadvantages of the GMRES method is its storage requirement. As it requires storing the whole sequence of Krylov vectors, a large amount of storage is needed compared to the Conjugate Gradient method. For this reason, restarted versions of the method are used, where computational and storage costs are limited by specifying a fixed number of vectors to be generated.

\section{BiConjugate Gradient methods}

The BiConjugate Gradient method (BiCG) is another extension of CG to non-symmetric matrices. As we saw in the previous section, the principle of GMRES is to pick the vector $x_n$ such that the residual corresponding to $x_n$ is minimized. The principle of the BiCG algorithm is to pick $x_n$ in the same subspace, i.e. $x_n \in K_n$, but to enforce that the residual is orthogonal to $\{w_1, A^Tw_1, \cdots, (A^T)^{n-1}w_1\}$, where $w_1 \in \mathbb{R}^m$ is an arbitrary vector satisfying $w_1^Tv_1 = 1$. Its advantage is that it can be implemented with three-term recurrences rather than the $(n+1)$-term recurrences of GMRES (the difference arising from the tridiagonal form of the matrix rather than the Hessenberg form).

There are two major problems with the BiCG method:

\begin{itemize}
\item Convergence is slower than for GMRES and often erratic. This may also reduce the ultimately attainable accuracy because of rounding errors.
\item It requires multiplication with $A^T$ (transpose) as well as $A$. The transpose multiplication introduces serialization into the code and is therefore not preferred.
\end{itemize}

To address these problems, other variants of the BiCG method were developed. One of them is the stabilized BiCG method (BiCGSTAB). As with the Conjugate Gradient method, any Krylov subspace method needs a good preconditioner to ensure fast and robust convergence. The algorithm for the preconditioned BiCGSTAB method is given below. One of the future tasks is to adjust this algorithm so as to use the same building blocks as developed by Martijn in \cite{Jong} for the RRB-k method.


\begin{algorithm}                      % enter the algorithm environment
\caption{BiCGSTAB}          % give the algorithm a caption
\label{alg5}                           % and a label for \ref{} commands later in the document
\begin{algorithmic}                    % enter the algorithmic environment
    \STATE Goal: solve the system of equations $Ax=b$ with preconditioner $M$
    \STATE Compute $r^{0} = b - Ax^0$ for some initial guess $x^0$
    \STATE Choose $\tilde{r}$ (for example $\tilde{r} = r^0$)
    \FOR {$i= 1,2,3, \cdots$}
	\STATE $\rho_{i-1} = \tilde{r}^T r^{i-1}$
	\IF{$\rho_{i-1} = 0$}
	    \STATE Method fails
	\ENDIF
	\IF{$i=1$}
	    \STATE $p^i = r^{i-1}$
	\ELSE
	    \STATE $\beta_{i-1} = \dfrac{\rho_{i-1}}{\rho_{i-2}} \dfrac{\alpha_{i-1}}{\omega_{i-1}}$
	    \STATE $p^{i} = r^{i-1} + \beta_{i-1}(p^{i-1} - \omega_{i-1} v^{i-1})$
	\ENDIF
	\STATE \textbf{Solve } $M \tilde{p} = p^i$
	\STATE $v^i = A \tilde{p}$
	\STATE $\alpha_i = \dfrac{\rho_{i-1}}{\tilde{r}^T v^i}$
	\STATE $s = r^{i-1} - \alpha_i v^i$
	\STATE Check the norm of $s$; if small enough, set $x^i = x^{i-1} + \alpha_i \tilde{p}$ and stop
	\STATE \textbf{Solve } $M\tilde{s}=s$
	\STATE $t = A \tilde{s}$
	\STATE $\omega_i = \dfrac{t^Ts}{t^Tt}$
	\STATE $x^{i} = x^{i-1} + \alpha_i \tilde{p} + \omega_i \tilde{s}$
	\STATE $r^i = s - \omega_i t$
	\STATE Check convergence; continue if necessary
	\STATE For continuation it is required that $\omega_i \neq 0$
      \ENDFOR
\end{algorithmic}
\end{algorithm}
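As a concrete illustration, \Cref{alg5} can be written in Python/NumPy as below. The function name is ours, and a simple Jacobi (diagonal) preconditioner stands in for $M$; this is only an illustrative choice, not the RRB-type preconditioner of \cite{Jong}.

```python
import numpy as np

def bicgstab(A, b, M_diag, tol=1e-10, maxit=200):
    """Preconditioned BiCGSTAB.  M_diag holds the diagonal of a Jacobi
    preconditioner M, so the two 'Solve M z = y' steps reduce to
    elementwise divisions."""
    x = np.zeros_like(b)
    r = b - A @ x                          # r^0 for initial guess x^0 = 0
    r_tilde = r.copy()                     # shadow residual
    rho_old = alpha = omega = 1.0
    p = np.zeros_like(b)
    v = np.zeros_like(b)
    for i in range(maxit):
        rho = r_tilde @ r
        if rho == 0.0:
            raise RuntimeError("method fails: rho = 0")
        if i == 0:
            p = r.copy()
        else:
            beta = (rho / rho_old) * (alpha / omega)
            p = r + beta * (p - omega * v)
        p_hat = p / M_diag                 # solve M p_hat = p
        v = A @ p_hat
        alpha = rho / (r_tilde @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:        # early exit on small ||s||
            return x + alpha * p_hat
        s_hat = s / M_diag                 # solve M s_hat = s
        t = A @ s_hat
        omega = (t @ s) / (t @ t)
        x = x + alpha * p_hat + omega * s_hat
        r = s - omega * t
        if np.linalg.norm(r) < tol:        # convergence check
            return x
        rho_old = rho                      # continuation requires omega != 0
    return x
```

Replacing the two elementwise divisions with a call to an RRB-type solver would turn this sketch into the preconditioned solver intended in the text.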



