\documentclass[a4paper,10pt]{report}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\parindent0mm
\title{Deliverable Task 1}
\author{Akash Mittal Matrikelnr. 21774951 \\ Bich Ngoc Vu Matrikelnr. 21781143}


\begin{document}
\maketitle

\section*{Task A: Gauss-Seidel method}
\begin{figure}[h]
   \centering
   \includegraphics[width=\textwidth]{./plot_taskA.png}
   \caption{Comparison of Gauss-Seidel method for different grid sizes}
   \label{GS}
\end{figure}
Figure \ref{GS} shows the number of iterations needed to reach an error below $10^{-6}$ for a varying number of control volumes. It also shows that the Gauss-Seidel method
damps high-frequency error components considerably faster than low-frequency ones.
Table \ref{table_A} shows that reducing the residual by one order of magnitude from 1.0 to 0.1 takes fewer iterations than reducing it from 0.1 to 0.01.
Furthermore, the number of iterations required to reduce the residual to 0.1 grows approximately linearly with
the number of control volumes.
\begin{table}[h]
   \centering
   \begin{tabular}{|l||l|l|}
      \hline
      &\multicolumn{2}{l|}{number of iterations}\\
      \hline
      N&residual $\approx$ 0.1 & residual $\approx$ 0.01\\
      \hline
      $20^2$ & 40 & 370\\
      $40^2$ & 164& 1516\\
      $80^2$ & 669& 6184\\
      $160^2$ & 2714& 25030\\
      \hline
   \end{tabular}
   \caption{Residual reduction using Gauss-Seidel}
   \label{table_A}
\end{table}
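To make the smoothing behaviour concrete, a minimal Python sketch of the Gauss-Seidel iteration for the 2D Poisson problem with homogeneous Dirichlet boundaries is given below. This is an illustrative re-implementation, not the assignment code; the stopping criterion on the maximum residual mirrors the $10^{-6}$ tolerance used above.

```python
import numpy as np

def gauss_seidel(b, h, tol=1e-6, max_iter=200000):
    """Solve the 2D Poisson equation -laplace(u) = b on the unit square
    (five-point stencil, u = 0 on the boundary) with Gauss-Seidel sweeps.
    Returns the interior solution and the number of iterations used."""
    n = b.shape[0]
    u = np.zeros((n + 2, n + 2))          # interior unknowns plus boundary ring
    for it in range(1, max_iter + 1):
        # one lexicographic Gauss-Seidel sweep: newest neighbour values are
        # used immediately, which is what damps the high-frequency error
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                  + u[i, j-1] + u[i, j+1]
                                  + h * h * b[i-1, j-1])
        # residual of the discrete five-point Laplacian
        r = b - (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
                 - u[1:-1, :-2] - u[1:-1, 2:]) / (h * h)
        if np.max(np.abs(r)) < tol:
            return u[1:-1, 1:-1], it
    return u[1:-1, 1:-1], max_iter
```

Running this sketch for increasing grid sizes reproduces the trend in Table \ref{table_A}: the iteration count grows roughly linearly with the number of control volumes.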


\newpage
\section*{Task B}
\begin{figure}[h]
   \centering
   \includegraphics[width=\textwidth]{./plot_B1.png}
   \caption{Convergence histories of GS and MGGS using $160^2$ CVs}
   \label{B1}
\end{figure}

As seen in Task A, the Gauss-Seidel method damps high-frequency errors efficiently. The multigrid Gauss-Seidel method (MGGS) exploits this: on every grid level,
the error components that are high-frequency with respect to that level are smoothed, so all components of the error are reduced quickly. This yields faster convergence
and a faster reduction of the residual on the finest grid than plain Gauss-Seidel (GS). The results plotted in Figure \ref{B1} emphasize these differences.
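The V-cycle idea can be sketched in a few lines of Python. A 1D model problem $-u'' = b$ is used here for brevity; the 2 pre-smoothing and 3 post-smoothing sweeps mirror the parameter settings listed below, while the restriction and prolongation operators (full weighting and linear interpolation) are standard choices assumed for illustration.

```python
import numpy as np

def smooth(u, b, h, nu):
    """nu Gauss-Seidel sweeps for -u'' = b on a uniform 1D grid."""
    for _ in range(nu):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h * h * b[i])
    return u

def residual(u, b, h):
    """Residual of the three-point discretisation (zero on the boundary)."""
    r = np.zeros_like(u)
    r[1:-1] = b[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, b, h, nu1=2, nu2=3):
    """One multigrid V-cycle for -u'' = b with zero Dirichlet boundaries."""
    if len(u) <= 3:                          # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * b[1])
        return u
    u = smooth(u, b, h, nu1)                 # pre-smoothing on the current level
    r = residual(u, b, h)
    rc = np.zeros((len(u) + 1) // 2)         # restrict the residual (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu1, nu2)
    e = np.zeros_like(u)                     # prolongate the coarse-grid correction
    e[::2] = ec
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])     # linear interpolation in between
    u += e
    return smooth(u, b, h, nu2)              # post-smoothing
```

Each level smooths only the components that are high-frequency on that level, which is why the residual on the finest grid drops so much faster than with GS alone.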

\newpage
\begin{figure}
   \centering
   \includegraphics[width=\textwidth]{./plot_B2.png}
   \caption{Convergence history of all eight different solvers}
   \label{B2}
\end{figure}

\begin{table}[h]
   \centering
   \begin{tabular}{|l||l||l|l|}
      \hline
      \textbf{Solver} & \textbf{Iterations}&\multicolumn{2}{l|}{\textbf{Parameter settings}} \\
      \hline
      Gauss-Seidel & 119993 & max. value of iter:& 200000\\
      Line-by-Line TDMA & 29943 & max. value of residual:& 1e-6\\
      Stone SIP & 6559 & No. of grid levels: & 5\\
      Conjugate gradient& 306 & No. of iterations on finest grid:   &   2 2 2 2 2 2 2 2 2\\
      ADI & 30354 & No. of iterations during restriction:&  3 3 3 3 3 3 3 3 3 \\
      Multigrid with GS& 587 & No. of iterations during prolongation:& 3 3 3 3 3 3 3 3 3 \\
      Multigrid with SIP& 32 & & \\
      Multigrid with ICCG& 71 & & \\
      \hline
   \end{tabular}
   \caption{Iteration counts of all solvers and parameter settings for the multigrid methods}
\end{table}

Figure \ref{B2} shows the convergence histories of the eight solvers. Traditional solvers like Gauss-Seidel require
far more iterations than the multigrid solvers to reach the same tolerance. Moreover, their convergence depends on the mesh:
the number of iterations required to reach the same error level grows as the grid is refined.
The multigrid method with a Gauss-Seidel smoother requires only about 600 iterations, whereas plain Gauss-Seidel requires about 120000 iterations.\\
Similarly, improving the smoothing algorithm (replacing GS by SIP or ICCG) significantly improves the performance of the multigrid algorithm.\\
SIP and the conjugate gradient method exploit the sparsity of the original matrix and are therefore more efficient.

\newpage
\section*{Task C}
\begin{figure}[h]
 \centering
 \includegraphics[width=\textwidth]{./Plot_TaskC.png}
 \caption{Convergence analysis for different Multigrid methods}
\end{figure}

An improvement in convergence is observed when the smoother of the multigrid method is changed from Gauss-Seidel to SIP (Strongly Implicit Procedure) and then to ICCG (Incomplete Cholesky Conjugate Gradient).
This is because ICCG has better smoothing properties than SIP or GS.
The SIP method is based on an incomplete LU factorization: the original matrix is approximately decomposed into sparse lower and upper triangular factors, which keeps each iteration cheap.
SIP and ICCG are more efficient because they were developed for sparse, banded matrices. ICCG is an iterative method based
on the CG (conjugate gradient) method. In ICCG, the convergence of the CG method is accelerated by preconditioning with an incomplete Cholesky factorization
(instead of the incomplete LU factorization used in SIP). Compared with the plain CG method, which uses no preconditioning, the ICCG method is faster and more robust.
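The preconditioned CG iteration underlying ICCG can be sketched as follows. This is an illustrative Python sketch, not the solver used in the assignment: the incomplete Cholesky factorization itself is omitted, and the preconditioner is passed in as a generic function (the usage example below assumes a simple diagonal preconditioner in its place).

```python
import numpy as np

def pcg(A, b, solve_M, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient method for a symmetric
    positive definite matrix A.  solve_M(r) applies the preconditioner,
    i.e. returns M^{-1} r; for ICCG, M = L L^T comes from an incomplete
    Cholesky factorization of A."""
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    z = solve_M(r)                      # preconditioned residual
    p = z.copy()                        # first search direction
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)           # optimal step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = solve_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # A-conjugate update of the direction
        rz = rz_new
    return x, max_iter

# usage sketch on a 1D Poisson matrix with a diagonal (Jacobi) preconditioner,
# standing in for the incomplete Cholesky factors of ICCG
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
d = np.diag(A)
x, iters = pcg(A, np.ones(n), lambda r: r / d)
```

A better preconditioner (incomplete Cholesky instead of the diagonal) only changes `solve_M`; the iteration itself stays the same, which is why ICCG inherits the robustness of CG.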

\newpage
\section*{Task D}

Parametric analysis of the SIP multigrid method has been carried out. The following cases have been analyzed:\\

\textbf{1. Variation with the number of grid levels}\\
In general, the convergence of multigrid methods is independent of the grid spacing $h$.
This can be seen as we move from level 5 to level 7 in Figure \ref{D.1}.
The reason is that multigrid reduces the error at a constant rate per cycle, irrespective of the grid size.
With only 3 or 4 levels, more iterations are required to reach the same tolerance.
This can be attributed to the fact that too few coarse levels are available for coarsening and coarse-grid error correction.

\begin{figure}[h]
 \centering
 \includegraphics[width=\textwidth]{./Plot_TaskD_Grid.png}
  \caption{Variation with the number of grid levels}
 \label{D.1}
\end{figure}

\newpage
\textbf{2. Variation with pre-smoothing steps (at the finest grid level)}\\
Increasing the number of iterations at the finest level, while keeping the other parameters constant, reduces the performance.
This can be attributed to the additional fine-grid iterations polluting the corrections computed on the coarse grid levels,
owing to the inaccurate projection by the restriction and prolongation operators.

\begin{figure}[h]
 \centering
 \includegraphics[width=\textwidth]{./Plot_TaskD_FinestIterations.png}
  \caption{Variation with iterations at the finest grid level}
\end{figure}

\newpage
\textbf{3. Variation with smoothing steps (at intermediate grid levels)}\\
The number of iterations decreases slightly as the number of intermediate smoothing iterations increases, which is expected, since the error on each level is smoothed more thoroughly.
Recall, however, that the work per iteration (cycle) grows with the number of intermediate smoothing steps.
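This trade-off can be quantified with a standard work estimate (assuming the common geometric-series argument, not a measurement from our runs): in 2D each coarsening reduces the number of unknowns by a factor of four, so $\nu$ smoothing sweeps on every level of a V-cycle with $N$ fine-grid unknowns cost roughly
\[
  W \;\approx\; \nu N \left(1 + \tfrac{1}{4} + \tfrac{1}{16} + \dots \right) \;<\; \tfrac{4}{3}\,\nu N ,
\]
i.e.\ the work per cycle grows linearly with the number of smoothing steps $\nu$, while the gain in the convergence rate per cycle saturates.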
\begin{figure}[h]
 \centering
 \includegraphics[width=\textwidth]{./Plot_TaskD_Restriction_Iterations.png}
  \caption{Variation with iterations during restriction (intermediate levels)}
\end{figure}

\end{document}          
