\documentclass[12pt,oneside]{amsart}
\usepackage{geometry}
\geometry{a4paper}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{float}
\lstset{stepnumber=1, frame=single, language=Matlab}

\title{Advanced Image Processing - Lab 2 - Image Compression/Quantization - Report}
\author{Waldemar Franczak}

\begin{document}

\maketitle
\tableofcontents

\section{Introduction}
The aim of this laboratory was to implement and investigate examples of two-dimensional discrete transforms used in compression. Moreover, the effects of quantization were taken into consideration and analyzed. This report presents the conducted experiments and comments on the results.

\section{2D Haar Wavelet Transformation} 
The first task was to implement the 2-D discrete Haar wavelet transform and its inverse. The discrete Haar wavelet transform matrix takes the form $W = \begin{bmatrix} H \\ G \end{bmatrix}$, where $H$ is a low-pass filter matrix and $G$ is responsible for high-pass filtering. In order to apply the transformation to a particular matrix $A$ (an image in our case), the following equation is used:\newline
\begin{center}
$outMat = W_M \, A \, W_N^T$
\end{center}
where $M$ and $N$ are the numbers of rows and columns respectively.
The Matlab code that performs this operation is presented below.
\begin{figure}[H]
  \begin{lstlisting}
function [output, Wm, Wn] = lab2dht(inImg)
% 2-D discrete Haar wavelet transform.
% Returns the transformed image and the transform matrices Wm, Wn.

M = size(inImg,1);
N = size(inImg,2);

Wm = zeros(M);
Wn = zeros(N);

haarCoeff = 1/2;
colNum = 1;

% Build Wm: the first M/2 rows average row pairs (low pass),
% the last M/2 rows take their differences (high pass).
for i=1:2:M
    Wm(colNum,i)=haarCoeff;
    Wm(colNum,i+1)=haarCoeff;
    Wm(colNum+M/2,i)=-haarCoeff;
    Wm(colNum+M/2,i+1)=haarCoeff;
    colNum = colNum + 1;
end

colNum = 1;
% Build Wn analogously for the columns.
for i=1:2:N
    Wn(colNum,i)=haarCoeff;
    Wn(colNum,i+1)=haarCoeff;
    Wn(colNum+N/2,i)=-haarCoeff;
    Wn(colNum+N/2,i+1)=haarCoeff;
    colNum = colNum + 1;
end

output = Wm*double(inImg)*Wn';
imshow(mat2gray(output));
end
  \end{lstlisting}
  \caption{Matlab code for Haar wavelet transform.}
\end{figure}
For the \emph{circuit} image from the dataset, the result of the transform is presented below.
\begin{figure}[H]
  \begin{tabular}{c c}
  \includegraphics[scale=0.4]{images/circ}
  \includegraphics[scale=0.4]{images/circH}    
  \end{tabular}
\caption{From the left: original image and the image transformed with the Haar transform}
\end{figure}
As we can see, the result is an image of the same size as the input image, but with 4 distinct components. The upper left component gives us an approximation of the original image: a blurred image. The upper right corner is the result of calculating differences between columns of the original matrix. This way we obtain an image of the vertical differences, i.e.\ vertical information of the image such as edges. Exactly the same type of matrix can be observed in the lower left component, where horizontal components are emphasized. The last part, the lower right corner, holds information about the diagonal differences in the image. As we can see, vertical and horizontal components dominate the original image, which is why the fourth component of the transform is almost plain.

We can conclude that the transform is well suited for compression, because most of the image energy is preserved in the upper left corner, while the other three components are sparse, consisting mostly of zero elements.

We can restore our image using the following formula:\newline
\begin{center}
$outImg = (2W_M^T)\, inImg\, (2W_N)$
\end{center}
Since each row of $W$ contains only two entries of magnitude $1/2$, we have $WW^T = \frac{1}{2}I$, and therefore $W^{-1} = 2W^T$.
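In Matlab this inversion can be checked directly; a minimal sketch, assuming the matrices returned by the \texttt{lab2dht} function above:
\begin{lstlisting}
% Invert the Haar transform: W^{-1} = 2*W'.
[T, Wm, Wn] = lab2dht(img);
rec = (2*Wm') * T * (2*Wn);   % rec reproduces double(img)
\end{lstlisting}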
Next we calculate the iterated Haar transform by applying our transformation again to the blurred part of the first transform. Results of the inverse and iterated transforms are shown below.

\begin{figure}[H]
\begin{tabular}{c c}
  \includegraphics[scale=0.4]{images/circHI}
  \includegraphics[scale=0.4]{images/circH2}  
\end{tabular}
\caption{From left: inverse Haar transform and iterated Haar transform}
\end{figure}
The iterated transform allows us to get even better compression by transforming the high-energy (blurred) component again. This high-energy component is reduced once more, and the majority of the transformed image consists of low-energy components.
In order to invert the iterated transform, we first have to perform the inverse transform of the previously transformed blurred component and then of the whole image.
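A sketch of the iteration and its inversion, assuming the \texttt{lab2dht} function above (variable names are illustrative):
\begin{lstlisting}
% Forward: transform the whole image, then the low-pass block again.
[T, Wm, Wn] = lab2dht(img);
M = size(img,1); N = size(img,2);
[T(1:M/2,1:N/2), Wm2, Wn2] = lab2dht(T(1:M/2,1:N/2));

% Inverse: undo the inner transform first, then the outer one.
T(1:M/2,1:N/2) = (2*Wm2') * T(1:M/2,1:N/2) * (2*Wn2);
rec = (2*Wm') * T * (2*Wn);
\end{lstlisting}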
\section{Quantization}
The second part of the laboratory was focused on the effects of quantization on image compression algorithms. The first task was to implement a quantization code. I chose the Lloyd algorithm discussed during the lecture. Lloyd's algorithm divides the initial input into a number of sets. For each set a representative element is calculated. In the following step, each element is assigned to the closest representative, forming new sets whose representatives are recalculated, and the process is repeated. The process stops when equilibrium is reached (no element can be reassigned any more) or some given condition is met. Matlab provides a function that computes the Lloyd algorithm for a given vector and number of levels. Below is the code of a function that, for an input image, outputs the quantized image with a predefined number of levels.
\begin{figure}[H]
\begin{lstlisting}
function output = lab2lloyds(inImg,levels)
% Quantize an image with the Lloyd algorithm.

% lloyds() returns the partition boundaries and codebook values.
[partition, codebook] = lloyds(double(inImg(:)), levels);
output = zeros(size(inImg));
% Pad with -inf/inf so every pixel falls into exactly one interval.
partition = [-inf partition inf];

for i=1:length(partition)-1
    % Pixels in (partition(i), partition(i+1)] map to codebook(i).
    mask = inImg > partition(i) & inImg <= partition(i+1);
    output(mask) = codebook(i);
end

end
\end{lstlisting}  
\caption{Function that quantizes the input image using Lloyd's algorithm}
\end{figure}
From the \emph{lloyds()} function we obtain the partition, i.e.\ the value ranges of the intervals, and the codebook, which holds the values to be assigned to the elements falling into each interval. We iterate through all intervals and, using a mask to select the elements falling into each range, assign the values. The figure below presents the results of applying this algorithm to the original lena image, using different numbers of levels and without any transformation.
\begin{figure}[H]
  \begin{tabular}{c c c}
      \includegraphics{images/lena4}
      \includegraphics{images/lena8}
      \includegraphics{images/lena16}
  \end{tabular}
\caption{Lena image quantized with 4, 8 and 16 levels}
\end{figure}

\begin{figure}[H]
  \begin{tabular}{|c|c|c|}
  \hline
    \textbf{Number of levels} & \textbf{MSE} & \textbf{PSNR} \\
    \hline
        \textbf{4} & 133.98 & 2.79\\
    \hline
        \textbf{8} & 37.84 & 8.28 \\
    \hline
        \textbf{16} & 10.47 & 13.86 \\
    \hline
  \end{tabular}
      \newline\newline
  \caption{Mean squared error and Peak Signal to Noise Ratio for quantized lena image}
\end{figure}
As we can observe, with a growing number of levels we get closer to the original image: the mean squared error decreases and the peak signal-to-noise ratio increases. Next we apply our quantization to images transformed using the DCT and DWT transforms.
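The two measures can be computed as in the following sketch, where \texttt{orig} and \texttt{quant} are illustrative names for the original and quantized images; the peak value used for the PSNR in the report is not stated, so the maximum of the original image is assumed here:
\begin{lstlisting}
% MSE and PSNR between the original and the quantized image.
err     = double(orig) - double(quant);
mse     = mean(err(:).^2);
peak    = max(double(orig(:)));   % assumed peak value
psnrVal = 10*log10(peak^2 / mse);
\end{lstlisting}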
% DCT
\begin{figure}[H]
  \begin{tabular}{c c c}
      \includegraphics{images/lenaDCT4}
      \includegraphics{images/lenaDCT8}
      \includegraphics{images/lenaDCT16}
  \end{tabular}
\caption{DCT transform of the lena image quantized with 4, 8 and 16 levels}
\end{figure}

\begin{figure}[H]
  \begin{tabular}{|c|c|c|}
  \hline
    \textbf{Number of levels} & \textbf{MSE} & \textbf{PSNR} \\
    \hline
        \textbf{4} & 1.2846e+03 & -7.0222\\
    \hline
        \textbf{8} & 540.94 & -3.2661 \\
    \hline
        \textbf{16} & 194.72 &  1.17 \\
    \hline
  \end{tabular}
  \newline\newline
  \caption{Mean squared error and Peak Signal to Noise Ratio for DCT transformed and quantized lena image}
\end{figure}
Both the measures and the visual results clearly show the significant impact of quantization on the decoded image. With 4-level quantization most of the image information is lost, and even with a higher number of levels both the visual results and the MSE/PSNR measures remain quite poor.
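A plausible sketch of this pipeline; the report does not state the exact reconstruction code, and \texttt{dct2}/\texttt{idct2} from the Image Processing Toolbox are assumed:
\begin{lstlisting}
C   = dct2(double(img));        % 2-D DCT of the image
Cq  = lab2lloyds(C, levels);    % quantize the coefficients
rec = idct2(Cq);                % reconstruct from the quantized DCT
\end{lstlisting}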
%DWT
\begin{figure}[H]
  \begin{tabular}{c c c}
      \includegraphics[scale=0.97]{images/lenaDWT4}
      \includegraphics{images/lenaDWT8}
      \includegraphics{images/lenaDWT16}
  \end{tabular}
\caption{DWT transform of the lena image quantized with 4, 8 and 16 levels}
\end{figure}

\begin{figure}[H]
  \begin{tabular}{|c|c|c|}
  \hline
    \textbf{Number of levels} & \textbf{MSE} & \textbf{PSNR} \\
    \hline
        \textbf{4} & 162.69 & 2.21 \\
    \hline
        \textbf{8} & 46.04 & 7.43 \\
    \hline
        \textbf{16} &  13.89 &  12.63 \\
    \hline
  \end{tabular}
      \newline    \newline
  \caption{Mean squared error and Peak Signal to Noise Ratio for DWT transformed and quantized lena image}
\end{figure}
The wavelet transform used in figure 9 is Daubechies 1. Even at the lowest number of levels it maintains satisfactory visual information. It outperforms the DCT both visually and in terms of the MSE and PSNR measures.
The last task was a comparison of 3 different transforms with the previously developed quantization technique. Figures 11 and 12 present the results of this comparison for 4-level quantization.
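One plausible form of the wavelet pipeline, assuming Matlab's Wavelet Toolbox and per-subband quantization (the report does not specify whether the subbands were quantized jointly or separately):
\begin{lstlisting}
% Single-level DWT, quantization of each subband, reconstruction.
% Use 'db4' or 'db6' analogously for the other transforms.
[cA,cH,cV,cD] = dwt2(double(img), 'db1');
rec = idwt2(lab2lloyds(cA,levels), lab2lloyds(cH,levels), ...
            lab2lloyds(cV,levels), lab2lloyds(cD,levels), 'db1');
\end{lstlisting}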
%Iterative etc
\begin{figure}[H]
  \begin{tabular}{c c c}
      \includegraphics{images/lenaDB44}
      \includegraphics{images/lenaDB64}
      \includegraphics{images/lenaHI4}
  \end{tabular}
\caption{From left: Daubechies 4, Daubechies 6, Iterative Haar. All quantized with 4 levels.}
\end{figure}

\begin{figure}[H]
  \begin{tabular}{|c|c|c|}
  \hline
    \textbf{Transform type for 4 levels} & \textbf{MSE} & \textbf{PSNR} \\
    \hline
        \textbf{Daubechies 4} & 164.60 &  1.9 \\
    \hline
        \textbf{Daubechies 6} & 166.96 &  1.83 \\
    \hline
        \textbf{Iterative Haar} &  316.62 &  -0.94 \\
    \hline
  \end{tabular}
      \newline    \newline
  \caption{Mean squared error and Peak Signal to Noise Ratio for different transform types and the same number of levels.}
\end{figure}
Visually it is hard to spot a difference between the Daubechies 4 and 6 results; however, taking the measures into consideration, Daubechies 4 slightly outperformed Daubechies 6 and turned out to be the best of the three transforms. Iterative Haar presented the worst results in terms of keeping visual information, low MSE and high PSNR. However, the entropy of the diagonal differences is \textbf{0.57} in the case of Iterative Haar, while in the case of Daubechies it is \textbf{0.87}. Lower entropy means less randomness in the matrix. The less random the matrix is, the better compression we can get, since more elements are the same and can be easily encoded. Therefore we can conclude that even though a lot of information is lost in the case of Iterative Haar, we can obtain a better compression rate.
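Entropy values like the ones above can be obtained, for example, with the \texttt{entropy()} function of the Image Processing Toolbox; a sketch, where \texttt{cD} is an illustrative name for the diagonal-detail coefficients:
\begin{lstlisting}
% Shannon entropy of the diagonal-detail subband,
% rescaled to [0,1] before histogramming.
e = entropy(mat2gray(cD));
\end{lstlisting}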

\end{document}
