\documentclass[12pt]{article} % default is 10 pt
\usepackage[english]{babel} 
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{color}
\usepackage{xcolor}
\usepackage{listings}

\usepackage{caption}
\DeclareCaptionFont{white}{\color{white}}
\DeclareCaptionFormat{listing}{\colorbox{gray}{\parbox{\textwidth}{#1#2#3}}}
\captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white}

\title{TDDD56 Lab2 - Parallel sorting}
\author{Johan Nilsson (johni592), Tobias Pettersson (tobpe416)}
\date{\today}

\addtolength{\voffset}{-0.3in}
\addtolength{\textheight}{50pt}

\begin{document}
\maketitle
\thispagestyle{empty}
\newpage
\tableofcontents
\setcounter{page}{1}
\setlength{\parindent}{0in}
\newpage

\section{Introduction}
\label{sec:intro}
This lab is about implementing two sequential sorting algorithms and
parallel versions of both. The task is to identify the speedup
limitations introduced by the sequential parts of each algorithm and
to load balance the parallel sorting algorithms as well as possible.

\section{Background}
\label{sec:background}
\subsection{SampleSort}
In a parallel implementation with $p$ threads, $p-1$ pivot values
are chosen from the list that is to be sorted and are then sorted
themselves. Each thread partitions one part of the list into $p$
individual buckets based on the pivot values. Each thread then merges
the buckets with a given index from all threads into a single list,
and this merged list is sorted sequentially with quicksort. When every
thread has finished sorting, the entire list is sorted. Figure
\ref{fig:samplesort_ex} illustrates samplesort with $p=3$ threads and
a list of length $12$.
\begin{figure}[h!]
   	\includegraphics[scale=0.5]{pics/samplesort_ex.png}
	\centering
  	\caption{Illustration of samplesort with 3 threads and a list
          of length 12} 
	\label{fig:samplesort_ex}
\end{figure}
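The bucket-partitioning step can be sketched in C as below. This is a
minimal sketch under assumed names (\texttt{find\_bucket},
\texttt{partition\_into\_buckets}), not the actual lab code; for
brevity it only counts how many elements fall into each bucket instead
of moving them.
\begin{lstlisting}[language=C]
/* Sketch with assumed names, not the lab's actual code.
 * A value belongs to bucket b when it is larger than
 * pivots[b-1] and at most pivots[b]. */
int find_bucket(int value, const int *pivots, int npivots)
{
    int b = 0;
    while (b < npivots && value > pivots[b])
        b++;
    return b;                 /* bucket index in [0, npivots] */
}

/* Count how many of the n values fall into each of the
 * npivots+1 buckets. */
void partition_into_buckets(const int *values, int n,
                            const int *pivots, int npivots,
                            int *counts)
{
    for (int b = 0; b <= npivots; b++)
        counts[b] = 0;
    for (int i = 0; i < n; i++)
        counts[find_bucket(values[i], pivots, npivots)]++;
}
\end{lstlisting}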

\subsection{Parallel Mergesort}
A sequential mergesort first divides the original list into two
sublists. Then each sublist is divided recursively until the length of
list is 1. The lists are merged and sorted when going up in the
recursion tree. In a parallel version of mergesort every thread is
assigned a sublist and performs sequential mergesort. After
each thread has performed the sequential sorting, there are a few
merge and sort steps left. The remaining steps are not easily
parallelized, therefore these steps are executed by a fewer number of
threads and in the end there is only one thread merging the entire
list. An illustration of this is in figure \ref{fig:mergesort}.  
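The sequential mergesort described above can be sketched as follows.
The names \texttt{merge} and \texttt{mergesort\_seq} are illustrative
assumptions, not the lab's actual code.
\begin{lstlisting}[language=C]
/* Merge the sorted runs a[lo..mid) and a[mid..hi) via tmp. */
void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    for (k = lo; k < hi; k++)
        a[k] = tmp[k];
}

/* Sort a[lo..hi) by recursive splitting; lists of
 * length 1 are already sorted. */
void mergesort_seq(int *a, int *tmp, int lo, int hi)
{
    if (hi - lo <= 1)
        return;
    int mid = lo + (hi - lo) / 2;
    mergesort_seq(a, tmp, lo, mid);
    mergesort_seq(a, tmp, mid, hi);
    merge(a, tmp, lo, mid, hi);
}
\end{lstlisting}
In the parallel version each thread would call
\texttt{mergesort\_seq} on its own range, after which the remaining
merge steps combine the per-thread results.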
\section{Implementation}
\label{sec:impl}
\subsection{SampleSort}
An illustration of the parallel implementation, and the steps it
consists of, can be seen in figure \ref{fig:samplesort}. First there
is a sequential startup part where the program is initialized and the
pivot values are chosen. Then there is a parallel part where each
value is placed in the appropriate bucket. The threads are then
synchronized to ensure that the partitioning is complete. Once all
threads have been synchronized, each thread performs a sequential
quicksort on its bucket. Finally the threads are joined and the
sorting is finished.
\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/samplesort_par.png}
	\centering
  	\caption{Illustration of the Parallel Samplesort Implementation}
	\label{fig:samplesort}
\end{figure}
To improve the performance of the algorithm, a set of pivot candidates
(substantially larger than the number of pivot values) is chosen. The
candidates are selected from different parts of the list by iterating
over it with a fixed step size. The candidates are then sorted, and a
small subset of them is chosen as the pivot values by iterating over
the sorted candidate list with a certain step size. This oversampling
adds some extra time to the sequential part of the program, but in
most cases it pays off because the pivot values are better
distributed, so the work that each thread has to do is more balanced.
\\\\
The sequential version of the algorithm is a sequential quicksort. It
has less overhead than the parallel version because it does not have
to divide the list into buckets or choose pivot values for the buckets.  
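The pivot oversampling could look like the sketch below, where
\texttt{choose\_pivots}, \texttt{cmp\_int} and the candidate count
\texttt{ncand} are assumed names for illustration.
\begin{lstlisting}[language=C]
#include <stdlib.h>

/* qsort comparator for int (assumed helper). */
int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Pick ncand evenly spaced candidates from the list, sort
 * them, then take p-1 evenly spaced pivots from the sorted
 * candidates. */
void choose_pivots(const int *list, int n, int p,
                   int ncand, int *pivots)
{
    int *cand = malloc(ncand * sizeof *cand);
    for (int i = 0; i < ncand; i++)
        cand[i] = list[(long)i * n / ncand];
    qsort(cand, ncand, sizeof *cand, cmp_int);
    for (int i = 1; i < p; i++)
        pivots[i - 1] = cand[i * ncand / p];
    free(cand);
}
\end{lstlisting}
With a candidate list a few times larger than $p$, the selected pivots
tend to split the list more evenly than picking $p-1$ raw elements
directly, which is the load-balancing effect described above.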

\subsection{Parallel Mergesort}
Figure \ref{fig:mergesort} illustrates how the parallel mergesort
implementation works. In the first step the list is divided between
the threads, and every thread performs a sequential mergesort on its
part. The sorted lists from the different threads are then merged by
fewer and fewer threads, as seen in figure \ref{fig:mergesort}, until
in the last step thread 0 merges the entire list.
\\\\
Lists with ascending and constant values will be sorted very quickly
compared to a list with random values. The reason is a condition in
the merge step: if two sublists that are already internally sorted are
to be merged, and the last value of the first sublist is less than the
first value of the second sublist, no merging needs to be performed.
This single integer comparison avoids many unnecessary comparisons
between already sorted elements.
\\\\
The sequential version of the implementation is essentially the same
as the parallel one but avoids overhead such as thread creation and
synchronization.
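The early-exit condition in the merge step amounts to a single
boundary check before any merge work is done. The name
\texttt{runs\_already\_sorted} is an assumption for illustration; the
sketch uses $\leq$, which also covers equal boundary values.
\begin{lstlisting}[language=C]
/* The left run ends at a[mid-1] and the right run starts at
 * a[mid]; if the boundary pair is already in order, the whole
 * range is sorted and the merge can be skipped. */
int runs_already_sorted(const int *a, int mid)
{
    return a[mid - 1] <= a[mid];
}
\end{lstlisting}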
\begin{figure}[h!]
   	\includegraphics[scale=0.3]{pics/mergesort_par.png}
	\centering
  	\caption{Example of the Parallel Mergesort Implementation with 4 threads}
	\label{fig:mergesort}
\end{figure}
\newpage

\section{Experimental Setup}
\label{sec:exp_setup}
The specifications of the computer used in the simulations are listed below.
\begin{itemize}
\item Intel Core 2 Quad CPU Q9550, 2.83GHz (4 cores)
\item 4 GiB DIMM DDR2 Synchronous DRAM 800MHz    
\end{itemize}
Because it has four cores, the potential absolute speedup is $4$
(ignoring e.g. cache effects). Different types of input data have been
used for each sorting algorithm: random, constant, descending and
ascending input. Each input type is tested with three lengths: a large
file with $1,000,000$ numbers, a medium file with $100,000$ numbers
and a small file with $10,000$ numbers.

\newpage
\section{Results}
\label{sec:results}
\subsection{SampleSort}
The results from the simulations with different types of input data
and different input sizes can be seen in figures \ref{fig:global_10k},
\ref{fig:global_100k} and \ref{fig:global_1m}. For this sorting
algorithm random, ascending and descending inputs have been used. With
ascending and descending input the execution time of the algorithm is
significantly shorter. This is because better pivot values are chosen,
which gives a better distribution of elements over the buckets and
thereby better load balance. The maximum absolute speedup achieved is
around $2.5$, which is quite good with 4 cores.
\\\\
Constant input data has not been used because it is the worst possible
case for the algorithm. Every element is placed in the same bucket,
which leads to a sequential quicksort of constant values in a single
thread; constant values are the worst case for quicksort and give
$\mathcal{O}(N^{2})$ complexity. One simulation with $10,000$ constant
elements took approximately $150$ ms, roughly the same time as a
simulation with $1,000,000$ random inputs. To mitigate this, a random
bucket could be selected whenever an element is equal to a pivot
value, which would probably decrease the execution time for constant
input data.
\\\\
The best case for this algorithm is when the elements are evenly
distributed over the buckets. The amount of work each thread has to do
is then almost the same, which gives good load balance.

\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/global10k.pdf}
	\centering
  	\caption{Global times for $10,000$ numbers}
	\label{fig:global_10k}
\end{figure}

\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/global100k.pdf}
	\centering
  	\caption{Global times for $100,000$ numbers}
	\label{fig:global_100k}
\end{figure}

\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/global1m.pdf}
	\centering
  	\caption{Global times for $1,000,000$ numbers}
	\label{fig:global_1m}
\end{figure}
\clearpage
\newpage

\subsection{Parallel Mergesort}
The results from the simulations with different input data and
different input sizes can be seen in figures
\ref{fig:globalmerge_10k}, \ref{fig:globalmerge_100k} and
\ref{fig:globalmerge_1m}. For this sorting algorithm random, ascending
and descending inputs have been used. Constant input data has not been
used because it performs the same as the ascending case. The maximum
absolute speedup achieved is around $2.3$, which is quite good with 4
cores.
\\\\
There is no speedup if the input is ascending or constant. This is
because of the condition in the merge step of the algorithm that skips
the merging if the sequences to merge are already sorted. The parallel
time is then almost zero, and therefore there is no speedup.
\\\\
The worst case for the mergesort algorithm is a descending input,
since all elements then have to be moved in the merge stages.

\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/globalmerge10k.pdf}
	\centering
\caption{Global times for $10,000$ numbers}
	\label{fig:globalmerge_10k}
\end{figure}

\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/globalmerge100k.pdf}
	\centering
  	\caption{Global times for $100,000$ numbers}
	\label{fig:globalmerge_100k}
\end{figure}

\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/globalmerge1m.pdf}
	\centering
  	\caption{Global times for $1,000,000$ numbers}
	\label{fig:globalmerge_1m}
\end{figure}
\clearpage
\newpage

\section{Discussion}
Figure \ref{fig:complexityrand} shows how the two parallel sorting
algorithms scale with random input; the graphs for the other input
types look basically the same. Both algorithms scale almost linearly
over the measured range, which is good because it means the parallel
sorts come close to $\mathcal{O}(n)$ behaviour for these input sizes,
something a sequential comparison-based sorting algorithm cannot
achieve. With random input elements samplesort performs better than
mergesort. However, if the elements are constant or ascending,
mergesort performs a lot faster than samplesort. Which algorithm
performs best therefore depends on the input data.
\begin{figure}[h!]
   	\includegraphics[scale=0.4]{pics/complexityrand.pdf}
	\centering
  	\caption{Scaling of the two parallel sorting algoritms with random input (4 threads)}
	\label{fig:complexityrand}
\end{figure}
\\\\
The results show that samplesort in many cases has the shortest
execution time of the parallel algorithms. However, it is very
expensive in terms of memory, especially when allocating the bucket
arrays. The worst case is that every element ends up in a single
bucket in every thread. With $n$ elements and $p$ threads, each thread
must then be able to hold $\frac{n}{p}$ elements in every bucket, so
the total number of allocated elements is $\frac{n}{p} \cdot p \cdot
b$, where $b$ is the number of buckets. Since the number of buckets is
equal to the number of threads ($b = p$), this simplifies to $n \cdot
p$: the bucket lists are always $p$ times larger than the list that is
to be sorted. For example, sorting $1,000,000$ elements with $4$
threads may require bucket storage for $4,000,000$ elements.
\\\\ 
There is no speedup for either sorting algorithm with $10,000$
numbers, as can be seen in figures \ref{fig:global_10k} and
\ref{fig:globalmerge_10k}. This is because the thread overhead takes
more time than the actual sorting.

\section{Conclusion}
On a modern multicore desktop computer, the time for sorting a large
set of elements can be significantly reduced by a parallel sorting
algorithm. With the implementations described in this document, the
maximum absolute speedup achieved was around $2.6$. The performance of
these implementations depends very much on the characteristics of the
input data. One thing that should not be ignored when sorting large
data sets is the memory cost of storing intermediate data.
  
\end{document}





