\documentclass[12pt]{article}

\topmargin -0.5in
\footskip 0.7in
\textwidth 6.5in
\textheight 9.0in
\oddsidemargin 0.1in
\evensidemargin 0.1in
\parindent0pt\parskip1ex

\usepackage{amsmath,algorithmic,comment,subfigure,graphicx,ifthen,epsfig}
\usepackage[ruled,vlined]{algorithm2e}
\newcommand{\tight}{\baselineskip=8pt}


\tolerance=750


\title{CS 267 Homework 3}

\author{ Benjamin Lipshitz (lipshitz@berkeley.edu) \\
Edgar Solomonik (solomon@eecs.berkeley.edu) \\
Brian Van Straalen (bvs@eecs.berkeley.edu) \\ 
} 

\begin{document}

\maketitle
\section{Original Performance}

The given example does not scale, except in terms of memory usage; that alone might be all a given user needs from a supercomputer, since increasing the item count or capacity rapidly exhausts the available memory on a typical laptop. Table~\ref{example} shows the scaling of the provided code.
\begin{table}[htdp]
\caption{Provided {\tt parallel.upc} implementation scaling performance}
\begin{center}
\begin{tabular}{|c|c|l|}
\hline
Threads (cores) & Items & Time (s) \\
\hline
1  & 5000  &0.289\\
2  & 10000 & 18.99\\
4  & 20000 &44.90\\
8  & 40000 & 72.54\\
16 & 80000 & 293.66\\
32 & 160000 & 367.57\\
64 & 320000 & 592.44\\
\hline
\end{tabular}
\end{center}
\label{example}
\end{table}%

\section{Implementation 1: optimizing the given algorithm}

The given parallel implementation used a block-cyclic decomposition which, given the algorithm, does not exploit any locality.
So we implemented versions that parallelize by block-rows and by block-columns. The block-row parallelization reduces communication
significantly, but in practice gets almost no parallelism, since the algorithm has dependencies along rows. The block-column parallelization
retains parallelism but does not reduce communication significantly.

To reduce the amount of communication done by the block-column version, we sorted the weight/value pairs from largest weight to smallest
weight.  With such an ordering, all table entries to the left of $w_j$ are known to be zero, so
there is no need to read that data.  Furthermore, all the entries from $w_j$ to $2w_j$ are just $v_j$, which also eliminates a number
of possibly remote data reads. The sorting was done using a parallel radix sort, implemented as follows:
\begin{enumerate}
\item We allocate a table $T$ with NUM\_THREADS rows and $r$ columns, where $r$ is the radix.
\item Each thread iterates through the weights it owns and uses one row of the table $T$ to make a histogram of the radix of each weight.
\item We do prefix sums in parallel along the columns of $T$, to get the total count for each radix value as well as the offsets at which each thread will write its weights later.
\item We do a parallel prefix sum along the last row of $T$, to get the global offsets of each radix.
\item We iterate through the weights in parallel in the same order as step 2, and permute them according to their radix and the offsets as determined by $T$.
\item We repeat steps 2--5 a total of $b/\log_2 r$ times, each time operating on a different set of $\log_2 r$ bits, where $b$ is the number of bits of the maximum weight.
\end{enumerate}
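The steps above can be sketched serially (names are ours, not the submitted code's; we assume unsigned integer weights). With one thread the histogram table $T$ collapses to a single row, and the two parallel prefix sums of steps 3--4 become one serial exclusive prefix sum. This sketch sorts ascending; the descending order used above can be obtained by reversing the result.

```c
#include <string.h>

/* Sketch of the radix sort above with a single "thread"; RADIX_BITS plays
   the role of log2(r).  A parallel version keeps one histogram row per
   thread and replaces the serial prefix sum with steps 3-4. */
#define RADIX_BITS 8
#define RADIX (1 << RADIX_BITS)

void radix_sort(unsigned *w, unsigned *tmp, int n, int max_bits)
{
    for (int shift = 0; shift < max_bits; shift += RADIX_BITS) {
        int count[RADIX] = {0};
        /* Step 2: histogram of the current radix digit. */
        for (int i = 0; i < n; i++)
            count[(w[i] >> shift) & (RADIX - 1)]++;
        /* Steps 3-4: exclusive prefix sum gives each digit's write offset. */
        int off = 0;
        for (int d = 0; d < RADIX; d++) {
            int c = count[d];
            count[d] = off;
            off += c;
        }
        /* Step 5: stable permutation of the weights by digit. */
        for (int i = 0; i < n; i++)
            tmp[count[(w[i] >> shift) & (RADIX - 1)]++] = w[i];
        memcpy(w, tmp, (size_t)n * sizeof(unsigned));
    }
}
```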

The sorting did improve the performance of the block-column version, but as can be observed in Table~\ref{code1} the simple block-row version
still performs better (sorting does not help the block-row version). Thus we did not succeed in reducing communication significantly in the original code.
Our versions of the code are faster than the given parallel version, but they do not scale well overall.

\begin{table}[htdp]
\caption{Blocked and sorted implementation scaling performance}
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
Threads (cores) & Items & Row-blocked (s) & Column-blocked (s) & Column-blocked and sorted (s) \\
\hline
1  & 5000  &0.35 & 0.34  & 0.38  \\
2  & 10000 & 1.65 & 21.7  & 6.24  \\
4  & 20000 & 4.12 & 57.1  & 24.78 \\
8  & 40000 & 12.6 & 94.0  & 40.01 \\
16 & 80000 & 35.1 & 151.8 & 97.08 \\
32 & 160000 & 105.5 & 233.2  & 181.57 \\
64 & 320000 & 333.9 & 564.13 & 540.65 \\
\hline
\end{tabular}
\end{center}
\label{code1}
\end{table}%

\section{Implementation 2: a new algorithm}
To get better performance, we decided to switch to a different
algorithm that does a bit more computation to avoid a lot of
communication.  Note that if \(T_1[i]\) and \(T_2[i]\) are the best
values that can be obtained with weight \(i\) for two different sets
of items, the best value for the combined set of items can be
calculated in \(O(\texttt{capacity}^2)\) time as:
\[T[i] = \max_{j}(T_1[j]+T_2[i-j]).\]
To allow backtracking, we must store, for each \(i\), the value of
\(j\) that achieves the maximum.
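This merge can be sketched directly (names are ours; tables are indexed by weight \(0\) to \texttt{capacity}):

```c
/* Sketch of the O(capacity^2) merge: T1 and T2 are best-value tables for
   two disjoint item sets; split[i] records the j achieving the maximum,
   for use by the backtracking phase. */
void merge_tables(const int *T1, const int *T2, int *T, int *split,
                  int capacity)
{
    for (int i = 0; i <= capacity; i++) {
        int best = T1[0] + T2[i], best_j = 0;
        for (int j = 1; j <= i; j++) {
            if (T1[j] + T2[i - j] > best) {
                best = T1[j] + T2[i - j];
                best_j = j;
            }
        }
        T[i] = best;
        split[i] = best_j;
    }
}
```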

This suggests an algorithm where each processor independently solves
the problem for a fraction of the items, then they are merged
together.  The first part is perfectly parallelized without any
communication between processors, running in
\(O(\texttt{nitems}/\texttt{proc})\) time.  For the second part, only
\(\texttt{capacity}+1\) integers must be communicated before each
merge, and the parallel time is \(O(\texttt{capacity}^2\log(\texttt{proc}))\).
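As a serial stand-in for this scheme (names are ours; the submitted UPC code differs in details such as communication), the following sketch solves each slice of items independently and then merges the per-processor tables pairwise in \(\log_2(\texttt{proc})\) rounds:

```c
#include <stdlib.h>
#include <string.h>

/* Standard 0/1 knapsack DP over items lo..hi-1, writing the best-value
   table T[0..cap] for that slice. */
static void solve_slice(const int *wt, const int *val, int lo, int hi,
                        int cap, int *T)
{
    memset(T, 0, (size_t)(cap + 1) * sizeof(int));
    for (int k = lo; k < hi; k++)
        for (int c = cap; c >= wt[k]; c--)
            if (T[c - wt[k]] + val[k] > T[c])
                T[c] = T[c - wt[k]] + val[k];
}

/* Phase 1: each "processor" solves its slice independently.
   Phase 2: tables are merged pairwise, log2(nproc) rounds. */
int knapsack_merged(const int *wt, const int *val, int nitems,
                    int cap, int nproc)
{
    int *tab = malloc((size_t)nproc * (cap + 1) * sizeof(int));
    int *out = malloc((size_t)(cap + 1) * sizeof(int));
    for (int p = 0; p < nproc; p++)
        solve_slice(wt, val, p * nitems / nproc, (p + 1) * nitems / nproc,
                    cap, tab + p * (cap + 1));
    for (int stride = 1; stride < nproc; stride *= 2) {
        for (int p = 0; p + stride < nproc; p += 2 * stride) {
            int *a = tab + p * (cap + 1);
            int *b = tab + (p + stride) * (cap + 1);
            for (int i = 0; i <= cap; i++) {  /* T[i] = max_j a[j]+b[i-j] */
                int best = a[0] + b[i];
                for (int j = 1; j <= i; j++)
                    if (a[j] + b[i - j] > best)
                        best = a[j] + b[i - j];
                out[i] = best;
            }
            memcpy(a, out, (size_t)(cap + 1) * sizeof(int));
        }
    }
    int ans = tab[cap];
    free(tab);
    free(out);
    return ans;
}
```

The answer is independent of \texttt{nproc}, which is a convenient correctness check for the parallel version.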

As Table~\ref{default} shows, this implementation scales as expected.  Once we get to 8
processors, the logarithmic slowdown of the merging phase becomes visible.
This algorithm performs well on the given test case, but if we were
solving problems with \(\texttt{capacity}\gg\texttt{nitems}\), it
would not be able to keep up with the original algorithm.

\begin{table}[htdp]
\caption{Implementation 2 scaling performance}
\begin{center}
\begin{tabular}{|c|c|l|}
\hline
Threads (cores) & Items & Time (s) \\
\hline
1  & 5000  &0.0416\\
2  & 10000 & 0.0403\\
4  & 20000 & 0.0498\\
8  & 40000 & 0.0995\\
16 & 80000 & 0.1606\\
32 & 160000 & 0.2133\\
64 & 320000 & 0.2841\\
\hline
\end{tabular}
\end{center}
\label{default}
\end{table}%


\section{Conclusions}
\subsection{Using UPC}
As the given parallel implementation shows, it is easy to write UPC
code that does not scale at all.  Although UPC hides the communication
code that we see in MPI, it is still necessary for the programmer to
understand where all the data is to get reasonable performance, so
there doesn't seem to be a big improvement in development time.
Additionally, we were unable to get UPC to dynamically allocate
blocked memory, so we had to make all the parameters of the problem
compile-time constants and use statically blocked memory.  In the end, however,
we were able to get code that scales well on both shared- and
distributed-memory systems.

The other frustrating aspect of working with UPC is the lack of performance tools or a functional debugger.  Totalview doesn't really understand Berkeley UPC code, and gdb only understands the underlying C code; you can't follow a shared pointer to its remote value.  For this assignment, there isn't a compelling reason we couldn't make the given algorithm achieve \emph{some} scaling, except that we can't tell what is slow or how to fix it. If the new algorithm had also been remarkably slow, we would have been stuck.


\end{document}
