
\section{Parallelism Strategy}

\label{sec:strategy}

First, we create a serial CPU implementation, since none was available to begin
with. Once the input to the algorithm is set up appropriately, the main loop of
the algorithm is:

\begin{center}
\begin{minipage}{0.3\textwidth}
\begin{algorithmic}
\While{not converged($F$)}\\
\hspace{\algorithmicindent}$F(\lambda) = \frac{1}{2} \lambda^T Q \lambda + \lambda^T h$\\
\hspace{\algorithmicindent}$\lambda_i \leftarrow  \lambda_i \left[ \frac{h_i^- + (Q^- \lambda)_i}{h_i^+ + (Q^+ \lambda)_i} \right]$
\EndWhile
\end{algorithmic}
\end{minipage}
\end{center}

Calculating $\lambda$ for the next iteration is easily parallelized, since the
individual $\lambda_i$'s are updated independently of each other. The cost
function $F(\lambda)$ can be split into terms $F_i(\lambda_i)$ such that
$F(\lambda) = \sum_{i}F_i(\lambda_i)$. These individual $F_i(\lambda_i)$'s can
also be computed independently in parallel, followed by a summation. This can be
represented as follows:
\begin{subequations}
\label{eq:costDecomposition}
\begin{align}
F(\lambda) & = \frac{1}{2} \lambda^T Q \lambda + \lambda^T h \\
F_i(\lambda_i) & = \lambda_i \left( \frac{1}{2} (Q \lambda)_i + h_i \right) \\
F(\lambda) & = \sum_{i} F_i (\lambda_i)
\end{align}
\end{subequations}
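The decomposition above follows from $\lambda^T Q \lambda = \sum_i \lambda_i
(Q\lambda)_i$ and can be checked numerically; the matrix values below are
arbitrary and only serve to illustrate the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
h = rng.standard_normal(n)
lam = rng.random(n)

F = 0.5 * lam @ Q @ lam + lam @ h      # full cost F(lambda)
F_i = lam * (0.5 * (Q @ lam) + h)      # per-index terms F_i(lambda_i)
assert np.isclose(F, F_i.sum())        # F(lambda) = sum_i F_i(lambda_i)
```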

Such a computation is thus an ideal candidate for a GPU, which offers a large
number of threads. However, the matrices involved are large, and sub-optimal
memory access patterns in a parallel GPU implementation can result in worse
performance than a serial CPU implementation.

The device memory operations that can be performed from the host (e.g.
{\tt cudaMalloc}, {\tt cudaMemcpy}, etc.) operate on the global memory of the
GPU, which has high access latency. The fast shared memory, shared among the
threads of a block, is limited in size. Therefore, to reduce traffic to global
memory, we fetch the data that is used multiple times, $\lambda$ and $h$ in the
present case, into shared memory at the beginning of each thread block's
execution. At the end of each iteration, we also need to sum the
$F_i(\lambda_i)$'s; we store the partial sums in shared memory as well and
perform a sum reduction at the end of each iteration. Individual elements of
$Q$ are fetched from global memory into registers as and when needed, since $Q$
is too large to fit in shared memory in our case.
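The sum reduction over the partial $F_i$'s follows the standard stride-halving
pattern. The sketch below is a CPU-side Python stand-in for the shared-memory
reduction performed inside each thread block; the function name and padding
scheme are our own illustration.

```python
def block_reduce(partial):
    """Stride-halving sum, mirroring a shared-memory tree reduction."""
    buf = list(partial)
    # pad to a power of two, as a kernel would size its shared buffer
    size = 1
    while size < len(buf):
        size *= 2
    buf += [0.0] * (size - len(buf))
    stride = size // 2
    while stride >= 1:
        for i in range(stride):     # one "thread" per position i
            buf[i] += buf[i + stride]
        stride //= 2                # each halving ends at a sync point
    return buf[0]
```

Each halving step is data-parallel across positions $0 \le i < \text{stride}$,
which is what lets a thread block perform the whole reduction in
$O(\log n)$ synchronized steps.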

We also take advantage of the sparsity of $\lambda$ as the algorithm progresses.
We begin with all $\lambda_i$'s non-zero, but we observed that the majority of
them (75\% to 95\%) quickly converge to $0$. If $\lambda_i$ is $0$, then both
the updated value of $\lambda_i$ and $F_i(\lambda_i)$ are $0$, so no global
memory operations are needed for such entries.
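The zero-skipping can be sketched as follows, reusing the NumPy formulation of
the update from above. The {\tt active} masking is our illustration of skipping
the corresponding rows of $Q$ in global memory; zero entries remain zero under
the multiplicative update, so the skipped result is exact.

```python
import numpy as np

def sparse_update(Q, h, lam, eps=1e-12):
    """Update only the non-zero lambda_i's; zero entries stay zero."""
    active = np.flatnonzero(lam)          # indices with lambda_i != 0
    Qa = Q[active]                        # only these rows of Q are read
    Qp, Qn = np.maximum(Qa, 0.0), np.maximum(-Qa, 0.0)
    hp, hn = np.maximum(h[active], 0.0), np.maximum(-h[active], 0.0)
    out = np.zeros_like(lam)
    out[active] = lam[active] * (hn + Qn @ lam) / (hp + Qp @ lam + eps)
    return out
```

With 75\% to 95\% of the $\lambda_i$'s at $0$, this prunes the corresponding
rows of $Q$, which dominate the global memory traffic.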


