
\documentclass[10pt, conference, compsocconf]{IEEEtran}

\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{listings}
\usepackage{color}
\usepackage{paralist}
\usepackage[dvips]{graphicx}
\graphicspath{{figures/}}

\lstset{ %
language=C,                % choose the language of the code
basicstyle=\footnotesize,     % the size of the fonts that are used for the code
%numbers=left,                   % where to put the line-numbers
numberstyle=\tiny,      % the size of the fonts that are used for the line-numbers
keywordstyle=\color{black}\bfseries,
stepnumber=1,                   % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt,                  % how far the line-numbers are from the code
backgroundcolor=\color{white},  % choose the background color. You must add \usepackage{color}
showspaces=false,               % show spaces adding particular underscores
showstringspaces=false,         % underline spaces within strings
showtabs=false,                 % show tabs within strings adding particular underscores
frame=lines,                   % adds a frame around the code
tabsize=2,              % sets default tabsize to 2 spaces
captionpos=b,                   % sets the caption-position to bottom
breaklines=true,        % sets automatic line breaking
breakatwhitespace=false,    % sets if automatic breaks should only happen at whitespace
escapeinside={\%}{)}          % if you want to add a comment within your code
}

\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}

\title{Ensemble K-means on multi-core architectures}

\author{\IEEEauthorblockN{Girish Ravunnikutty, Rejith George Joseph, Sanjay Ranka, Alin Dobra}
\IEEEauthorblockA{University of Florida\\
\{girishr, rjoseph, sranka, adobra\}@ufl.edu}
}


\maketitle


\begin{abstract}
Ensemble problems use multiple models generated from a data set to improve correctness and ensure faster convergence. The use of multiple models makes ensemble problems computationally intensive. In this paper, we explore the parallelization of ensemble problems on modern multi-core hardware such as CPUs and GPUs. We use the K-means clustering algorithm as a case study to explain our parallelization methodologies. We introduce a novel concatenated parallelization methodology and detail the performance tweaks to be considered when developing a parallel algorithm for modern hardware. We benchmark our implementations on multi-core hardware from different vendors; our approach gives significant performance improvements over traditional parallelization methodologies.

\end{abstract}

\begin{IEEEkeywords}
K-means, OpenCL, GPU

\end{IEEEkeywords}

\IEEEpeerreviewmaketitle

\section{Introduction}
A number of traditional machine learning and data mining algorithms
generate multiple models using the same dataset. These models are
generated by choosing different initializations that are generated
randomly or using a systematic method. These models are then used to
generate a single model or an ensemble of models. For example, in most
clustering algorithms, multiple runs with different randomly generated
initializations are processed and the one with the minimum mean square
error or number of clusters or a combination of both may be
chosen. Similarly, for classification algorithms, multiple models may
be built, where each model predicts the class of a given input. A
majority based output is then used to predict the class for a new
input. In these scenarios, the key challenge is to build multiple
models. The computational intensiveness of these algorithms makes this a
significant challenge, especially for large data sets. We assume that
the models in the ensemble are independently generated; and we believe
that our methods can be extended for the case when a sequence of
ensembles need to be generated as well.

There are two broad ways of using the multi core processor
for this purpose:

\begin{enumerate}
\item Task parallelism: Generate each model separately on each of the
  cores of the processor. This minimizes the communication
  between the cores but has the potential for load imbalance
  if the different models need different amounts of time.
\item
 Data parallelism: Use multiple (or all) cores to utilize
  the data parallelism that is present in many of these model
  generation algorithms. This has the advantage of achieving good
  balance but may generate communication overhead between the cores.
\end{enumerate}

 For the problem considered, we show that both these approaches
suffer from the same shortcoming: they require multiple passes of the
entire dataset. This requires extensive data traffic from the global
memory to the local memory of the cores. Given the two to three orders
of magnitude performance difference between global and local memory
and the limited overall bandwidth, the additional overhead
generated by multiple passes can deteriorate the performance
significantly. We present a novel approach called concatenated
parallelism that effectively utilizes task as well as data parallelism
to reduce the number of passes of data between local memory and global
memory.

We demonstrate the effectiveness of our approach using
$K$-means clustering. $K$-means clustering partitions data records
into related groups based on a model of randomly chosen features
without prior knowledge of the group definitions. It is a classic
example of unsupervised machine learning. Multiple models are
typically generated by using different random initialization or by
varying the number of clusters. For generation of a single model using
the $K$-means algorithm, a random set of $K$ records are chosen from
the given records (data points). These $K$ records form the initial
cluster centroids. Each data point is compared against the $K$ cluster
centroids and is assigned to the closest centroid. In ensemble
clustering, instead of a single prototype of $K$ centroids, $M$ models
or prototypes, each of $K$ centroids, are chosen. Thus each data point is
compared with $K*M$ cluster centroids, and the best model or prototype
is selected. 

We consider two approaches for building the models: a basic algorithm
in which each centroid is updated based on all data points and an
indexing based approach in which a KD-tree is used to partition the
data based on proximity to centroids and update  centroids using only
data that is close enough. The indexing based approach is expected to
reduce the amount of computation at least for small number of
dimensions since it reduces the number of comparisons required to
determine which model to update. Concatenated parallelism solutions to
each of these approaches result in the following benefits:
\begin{enumerate}
\item
 It significantly reduces data access costs by limiting the number
  and amount of data transferred from the global memory to any of the
  cores.
\item
 It effectively utilizes the SIMT (Single Instruction Multiple
  Thread) nature of the underlying hardware by exploiting the data
  parallelism.
\end{enumerate}

 To ensure a highly optimized solution, we explore refinements of
the concatenated parallelism idea. In particular, 
we propose several approaches to minimize the update
contention that is generated by the SIMT nature of each core. We
reduce the global memory access latency by using local memory and
memory coalescing. Our implementation also incorporates performance
enhancing techniques like loop unrolling, reducing local memory bank
conflicts, optimized register usage and using constant memory. 
The key outcome is that the resultant code amortizes a larger number
of computations per data access and is an order of magnitude faster
than straightforward parallelization techniques. We believe that this
Multiple Instruction Single Data (MISD) approach is extremely important for
deriving performance on modern multi-core architectures for a large class of
data-intensive and data mining operations. Although our techniques have been
described in the context of ensemble clustering, they are quite general and
should be applicable to a variety of ensemble-based applications.

The rest of the paper is organized as follows. In
section~\ref{sec:preliminaries}, we give a brief overview of the
clustering and ensemble clustering problems, the $K$-means clustering
algorithm, and the OpenCL device
architecture. Section~\ref{sec:parallelKMeans} provides a detailed
discussion of the issues to be addressed when parallelizing any
algorithm on OpenCL hardware and of the different methodologies for
parallelizing the $K$-means algorithm on such
hardware. In section~\ref{sec:parallelKDTree}, we
discuss an enhancement of the $K$-means algorithm using a
KD-tree. In section~\ref{sec:experiments}, we present the results of
our experiments, followed by a brief conclusion.
 
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Clustering Problem}
The clustering or cluster analysis problem involves finding groups of
records in a large collection such that each record in a group is more
similar to the other records in its group than to the records in other
groups. Clustering relies on a notion of distance between records. The
distance is typically calculated as a linear combination of the
distances along each of the attributes. Thus, finding an
appropriate distance measure becomes an important step in
clustering. The distance measure determines the shape of the clusters,
because two records can be close to one another along one dimension
yet far apart along another. We choose the Euclidean
distance metric for our work. We also assume that all attributes or
dimensions are equally weighted. Our algorithms and approaches can be
suitably generalized to other distance measures.

 Lloyd's algorithm~\cite{Lloyd}, also known as the $K$-means algorithm, is one of the
 most popular clustering algorithms. The algorithm partitions
 $N$ records into $K$ clusters. In $K$-means, a random set of $K$
 records is chosen from the given records (data points). These $K$
 records form the initial cluster centroids. Each data point is
 compared against the $K$ cluster centroids and is assigned to the
 closest centroid. 
 
 Given a set of $N$ records $(n_1, n_2, \cdots,
 n_N)$, where each record is a $d$-dimensional vector, $K$-means
 clustering partitions the $N$ records into $K$ clusters $(K < N)$, $S
 = (S_1, S_2, \cdots, S_K)$, such that the intra-cluster distance is
 minimized and the inter-cluster distance is maximized. The number of
 clusters is fixed in $K$-means clustering. Let the initial
 centroids $(w_1, w_2, \cdots, w_K)$ each be initialized to one of the
 $N$ input patterns. The quality of the clustering is determined by
 the following error function:

\begin{equation}
E =  \sum_{j = 1}^{K} \sum_{n_{l} \in C_{j}} \parallel n_l - w_j \parallel ^{2} 
\end{equation}
where $C_j$ is the $j^{th}$ cluster, a disjoint subset of the input patterns.
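For concreteness, the error function can be computed directly from an assignment of records to clusters. The following C sketch (an illustration of ours for one-dimensional records; the function and variable names do not appear in our implementation) mirrors the double sum above:

```c
#include <stddef.h>

/* Illustrative sketch: E = sum over records of the squared distance
 * from each record to the centroid of its assigned cluster.
 * points[l] is record n_l, assign[l] is the index j with n_l in C_j,
 * centroids[j] is w_j. One-dimensional records for simplicity. */
static double clustering_error(const double *points, const size_t *assign,
                               size_t n, const double *centroids)
{
    double e = 0.0;
    for (size_t l = 0; l < n; ++l) {
        double d = points[l] - centroids[assign[l]];
        e += d * d;            /* ||n_l - w_j||^2 for d = 1 */
    }
    return e;
}
```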

The $K$-means algorithm works iteratively on a given set of $K$ clusters. Each iteration consists of two steps:
\begin{enumerate}
\item Each data item is compared with the $K$ centroids and associated with the closest centroid, creating $K$ clusters.
\item The new sets of centroids are determined as the mean of the points in the cluster created in the previous step.
\end{enumerate}

 The algorithm repeats until the centroids do not change or the
error falls below a threshold value. The computational complexity of the
algorithm is $O(NKdI)$, where $I$ is the number of iterations.
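The two steps of an iteration can be sketched in plain C as follows (a minimal host-side illustration of ours for one-dimensional points with at most 16 clusters; the names $kmeans\_iteration$ and $dist2$ are hypothetical and not part of our OpenCL implementation):

```c
#include <float.h>
#include <stddef.h>

/* Squared Euclidean distance for one-dimensional points. */
static double dist2(double a, double b) { double d = a - b; return d * d; }

/* One Lloyd iteration: assign each point to its closest centroid,
 * then replace each centroid by the mean of its assigned points.
 * Assumes k <= 16 for the fixed-size accumulators. */
static void kmeans_iteration(const double *points, size_t n,
                             double *centroids, size_t k)
{
    double sum[16] = {0};
    size_t count[16] = {0};
    for (size_t i = 0; i < n; ++i) {
        size_t best = 0;
        double bestDist = DBL_MAX;
        for (size_t j = 0; j < k; ++j) {       /* step 1: closest centroid */
            double d = dist2(points[i], centroids[j]);
            if (d < bestDist) { bestDist = d; best = j; }
        }
        sum[best] += points[i];
        count[best] += 1;
    }
    for (size_t j = 0; j < k; ++j)             /* step 2: new centroids */
        if (count[j] > 0)
            centroids[j] = sum[j] / (double)count[j];
}
```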


\subsection{Ensemble $K$-means algorithm}
The ensemble $K$-means algorithm~\cite{ensem2008} is an extension of the $K$-means algorithm
with the goals of refining the results~\cite{ensem2009} and providing faster
convergence. Instead of a single model with $K$ centroids, we choose
$M$ models of $K$ cluster centroids each. Each iteration of $K$-means
operates on $K*M$ centroids. Each data point is compared with the
centroids of each model and is assigned to the closest centroid in each of
the $M$ models. The computational complexity of the algorithm is $O
(NKdMI)$.
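The per-point work, finding the closest centroid independently within each of the $M$ models, can be sketched as follows (an illustration of ours for one-dimensional points; $closest\_per\_model$ is a hypothetical name, with the models stored as an $M \times K$ row-major array):

```c
#include <float.h>
#include <stddef.h>

/* For one data point, record the index of the closest centroid within
 * each of the m models; models[i*k + j] is centroid j of model i.
 * The point is compared against all K*M centroids in total. */
static void closest_per_model(double point, const double *models,
                              size_t m, size_t k, size_t *closest)
{
    for (size_t i = 0; i < m; ++i) {      /* each model independently */
        size_t best = 0;
        double bestDist = DBL_MAX;
        for (size_t j = 0; j < k; ++j) {  /* K centroids of model i */
            double d = point - models[i * k + j];
            d *= d;
            if (d < bestDist) { bestDist = d; best = j; }
        }
        closest[i] = best;
    }
}
```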
 
\subsection{Index Based Ensemble $K$-means}
\label{sec:kdtree}
In the basic ensemble $K$-means clustering algorithm, each centroid in a
model is compared with all the data points. This requires time
proportional to the product of the number of models and the number of
clusters. As an enhancement, we can use index-based data
structures to reduce the computation performed. The data structure
partitions the data records so that each centroid can identify a subset
of records to which it is close, instead of scanning all the data
records. In this work, we use the KD-tree~\cite{Andrew} as the indexing data
structure. A KD-tree is created for the data points, with each leaf node
containing a set of data points. In a KD-tree, each internal node is an
axis-orthogonal hyperplane which divides the set of points in the
space into two hyperrectangles. Once the KD-tree creation is done,
all the data points reside in the leaf nodes. Each centroid in a
model traverses the KD-tree and is mapped to the data points in the
leaf nodes. KD-tree based ensemble clustering is explained in detail
in Section~\ref{sec:parallelKDTree}.
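A minimal sketch of this structure (our illustration, not our actual implementation; the names $kd\_node$ and $kd\_locate$ are hypothetical): an internal node stores the splitting axis and the position of the axis-orthogonal hyperplane, and a centroid descends to the leaf whose hyperrectangle contains it.

```c
#include <stddef.h>

/* Internal nodes carry an axis-orthogonal split; leaves hold points. */
struct kd_node {
    int axis;                      /* splitting dimension */
    double split;                  /* hyperplane position on that axis */
    struct kd_node *left, *right;  /* NULL for leaf nodes */
    const double *points;          /* leaf only: data points mapped here */
    size_t npoints;
};

/* Return the leaf node a d-dimensional centroid falls into.
 * Assumes every internal node has both children. */
static const struct kd_node *kd_locate(const struct kd_node *node,
                                       const double *centroid)
{
    while (node->left != NULL)
        node = (centroid[node->axis] <= node->split) ? node->left
                                                     : node->right;
    return node;
}
```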

\subsection{ OpenCL Device Architecture} 
\label{sec:gpuArch}
\begin{figure}
\centering
\includegraphics[width=2.5in]{cpugpu.eps}
\caption{CPU vs GPU Architecture}
\label{fig:cpugpucomparison}
\end{figure}


In this paper, we use GPUs and multi-core CPUs as OpenCL devices. A GPU is a dedicated computing device for problems with high arithmetic intensity, i.e., a high ratio of arithmetic operations to memory operations. Each GPU core has a SIMT (Single Instruction Multiple Thread) architecture. A GPU core has a small cache and limited flow control; these two components take up most of the transistors in a CPU, whereas in a GPU more transistors are devoted to computing cores. Figure~\ref{fig:cpugpucomparison} shows a high-level architectural view of the CPU and GPU. 

Parallel portions of an application are expressed as device kernels which run on many threads. GPU threads are extremely lightweight and their creation overhead is very small. Scheduling is performed by the hardware, unlike on a CPU, where it is done by the operating system. A typical GPU needs hundreds of threads for full utilization of the hardware, whereas a CPU can be saturated with only a few threads.

ATI (owned by AMD) and NVIDIA are the leading vendors that provide GPUs with general-purpose computing capabilities. NVIDIA provides the Compute Unified Device Architecture (CUDA) framework~\cite{cuda}, with a new parallel programming model and instruction set architecture, to leverage the capabilities of the massively parallel computing hardware in NVIDIA GPUs. Similarly, AMD provides the ATI Stream technology that enables AMD CPUs and GPUs to accelerate applications.

OpenCL (Open Computing Language)~\cite{oclspec} is an open, royalty-free standard for writing parallel programs that execute across heterogeneous computing environments consisting of CPUs and GPUs. The OpenCL framework provides an ISO C99-based language for writing portable code that executes on heterogeneous platforms. NVIDIA provides OpenCL drivers for its GPUs, and AMD provides drivers for its CPUs and ATI GPUs.

An OpenCL device is identified as a collection of compute units. Each compute unit can contain many Processing Elements (PEs). An OpenCL program executes in two parts: kernels that execute on OpenCL devices and a host program that executes on the host. An instance of the kernel executing on a compute unit is called a work-item. Work-items are organized into work-groups. Work-items in a work-group execute the same code in SIMD fashion on all the processing elements of a compute unit. 

\begin{figure}
\centering
\includegraphics[scale=.40]{coalescednoncoalesced.eps}
\caption{Coalesced vs Non-coalesced Access}
\label{fig:coalesced}
\end{figure}

Work-items executing an OpenCL kernel have access to four distinct memory hierarchies~\cite{oclprgramming, oclbestpractices}.
\begin{itemize}
\item{Global Memory: All the work-items in all work-groups have read and write access to this memory. In a GPU, global memory is implemented as DRAM, so the access latency is high. Peak global memory read performance occurs when all the work-items access contiguous global memory locations. This is known as coalesced memory access. Coalesced and non-coalesced global memory access are shown in Figure~\ref{fig:coalesced}.}
\item{Constant Memory: Part of the global memory that is read only to all work-items in all work-groups. Some devices provide fast cached access to constant memory.}
\item{Local Memory: The memory region local to a work-group, shared by all the work-items in that work-group. OpenCL devices like NVIDIA and ATI GPUs provide dedicated local memory on the device which is as fast as registers. On some other devices, local memory is mapped to global memory. Local memory is implemented as banks. When multiple work-items (threads) in a work-group (block) access the same bank, bank conflicts occur, which results in the serialization of accesses. Figure~\ref{fig:bankconflict} shows bank conflicts in Banks 0, 2 and 5. Local memory provides a very fast broadcast when all the work-items read the same location.}
\item{Private Memory: The lowest level in the memory hierarchy which stores the local variables of a work-item. These are mostly hardware registers.}
\end{itemize}
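The coalesced pattern above reduces to a simple indexing rule: in pass $t$, work-item $tx$ of a work-group of width $WIPWG$ reads element $t \cdot WIPWG + tx$, so the group's reads within any one pass land on consecutive addresses. The following sketch (our illustration; the function names are hypothetical) contrasts this with the strided chunk-per-item alternative:

```c
/* Coalesced pattern: in pass t, work-item tx reads element
 * t * wipwg + tx, so neighbouring work-items read neighbouring
 * addresses within the same pass. */
static unsigned coalesced_index(unsigned t, unsigned tx, unsigned wipwg)
{
    return t * wipwg + tx;
}

/* Non-coalesced alternative: each work-item walks its own contiguous
 * chunk of perItem elements, so reads within one pass are perItem
 * elements apart. */
static unsigned strided_index(unsigned t, unsigned tx, unsigned perItem)
{
    return tx * perItem + t;
}
```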

\begin{figure}
\centering
%\epsfig{file=bankconflict.eps}
\includegraphics[scale=.40]{bankconflict.eps}
\caption{Bank Conflicts in Local memory}
\label{fig:bankconflict}
\end{figure}

A high-level view of the OpenCL device architecture is shown in Figure~\ref{fig:computeDevice}. OpenCL provides two levels of synchronization:
\begin{compactenum}
\item{Synchronization of work-items in a work-group: This is a barrier synchronization which ensures that all the work-items reach the barrier before any of them proceeds to execution beyond the barrier.}
\item{Synchronization between commands enqueued in a co\-mmand queue: This ensures that all the commands queued before the barrier have finished execution and the resulting memory is visible to the commands after the barrier before they begin execution.}
\end{compactenum}
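The first level has a familiar CPU-side analogue. The following sketch (our illustration, using POSIX threads; the names are hypothetical) mirrors the load-then-barrier-then-compute pattern our kernels use: no thread proceeds past the barrier until every thread has finished its load phase.

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>

/* CPU analogy of work-group barrier semantics: each thread sets its
 * "loaded" flag, waits at the barrier, then safely reads all flags,
 * mirroring barrier(CLK_LOCAL_MEM_FENCE) between a load phase and a
 * compute phase. */
#define NTHREADS 4

static pthread_barrier_t barrier;
static int loaded[NTHREADS];        /* phase-1 results */
static int observed_all[NTHREADS];  /* what each thread saw in phase 2 */

static void *worker(void *arg)
{
    int id = *(int *)arg;
    loaded[id] = 1;                  /* phase 1: load my share */
    pthread_barrier_wait(&barrier);  /* wait for every thread */
    int all = 1;                     /* phase 2: all shares visible */
    for (int i = 0; i < NTHREADS; ++i)
        all = all && loaded[i];
    observed_all[id] = all;
    return NULL;
}

/* Returns 1 iff every thread saw all flags set after the barrier. */
static int run_barrier_demo(void)
{
    pthread_t t[NTHREADS];
    int ids[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (int i = 0; i < NTHREADS; ++i) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    int ok = 1;
    for (int i = 0; i < NTHREADS; ++i) {
        pthread_join(t[i], NULL);
        ok = ok && observed_all[i];
    }
    pthread_barrier_destroy(&barrier);
    return ok;
}
```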

\begin{figure}
\centering
\includegraphics[width=2in]{computedevice.eps}
\caption{OpenCL compute device}
\label{fig:computeDevice}
\end{figure}

\subsection{Related Work}
The $K$-means clustering method has been implemented on GPUs. Before the introduction of the NVIDIA CUDA framework, the clustering problem was implemented on GPUs by mapping it to the textures and shaders used for graphics computation~\cite{Shalom2008, Hall2004, Cao}. These implementations use the OpenGL APIs. Since the introduction of the NVIDIA CUDA framework, there have been several GPU implementations of $K$-means clustering. Chang et al.~\cite{darjen, darjen1} computed the pairwise Euclidean distance, Manhattan distance and Pearson correlation coefficient with a speed-up over the CPU. This was extended to the first complete hierarchical clustering on a GPU~\cite{darjen2}. Cao et al.~\cite{Cao} describe a GPU clustering method for data sets that are too large to fit in the GPU's global memory. In the CUDA clustering implementation by Che et al.~\cite{Che2008}, the CPU computes the new cluster centroids, which necessitates the transfer of data between CPU and GPU in each clustering iteration. Li et al.~\cite{Li} describe a CUDA implementation of clustering that uses registers for low-dimensional data and shared memory for higher dimensions. Fang et al.~\cite{Fang2008} propose a GPU-based clustering approach using a bitmap to speed up the counting in each iteration. This approach does not scale well for ensemble clustering, as the bitmap increases the memory footprint. The algorithms described in this paper do not require data transfers between successive clustering iterations. As proposed by Lee et al.~\cite{Debunk}, we compare our algorithm's performance on different heterogeneous hardware. Our implementations use the OpenCL framework to facilitate this.
 
\section{Parallel $K$-means Algorithm}
\label{sec:parallelKMeans}
As mentioned before, the $K$-means algorithm works in iterations. Each iteration depends on the results (new cluster centroids) from the previous iteration, so the iterations themselves cannot be parallelized. What we can parallelize are the computations done within an iteration. Algorithm~\ref{alg:genAlg} shows the sequential version of $K$-means clustering.

\begin{algorithm}\caption{General Algorithm}\label{alg:genAlg}
  \begin{algorithmic}[1]
    \STATE \texttt {dataPoints = read\_data\_points()}
    \STATE \texttt{models = random}(dataPoints)
    \FOR{$i=0$ to Number of Clustering iterations}
    \STATE \texttt{centroidStats = \\ compute\_statistics}(dataPoints, models)
    \STATE \texttt{models = \\ compute\_new\_models}(centroidStats)
    \ENDFOR
  \end{algorithmic}
\end{algorithm}


The initial set of centroids is randomly chosen from the data points. The $compute\_statistics()$ function iterates over all the data points and finds the closest centroid in each model. $compute\_new\_models()$ aggregates the statistics from the $compute\_statistics()$ function to find the new models for the next iteration. The $compute\_statistics()$ function is the main workhorse, as it does all the computation. In this paper, this function is our major target for parallelization on an OpenCL device. We also implemented a kernel for the aggregation step so that we do not have to transfer data between the CPU and the device across clustering iterations. 
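The aggregation step is simple once the statistics exist: the new centroid is the mean of its assigned points, and a centroid with no assigned points is left unchanged. A sketch (our illustration for one-dimensional centroids; the C-level signature is hypothetical):

```c
#include <stddef.h>

/* Aggregation step: given per-centroid counts and coordinate sums
 * produced by the statistics pass, each new centroid is the mean of
 * its assigned points; empty clusters keep their old centroid.
 * total = M * K centroids across all models. */
static void compute_new_models(const unsigned *count, const double *sum,
                               double *centroids, size_t total)
{
    for (size_t j = 0; j < total; ++j)
        if (count[j] > 0)
            centroids[j] = sum[j] / (double)count[j];
}
```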

\subsection{Parallelization Issues}
For parallelizing an algorithm on an OpenCL device, we need to address the following issues:

\subsubsection{Concurrency}

The two main entities that can be partitioned in the $K$-means clustering problem are the data points and the models. Each model can be mapped to a problem instance which operates on the data.
The OpenCL standard provides two levels of parallelism: parallel work-groups (PWG)
and parallel work-items within work-groups (PWI).

As shown in Table~\ref{table}, we devised three parallelization methodologies based on how the data and models use these two levels of parallelism. These methodologies apply to the implementation of the $compute\_statistics()$ function; the $compute\_new\_models()$ function remains the same for all three strategies. 

\begin{table}
\caption{Parallelization Methodologies}
\centering
\label{table}
\begin{tabular}{ccc} 
 Methodology & Models & Data \\
\hline
Task Parallelism & PWG & PWI \\
Data Parallelism & - & PWG, PWI \\
Concatenated Parallelism & PWI & PWG, PWI \\
\end{tabular}
\end{table}

\subsubsection{Memory constraints} 

As mentioned in section~\ref{sec:gpuArch}, an OpenCL device provides a hierarchical memory structure. Before deciding in which level of the hierarchy to place a data structure, we need to carefully identify its memory access pattern. Objects that are accessed repeatedly incur a performance penalty if stored in global memory, while the fast local memory is limited in size. The options for storing the input (data records and models) are as follows:

\begin{itemize}
\item \textit{Models and data points in global memory.}
\item{\textit{Data points in global memory and models in local
  memory.} Each data point is read from global memory to the private
  register and computation is performed against all the models.}
\item{\textit{Models in global memory and data points in local memory.} For each data point in local memory performs computations against the models.}
\end{itemize}

We also need data structures to store the centroid statistics after each computation. The options for storing these result data structures are:
\begin{itemize}
\item{Update the statistics in global memory.}
\item{Update the statistics in local memory and after partial execution update the results to global memory.}
\end{itemize}


\subsubsection{Synchronization and atomic operations}

Our implementation of clustering has two OpenCL kernels. Even though kernel calls are asynchronous with respect to the host, the kernel calls in a particular stream are serialized. Hence there is no need for explicit synchronization between the two kernel invocations. We do need to identify synchronization points within a work-group and use barrier synchronization to synchronize the work-items. 

In the parallel implementation of the $K$-means algorithm, the work-items compute the distances in parallel. Memory consistency issues arise because many work-items can concurrently update the statistics of the centroids. We use the atomic operations provided by the OpenCL standard to achieve memory consistency. Excessive use of atomic operations can slow down execution. Since we have two options for storing the statistics data structures, we can perform atomic operations in:
\begin{itemize}
\item{Global memory: This requires a large number of cycles.}
\item{Local memory: This is faster than global memory atomic updates. However, it can result in contention when many work-items in a work-group update the same location in local memory.}
\end{itemize}
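The role played by OpenCL's $atom\_add$ has a direct CPU-side analogue. The following sketch (our illustration using C11 atomics and POSIX threads; the names are hypothetical) shows several threads updating one shared per-centroid counter with an atomic fetch-and-add, so no increments are lost despite the concurrent updates:

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdatomic.h>

/* CPU analogy of the atomic statistics update: NWORKERS threads each
 * add INCS_PER_WORKER increments to a shared counter, the role
 * atom_add plays for work-items updating sCount in local memory. */
#define NWORKERS 4
#define INCS_PER_WORKER 1000

static atomic_uint centroid_count;

static void *incrementer(void *arg)
{
    (void)arg;
    for (int i = 0; i < INCS_PER_WORKER; ++i)
        atomic_fetch_add(&centroid_count, 1);  /* no lost updates */
    return NULL;
}

/* Returns the final counter value after all threads finish. */
static unsigned run_atomic_demo(void)
{
    pthread_t t[NWORKERS];
    atomic_store(&centroid_count, 0);
    for (int i = 0; i < NWORKERS; ++i)
        pthread_create(&t[i], NULL, incrementer, NULL);
    for (int i = 0; i < NWORKERS; ++i)
        pthread_join(t[i], NULL);
    return atomic_load(&centroid_count);
}
```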

\subsubsection{Data transfer between host and device}
 Before launching a kernel, all the required data has to be copied to the device global memory. After execution, the results are copied back to the host. If there are multiple kernels, there can be frequent transfers of data between the host and device, which reduces performance.

\section{OpenCL Implementation}
\label{sec:oclimp}
In this section, we present the design and implementation of the three parallelization strategies mentioned in Table~\ref{table}. For efficient indexing, we store the data points and models in linear arrays. The two arrays are copied to the OpenCL device global memory as part of the pre-processing step. To keep track of the statistics produced by the $compute\_statistics()$ kernel, we keep two additional arrays in global memory: one array tracks the number of points closest to each centroid in the models, and the second holds the sum of the points ($\sum{x}, \sum{y}, \sum{z}$, etc.) closest to each centroid in the models. The $compute\_new\_models()$ kernel uses these two arrays to compute the new set of models for use in the next iteration. There is no data transfer between the OpenCL device and the host between kernel invocations (clustering iterations). For each methodology, we analyze the computational and data access complexities. We use the following conventions for our analysis: the data set has $N$ records, where each record has dimension $D$; we choose $M$ random models, each with $K$ cluster centroids, from the $N$ records. We also discuss the algorithm and OpenCL code for each methodology. For simplicity of description, we present the OpenCL code for two-dimensional data points. All of our kernels use $256$ work-items in a work-group. This is denoted by the macro $WORK\_ITEMS\_PER\_WORK\_GROUP$, abbreviated as $WIPWG$ in the code listings. 

\subsection{Task Parallelism}
In task parallelism, the tasks are partitioned among the compute units. Each model is mapped to a work-group. Each work-group reads the entire set of data records from the device global memory and computes the closest centroid of each data point in its respective model. Figure~\ref{fig:TaskParallelism} depicts task parallelism.

\begin{figure}
\centering
\includegraphics[scale=.30]{taskpara.eps}
\caption{Task Parallelism}
\label{fig:TaskParallelism}
\end{figure}

Within a work-group, the computation is partitioned among the work-items. Each work-item reads a chunk of the data records and finds the closest centroid in the model. In this way, all the work-items in a work-group read the mapped model when computing over all the data points. Reading the model every time from global memory hurts performance, which makes the model an ideal candidate for local memory. At the start of the kernel, the work-items load the model into local memory. All the work-items have to wait until the model is completely loaded in local memory, which requires a barrier synchronization.
Each work-group executes the algorithm depicted in Algorithm~\ref{alg:taskP}. 

\begin{algorithm}\caption{Task Parallelism}\label{alg:taskP}
    \begin{algorithmic}[1]
        \STATE \texttt {Load the model $m$ corresponding to this work-group to local memory}
        \STATE \texttt {barrier\_synchronize()}
         \FOR{each data point $p$ assigned to a work-item} 
         \STATE \texttt {statistics = find\_closest\_centroid(p, m)}
				 \STATE \texttt {atom\_update(statistics) in local memory}
         \ENDFOR
				 \STATE \texttt {barrier\_synchronize()}
				 \STATE \texttt {write back statistics from local to global}
     \end{algorithmic}
\end{algorithm}

We now discuss in detail the OpenCL implementation of Algorithm~\ref{alg:taskP} for two-dimensional records. The work-items in a work-group load the points of a particular model from global memory into local memory. The work-group barrier synchronization function ensures that the required model is loaded before proceeding to the computation phase. We keep two additional linear arrays ($sCount$ and $sSum$) in local memory to store the computational result. $sCount[i]$ stores the number of data points closest to the $i^{th}$ centroid in the model. $sSum[i]$ stores the cumulative sum ($\sum{x}, \sum{y}$ for two dimensions) of the data points closest to the $i^{th}$ centroid in the model. Listing~\ref{tcomputation} shows the distance computation phase of the kernel.

\begin{lstlisting}[caption={Computation phase of task parallelism}, label=tcomputation]
// dData in Global Memory
// sModels, sCount, sSum in Local Memory
// work-item id
size_t tx = get_local_id(0);
unsigned pointsPerWorkItem = NO_DATA / WIPWG;
for (unsigned t = 0; t < pointsPerWorkItem; ++t)
{
   unsigned dataIndex = t * WIPWG + tx;
   // Coalesced access from global memory
   Point point = dData[dataIndex];
   int minDist = MAX_DISTANCE;
   int minIndex = 0;
   for (unsigned j = 0; j < MODEL_SIZE; ++j)
   {
      Point c = sModels[j];
      int dist = computeDistance(point, c);
      if (dist < minDist)
      {
         minDist = dist;
         minIndex = j;
      }
   }
   atom_add(sCount + minIndex, 1);
   atom_add(&(sSum[minIndex].x), point.x);
   atom_add(&(sSum[minIndex].y), point.y);
}
barrier(CLK_LOCAL_MEM_FENCE);
\end{lstlisting}

Each work-item in a work-group reads a portion of the data from the $dData$ array stored in device global memory. The data read uses the coalesced pattern mentioned in Section~\ref{sec:gpuArch}. In each iteration of the inner loop, a data point is compared against the model stored in the local memory array $sModels$. At any given time, all the work-items in a work-group read from the same location of the $sModels$ array in local memory, which lets the hardware broadcast the data to all the work-items in one shot. Once the closest centroid is identified, the sum and the count are updated atomically in local memory. Atomic updates are required because many work-items can update the $sCount$ and $sSum$ entries corresponding to the same centroid in a model. The barrier synchronization step ensures that all the computation within this work-group is completed before we proceed to the write-back stage.

\begin{lstlisting}[caption={Write back phase of task parallelism}, label=twriteback]
// bx is the work-group id: get_group_id(0)
unsigned index = bx * MODEL_SIZE + tx;
if (tx < MODEL_SIZE)
{
   dCount[index] = sCount[tx];
   dSum[index] = sSum[tx];
}
\end{lstlisting}

Listing~\ref{twriteback} shows the data write-back phase. The sum and the count obtained from the computation phase are copied from local memory to global memory. Since each model is mapped to a distinct work-group, there are no conflicts between work-groups, and the global memory update does not need atomic functions.

In task parallelism, the data is read $M$ times from global memory, and this access contributes the majority of the run time. If $M$ is smaller than the number of compute units in the hardware, the hardware is not used efficiently. Local memory conflicts are large when $K$ is small, since all the work-items in a work-group then compete to update one of the few centroids of the model. The total execution time is the sum of the data access time and the computation time: for each of the $M$ models the data is read once, so the data access complexity is $O(MN)$, while the computational complexity is $O(MNK)$.

\subsection{Data Parallelism}

In data parallelism, the data is partitioned across work-groups and across work-items within a work-group. All the work-groups work on the same model, i.e., each invocation of the OpenCL kernel works on one model and the whole dataset. Figure~\ref{figDataParallelism} depicts data parallelism.
Within a work-group, each work-item reads a sub-chunk from the chunk of data assigned to the work-group and computes the closest centroid in the model. As in task parallelism, the model array is accessed by all work-items and is therefore stored in local memory. All the work-groups in a kernel load the same model into local memory. The barrier synchronization step remains the same as in task parallelism. Each work-group executes the algorithm depicted in Algorithm~\ref{alg:dataP}.

\begin{figure}
\centering
\includegraphics[scale=.30]{datapara.eps}
\caption{Data Parallelism}
\label{figDataParallelism}
\end{figure}

\begin{algorithm}\caption{Data Parallelism}\label{alg:dataP}
    \begin{algorithmic}[1]
       \FOR{every model $m$}
        \STATE \texttt {Load the model $m$ to local memory}
        \STATE \texttt {barrier\_synchronize()}
         \FOR{each data point $p$ assigned to a work-item} 
         \STATE \texttt {statistics = find\_closest\_centroid(p, m)}
				 \STATE \texttt {atom\_update(statistics) in local memory}
         \ENDFOR
        \STATE \texttt {barrier\_synchronize()}
				\STATE \texttt {atom\_update(statistics) from local to global}
       \ENDFOR
     \end{algorithmic}
\end{algorithm}
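The index arithmetic behind this partitioning can be checked on the host. The sketch below uses hypothetical values for $WIPWG$ (work-items per work-group), $PPWI$ and the number of work-groups, and verifies that every data point is read exactly once and that, in each round $p$, the work-items of a group touch consecutive addresses, i.e., the accesses coalesce:

```c
/* Host-side check of the coalesced indexing used in the
   data-parallel kernel. Work-item tx of work-group bx reads
   PPWI points, strided by WIPWG, so in every round the
   work-items of a group access consecutive addresses.
   The constants below are illustrative, not the tuned values. */
enum { WIPWG = 4, PPWI = 2, N_GROUPS = 2 };

static int data_index(int bx, int tx, int p)
{
    int bIndex = WIPWG * PPWI * bx + tx;
    return bIndex + p * WIPWG;
}
```

With these constants, group 0 covers indices 0..7 and group 1 covers 8..15, with no index read twice.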


We now discuss the OpenCL implementation of Algorithm~\ref{alg:dataP}. Listing~\ref{dcomputation} shows the computation phase. Each work-group reads a chunk of the data array, and the chunk is split among the work-items in the work-group. The work-items access the data array in the coalesced pattern. The number of sub-chunks read by a work-item is controlled by the macro $POINTS\_PER\_WORK\_ITEM$ ($PPWI$), which in turn determines the number of work-groups: the larger $PPWI$ is, the fewer work-groups there are, and vice versa. The local memory broadcast and the atomic updates of the $sSum$ and $sCount$ arrays remain the same as in the computation phase of task parallelism (Listing~\ref{tcomputation}).

\begin{lstlisting}[caption={Computation phase of data parallelism}, label=dcomputation]
// dData in global memory
// work-group id
size_t bx = get_group_id(0);
// work-item id
size_t tx = get_local_id(0);
unsigned bIndex = WIPWG * PPWI * bx + tx;
for (unsigned p = 0; p < PPWI ; ++p) 
{
    unsigned dataIndex = bIndex + (p * WIPWG);
    Point point = dData[dataIndex];
		// Same code from task parallelism follows here
}
barrier(CLK_LOCAL_MEM_FENCE);		
\end{lstlisting}
The write-back phase of data parallelism differs from that of task parallelism. As shown in Listing~\ref{dwriteback}, the write-back phase involves atomic updates in global memory: since all the work-groups in a kernel work on the same model, they update the statistics at the same locations of the arrays in global memory.

\begin{lstlisting}[caption={Write back phase of data parallelism}, label=dwriteback]
if (tx < MODEL_SIZE) 
{
   atom_add(dCount + tx, sCount[tx]);
   atom_add(&(dSum[tx].x), sSum[tx].x);
   atom_add(&(dSum[tx].y), sSum[tx].y);
}
\end{lstlisting}

The data access and computational complexities remain the same as in task parallelism. However, the multiple kernel invocations and the atomic updates in global memory contribute additional overhead in data parallelism.

\subsection{Concatenated Parallelism}
\label{sec:concat}
Both task parallelism and data parallelism require accessing the data multiple times in a single iteration. We devised a new methodology, called concatenated parallelism, with the primary aim of reducing this data access. In concatenated parallelism, the data is partitioned across work-groups and work-items, and the computation is partitioned across the work-items within a work-group. All the work-groups work on all the models. Concatenated parallelism is shown in Figure~\ref{fig:ConcatenatedParallelism}. As in data parallelism, each work-item within a work-group reads a sub-chunk of the data assigned to the work-group. Each data point from the sub-chunk is compared against all the models, and the closest centroid in each model is identified. Thus each work-item accesses all the models for each data point, and keeping all the models in local memory ensures fast access by all the work-items. Each work-group executes the algorithm shown in Algorithm~\ref{alg:concatP}.

\begin{figure}
\centering
\includegraphics[scale=.30]{concatpara.eps}
\caption{Concatenated Parallelism}
\label{fig:ConcatenatedParallelism}
\end{figure}

\begin{algorithm}\caption{Concatenated Parallelism}\label{alg:concatP}
    \begin{algorithmic}[1]
        \STATE \texttt {Load all the models to local memory}
        \STATE \texttt {barrier\_synchronize()}
         \FOR{each data point $p$ assigned to a work-item} 
         \FOR{each model $m$ in local memory}	
         \STATE \texttt {statistics = find\_closest\_centroid}(p, m)
				 \STATE \texttt {atom\_update(statistics) in local memory}
			   \ENDFOR
         \ENDFOR
			\STATE \texttt {barrier\_synchronize()}
			\STATE \texttt {atom\_update(statistics) from local to global}
     \end{algorithmic}
\end{algorithm} 

Listing~\ref{ccomputation} shows the OpenCL implementation of the computation phase of concatenated parallelism. The $sCount$ and $sSum$ arrays now store the counts and sums for all the models. As in data parallelism, the work-items read sub-chunks of data in the coalesced pattern, and local memory stores all the models. With the local memory access pattern used in task and data parallelism, every work-item would start its computation with the centroids of the first model. This yields a fast broadcast during the read, but during the atomic updates of $sCount$ and $sSum$ all the work-items then update the locations corresponding to the same model. The resulting contention serializes the accesses, which degrades performance. Figure~\ref{fig:highcontention} illustrates this with three work-items ($W0$, $W1$, $W2$) in a work-group: all the work-items update the same model in a given time frame.

\begin{figure}
\centering
\includegraphics[scale=.42]{highreducedcontention.eps}
\caption{Local memory atomic update}
\label{fig:highcontention}
\end{figure}

We devise a reduced-contention methodology using a distributed access pattern. In the distributed access pattern, the $i^{th}$ work-item in a work-group starts its computation with the model having index $i \bmod NO\_MODELS$, and therefore starts by updating the locations corresponding to that model in the $sCount$ and $sSum$ arrays. This reduces the contention due to conflicting accesses. One drawback of this method is that we can no longer exploit the broadcast feature of the local memory hardware; however, our experiments show that the contention is a far bigger bottleneck than the loss of the broadcast. Figure~\ref{fig:highcontention} also shows the reduced-contention update.

\begin{lstlisting}[caption={Computation phase of concatenated parallelism}, label=ccomputation]
// dData in global memory
// sModels, sCount, sSum in shared memory
// work-group id
size_t bx = get_group_id(0);
// work-item id
size_t tx = get_local_id(0);
unsigned bIndex = TPWG * PPWI * bx + tx;
for (unsigned p = 0; p < PPWI; ++p) 
{
   unsigned dataIndex = bIndex + (p * TPWG);
	 // coalesced read
   Point point = dData[dataIndex];
   for (unsigned i = 0; i < NO_MODELS; ++i) 
   {
      // Distributed model access
      unsigned gSetIndex = ((i + tx) % NO_MODELS) * MODEL_SIZE;
      int minDist = MAX_DISTANCE;
      unsigned minIndex = gSetIndex;
      for (unsigned j = 0; j < MODEL_SIZE; ++j) 
      {
         unsigned setIndex = gSetIndex + j;
         Point c = sModels[setIndex];
         int dist = computeDistance(point, c);
         if (dist < minDist) 
         {
            minDist = dist;
            minIndex = setIndex;
         }
      }
      atom_add(sCount + minIndex, 1);
      atom_add(&(sSum[minIndex].x), point.x);
      atom_add(&(sSum[minIndex].y), point.y);
   }
}
barrier(CLK_LOCAL_MEM_FENCE);
\end{lstlisting}
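The key property of the distributed access pattern can be checked with a small host-side sketch: at any step $i$, the work-items of a group land on pairwise distinct models, so no two of them contend for the same model's counters. The sketch below uses a hypothetical $NO\_MODELS$ of 4:

```c
/* With the distributed pattern, work-item tx processes the models
   in the order (0+tx, 1+tx, ...) mod NO_MODELS. For a fixed step i,
   distinct tx values (mod NO_MODELS) therefore hit distinct models,
   avoiding contention on the sCount/sSum regions. */
enum { NO_MODELS = 4 };

static int model_for_step(int tx, int i)
{
    return (i + tx) % NO_MODELS;
}
```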

As shown in Listing~\ref{cwriteback}, the write back phase of concatenated parallelism remains similar to data parallelism with each work-group doing an atomic write back of all the model statistics to global memory.

\begin{lstlisting}[caption={Write back phase of concatenated parallelism}, label=cwriteback]
for (unsigned i = 0; i < centPerThread; ++i) 
{
   const unsigned cIndex = tx * centPerThread + i;
   if (cIndex >= NO_CENT) break;
   atom_add(dCount + cIndex, sCount[cIndex]);
   atom_add(&(dSum[cIndex].x), sSum[cIndex].x);
   atom_add(&(dSum[cIndex].y), sSum[cIndex].y);
}
\end{lstlisting}

One of the major improvements of concatenated parallelism is the reduction in the number of times the data is read: the data is read only once in each clustering iteration.
The data access complexity is therefore $O(N)$, unlike task and data parallelism where it is $O(MN)$. The computational complexity remains the same as in task and data parallelism. Concatenated parallelism avoids the multiple-kernel-invocation overhead of data parallelism, and the overhead of the atomic updates in global memory is much smaller than the time spent reading the data multiple times as in task parallelism.
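The single-pass structure can be stated as a sequential reference sketch, again assuming two-dimensional integer points as in the listings. The data array is scanned once, and the statistics of all $m$ models are updated from that single read:

```c
#include <limits.h>

typedef struct { int x, y; } Pt;

/* Sequential reference for concatenated parallelism: one pass over
   the n data points updates the count/sum statistics of all m
   models (k centroids each) -- O(n) data accesses instead of the
   O(m n) of task/data parallelism. count and sum have m*k entries,
   laid out model-major as in the kernels. */
static void accumulate(const Pt *data, int n,
                       const Pt *models, int m, int k,
                       long *count, Pt *sum)
{
    for (int t = 0; t < n; ++t) {          /* single read of each point */
        Pt p = data[t];
        for (int i = 0; i < m; ++i) {      /* all models per point */
            int best = 0, bestDist = INT_MAX;
            for (int j = 0; j < k; ++j) {
                int dx = p.x - models[i * k + j].x;
                int dy = p.y - models[i * k + j].y;
                int d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = j; }
            }
            count[i * k + best] += 1;
            sum[i * k + best].x += p.x;
            sum[i * k + best].y += p.y;
        }
    }
}
```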

\section{Parallel $K$-means Using KD tree}
\label{sec:parallelKDTree}
The algorithm described in Section~\ref{sec:parallelKMeans} requires a distance computation between each data point and every centroid of every model. As introduced in Section~\ref{sec:kdtree}, we use a KD-tree to reduce this computation by partitioning the data points. The KD-tree is created from the data points; Alsabti et al.~\cite{Alsabti} describe in detail the creation of a KD-tree for multi-dimensional data. The KD-tree is created only once in the whole clustering process. Once it is built, the centroids of each model traverse the KD-tree and get mapped to the data points in the leaf nodes. Algorithm~\ref{alg:kdGenAlg} shows the modified version of Algorithm~\ref{alg:genAlg} that uses the KD-tree to partition the data points, and Figure~\ref{kdtree} shows the KD-tree over the data points with the mapped centroids from the models. The creation and traversal of the KD-tree are performed by the functions $create\_kd\_tree()$ and $create\_leaf\_node\_to\_centroid\_mapping()$ respectively. These two steps are done on the CPU because the irregular structure of the KD-tree is not well suited to the SIMT architecture of the GPU. Once the KD-tree and the mapping of centroids to data points are created, the mapping information is copied to the device. After the computation, the new centroids are copied back to the host to create the mapping for the next iteration. This per-iteration copying of the mapping information to the device and of the new centroids back to the host creates additional overhead.

\begin{algorithm}\caption{Pruning using KD-tree}\label{alg:kdGenAlg}
  \begin{algorithmic}[1]
    \STATE \texttt {dataPoints = read\_data\_points()}
    \STATE \texttt{models = random}(dataPoints)
    \STATE \texttt {kd\_tree = create\_kd\_tree}(dataPoints)
    \FOR{$i=0$ to Number of Clustering iterations}
    \STATE \texttt {mapping = \\
    create\_leaf\_node\_to\_centroid\_mapping} \\
                (kd\_tree, models)
    \STATE \texttt{centroidStats = \\ compute\_statistics}(mapping)
    \STATE \texttt{models = \\ compute\_new\_models}(centroidStats)
    \STATE \texttt{copy\_models\_to\_cpu}()
    \ENDFOR
  \end{algorithmic}
\end{algorithm}
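The intuition behind the pruning done during the traversal can be illustrated with a simplified one-dimensional sketch (the helper names are hypothetical; the actual construction follows Alsabti et al.\ and operates on multi-dimensional bounding boxes). A centroid can own no point of a leaf if its closest possible distance to the leaf's bounding interval exceeds some other centroid's farthest possible distance, so it is dropped from that leaf's mapping:

```c
/* 1-D pruning sketch: [lo, hi] is the bounding interval of a leaf. */
static int min_dist(int c, int lo, int hi)
{
    if (c < lo) return lo - c;      /* closest point of the interval */
    if (c > hi) return c - hi;
    return 0;                        /* centroid inside the interval */
}

static int max_dist(int c, int lo, int hi)
{
    int dlo = c < lo ? lo - c : c - lo;  /* distance to each endpoint */
    int dhi = c < hi ? hi - c : c - hi;
    return dlo > dhi ? dlo : dhi;
}

/* Sets keep[j] = 1 for centroids that survive the pruning: a
   centroid is kept only if its minimum possible distance does not
   exceed the smallest maximum distance over all candidates. */
static void prune(const int *cent, int k, int lo, int hi, int *keep)
{
    int bestMax = 1 << 30;
    for (int j = 0; j < k; ++j) {
        int md = max_dist(cent[j], lo, hi);
        if (md < bestMax) bestMax = md;
    }
    for (int j = 0; j < k; ++j)
        keep[j] = min_dist(cent[j], lo, hi) <= bestMax;
}
```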

 
\begin{figure}
\centering
\includegraphics[scale=.35]{kdtree.eps}
\caption{KD-tree with data points and centroids}
\label{kdtree}
\end{figure}

Similar to the parallel implementations described in Section~\ref{sec:oclimp}, we can have task, data and concatenated parallelism for the KD-tree based implementation as well. 

\begin{enumerate}
\item{Task Parallelism: Each work-group performs the computations for one model. The work-items in a work-group read the data points, and the computation is performed for the mapped centroids of this model. Since centroids of different models get mapped to the same data points, each work-group must read the entire data.}
\item{Data Parallelism: Each kernel invocation performs the computation for one model. The data points are partitioned among the work-groups, i.e., each leaf node of the KD-tree gets mapped to a work-group. The work-items in a work-group read the data points, and the computation is performed for the mapped centroids of the model. As in task parallelism, the data is read multiple times.}
\item{Concatenated Parallelism: As in data parallelism, each leaf node of the KD-tree is mapped to a work-group. The work-items in a work-group read the data points and perform the computations for the mapped centroids of all the models. In this approach, the data is read only once and there is no overhead of one kernel call per model as in data parallelism. Let $\beta$ be the pruning factor, i.e., the fraction by which the computation is reduced after pruning by traversing the KD-tree. The data access complexity is $O(N)$ and the computational complexity is $O(\frac{M K N}{\beta})$. We implemented three different variants of concatenated parallelism, discussed in Section~\ref{sec:kdconcat}.} 
\end{enumerate}

\subsection{Concatenated Parallelism using KD-tree}
\label{sec:kdconcat}
In this section we discuss three different implementations of concatenated parallelism using a KD-tree. Each of these implementations differs in the local/global memory access patterns.
\subsubsection{Direct Access}
Each leaf node is mapped to a work-group, which performs the computations between all the data points in the leaf node and the set of centroids mapped to it from the models. Figure~\ref{directmap} shows the mapping from leaf nodes to work-items.

\begin{figure}
\centering
\includegraphics[scale=.30]{directmap.eps}
\caption{KD Tree: Direct Mapping}
\label{directmap}
\end{figure}

The centroid data structures (the sum and count arrays) are stored in local memory. As in the normal $K$-means, we use the atomic update functions provided by the device hardware to achieve memory consistency when work-items in a work-group update the same location. If the number of centroids mapped to a particular leaf node of the KD-tree is small, all the work-items may update (atomically) the same locations of the sum and count arrays in local memory. This increases the contention and results in serialized accesses, degrading the performance. The next two approaches describe methods to reduce this contention.

\subsubsection{Distributed Access}
In this approach we reduce the local memory update contention by using a distributed access pattern. Instead of mapping one leaf node to one work-group, a work-group spans many leaf nodes. In this way, the work-items in a work-group update different locations in local memory, which reduces the contention. Figure~\ref{distributedmap} shows the mapping from leaf nodes to work-items.

\begin{figure}
\centering
\includegraphics[scale=.30]{distributedmap.eps}
\caption{KD Tree: Distributed Mapping}
\label{distributedmap}
\end{figure}

One drawback of this approach is that the number of global memory accesses for the data increases, because the computation for a leaf node is performed by multiple work-groups.

\subsubsection{Eliminating Memory Conflicts Using Replication}
In this approach, we replicate the data structures for each work-group and work-item in the device global memory. As in the direct-mapped approach, each leaf of the KD-tree is mapped to a work-group. Each work-item operates on its own memory locations, so there are no global memory conflicts. After the computation, another device kernel aggregates the results from all the replicated copies. One drawback of this approach is that the replication increases the memory footprint.
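The replication idea can be sketched as follows (hypothetical sizes): each of $R$ replicas accumulates into its own private copy of the count array, and a second pass reduces the $R$ copies into one result, so no atomics are needed during accumulation:

```c
/* Sketch of the replication approach: rep[r][j] is replica r's
   private count for centroid j. The reduction below is the job of
   the aggregation kernel that runs after the computation kernel. */
enum { R = 4, K = 3 };

static void reduce_replicas(long rep[R][K], long out[K])
{
    for (int j = 0; j < K; ++j) {
        out[j] = 0;
        for (int r = 0; r < R; ++r)
            out[j] += rep[r][j];   /* sum the private copies */
    }
}
```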

\section{Empirical Evaluation}
\label{sec:experiments}

We analyze the three clustering approaches: Task Parallelism (TP), Data Parallelism (DP), and Concatenated Parallelism (CP). We determine the best parallelization strategy on multi-core hardware, and we also study the performance impact of using index-based data structures.

\subsection{Datasets}
We used a randomly generated data set of $1$ million points, with the dimensionality of the data points varied across experiments. The initial centroids of the models are chosen randomly from the data set. We fix the number of cluster centroids to $10$, and the number of clustering iterations is also fixed at $10$. We measure the time taken by the for loop in Algorithm~\ref{alg:genAlg}; this does not include memory allocation and data transfer times. In the case of the KD-tree implementation, the time for the tree traversal and the intermediate transfer of centroids is taken into account. 

\subsection{Benchmark Machines}
We conducted our experiments on three different hardware platforms: an NVIDIA FERMI-generation GeForce GTX 480 GPU, an ATI FirePro V7800 GPU, and a machine with $8$ quad-core AMD Opteron CPUs ($32$ cores). Table~\ref{htable} shows the hardware specifications of the devices: $CU$ and $PE$ denote the number of compute units and processing elements respectively, whereas $GM$ and $LM$ denote the global and local memory available on the device. All the machines run a $64$-bit Linux operating system.

\begin{table}
\caption{Benchmark Hardware Specification}
\centering
\label{htable}
\begin{tabular}{p{.5in}p{1.2in}ll} 
Device & Cores (CU x PE)& GM &  LM\\
\hline
FERMI  & 480 (15 x 32) & 1.6GB & 32KB \\
ATI & 1440 (18 x 80) & 1GB & 32KB \\
CPU & 32 (32 x 1) & 3.2GB & 32KB \\
\end{tabular}
\end{table}

\subsection {Comparison of Three Approaches on a 32-core CPU}
\label{sec:expr1}
Figure~\ref{fig:comparisonCPU} compares TP, DP and CP on the 32-core CPU machine. We used two-dimensional data and study the results while varying the number of models. The graph clearly shows that CP performs much better than DP and TP. DP is faster than TP initially because in TP, when there are few models, there are few work-groups, and hence not all the compute units of the hardware are used effectively. As the number of models increases, the number of work-groups increases and all the compute units are utilized. In DP, as the number of models increases, so does the number of kernel calls; this additional overhead reduces the performance.

\begin{figure}
\includegraphics[scale=.6]{comparisonCPU.eps}
\caption{Comparison of three clustering approaches on CPU}
\label{fig:comparisonCPU}
\end{figure}

\subsection {Comparison of Three Approaches on FERMI}
\label{sec:expr2}
Figure~\ref{fig:comparisonFERMI} compares TP, DP and CP on the FERMI GPU. The data set and model configuration remain the same as in Section~\ref{sec:expr1}. As on the 32-core CPU, CP gives the best performance. TP performs better than DP as the number of models increases, because the additional overhead of the multiple kernel invocations reduces the performance of DP.

\begin{figure}
\centering
\includegraphics[scale=.6]{comparisonFERMI.eps}
\caption{Comparison of three clustering approaches on FERMI}
\label{fig:comparisonFERMI}
\end{figure}

\subsection {Performance Comparison of various Multi-core Hardware}
\label{sec:expr3}
From Sections~\ref{sec:expr1} and~\ref{sec:expr2}, we conclude that concatenated parallelism gives the best performance on multi-core hardware. We now determine which of the three platforms gives the best performance for concatenated parallelism. Figure~\ref{fig:comparisonConcat} compares CP on the 32-core CPU, the NVIDIA FERMI GPU and the ATI GPU. The GPUs provide close to a $10X$ performance improvement over the 32-core CPU. The ATI GPU and the FERMI GPU give comparable performance: even though the ATI GPU has more execution cores, FERMI has a wider memory bus, which accounts for the comparable performance. In the rest of the experiments we use the FERMI GPU as the OpenCL device.
 \begin{figure}
\centering
\includegraphics[scale=.6]{comparisonConcat.eps}
\caption{Performance Comparison of various Hardware}
\label{fig:comparisonConcat}
\end{figure}

\subsection {Comparison of the Three KD-tree Algorithms}
\label{sec:expr4}
Figure~\ref{fig:comparisonKdtree} compares the three KD-tree implementations of concatenated parallelism described in Section~\ref{sec:kdconcat}.
The distributed-mapping implementation gives the best performance due to its reduced contention. The conflict-free implementation based on replication is slower than the distributed method because of its larger number of global memory accesses.
\begin{figure}
\centering
\includegraphics[scale=.6]{kdtree3impl.eps}
\caption{Comparison of the Three KD-tree Algorithms}
\label{fig:comparisonKdtree}
\end{figure}

\subsection {Comparison of KD-tree and Basic Implementation of Concatenated Parallelism}
\label{sec:expr5}
In this section, we compare the best KD-tree implementation with the basic implementation of concatenated parallelism. From Section~\ref{sec:expr4}, we conclude that the distributed-mapping implementation of the KD-tree gives the best performance. As shown in Figure~\ref{fig:comparisonKdtreeWithNormal}, the KD-tree implementation gives an average of a $2X$ performance improvement over the basic implementation, even though the KD-tree method incurs the additional overhead of the tree traversal and the intermediate data transfers between the host and the device.

\begin{figure}
\centering
\includegraphics[scale=.6]{kdtreewitnormal.eps}
\caption{KD-tree vs Basic Implementation of CP}
\label{fig:comparisonKdtreeWithNormal}
\end{figure}

\subsection {Comparison of KD-tree and Basic Implementation for Varying Dimensions}
In this section, we compare the performance of the KD-tree and basic implementations as the dimensionality of the data changes. The number of models is fixed at $10$. As shown in Figure~\ref{fig:comparisonKddimension}, the performance of the KD-tree implementation decreases as the dimensionality of the data increases, because the pruning obtained from the KD-tree diminishes in higher dimensions. 
\begin{figure}
\centering
\includegraphics[scale=.6]{dim.eps}
\caption{KD-tree vs Basic Implementation for Varying Dimensions}
\label{fig:comparisonKddimension}
\end{figure}

\subsection{Discussion}
We conclude that concatenated parallelism gives better performance for data-intensive applications. Our results show that the memory access pattern is one of the key factors that determine the overall performance of an algorithm running on many-core hardware. Our description assumes that all the models have an equal number of centroids, but with a small change the algorithm can support models with different numbers of centroids as well. 

\section{Conclusion}
As processors evolve, memory traffic is becoming a major bottleneck. In this paper, we presented an approach for reducing the memory traffic of ensemble clustering. Our experimental results clearly show that reducing memory traffic can lead to significant performance gains. Using complex indexing data structures to reduce the total amount of computation also has benefits, but the resulting reduction in time is much smaller than the gains of concatenated parallelism, which come from a substantial reduction in memory traffic. We believe that the concatenated parallelism approach presented in this paper is fairly general and can be used to develop a framework for the automatic parallelization of ensemble computing applications.

\section*{Acknowledgment}
We would like to thank Dr. Gordon Erlebacher, and the IT support staff of Florida State University for providing access to the machines with FERMI and ATI GPUs for our benchmarking.

\bibliographystyle{IEEEtran}
\bibliography{ref}

\end{document}


