% !TEX TS-program = pdflatex
% !TEX encoding = UTF-8 Unicode

% This is a simple template for a LaTeX document using the "article" class.
% See "book", "report", "letter" for other types of document.

\documentclass[11pt]{article} % use larger type; default would be 10pt

\usepackage[utf8]{inputenc} % set input encoding (not needed with XeLaTeX)

%%% Examples of Article customizations
% These packages are optional, depending whether you want the features they provide.
% See the LaTeX Companion or other references for full information.

%%% PAGE DIMENSIONS
\usepackage{geometry} % to change the page dimensions
\geometry{a4paper} % or letterpaper (US) or a5paper or....
% \geometry{margin=2in} % for example, change the margins to 2 inches all round
% \geometry{landscape} % set up the page for landscape
%   read geometry.pdf for detailed page layout information

\usepackage{graphicx} % support the \includegraphics command and options

% \usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent

%%% PACKAGES
\usepackage{booktabs} % for much better looking tables
\usepackage{array} % for better arrays (eg matrices) in maths
\usepackage{paralist} % very flexible & customisable lists (eg. enumerate/itemize, etc.)
\usepackage{verbatim} % adds environment for commenting out blocks of text & for better verbatim
\usepackage{subfig} % make it possible to include more than one captioned figure/table in a single float
% These packages are all incorporated in the memoir class to one degree or another...

\usepackage{url}
\usepackage[lined,algonl,boxed]{algorithm2e}
\usepackage{cite}


%%% HEADERS & FOOTERS
\usepackage{fancyhdr} % This should be set AFTER setting up the page geometry
\pagestyle{fancy} % options: empty , plain , fancy
\renewcommand{\headrulewidth}{0pt} % customise the layout...
\lhead{}\chead{}\rhead{}
\lfoot{}\cfoot{\thepage}\rfoot{}

%%% SECTION TITLE APPEARANCE
\usepackage{sectsty}
\allsectionsfont{\sffamily\mdseries\upshape} % (See the fntguide.pdf for font help)
% (This matches ConTeXt defaults)

%%% ToC (table of contents) APPEARANCE
\usepackage[nottoc,notlof,notlot]{tocbibind} % Put the bibliography in the ToC
\usepackage[titles,subfigure]{tocloft} % Alter the style of the Table of Contents
\renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape}
\renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} % No bold!

%%% END Article customizations

%%% The "real" document content comes below...

\title{GPU and Social Network Algorithms}
\author{Z. Mahomed  \hspace{20 mm} Dr. J. Burns}
\date{         2013} % Activate to display a given date or no date (if empty),
         % otherwise the current date is printed 


\begin{document}
\maketitle

\section{Introduction}

Modern Graphics Processing Units (GPUs) have advanced in both efficiency and cost over the last decade. A GPU is a highly parallel computing device designed for rendering graphics; however, the tools now available and the architecture of these GPUs mean that they can be used to execute large data-parallel arithmetic problems. This makes the GPU an ideal tool for social network algorithms, where the graphs and computations have a large memory footprint with many non-contiguous accesses to a shared memory.

\subsection{Parallelisation of Betweenness Centrality }


Traditionally, betweenness centrality is computed by first calculating the number and length of the shortest paths between all pairs of vertices $s$ and $t$, and then performing a pairwise dependency accumulation for each vertex $v$ to obtain the betweenness centrality value. This results in $O(n^3)$ time complexity and a space requirement of $O(n^2)$.

\begin{equation}  \label{eq:bc}
 C_B(v) = \sum_{s \ne v \ne t \in V} \frac{\sigma_{st}(v)}{\sigma_{st}}
\end{equation}
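The definition above can be evaluated directly by enumerating all shortest paths between every ordered pair of vertices. The following Python sketch (illustrative only; the function name is our own, and path enumeration is exponential in the worst case) makes the formula concrete:

```python
from collections import deque
from itertools import permutations

def betweenness_bruteforce(adj):
    """Betweenness centrality straight from the definition: for every
    ordered pair (s, t), count the fraction of shortest s-t paths
    that pass through each intermediate vertex v."""
    nodes = list(adj)
    bc = {v: 0.0 for v in nodes}

    def shortest_paths(s, t):
        # BFS that enumerates all shortest paths from s, then reads off t.
        dist = {s: 0}
        paths = {s: [[s]]}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    paths[w] = [p + [w] for p in paths[u]]
                    q.append(w)
                elif dist[w] == dist[u] + 1:
                    paths[w] += [p + [w] for p in paths[u]]
        return paths.get(t, [])

    for s, t in permutations(nodes, 2):
        sp = shortest_paths(s, t)          # all shortest s-t paths
        if not sp:
            continue
        for v in nodes:
            if v in (s, t):
                continue
            through = sum(1 for p in sp if v in p)
            bc[v] += through / len(sp)     # sigma_st(v) / sigma_st
    return bc
```

Note that this counts ordered $(s,t)$ pairs, matching the sum in the equation; for undirected graphs the values are often halved by convention.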

Brandes presented a faster algorithm that relies on the sparse nature of the graph: it computes the betweenness centrality of all vertices in $O(mn)$ time for unweighted graphs and requires $O(m+n)$ space. The algorithm is defined as follows,

\begin{equation}  \label{eq:dep}
 \delta_s(v) = \sum_{w :\, d(s,w) = d(s,v) + 1} \frac{\sigma_{sv}}{\sigma_{sw}} \left(1 + \delta_s(w)\right)
\end{equation}

Brandes showed that $\delta_{s}(v)$ satisfies this recursive relation, where $d(s,v)$ denotes the length of a shortest path from the source vertex $s$ to $v$. In stage one, the algorithm performs a breadth-first traversal starting from source $s$ to compute the length and number of shortest paths to every vertex. In stage two, it revisits the vertices starting from the one farthest from $s$ and accumulates the dependencies. The betweenness centrality is then $BC(v) = \sum_{s \ne v \in V} \delta_{s}(v)$.

Most parallel implementations of betweenness centrality are based on Brandes' algorithm, which works as follows. Initially, $|V|$ shortest-path computations are performed, one for each source $s \in V$. For unweighted graphs, these shortest-path computations correspond to breadth-first search (BFS) explorations. The predecessor sets $pred(s,v)$ and the path counts $\sigma_{sv}$ are computed during these explorations. Next, for every possible source $s$, using the shortest-path tree and the DAG induced on the graph by the predecessor sets, the dependencies $\delta_s(v)$ are computed for all other $v \in V$. Finally, the centrality value of a node $v$ is obtained as the sum of all its dependency values.
This algorithm exposes parallelism at multiple levels. First, the shortest-path exploration from each source node can be performed in parallel. Additionally, each individual shortest-path computation can itself be parallelised. Our implementation exploits the former approach: different threads execute single shortest-path computations independently, and that information is used to compute the contribution of each individual source to the betweenness of every node in the graph. The process is described in the pseudocode below.

\begin{algorithm}
\KwIn{Graph $G = (V,E)$}
\ForEach{node $s \in V$}{
  perform a forward BFS from $s$: compute the BFS DAG and, for all nodes $v$, the path counts $\sigma_{sv}$\;
  \ForEach{node $v$ in the BFS DAG, in reverse BFS order}{
    compute $\delta_s(v)$\;
    $BC(v) \mathrel{+}= \delta_s(v)$\;
  }
}
\caption{Parallel betweenness centrality: one BFS per source.}
\end{algorithm}
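The two stages above can be sketched in Python as follows (a minimal serial version; in the parallel implementation each iteration of the outer loop over sources would run on its own thread):

```python
from collections import deque

def brandes(adj):
    """Brandes' betweenness centrality for an unweighted graph.
    adj maps each node to a list of its neighbours."""
    bc = {v: 0.0 for v in adj}
    for s in adj:                          # one BFS per source (parallelisable)
        # --- stage 1: forward BFS computing sigma and predecessors ---
        sigma = {v: 0 for v in adj}        # number of shortest s-v paths
        dist = {v: -1 for v in adj}
        pred = {v: [] for v in adj}        # predecessor sets (the BFS DAG)
        sigma[s], dist[s] = 1, 0
        order = []                         # vertices in non-decreasing distance
        q = deque([s])
        while q:
            u = q.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:            # first time w is seen
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1: # u lies on a shortest path to w
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        # --- stage 2: backward dependency accumulation ---
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):          # farthest vertices first
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

The reverse traversal of `order` is exactly the second stage described above: dependencies flow from the farthest vertices back towards the source.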

\subsection{Breadth-First Search (BFS)}

As a core tool for graph traversal, parallel BFS algorithms on the GPU are representative of data-dependent parallel computation with irregular memory access. The work complexity is $O(V^2 + E)$, since the worst case requires $V$ iterations, where $V$ is the number of vertices. The algorithm processes all the vertices at a particular level in parallel: all threads work on a single level and wait until every thread at that level has finished before moving on to the next level. This hurts performance on low-degree graphs, as it limits the available parallelism.
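This level-synchronous scheme can be sketched as follows (a serial Python illustration, where the loop over the frontier stands in for the per-thread work on a GPU and the frontier swap stands in for the barrier between levels):

```python
def bfs_level_synchronous(adj, source):
    """Level-synchronous BFS: the whole current frontier is expanded
    before the next level begins, mirroring how GPU threads
    synchronise at each level."""
    dist = {v: -1 for v in adj}    # -1 marks unvisited vertices
    dist[source] = 0
    frontier = [source]
    level = 0
    while frontier:
        next_frontier = []
        for u in frontier:         # on a GPU: one thread per frontier vertex
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = level + 1
                    next_frontier.append(w)
        frontier = next_frontier   # implicit barrier between levels
        level += 1
    return dist
```

On a low-degree graph the frontier stays small, so most GPU threads at each level have little work, which is exactly the loss of parallelism noted above.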
 

\bibliographystyle{plain}
\bibliography{Reading}




\end{document}
