% THIS IS SIGPROC-SP.TEX - VERSION 3.1
% WORKS WITH V3.2SP OF ACM_PROC_ARTICLE-SP.CLS
% APRIL 2009
%
% It is an example file showing how to use the 'acm_proc_article-sp.cls' V3.2SP
% LaTeX2e document class file for Conference Proceedings submissions.
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V3.2SP) *DOES NOT* produce:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) Page numbering
% ---------------------------------------------------------------------------------------------------------------
% It is an example which *does* use the .bib file (from which the .bbl file
% is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission,
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% Questions regarding SIGS should be sent to
% Adrienne Griscti ---> griscti@acm.org
%
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
% Gerald Murray ---> murray@hq.acm.org
%
% For tracking purposes - this is V3.1SP - APRIL 2009

\documentclass{acm_proc_article-sp}

%\usepackage{listings}
\usepackage{color}

\usepackage{fancyvrb}

\input setting
\input comm

\begin{document}

\title{Software Model Checking for GPGPU Programs: \\ Towards a Verification Tool}

\numberofauthors{2} 

\author{
\alignauthor
Unmesh Bordoloi\\
       \affaddr{Link\"{o}ping University}\\
       \affaddr{SE-581 83, Sweden}\\
       \email{unmesh.bordoloi@liu.se}
\alignauthor
Ahmed Rezine\\
       \affaddr{Link\"{o}ping University}\\
       \affaddr{SE-581 83, Sweden}\\
       \email{ahmed.rezine@liu.se}
}

\maketitle

\begin{abstract}
The tremendous computing power GPUs are capable of
has made them the focus of unprecedented attention
for applications other than
graphics and gaming.
%
However, beyond the highly parallel nature of
the programs to be run on GPUs, the sought-after
gain in computing power is only achieved
with low-level tuning at the thread level, and writing
such programs is therefore very error prone.
%
In fact, the level of intricacy involved in writing
such programs is already a problem and will
become a major bottleneck to the spread of the technology.

Only a few very recent works have started looking into
using formal methods to help GPU programmers avoid
errors such as data races, incorrect synchronizations, or assertion
violations.
%
These efforts are in their infancy and directly import
techniques adapted to other (sequential) systems,
with simple under-approximations for concurrency \cite{cuda:smt}.
%
Besides that, the only other help we are aware of \cite{dynamic:cuda}
takes a concrete input and explores a tiny portion of the possible
thread schedulings looking for such errors.
%
This easily misses common errors and makes GPU programming a daunting task.
%
There is therefore still a lot of work to do in order to come up with
helpful and scalable tools for today's and tomorrow's GPGPU software.


We state in this paper our intention to build
in Link\"{o}ping a flagship verification
tool that will take CUDA code and track and report, with
minimal assistance from
the programmer, errors such as data races, incorrect synchronizations,
or assertion violations.
%
In order to achieve this goal, ambitious and vital for the widespread
adoption of GPU programming,
we build on our experience using and implementing CUDA and GPU code and on our latest
work on the verification of multicore and concurrent programs.
%
In fact, GPU programs such as those written in CUDA are well suited to verification,
as they typically neither manipulate pointer arithmetic nor allow recursion.
%
This restricts the focus to concurrency and array manipulation, combined
with intra- and inter-procedural analysis.
%
To give a flavor of where we start from, we report on our experiments
in automatically verifying two synchronization algorithms that appeared
in a recent paper \cite{gpu:barriers}
proposing efficient barriers for inter-block synchronization.
%
Unlike any other verification approach for GPU programs,
ours can show that the algorithms neither deadlock nor violate the
barrier condition regardless of the number of threads.
We also capture bugs in case basic
relations between the number of blocks and
the number of threads per block are violated.
%
\end{abstract}

\category{D.2.4}{Software/Program Verification}{Model Checking, Formal Methods}

\terms{Assertion Checkers, Verification}

\keywords{GPU, Software Model Checking, CUDA, Formal Verification} 


\section{CUDA Programming Model}

%Usage of GPU for general purpose computing is explodind.

%They need expert low level tuning that is easy to get wrong.

%for example data races and wrongly placed barriers.

%few tools available to help programmer

%there is a need for tool for such systems where no recursion or 
%pointer arithmetic, but a lot of concurrency and shared data.

%we propose to build on our experience in verifying 
%parametrized concurrent systems in order to build a tool.

GPUs are used to accelerate parallel phases in modern programs.
Typically, these phases involve data-intensive operations.
%
However, GPUs are also increasingly used for more general computing,
for instance by exploring
parallelization possibilities in dynamic programming.
%
As an example of a GPU programming model,
CUDA extends ANSI C and uses kernel functions
to specify the code run by all threads in a parallel phase. This is an
instance of the Single Program Multiple Data (SPMD)
programming style.
%
When a kernel function is launched,
threads are executed in a grid partitioned into a
number of blocks. More precisely,
executing a kernel function results in a one-, two- or three-dimensional
grid consisting of a number of blocks, each
having the same number of threads.
Each thread can obtain the identifier of its block using
CUDA-supplied variables (\texttt{blockIdx.x}, etc.).
%
Threads in the same block share a low-latency
memory. Threads belonging to different blocks
need to use a much slower memory or to go through the host.
%
In addition to block identifiers, each thread also has thread identifiers
it can access using other CUDA variables (\texttt{threadIdx.x}, etc.).
%
Since the kernel function can refer to these indices, each thread
may execute it differently.
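
To make this concrete, the following minimal kernel (our own
illustration, not taken from the paper or from \cite{gpu:barriers})
shows how a thread combines its block and thread identifiers into a
global index in the SPMD style:

\begin{Verbatim}[fontsize=\small,frame=single]
// Illustrative CUDA kernel: each thread derives a global
// index from its block and thread identifiers and
// processes one array element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)        // guard: the grid may overshoot n
        data[i] *= factor;
}

// Host-side launch: a one-dimensional grid whose blocks
// all have the same number of threads (here 256).
// scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
\end{Verbatim}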


\section{Synchronization examples}
In the following, we describe two
synchronization barriers from \cite{gpu:barriers}.
%
The two solutions propose inter-block barrier implementations.
%
Indeed, CUDA only supplies intra-block synchronization barriers through
the ``\_\_syncthreads()'' function.
%
The authors in \cite{gpu:barriers} propose
a centralized mutex-based solution together with
a more efficient ``lock-free'' solution.
%
In fact, they also propose a tree-based solution that can be
regarded as a direct
extension of the simple solution. For lack of space,
we concentrate here on the first two solutions.

\begin{figure}
\begin{minipage}[h]{0.5 \textwidth}
  \VerbatimInput[fontsize=\small,frame=single,numbers=left,numbersep=-8pt,commandchars=\\\{\}]{cs.c}  
\end{minipage}
\caption{Code snapshot of a simple barrier \cite{gpu:barriers}.}
\label{code:cs}
\end{figure}


\paragraph{Centralized simple synchronization}
A snapshot of the code for this solution is presented
in Figure~\ref{code:cs}.
%
When calling the ``\_\_gpu\_sync'' function implementing
the inter-block barrier, the number of blocks is passed as the value
``goalVal''.
%
The idea is to use a global variable shared by all blocks, here ``g\_mutex''.
%
The solution assumes a ``leading thread'' in each block.
%
After a block completes its computation in the current epoch, its leading
thread atomically increments the shared variable ``g\_mutex'' (line 12).
The leading thread then busy-waits (line 14) until the
variable evaluates to ``goalVal'', at which point the leading thread can
synchronize with the other threads in its block, hence proceeding to
the next epoch.
%
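For readers without access to the accompanying file, the protocol
described above can be sketched as follows (our own reconstruction
following the description; the exact code from \cite{gpu:barriers},
shown in Figure~\ref{code:cs}, may differ):

\begin{Verbatim}[fontsize=\small,frame=single]
// Sketch of the simple centralized barrier
// (hedged reconstruction, not the original code).
__device__ volatile int g_mutex = 0;

__device__ void __gpu_sync(int goalVal)
{
    // only the leading thread of each block takes part
    if (threadIdx.x == 0) {
        // count this block in (cf. line 12)
        atomicAdd((int *)&g_mutex, 1);
        // busy-wait for all blocks (cf. line 14)
        while (g_mutex != goalVal) { }
    }
    // release the remaining threads of the block
    __syncthreads();
}
\end{Verbatim}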


\begin{figure}
\begin{minipage}[h]{0.5 \textwidth}
  \VerbatimInput[fontsize=\small,frame=single,numbers=left,numbersep=-8pt,commandchars=\\\{\}]{lfs.c}  
\end{minipage}
\caption{Code snapshot of a lock-free barrier \cite{gpu:barriers}.}
\label{code:lfs}
\end{figure}

\input lfspicture

\paragraph{Lock-free synchronization}
A snapshot of this second solution is shown in
Figure~\ref{code:lfs}.
%
Instead of having all blocks access the same
shared variable, this solution shares
two arrays (namely ``Ain'' and ``Aout'')
whose respective sizes equal the number of blocks.
%
The idea is that each block that has completed its computation in
the current epoch has its leading thread assign
``goalVal'' (passed as a parameter together with the arrays)
to the slot of the input array ``Ain'' corresponding
to its block.
%
After that, the leading thread
waits for the slot corresponding to its block in ``Aout'' to
become ``goalVal''.
%
After all blocks have assigned ``goalVal'' to their respective slots in
``Ain'', the threads of a chosen block (here block 1)
synchronize at line 25.
%
The threads belonging to the chosen block monitor,
for each block id, ``Ain[id]'' and assign its value to ``Aout[id]'' once
it evaluates to ``goalVal''.
%
As a result, all leading threads can proceed and synchronize with
the threads in their own block (line 38), hence moving to the
next epoch.
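
The described protocol can be sketched as follows (again our own
reconstruction based on the description; the exact code from
\cite{gpu:barriers}, shown in Figure~\ref{code:lfs}, may differ):

\begin{Verbatim}[fontsize=\small,frame=single]
// Sketch of the lock-free barrier
// (hedged reconstruction, not the original code).
__device__ void __gpu_sync(int goalVal,
                           volatile int *Ain,
                           volatile int *Aout)
{
    int bid = blockIdx.x;
    // leading thread announces its block reached the barrier
    if (threadIdx.x == 0)
        Ain[bid] = goalVal;

    if (bid == 1) { // the chosen block monitors all slots
        // thread tid watches the slot of block tid
        if (threadIdx.x < gridDim.x) {
            while (Ain[threadIdx.x] != goalVal) { }
            Aout[threadIdx.x] = goalVal;
        }
        __syncthreads(); // cf. line 25
    }
    // each leading thread waits for its block's release
    if (threadIdx.x == 0)
        while (Aout[bid] != goalVal) { }
    __syncthreads(); // cf. line 38: next epoch
}
\end{Verbatim}

Note that the monitoring block needs one thread per block id, which is
precisely why the barrier requires
``nThreadsPerBlock $\geq$ nBlocksNum''.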

\input cspicture


\section{Parameterized Verification}

Given models of the programs, we
perform automatic parameterized verification, i.e., with minimal
human interaction we verify programs
regardless of the number of concurrent threads and blocks in the system.
%
The problem can be shown to be undecidable in general, and
combinations of abstractions
and efficient symbolic representations play an important role.
%
For this reason, we build on our previous work on
{\it monotonic abstraction}
\cite{rmc:transducers} and its
automatic refinement \cite{monotonic:cegar}.
%
Monotonic abstraction is based on the concept of {\em monotonic systems w.r.t.
a well-quasi ordering} $\preceq$ defined on the set of configurations
\cite{Parosh:Bengt:Karlis:Tsay:general,Finkel:Schnoebelen:everywhere:TCS}.
%
Since the abstract transition relation is an over-approximation
of the original one, proving a safety property in the
abstract system implies that the property is also satisfied in the original
system.
%
However, this also implies that {\it false positives} may be
generated in the abstract
model; we handle them by combining forward/backward analysis
with widening or interpolation techniques. This is schematically
described in Figure~\ref{fig:cegar}.
%
More details are available in \cite{monotonic:cegar}.

\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{cegar}
\end{center}
\caption{Counterexample Guided Abstraction Refinement for Concurrent Parameterized Systems \cite{monotonic:cegar}}
\label{fig:cegar}
\end{figure}




\section{Experiments and Future Work}
At this stage, we manually build the models
in Figures \ref{model:cs} and \ref{model:lfs} to describe the
behaviors of the programs in Figures \ref{code:cs} and \ref{code:lfs}.
%
Of course, building such models is both time consuming
and error prone.
%
We are working on automatically extracting such models
from CUDA source code without the need to manually supply them.
%
This model extraction step is also combined
with techniques like slicing or predicate abstraction to
boost the applicability and the scalability of the approach.
%
Assuming for now that the models are given, our verification
technique, applied to the first simple synchronization
algorithm, captures the following:
if ``goalVal = nBlocksNum'' then the algorithm of Figure \ref{code:cs}
respects the barrier property and does not deadlock. This is not the case
if ``goalVal$<$nBlocksNum'' (barrier property violated)
or ``goalVal$>$nBlocksNum'' (deadlock).
%
For the algorithm of Figure \ref{code:lfs}, our prototype
automatically captures that if
``nThreadsPerBlock $\geq$ nBlocksNum'', then the algorithm respects the
barrier property, and that it does not otherwise. The results are
summarized in Table~\ref{tab:res}.

%
These results are relevant as
a typical number of threads per block on the
latest generation of GPUs is 32. This is the same number
as the size of the ``warp'' (often
selected by the designers to hide latencies). A warp can
be seen as the smallest unit of threads scheduled together by a GPU
multiprocessor.
Hence, choosing this number as the thread block size has
clear advantages. In future GPUs, we can expect this number to remain
more or less the same.
%
On the other hand, the number of multiprocessors on GPUs can be
expected to increase rapidly, as has been the trend. For example,
the nVIDIA Tesla M2050
already has 14 multiprocessors, and in the future we can expect this number
to easily exceed the magic number 32. Given this, and given the
synchronization approach of the previous algorithms, we can easily
end up with more thread blocks than
threads per block. These algorithms would therefore not respect the
barrier property if ported directly to such platforms.
%
This simple example shows
that capturing such errors, and proving their absence
while allowing for benign race conditions and other performance tricks, is very
important as it helps in programming GPU platforms.

%
%
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c| }
\hline
\textbf{Model}   & \textbf{pass} & \textbf{seconds} \\
\hline
\hline
CS\cite{gpu:barriers}: {\scriptsize goalVal $=$ nBlocksNum} & $\surd$       &    0.05    \\
\hline
\hline
CS\cite{gpu:barriers}: {\scriptsize goalVal $<$ nBlocksNum}         & $\times$      &  $<$0.01    \\
\hline
\hline
CS\cite{gpu:barriers}: {\scriptsize goalVal $>$ nBlocksNum}         & $\times$      &  0.05    \\
\hline
\hline
LFS\cite{gpu:barriers}: {\scriptsize nThreadsPerBlock $\geq$ nBlocksNum}  & $\surd$       &  2.7     \\
\hline
\hline
LFS\cite{gpu:barriers}:  {\scriptsize nThreadsPerBlock $<$ nBlocksNum}     & $\times$       &    $<$0.01   \\
\hline
\end{tabular}
\end{center}
%\mbox{~}\\
\caption{We use our prototype for automatic parameterized
verification \cite{monotonic:cegar}. $\surd$ stands for verified, and
$\times$ for supplying a concrete counterexample.}
\label{tab:res}
\end{table}


%
\bibliographystyle{abbrv}
%
\bibliography{biblong}

\end{document}

