%CHAPTER 3
\chapter{Simulations with the CUDA programming model}
\label{chap:GPUProgrammingModel}
This chapter describes the CUDA programming model and its application to our simulations. One goal is to show how well the GPU architecture maps onto the cell network structure. This mapping makes it possible to address the problems of both large cell networks and complicated behaviours. Finally, performance tests are conducted in different scenarios to assess the effectiveness of this approach.
\section{GPU and the CUDA programming model}
\subsection{Introduction to GPU}
The Graphics Processing Unit (GPU)~\cite{GPU} is a massively multithreaded, many-core chip composed of hundreds of cores running thousands of threads. This provides the capacity to process large amounts of data in parallel, which is why GPUs are widely used for parallel computations.\\
A simplified motherboard architecture is depicted in Figure~\ref{img:gpuarchitecture}. It has two parts: the left part for the CPU (host) and the right one for the GPU (device), connected by a PCI bus. On the CPU side, only the host memory is considered in this model. The GPU chip contains a set of streaming multiprocessors (SMs), each consisting of several scalar processors (SPs), a set of registers, and a shared memory. The on-chip shared memory is visible to all threads executing on the same SM, while the global memory is shared among all SMs.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=7cm]{img/gpuarchitecture.png}
		 \caption{A simplified motherboard architecture.}
		 \label{img:gpuarchitecture}
	\end{center}
\end{figure}
\subsection{CUDA programming model}
CUDA (Compute Unified Device Architecture), created by NVIDIA, provides a platform for parallel computing and a programming model. It makes it possible to increase computing performance by harnessing the power of the GPU. CUDA provides a set of extensions to the C/C++ language for expressing parallel programs.\\
The GPU runs thousands of threads handling multiple tasks in parallel, while a CPU runs a few threads optimized for sequential processing. A CUDA program therefore typically consists of CPU code (host code) and one or more kernels (device code) running on the GPU. As shown in Figure~\ref{img:AnatomyCUDA}, the compute-intensive portions of the application are offloaded to the GPU, while the remainder of the code still runs on the CPU.\\
A kernel is executed by many threads, each with private local variables and access to a shared memory. Blocks execute independently of one another, while the threads within a block can cooperate and synchronize through the shared memory.\\
In addition, the CPU and the GPU each have their own separate memory, and neither can directly access the memory of the other. Data must therefore be explicitly transferred between the two memories over the PCI bus.\\
\begin{figure}[H]
	\begin{center}
		 \includegraphics[width=12cm]{img/AnatomyCUDA.png}
		 \caption{Anatomy of a CUDA program.}
		 \label{img:AnatomyCUDA}
	\end{center}
\end{figure}
\section{Accelerating simulations by using CUDA}
Programming with CUDA means programming a large number of threads, each with access to a shared memory, all executing the same task concurrently. Therefore, when a large number of identical, repeated computations must be performed, this model is a convenient fit.\\
In our case, each model consists of a cell network, input data for each cell, and a common transition rule for all cells. In other words, each cell has its own local data but a globally shared behaviour. At each step, every cell must apply the same computation to its own data in order to produce the new state of the system. It is thus natural to map each cell to one thread responsible for processing that cell, as illustrated in Figure~\ref{img:MappingCellNetwork_GPU}.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[width=11cm]{img/MappingCellNetwork_GPU.png}
		 \caption{The mapping between the cell network structure and the GPU architecture.}
		 \label{img:MappingCellNetwork_GPU}
	\end{center}
\end{figure}
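The mapping can be made concrete with a small sketch. The following plain C code (not actual device code) emulates how CUDA derives one global index per thread from its block and thread indices; the variable names \textit{blockIdx} and \textit{threadIdx} mimic the CUDA built-ins. The grid size is rounded up so that the last, partially filled block is still launched, and the guard against \textit{nCells} makes the surplus threads of that block do nothing.
\begin{lstlisting}[caption={Sketch of the cell-to-thread mapping}, label=code:mappingsketch]
#include <string.h>

/* Emulates the CUDA index computation
 * cell = blockIdx.x * blockDim.x + threadIdx.x
 * and records how often each cell is processed. */
void map_cells_to_threads(int nCells, int threadsPerBlock, int *visits)
{
	/* Round up so that the last partial block is still launched. */
	int nBlocks = (nCells + threadsPerBlock - 1) / threadsPerBlock;
	memset(visits, 0, nCells * sizeof(int));
	for (int blockIdx = 0; blockIdx < nBlocks; blockIdx++)
		for (int threadIdx = 0; threadIdx < threadsPerBlock; threadIdx++)
		{
			int cell = blockIdx * threadsPerBlock + threadIdx;
			if (cell < nCells)	/* surplus threads do nothing */
				visits[cell]++;
		}
}
\end{lstlisting}
Running it with, for instance, 1,220 cells and 1,024 threads per block shows that every cell is visited exactly once.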
\begin{comment}
Therefore, in the entire computing system, the total number of threads is bound to the number of network cells. However, the capacity of hardward and especially memory are finite. Another solution can be thus proposed. They are sequentially send to GPU at every step. 
\end{comment}
According to this model, data must be placed in the global memory so that they can be shared between threads. Figure~\ref{img:computation} shows the data flow of the physical simulations in terms of the CUDA programming model. It can be summarized in the following main steps:
\begin{itemize}
	\item{Initializing the states (input data) of all network cells.}
	\item{Transferring the data (cells' states and the network structure) to the GPU for computation. At each cycle, the new states of all cells are computed concurrently on the GPU and updated with the new values to prepare for the next cycle.}
	\item{Sending the data back to the CPU memory, possibly to display and analyze the results. This step is optional: if the result of each step is not displayed or analyzed at run-time, these transfers can be omitted.}
\end{itemize}
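These steps can be summarized in a minimal sequential sketch. The code below is a host-only C version in which the GPU transfers are reduced to comments and \textbf{NodeState} is reduced to a single density field; the transition function \textit{halveDensity} is a hypothetical stand-in used only for illustration. Two buffers are swapped at the end of each cycle so that every cell reads a consistent snapshot of the previous states.
\begin{lstlisting}[caption={Sequential sketch of the simulation loop}, label=code:loopsketch]
typedef struct { float density; } NodeState;

/* Illustrative transition: each cell simply halves its density. */
NodeState halveDensity(const NodeState *now, int i, int nCells)
{
	NodeState s = { now[i].density / 2.0f };
	(void) nCells;
	return s;
}

/* Runs nCycles of the simulation and returns the buffer that
 * holds the final states. */
NodeState *simulate(NodeState *now, NodeState *next, int nCells, int nCycles,
		NodeState (*transition)(const NodeState *, int, int))
{
	/* Step 2: on the GPU, "now" would first be copied to device memory. */
	for (int cycle = 0; cycle < nCycles; cycle++)
	{
		for (int i = 0; i < nCells; i++)	/* concurrent on the GPU */
			next[i] = transition(now, i, nCells);
		/* Updating the states for the next cycle by swapping buffers. */
		NodeState *tmp = now; now = next; next = tmp;
	}
	/* Step 3 (optional): copy the final states back to the host. */
	return now;
}
\end{lstlisting}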
\begin{figure}[H]
	\begin{center}
		 \includegraphics[width=8cm]{img/computation.png}
		 \caption{Data flow in the system.}
		 \label{img:computation}
	\end{center}
\end{figure}
Obviously, if the display and analysis phase is ignored, the simulation runs almost entirely on the device. The performance benefit is therefore expected to grow with the size of the cell network, which makes the approach particularly worthwhile for simulating phenomena that typically involve large networks and very complicated transition functions.\\
Moreover, this approach opens the door to real-time computation and statistics. This becomes increasingly important as the need grows to predict emergent phenomena such as clouds of insects, flooding, traffic congestion, tsunamis, and fires. In those situations, the systems can directly acquire data from the natural environment via observation systems, and the simulations use these input data to derive useful information (for example, the direction of a cloud of insects, or the flood level at a certain time in the future).
\section{Details of GPU implementation of simulations}
In this section, the details of the GPU implementations of three main simulations are presented: \textbf{pollution diffusion}, \textbf{forest fire} and \textbf{wireless sensor network}. All of them are developed in the C programming language following the CUDA model, and they result from the analysis in the previous section. The implementations can be formally described as follows.
\begin{framed}
	\textit{Host program implemented on the CPU}
		\begin{itemize}
			\item[] (1) Initializing the initial values for all cells.
			\item[] (2) Copying the cell network structure and data from the CPU host memory to the GPU device memory and launching the kernel.
		\end{itemize}	
		\begin{framed}
			\textit{Kernel program implemented on the GPU}
				\begin{itemize}
					\item[] (3) Looping over the cycles.
					\item[] (4) Computing the new states for each cell.
					\item[] (5) Updating new states to each cell.
				\end{itemize}		
		\end{framed}
		\begin{itemize}
			\item[] (6) Reading the results back to the CPU and outputting them (once per time step or more).
		\end{itemize}
\end{framed}
Clearly, the execution runs mostly on the GPU (steps (3) to (5)). The other steps have little effect on overall performance if step (6) is not taken into account: step (1) is executed only once and step (2) is run twice. For comparison purposes, the CPU-side execution time can therefore be neglected.\\
In the next section, some initial measurements will be performed for evaluating the effectiveness of using the massively parallel architecture GPU to accelerate the computation of phenomena simulations.
\section{Performance measurement principles}
In order to validate the performance of the proposed methodology, a few measurement tests were performed, using the simulation of pollution diffusion in a river as a case study. The description of the pollution diffusion model follows Section~\ref{section:PollutionDiffusion}, and the implementation of the transition function is presented in Listing~\ref{code:transition}. Two data structures are used: the \textbf{NodeState} structure contains the states of the cells, and the \textbf{Canaux} structure contains the links to their neighbours.
\begin{lstlisting}[caption=Transition function, label=code:transition]
__device__ NodeState computeState(NodeState *nowState, int nodeIndex, 
					Canaux *channels)
{
	NodeState myState;
	int nbIn, nodeIn;
	float receive;
	
	/// Getting pollution density of the cell
	myState = nowState[nodeIndex];
	/// Getting number of neighbours of the cell
	nbIn = channels[nodeIndex].nbIn;		
	receive = 0;	
	for (int i = 0; i < nbIn; i++)
	{
		/// Getting the id of the neighbour
		nodeIn = channels[nodeIndex].read[i].node;
		receive = receive +
			((nowState[nodeIn].density / 2.0) / (float) channels[nodeIn].nbIn);
	}
	/// Computing the new state
	myState.density = (myState.density / 2.0) + receive;
	return myState;
}
\end{lstlisting}
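As a sanity check for the kernel above, the same rule can be written as a sequential C reference and exercised on a small network. The structures below are simplified stand-ins for \textbf{NodeState} and \textbf{Canaux}, keeping only the fields the transition uses, so the field name \textit{node} is illustrative. On a symmetric network, where every link is bidirectional, this rule conserves the total density: each cell keeps half of its own density and the other half is distributed evenly among the cells that read from it.
\begin{lstlisting}[caption={Sequential reference of the transition function}, label=code:reftransition]
#define MAX_NB 8

typedef struct { float density; } RefNodeState;
typedef struct { int nbIn; int node[MAX_NB]; } RefCanaux; /* incoming links */

/* Sequential reference of the device transition: keep half of the own
 * density and collect half of each neighbour's density, split evenly
 * over that neighbour's in-degree. */
RefNodeState refComputeState(const RefNodeState *nowState, int nodeIndex,
		const RefCanaux *channels)
{
	RefNodeState myState = nowState[nodeIndex];
	float receive = 0.0f;
	for (int i = 0; i < channels[nodeIndex].nbIn; i++)
	{
		int nodeIn = channels[nodeIndex].node[i];
		receive += (nowState[nodeIn].density / 2.0f)
				/ (float) channels[nodeIn].nbIn;
	}
	myState.density = myState.density / 2.0f + receive;
	return myState;
}
\end{lstlisting}
On a ring of four cells with densities 1 to 4, for instance, one step redistributes the pollution while the total density remains 10.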
We have tested and evaluated the computational efficiency in several studies. The focus of these tests is to show how much the GPU speeds up the simulations compared with the CPU. The time for transferring data between the CPU and the GPU is therefore omitted in most cases. The execution time of the simulation on the host is also ignored, since most of the computation is moved to the device.\\
As mentioned earlier, the cost of executing a simulation depends on two main components: the cell network (its size and the CA neighbourhood pattern chosen) and the complexity of the transition rules. Several aspects related to these components are therefore examined below.\\
All tests were run on a PC with the hardware configuration shown in Table~\ref{table:CPUInfo}. Information about the graphics device is given in Table~\ref{table:TechnicalData} (for more details, see \cite{GeForceGTX680}). We used the profiling tool \textit{nvprof}~\cite{nvprof} to measure the GPU computation time, and the standard library \textit{time.h} for the CPU.\\
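On the CPU side, the measurement with \textit{time.h} can be sketched as follows; \textit{busyWork} is a hypothetical placeholder for one simulation run. Note that \textit{clock()} reports processor time, not wall-clock time.
\begin{lstlisting}[caption={CPU timing with time.h}, label=code:cputiming]
#include <time.h>

/* A hypothetical workload standing in for one simulation run. */
void busyWork(void)
{
	volatile double x = 0.0;
	for (int i = 0; i < 1000000; i++)
		x += i;
}

/* Returns the CPU time, in seconds, spent in one call of "work". */
double timeOnCPU(void (*work)(void))
{
	clock_t start = clock();
	work();
	clock_t end = clock();
	return (double) (end - start) / (double) CLOCKS_PER_SEC;
}
\end{lstlisting}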
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|}
\hline  
 \multicolumn{2}{|c|}{\textbf{Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz}} \\ 
\hline 
Num. CPUs & 8 \\ 
\hline 
Num. Cores/CPU & 4 \\ 
\hline
Architecture & i686 \\ 
\hline 
RAM & 16 GB \\ 
\hline 
\end{tabular} 
\caption{Technical data of the PC used.}
\label{table:CPUInfo} 
\end{center}
\end{table}

\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|}
\hline  
\multicolumn{2}{|c|}{\textbf{GeForce GTX 680}} \\ 
\hline 
Num. cores & 1536 \\ 
\hline 
Maximum number of threads per block & 1024 \\ 
\hline 
Global memory & 4 GB \\ 
\hline 
\end{tabular} 
\caption{Technical data of the NVIDIA graphics card used.}
\label{table:TechnicalData} 
\end{center}
\end{table}
\textbf{The first scenario:} A comparison of computation time between the CPU and the GPU was carried out. All tests follow the river pollution diffusion model (Section~\ref{section:PollutionDiffusion}) with the 8-neighbour pattern and 1,000 cycles per test. The data transfer time was taken into account in this case study.\\
The computation time on both the CPU and the GPU is influenced by the size of the cell network (the number of cells), but not by the size of the cells themselves. Since the cell is the basic element of the network, the computation is independent of how many pixels a cell covers. For the same studied region, smaller cells yield a larger cell network, while bigger cells yield a smaller one. The cell size was therefore disregarded in the performance tests.\\
Table~\ref{table:ComparisonCPUGPU} shows the execution times of the pollution diffusion model on the CPU and the GPU for 1,000 cycles. The network sizes range from 1,220 to 83,661 cells. \\
Regarding the network size, the number of cells influences performance on both the CPU and the GPU. On the CPU, the upward trend is very noticeable: a steep increase starts from \textbf{10,703} cells and continues to \textbf{83,661} cells at a rate of about \textbf{0.26\,s per 1,000 cells}, and this trend can be expected to continue for bigger sizes. On the GPU, by contrast, the increase is not dramatic: it rises gradually between \textbf{1,220} and \textbf{83,661} cells at a rate of about \textbf{0.01\,s per 1,000 cells}.\\
Table~\ref{table:ComparisonCPUGPU} shows that the GPU is overwhelmingly faster than the CPU, and the gap becomes more significant as the number of cells grows, as visualized in Figure~\ref{img:ComparisonCPUGPU}. For a cell network of \textbf{83,661} cells, the GPU is approximately 22 times faster than the CPU. This shows that using the GPU is essential for very large systems.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|c|l|l|}
\hline 
 &  \multicolumn{3}{c|}{\textbf{Time (seconds)/1,000 cycles}} \\ 
\hline 
 \textbf{Num. cells} & \textbf{Cell size (Pixel)} & \textbf{CPU} & \textbf{GPU}\\ 
\hline 
1,220 & 10x10 & 0.060 & 0.040\\ 
\hline 
10,703 & 5x5 & 0.590 & 0.103\\ 
\hline 
48,425 & 2x2 & 10.880 & 0.527 \\ 
\hline 
83,661 & 2x2 & 19.910 & 0.894\\ 
\hline 
\end{tabular} 
\caption{The computation comparison between the CPU and the GPU in the case of pollution diffusion model.}
\label{table:ComparisonCPUGPU} 
\end{center}
\end{table}
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=5cm]{img/ComparisonCPUGPU.png}
		 \caption{Speed-up obtained by using the GPU for the physical simulation.}
		 \label{img:ComparisonCPUGPU}
	\end{center}
\end{figure}
Figure~\ref{img:cellriver} shows an example of a physical simulation on the GPU. The cell network of a river is generated by the PickCell tool using the 4-neighbour pattern, and the pollution diffusion model follows Section~\ref{section.SimulationSystem}. Initially, two polluted points are created at random positions in the river; these points carry an amount of pollution density as their data states. At every step, the system states change according to the transition function.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=8cm]{img/cellriver.png}
		 \caption{Illustration of a pollution diffusion simulation in a river following the model described in Section~\ref{section.SimulationSystem}, initialized with two polluted points (black points).}
		 \label{img:cellriver} 
	\end{center}
\end{figure}
\textbf{The second scenario:} Different cell network sizes are again taken into account, together with the two popular CA patterns (Von Neumann 1 and Moore 1) and different numbers of cycles. The model is the same as in the previous case. The results are presented in Table~\ref{table:Measurement1}, and one of these runs is shown in Figure~\ref{img:cellriver}.\\
The values shown in Table~\ref{table:Measurement1} indicate that the execution time grows linearly with the number of cycles: increasing the number of cycles does not change the cost per cycle. This can be explained by the transition function being very simple, so each additional cycle adds the same small amount of work. \\
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline 
 \multirow{2}{*}{\textbf{Num. cells}} & \textbf{\pbox{10cm}{Cell size\\(Pixel)}} & \multirow{2}{*}{\textbf{CA Pattern}} & \multicolumn{5}{c|}{\textbf{Time (seconds) / Num. cycles}} \\ 
\hhline{~~~-----}
   &	&  & \textbf{100} 		& \textbf{1,000} 	  & \textbf{10,000}  & \textbf{100,000} & \textbf{1,000,000} \\ 
\hline 
1,220 & 10x10 	& VN 1 & 0.002 & 0.035 & 0.356 & 3.564 & 35.643 \\ 
\hline 
1,220 & 10x10 	& Moore 1 & 0.002 & 0.035 & 0.355 & 3.569 & 35.642 \\ 
\hline 
10,703 & 5x5 & VN 1 & 0.010 & 0.103 & 1.036 & 10.369 & 103.544 \\ 
\hline 
10,703 & 5x5 & Moore 1 & 0.017 & 0.170 & 1.704 & 17.058 & 170.544 \\ 
\hline 
48,425 & 2x2 & VN 1 & 0.052 & 0.527 & 5.268 & 52.704 & 527.008 \\ 
\hline 
48,425 & 2x2 & Moore 1 & 0.087 & 0.880 & 8.801 & 88.008 & 880.386 \\ 
\hline 
83,661 & 2x2 & VN 1 & 0.145 & 0.894 & 8.948 & 89.477 & 895.427 \\ 
\hline 
83,661 & 2x2 & Moore 1 & 0.219 & 1.454 & 14.566 & 145.661 & 1,002.105 \\ 
\hline 
\end{tabular} 
\caption{Measurement results.}
\label{table:Measurement1} 
\end{center}
\end{table}
Regarding CA patterns, for small networks the differences between Von Neumann 1 and Moore 1 are not very remarkable. For larger networks, however, Von Neumann 1 is significantly faster. For example, with 10,000 cycles and a network of 83,661 cells, Moore 1 takes \textbf{14.566\,s} while Von Neumann 1 takes only \textbf{8.948\,s}; the former is about 1.6 times slower than the latter, as shown in Figure~\ref{img:ComparisonVN_Moore}.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=5cm]{img/ComparisonVN_Moore.png}
		 \caption{The growing gap between the two CA patterns at 10,000 cycles.}
		 \label{img:ComparisonVN_Moore} 
	\end{center}
\end{figure}
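The gap is consistent with the amount of work per cell: the inner loop of the transition function runs once per incoming link, and on a regular grid an interior cell has 4 links under Von Neumann 1 but 8 under Moore 1. The following helper simply enumerates the radius-1 offsets (interior cells only; borders are ignored in this sketch).
\begin{lstlisting}[caption={Neighbour counts of the two CA patterns}, label=code:neighbourcount]
/* Number of incoming links of an interior grid cell for the two
 * radius-1 CA patterns: Von Neumann 1 uses the 4 orthogonal
 * neighbours, Moore 1 additionally uses the 4 diagonal ones. */
int neighbourCount(int useMoore)
{
	int count = 0;
	for (int dy = -1; dy <= 1; dy++)
		for (int dx = -1; dx <= 1; dx++)
		{
			if (dx == 0 && dy == 0)
				continue;	/* the cell itself is not a link */
			if (!useMoore && dx != 0 && dy != 0)
				continue;	/* diagonals excluded for VN 1 */
			count++;
		}
	return count;
}
\end{lstlisting}
Doubling the number of links roughly doubles the per-cycle arithmetic, which is in line with the measured factor of about 1.6 between the two patterns; memory access patterns and thread scheduling absorb the rest of the difference.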

\textbf{The third scenario:} This scenario aims to show that the execution time also depends on the transition function. To that end, we slightly modified the previous version: at every step, each cell loses a random amount of its pollution density. The implementation is shown below.
\vspace{2cm}
\begin{lstlisting}[caption=Transition function (version 2), label=code:transition2]
__device__ NodeState computeState(NodeState *nowState, int nodeIndex, 
					Canaux *channels, curandState *devStates)
{
	NodeState myState;
	float lossPercentage, receive, loss;
	int nbIn, nodeIn;

	myState = nowState[nodeIndex];	
	/// Generating a random value in [0.0, 1.0] with the generateNumber function.
	lossPercentage = generateNumber(devStates, nodeIndex);
	/// Calculating an amount of loss.
	loss = lossPercentage * myState.density;
	/// Getting number of neighbour
	nbIn = channels[nodeIndex].nbIn;
	receive = 0;
	for (int i = 0; i < nbIn; i++)
	{
		/// Getting the id of the neighbour
		nodeIn = channels[nodeIndex].read[i].node;
		receive = receive + ((nowState[nodeIn].density / 2.0) /
			(float) channels[nodeIn].nbIn);
	}
	/// Computing the new state
	myState.density = (myState.density / 2.0) + receive - loss;	
	if (myState.density < 0.0) 
	{
	   myState.density = 0.0;
	}	
	return myState;
}
\end{lstlisting}
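A sequential reference of this second version is again useful for checking the GPU results. Since curand is not available on the host, the random draw of \textit{generateNumber} is replaced here by a caller-supplied loss percentage, a deterministic stand-in; the structures are the same simplified stand-ins as before, and the clamp to zero at the end is the part worth testing.
\begin{lstlisting}[caption={Sequential reference of the transition function (version 2)}, label=code:reftransition2]
#define LOSS_MAX_NB 8

typedef struct { float density; } LossNodeState;
typedef struct { int nbIn; int node[LOSS_MAX_NB]; } LossCanaux;

/* Sequential reference of the second transition version: identical to
 * the first one, except that a fraction lossPercentage of the own
 * density is lost, and the result is clamped at zero. */
LossNodeState refComputeStateLoss(const LossNodeState *nowState,
		int nodeIndex, const LossCanaux *channels, float lossPercentage)
{
	LossNodeState myState = nowState[nodeIndex];
	float loss = lossPercentage * myState.density;
	float receive = 0.0f;
	for (int i = 0; i < channels[nodeIndex].nbIn; i++)
	{
		int nodeIn = channels[nodeIndex].node[i];
		receive += (nowState[nodeIn].density / 2.0f)
				/ (float) channels[nodeIn].nbIn;
	}
	myState.density = myState.density / 2.0f + receive - loss;
	if (myState.density < 0.0f)
		myState.density = 0.0f;	/* density cannot go negative */
	return myState;
}
\end{lstlisting}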
Figure~\ref{img:Version1_Version2} demonstrates the influence of the transition rule on the execution time in this approach. Version 2 is slower than version 1 due to its more complex behaviour, and the time difference grows steadily with the size of the network.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=5cm]{img/Version1_Version2.png}
		 \caption{Comparing the execution time between previous transition function (version 1) and the new one (version 2).}
		 \label{img:Version1_Version2}
	\end{center}
\end{figure}
%--------------COMMENT---------------------------------------------------------------------------------------------------------
\begin{comment}
Next, many modifies of initial values as well as behavior of the system are performed to demonstrate how the performance on mainly depends on the behavior. In the first version, we use a transition rule that is quite simple. And then, the its complexity is increasingly pushed up. There are three more scenarios for the simulations:
\begin{itemize}
	\item{Different resources being put in the river: In this case, the consideration is the differences of initial inputs, which can be affect to the performance of the system.}
	\item{Random emitting: At every step, the density of pollution will be randomly emitted an amount. In real world, the emitting depends on many others factors such as water temperature, light, humidity, etc.}
	\item{Random wind direction: In this case, the model is added wind directions, the alternative of direction is randomly among its neighbor. Therefore, the possibility is decided by the CA pattern chosen.}
\end{itemize}
\end{comment}

\begin{comment}
%-----------------------------------CHAPTER  4-------------------------------------
\chapter{Optimizing physical simulations with the pipeline approach}
\label{chap:pipeline}
For this chapter, we describe the pipeline approach for enhancing the performance of physical simulations, which were presented earlier. This approach also helps to deal with problems of very large systems. In order to achieve it, we need first to send data part by part to the GPU for processing instead of the whole data as the original version. In which, data are actually represented by cell network systems, one cell of which contains a local state and its links.\\
Therefore, in the first section, we will concern about diving a cell network into several smaller ones. Next, the pipeline approach will be applied for these parts. Finally, some experiments will be taken into account.
\section{Divided cell network model}
In the previous parallel simulation version, a cycle of simulation is sending the whole cell network towards the GPU, executing transition function on the kernel, and receiving results from the GPU. However, in the case of very large systems with complicated transition functions, it is likely cannot get a good performance. To reduce this limitation, we suggests the use of the pipeline approach. With this approach, we need to divide the cell network into several parts and to send them to the GPU part by part. For instance, a cell network with the size of 100,000 cells can be divide into 10 parts, each of which consists of 1,000 cells. An example is shown in Figure~\ref{img:DividingCellnetwork}.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=8cm]{img/DividingCellnetwork.png}
		 \caption{A cell network (10 cells with their links) is divided into 3 parts, each of which has the size of 4 cells. The last part (part 3) is a special case as it only contains the last two cells of the cell network.}
		 \label{img:DividingCellnetwork}
	\end{center} 
\end{figure}
A cell network currently is organized as list of structures of cells in a program. The indexes of the list is corresponding to cells' identities of the cell network. It is thus simple to divide the cell network with a specified part size. A \textit{part index} can be used to determine lower bound and upper bound of each part. In addition, the neighbourhood of cells in each part also need to be send to the GPU since they will used by the transition to calculate new states, as demonstrated in Figure~\ref{img:PartAndNeighbour}. In which, the cells {1, 2, 3, 4} of the first part and their neighbourhood will both be sent to GPU. In this case, only the neighbourhood {5, 7, 8} are not included in the first part. It makes sense that the size of neighbourhood, which need to be sent, are mostly decided by the size of parts.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=8cm]{img/PartAndNeighbour.png}
		 \caption{The neighbourhood which are not included in the first part will be sent to GPU.}
		 \label{img:PartAndNeighbour}
	\end{center} 
\end{figure}
In short, we have a set of parts, each of which will be computed by three separated tasks. Finally, result will be returned. Three functions according to that description are defined as the following.
\begin{itemize}
	\item{toGPU:} Sending data part to the GPU.
	\item{processGPU:} Invoking the kernel to execute the part.
	\item{fromGPU:} Returning the results back the CPU.
\end{itemize}
In order to process the whole data at time t, the system needs to perform three previous functions with the number of iteration being equal to the number of divided parts. After that, all results of the iterations will be used for the next step (at time t+1). In this version, for each part, time is required for one part cycle is:
\begin{itemize}
    \item[]\textbf{Time(part) = time(toGPU) + time(processGPU) + time(fromGPU)}
\end{itemize}
This often decreases the performance of systems in this case. But, we can reduce the time consummation by using the pipeline approach, which can produce a result of time equal to:
\begin{itemize}
	\item[]\textbf{Time(part) = Max(time(toGPU), time(processGPU), time(fromGPU))}
\end{itemize}
Obviously, it is less time consuming than the normal version. We will use the threads libraries available in C language to obtain this objective.
\section{Pipeline approach for physical simulation based on cell networks}
In pipeline approach, we propose the use of three threads, which will run in parallel (T1, T2, and T3). They are used for three tasks \textbf{toGPU}, \textbf{processGPU}, and \textbf{fromGPU}, respectively. In this pipeline architecture, the output from T1 will be the input for T2, the output from T2 is the input for T3, and the output from T3 is the final result. This cycle is shown in Figure~\ref{img:PipelineThread}.
\begin{figure}[H]
	\begin{center}
		 \includegraphics[height=4cm]{img/PipelineThread.png}
		 \caption{Pipeline threads.}
		 \label{img:PipelineThread}
	\end{center} 
\end{figure}
\end{comment}