{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we begin, let us execute the below cell to display information about the NVIDIA® CUDA® driver and the GPUs running on the server by running the `nvidia-smi` command. To do this, execute the cell block below by clicking on it with your mouse, and pressing Ctrl+Enter, or pressing the play button in the toolbar above. You should see some output returned below the grey cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Learning objectives\n",
    "The **goal** of this lab is to:\n",
    "The goal of this lab is:\n",
    "- Learn how to use CUDA C and CUDA Fortran to parallelize our code.\n",
    "- Understand the basic terms and steps involved in making a sequential code parallel.\n",
    "\n",
    "We do not intend to cover:\n",
    "- Optimization techniques like memory access patterns and memory hierarchy.\n",
    "\n",
    "# Introduction\n",
    "Graphics Processing Units (GPUs) were initially designed to accelerate graphics processing, but in 2007 the release of CUDA introduced GPUs as General Purpose Processors. CUDA is a parallel computing platform and programming model that makes using a GPU for general-purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever-expanding list of supported languages and incorporates extensions of these languages in the form of a few basic keywords.\n",
    "\n",
    "- CUDA C/C++ is based on a standard C/C++, and CUDA Fortran is based on a standard Fortran\n",
    "- CUDA is a set of extensions to enable heterogeneous programming\n",
    "- CUDA is a straightforward API to manage devices, memory, etc.\n",
    "\n",
    "\n",
    "# CUDA \n",
    "\n",
    "\n",
    "**Heterogeneous Computing:** CUDA is a heterogeneous programming model that includes provisions for a CPU and GPU. The CUDA C/C++ programming interface consists of C language extensions, and the CUDA Fortran programming interface consists of Fortran language extensions. These enable you to target portions of source code for parallel execution on the device (GPU). CUDA provides a library of C/Fortran functions that can be executed on the host (CPU) to interact with the device. The two processors that work with each other are:\n",
    "\n",
    "- Host: CPU and its memory (Host Memory)\n",
    "- Device: GPU and its memory  (Device Memory)\n",
    "\n",
    "\n",
    "Let us look at a Hello World example in C and Fortran: \n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "    \n",
    "```cpp\n",
    "_global__ void print_from_gpu(void) {\n",
    "    printf(\"Hello World! from thread [%d,%d] From device\\n\", threadIdx.x,blockIdx.x);\n",
    "}\n",
    "\n",
    "int main(void) {\n",
    "    printf(\"Hello World from host!\\n\");\n",
    "    print_from_gpu<<<1,1>>>();\n",
    "    cudaDeviceSynchronize();\n",
    "    return 0;\n",
    "}\n",
    "\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "    \n",
    "\n",
    "```fortran\n",
    "module printgpu\n",
    "contains\n",
    "  attributes(global) subroutine print_form_gpu()\n",
    "    implicit none\n",
    "    integer :: i\n",
    "    i = blockDim%x * (blockIdx%x - 1) + threadIdx%x\n",
    "    print *, i\n",
    "  end subroutine saxpy \n",
    "end module printgpu\n",
    "\n",
    "program testPrint\n",
    "  use printgpu\n",
    "  use cudafor\n",
    "  implicit none\n",
    "\n",
    "  call print_form_gpu<<<1, 1>>>()\n",
    "  cudaDeviceSynchronize()\n",
    "end program testPrint\n",
    "\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "So you might have already observed that CUDA C is nothing but extensions/constructs to existing language. Let us look at   the additional constructs we introduced above:\n",
    "\n",
    "- ```__global__``` :This keyword, when added before the function, tells the compiler that this is a function that will run on the device and not on the host. \n",
    "- ``` <<<,>>> ``` : This keyword tells the compiler that this is a call to the device function and not the host function. Additionally, the 1,1 parameter dictates the number of threads to launch in the kernel. We will cover the parameters inside the angle brackets later.\n",
    "- ``` threadIdx.x, blockIdx.x ``` : This is a unique ID that's given to all threads. \n",
    "- ``` cudaDeviceSynchronize() ``` : All of the kernel(Function that runs on GPU) calls in CUDA are asynchronous in nature. This API will make sure that the host does not proceed until all device calls are over.\n",
    "\n",
    "\n",
    "## GPU Architecture\n",
    " \n",
    "This section will take an approach to describe the CUDA programming model by showing the relationship between the software programming concepts and how they get mapped to GPU hardware.\n",
    "\n",
    "The diagram below shows a higher level of abstraction of components of GPU hardware and its respective programming model mapping. \n",
    "\n",
    "<img src=\"../../_common/images/cuda_hw_sw.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "As shown in the diagram above CUDA programming model is tightly coupled with hardware design. This makes CUDA one of the most efficient parallel programming models for shared memory systems. Another way to look at the diagram shown above is given below: \n",
    "\n",
    "| Software | Executes  | Hardware |\n",
    "| --- | --- | --- |\n",
    "| CUDA thread  | on/as | CUDA Core | \n",
    "| CUDA block  | on/as | Streaming Multiprocessor |\n",
    "| GRID/Kernel  | on/as | GPU Device |\n",
    "\n",
    "We will understand the concept of blocks and threads in the upcoming section. But let us first look at the steps involved in writing CUDA code.\n",
    "\n",
    "\n",
    "## Steps in CUDA Programming\n",
    "\n",
    "The below table highlights the typical steps which are required to convert sequential code to CUDA code:\n",
    "\n",
    "| Sequential code | CUDA Code |\n",
    "| --- | --- |\n",
    "| **Step 1** Allocate memory on the CPU ( _malloc new_ ) | **Step 1** : Allocate memory on the CPU (_malloc, new_ )|\n",
    "| **Step 2** Populate/initialize the CPU data | **Step 2** Allocate memory on the GPU, using API like _cudaMalloc()_ |\n",
    "| **Step 3** Call the CPU function that has the crunching of data. | **Step 3**  Populate/initialize the CPU  |\n",
    "| **Step 4** Consume the crunched data on Host | **Step 4** Transfer the data from the host to the device with _cudaMemcpy()_ |\n",
    "| | **Step 5** Call the GPU function with _<<<,>>>_ brackets |\n",
    "| | **Step 6** Synchronize the device and host with _cudaDeviceSynchronize()_ |\n",
    "| | **Step 7** Transfer data from the device to the host with _cudaMemcpy()_ |\n",
    "| | **Step 8** Consume the crunched data on Host |\n",
    "\n",
    "CPU and GPU memory is different, and the developer needs to use additional CUDA API to allocate and free memory on GPU. The only device memory can be consumed inside the GPU function call (kernel).\n",
    "    \n",
    "In CUDA C/C++, linear memory on the device is typically allocated using ```cudaMalloc()``` and freed using ```cudaFree()``` and data transfer between host memory and device memory are typically done using ```cudaMemcpy()```.\n",
    "\n",
    "In CUDA Fortran, linear memory on Device is typically allocated by defining array as  ```allocatable, device``` type and data transfer between host memory and device memory are typically done using ```cudaMemcpy()```.\n",
    "    \n",
    "\n",
    "The API definition of these are as follows: \n",
    "\n",
    "**cudaError_t cudaMalloc (void ∗∗ devPtr, size_t size)** in CUDA C/C++ and **integer function cudaMalloc(devptr, size)**  in CUDA Fortran, allocate size bytes of linear memory on the device and returns a pointer to the allocated memory. The allocated memory is suitably aligned for any kind of variable. `cudaMalloc()` returns ```cudaErrorMemoryAllocation``` in case of failure or ```cudaSuccess```.\n",
    "    \n",
    "**cudaError_t cudaMemcpy (void ∗ dst, const void ∗ src, size_t count, enum cudaMemcpyKind kind)** in CUDA C/C++ and  **integer function cudaMemcpy(dst, src, count, kind)** in CUDA Fortran, copies count bytes from the memory area pointed to by `src` to the memory area pointed to by `dst`. `dst` and `src` may be any device or host, scalar or array.  `kind` is one of the defined enums `cudaMemcpyHostToDevice`, `cudaMemcpyDeviceToHost`, `cudaMemcpyDeviceToDevice` or `cudaMemcpyHostToHost` (this specifies the direction of the copy).\n",
    "\n",
    "Please note, calling `cudaMemcpy()` with `dst` and `src` pointers that do not match the direction of the copy results in an undefined behavior.\n",
    "\n",
    "**cudaError_t cudaFree (void ∗ devPtr)** Frees the memory space pointed to by `devPtr`, which must have been returned by a previous call to `cudaMalloc()` or another equivalent API. \n",
    "    \n",
    "Let us look at these steps in more detail for a simple vector addition code:\n",
    "\n",
    "    \n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "    \n",
    "```cpp\n",
    "int main(void) {\n",
    "\tint *a, *b, *c;\n",
    "        int *d_a, *d_b, *d_c; // device copies of a, b, c\n",
    "\n",
    "\tint size = N * sizeof(int);\n",
    "\n",
    "\t// Alloc space for host copies of a, b, c and setup input values\n",
    "\ta = (int *)malloc(size); fill_array(a);\n",
    "\tb = (int *)malloc(size); fill_array(b);\n",
    "\tc = (int *)malloc(size);\n",
    "\n",
    "        // Alloc space for device copies of a, b, c\n",
    "        cudaMalloc((void **)&d_a, size);\n",
    "        cudaMalloc((void **)&d_b, size);\n",
    "        cudaMalloc((void **)&d_c, size);\n",
    "\n",
    "       // Copy inputs to device\n",
    "        cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);\n",
    "        cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);\n",
    "\n",
    "\n",
    "\tdevice_add<<<N,1>>>(d_a,d_b,d_c);\n",
    "\n",
    "        // Copy result back to host\n",
    "        cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);\n",
    "\n",
    "\tprint_output(a,b,c);\n",
    "\n",
    "\tfree(a); free(b); free(c);\n",
    "        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);\n",
    "\n",
    "\n",
    "\n",
    "\treturn 0;\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "    \n",
    "\n",
    "```fortran\n",
    "module kernel\n",
    "    contains\n",
    "    ! CUDA kernel. Each thread takes care of one element of c\n",
    "    attributes(global) subroutine vecAdd_kernel(n, a, b, c)\n",
    "        integer, value :: n\n",
    "        real(8), device :: a(n), b(n), c(n)\n",
    "        integer :: id\n",
    " \n",
    "        ! Get our global thread ID\n",
    "        id = (blockidx%x-1)*blockdim%x + threadidx%x\n",
    " \n",
    "        ! Make sure we do not go out of bounds\n",
    "        if (id <= n) then\n",
    "            c(id) = a(id) + b(id)\n",
    "        endif\n",
    "    end subroutine vecAdd_kernel\n",
    "end module kernel\n",
    " \n",
    "program main\n",
    "    use cudafor\n",
    "    use kernel\n",
    " \n",
    "    type(dim3) :: blockSize, gridSize\n",
    "    real(8) :: sum\n",
    "    integer :: i\n",
    " \n",
    "    ! Size of vectors\n",
    "    integer :: n = 1\n",
    " \n",
    "    ! Host input vectors\n",
    "    real(8),dimension(:),allocatable :: h_a\n",
    "    real(8),dimension(:),allocatable :: h_b\n",
    "    !Host output vector\n",
    "    real(8),dimension(:),allocatable :: h_c\n",
    " \n",
    "    ! Device input vectors\n",
    "    real(8),device,dimension(:),allocatable :: d_a\n",
    "    real(8),device,dimension(:),allocatable :: d_b\n",
    "    !Host output vector\n",
    "    real(8),device,dimension(:),allocatable :: d_c\n",
    " \n",
    "    ! Allocate memory for each vector on host\n",
    "    allocate(h_a(n))\n",
    "    allocate(h_b(n))\n",
    "    allocate(h_c(n))\n",
    " \n",
    "    ! Allocate memory for each vector on GPU\n",
    "    allocate(d_a(n))\n",
    "    allocate(d_b(n))\n",
    "    allocate(d_c(n))\n",
    " \n",
    "    ! Initialize content of input vectors, vector a[i] = sin(i)^2 vector b[i] = cos(i)^2\n",
    "    do i=1,n\n",
    "        h_a(i) = sin(i*1D0)*sin(i*1D0)\n",
    "        h_b(i) = cos(i*1D0)*cos(i*1D0)\n",
    "    enddo\n",
    " \n",
    "    ! Implicit copy of host vectors to device\n",
    "    d_a = h_a(1:n)\n",
    "    d_b = h_b(1:n)\n",
    " \n",
    "\n",
    "    ! Execute the kernel\n",
    "    call vecAdd_kernel<<<1, 1>>>(n, d_a, d_b, d_c)\n",
    " \n",
    "    ! Implicit copy of device array to host\n",
    "    h_c = d_c(1:n)\n",
    " \n",
    "    ! Sum up vector c and print result divided by n, this should equal 1 within error\n",
    "    sum = 0.0;\n",
    "    do i=1,n\n",
    "        sum = sum +  h_c(i)\n",
    "    enddo\n",
    "    sum = sum/real(n)\n",
    "    print *, 'final result: ', sum\n",
    " \n",
    "    ! Release device memory\n",
    "    deallocate(d_a)\n",
    "    deallocate(d_b)\n",
    "    deallocate(d_c)\n",
    " \n",
    "    ! Release host memory\n",
    "    deallocate(h_a)\n",
    "    deallocate(h_b)\n",
    "    deallocate(h_c)\n",
    " \n",
    "end program main\n",
    "```\n",
    "\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "### Unified Memory\n",
    "An easier way to allocate memory accessible by the GPU is to use *Unified Memory*. It provides a single memory space accessible by all GPUs and CPUs in the system. To allocate data in unified memory, we call `cudaMallocManaged()`, which returns a pointer that you can access from host (CPU) code or device (GPU) code. To free the data, just pass the pointer to `cudaFree()`. To read more about unified memory, please review the blog on [Unified Memory for CUDA beginners](https://developer.nvidia.com/blog/unified-memory-cuda-beginners/).\n",
    "\n",
    "<img src=\"../../_common/images/unified_memory.png\">\n",
    "\n",
    "Below is the example usage of how to use managed memory in the CUDA code:\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "\n",
    "```cpp\n",
    " // Allocate Unified Memory -- accessible from CPU or GPU\n",
    "  int *a, *b, *c;\n",
    "  cudaMallocManaged(&a, N*sizeof(int));\n",
    "  cudaMallocManaged(&b, N*sizeof(int));\n",
    "  cudaMallocManaged(&c, N*sizeof(int));\n",
    "  ...\n",
    "\n",
    "  // Free memory\n",
    "  cudaFree(a);\n",
    "  cudaFree(b);\n",
    "  cudaFree(c);\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "    \n",
    "```fortran\n",
    "!matrix data\n",
    "real, managed, allocatable, dimension(:,:) :: A, B, C\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "## Understanding Threads and Blocks\n",
    "We will be looking at understanding _thread_ and _block_ level parallelism in this section.The number of threads and blocks to be launched is passed as a parameter to ```<<<,>>>``` brackets in a kernel call.\n",
    "\n",
    "### Creating multiple blocks\n",
    "\n",
    "In order to create multiple blocks for the vector addition code above, you need to change two things:\n",
    "1. Change _<<<1,1>>>_ to <<<N,1>>>_ which basically launches N number of blocks\n",
    "2. Access the array with block index using private variable passed by default to CUDA kernel: _blockIdx.x_\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "    \n",
    "```cpp\n",
    "//changing from device_add<<<1,1>>> to\n",
    "device_add<<<N,1>>>\n",
    "//access the array using blockIdx.x private variable\n",
    "__global__ void device_add(int *a, int *b, int *c) {\n",
    "    c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "\n",
    "```fortran\n",
    "attributes(global) subroutine vecAdd_kernel(n, a, b, c)\n",
    "        integer, value :: n\n",
    "        real(8), device :: a(n), b(n), c(n)\n",
    "        integer :: id\n",
    " \n",
    "        ! Get our global thread ID\n",
    "        id = blockidx%x\n",
    " \n",
    "        ! Make sure we do not go out of bounds\n",
    "        if (id <= n) then\n",
    "            c(id) = a(id) + b(id)\n",
    "        endif\n",
    "    end subroutine vecAdd_kernel\n",
    "}\n",
    "```  \n",
    "\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "By using `blockIdx.x` to index the array, each block handles a different element of the array and may execute in parallel to each other.\n",
    "\n",
    "| Block Id | Performs |\n",
    "| --- | --- |\n",
    "| Block 0 | _c\\[0\\]=b\\[0\\]+a\\[0\\]_ |\n",
    "| Block 1 | _c\\[1\\]=b\\[1\\]+a\\[1\\]_ |\n",
    "| Block 2 | _c\\[2\\]=b\\[2\\]+a\\[2\\]_ |\n",
    "\n",
    "**Understand and analyze** the sample vector addition code [vector_addition_block.cu](../source_code/vector_addition_gpu_block_only.cu).Open the downloaded files for inspection. \n",
    "\n",
    "\n",
    "\n",
    "### Creating multiple threads\n",
    "\n",
    "In order to create multiple threads for vector addition code above. You need to change two things:\n",
    "1. change _<<<1,1>>>_ to <<<1,N>>>_ which basically launches N number of threads inside 1 block\n",
    "2. Access the array with thread index using private variable passed by default to CUDA kernel: _threadIdx.x_\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "    \n",
    "```cpp\n",
    "//changing from device_add<<<1,1>>> to\n",
    "device_add<<<1,N>>>\n",
    "//access the array using threadIdx.x private variable\n",
    "__global__ void device_add(int *a, int *b, int *c) {\n",
    "    c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "    \n",
    " ```fortran\n",
    "attributes(global) subroutine vecAdd_kernel(n, a, b, c)\n",
    "        integer, value :: n\n",
    "        real(8), device :: a(n), b(n), c(n)\n",
    "        integer :: id\n",
    " \n",
    "        ! Get our global thread ID\n",
    "        id = threadidx%x\n",
    " \n",
    "        ! Make sure we do not go out of bounds\n",
    "        if (id <= n) then\n",
    "            c(id) = a(id) + b(id)\n",
    "        endif\n",
    "    end subroutine vecAdd_kernel\n",
    "```   \n",
    "\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "By using `threadIdx.x` to index the array, each thread handles a different element of the array and can execute in parallel.\n",
    "\n",
    "| thread Id | Performs |\n",
    "| --- | --- |\n",
    "| Thread 0 | _c\\[0\\]=b\\[0\\]+a\\[0\\]_ |\n",
    "| Thread 1 | _c\\[1\\]=b\\[1\\]+a\\[1\\]_ |\n",
    "| Thread 2 | _c\\[2\\]=b\\[2\\]+a\\[2\\]_ |\n",
    "\n",
    "**Understand and analyze** the sample vector addition code [vector_addition_thread.cu](../source_code/vector_addition_gpu_thread_only.cu).\n",
    "    \n",
    "### Creating multiple blocks each having many threads\n",
    "\n",
    "So far, we've looked at parallel vector addition through the use of several blocks with one thread and one block with several\n",
    "threads. Now let us look at creating multiple blocks, each block containing multiple threads.\n",
    "\n",
    "To understand it lets take a scenario where the total number of vector elements is 32 which needs to be added in parallel. Total number of parallel execution unit required is 32. As a first step let us define that each block contains eight threads(we are not saying this is optimal configuration and is just for explanation purpose). Next we define the number of blocks. The simplest calculation is No_Of_Blocks = 32/8 where 8 is number of threads per blocks. The code changes required to launch 4 blocks with 8 thread each is as shown below: \n",
    "1. Change _<<<1,1>>>_ to <<<4,8>>>_ which basically launches 4  threads per block and 8 total blocks\n",
    "2. Access the array with both thread index and block index using private variable passed by default to call CUDA kernel: _threadIdx.x_ and _blockIdx.x_ and _bloxkDim.x_ which tells how many threads are allocated per block. \n",
    "\n",
    "    \n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "\n",
    "```cpp\n",
    "threads_per_block = 8;\n",
    "no_of_blocks = N/threads_per_block;\n",
    "device_add<<<no_of_blocks,threads_per_block>>>(d_a,d_b,d_c);\n",
    "\n",
    "__global__ void device_add(int *a, int *b, int *c) {\n",
    "    int index = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "    c[index] = a[index] + b[index];\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "<details>\n",
    "<summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "    \n",
    "```fortran\n",
    "! Number of threads in each thread block\n",
    "     blockSize = dim3(8,1,1)\n",
    "     ! Number of thread blocks in grid\n",
    "     gridSize = dim3(ceiling(real(n)/real(blockSize%x)) ,1,1)\n",
    "     call vecAdd_kernel<<<gridSize, blockSize>>>(n, d_a, d_b, d_c)\n",
    "\n",
    "    ! CUDA kernel. Each thread takes care of one element of c\n",
    "    attributes(global) subroutine vecAdd_kernel(n, a, b, c)\n",
    "        integer, value :: n\n",
    "        real(8), device :: a(n), b(n), c(n)\n",
    "        integer :: id\n",
    " \n",
    "        ! Get our global thread ID\n",
    "        id = (blockidx%x-1)*blockdim%x + threadidx%x\n",
    " \n",
    "        ! Make sure we do not go out of bounds\n",
    "        if (id <= n) then\n",
    "            c(id) = a(id) + b(id)\n",
    "        endif\n",
    "    end subroutine vecAdd_kernel\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "The diagram below shows the launch configuration that we have discussed so far:\n",
    "\n",
    "<img src=\"../../_common/images/cuda_indexing.png\">\n",
    "\n",
    "Modern GPU Architectures consist of multiple SM, each consisting of several cores. To utilize the whole GPU, it is important to use both threads and blocks.\n",
    "\n",
    "**Understand and analyze** the sample vector addition code [vector_addition_block_thread.cu](../source_code/vector_addition_gpu_thread_block.cu).Open the downloaded files for inspection. \n",
    "\n",
    "\n",
    "The more important question may arise: why bother with threads altogether? What do we gain by adding an additional level of parallelism? The short answer is CUDA programming model defines that, unlike parallel blocks, threads have mechanisms to efficiently communicate and synchronize.\n",
    "    \n",
    "    \n",
    "This is necessary to implement certain algorithms where threads needs to communicate with each other.We do not require synchronization across threads in **Pair Calculation** so we will not be going into details of concept of synchronization across threads and usage of specialized memory like _shared_ memory in this tutorial.  \n",
    "\n",
    "# Atomic Construct\n",
    "\n",
    "In the code, you will also require one more construct, which will help you get the right results.  OpenACC atomic construct ensures that a particular variable is accessed and/or updated atomically to prevent indeterminate results and race conditions. In other words, it prevents one thread from stepping on the toes of other threads due to accessing a variable simultaneously, resulting in different results run-to-run. For example, if I want to count the number of elements that have a value greater than zero, we could write the following:\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA C/C++</b></summary>\n",
    "    \n",
    "```cpp\n",
    "__global__ void countMoreThanZero( ... )\n",
    "{\n",
    "    if ( val > 0 )\n",
    "    {\n",
    "        atomicAdd(&cnt[0],1);\n",
    "    }\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>CUDA Fortran</b></summary>\n",
    "\n",
    "```fortran\n",
    "if(r<cut)then\n",
    "         oldvalue = atomicadd(g(ind),1.0d0)\n",
    "endif\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "# A Quick Recap\n",
    "We saw the definition of CUDA and briefly covered CUDA architecture and introduced CUDA C and CUDA Fortran constructs. We also played with block and thread configurations for a simple vector addition code. All this was done under the following restrictions:\n",
    "1. **Multiple Dimension**: We launched threads and blocks in one dimension. We have been using `threadIdx.x` and `blockIdx.x`, so what is `.x` ? This statement  says that we are launching threads and blocks in one dimension only. CUDA allows to launch threads in 3 dimensions. You can also have `.y` and `.z` for index calculation. For example, you can launch threads and blocks in 2 dimensions to  divide work for a 2D image. Also the maximum number of threads per block and number of blocks allowed per dimension is restricted based on the GPU that the code runs on.\n",
    "2. **GPU Memory**: What we have not covered is that GPU has different hierarchy of memory, e.g. GPU has a read only memory which provides high bandwidth for 2D and 3D locality access called _texture_. Also, GPU provides a scratch pad with limited memory called  _shared memory_\n",
    "3. **Optimization** : What we did not cover so far is the right way to access the compute and memory to get max performance. \n",
    "\n",
    "**One key characteristic of CUDA is that a user can control the access pattern of data for each thread. The user can decide which part of the memory the data can sit on.  While we are covering some part of this in this lab, which is required for us to port our code, we do not intend to cover all optimizations**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compile and Run for NVIDIA GPU\n",
    "Now, let's start modifying the original code and add the CUDA constructs. You can either explicitly transfer the allocated data between the CPU and GPU or use unified memory, which creates a pool of managed memory shared between the CPU and GPU.\n",
    "\n",
    "Click on the <b>[C/C++ version](../source_code/rdf.cu)</b> or the <b>[Fortran version](../source_code/rdf.f90)</b> links, and <mark>start modifying the C or Fortran version of the RDF code. Without changing the orginal code, you will not get the expected outcome after running the below cells.</mark> Remember to **SAVE** your code after changes, before running the below cells.\n",
    "\n",
    "**Note:** When `-arch=native` compiled option is used, `nvcc` detects the visible GPUs on the system and generates codes for them. It is a warning if there is no visible supported GPU on the system, and the default architecture will be used.\n",
    "\n",
    "Moreover, for the CUDA Fortran version, we are targeting the NVTX v3 API, a header-only C library, and added Fortran-callable wrappers to the code, we added `-lnvhpcwrapnvtx` at the compile time to do the link to the library."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (C/C++)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (C/C++)\n",
    "!cd ../source_code && echo \"compiling C/C++ version .. \" && nvcc -arch=native -o rdf_c rdf.cu && echo \"Running the executable and validating the output\" && ./rdf_c && cat Pair_entropy.dat"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output should be the following:\n",
    "\n",
    "```\n",
    "s2 value is -2.43191\n",
    "s2bond value is -3.87014\n",
    "```\n",
    "\n",
    "Now, let's profile the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (C/C++)\n",
    "!cd ../source_code && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_cuda_c ./rdf_c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's checkout the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [C/C++ version](../source_code/rdf_cuda_c.nsys-rep) then choosing <mark>save Link As</mark> Once done, open it via the GUI. Have a look at the example expected profiler report below:\n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/cuda_profile_timeline.png\">\n",
    "\n",
    "Nsight systems is capable of capturing information about CUDA execution in the profiled process.CUDA API row in the _Timeline View_ shows traces of CUDA Runtime and Driver calls made by the application. If you hover your mouse over it, you will see more information about the calls.\n",
    "\n",
    "   \n",
    "<img src=\"../../_common/images/cuda_profile_api.png\">\n",
    "\n",
    "\n",
    "Near the bottom of the timeline row tree, the GPU node will appear and contain a CUDA node. Within the CUDA node, each CUDA context used within the process will be shown along with its corresponding CUDA streams. Streams will contain memory operations and kernel launches on the GPU. In the example screenshot below, you can see kernel launches are represented in blue, while memory transfers are displayed in red and green. In this example screenshot, unified memory was used rather than explicitly transferring data between CPU and GPU.\n",
    "\n",
    "<img src=\"../../_common/images/cuda_profile.png\">\n",
    "\n",
    "\n",
    "Feel free to checkout the solutions for [C/C++ solution (with managed memory)](../source_code/SOLUTION/rdf_unified_memory.cu) and [C/C++ solution (without managed memory)](../source_code/SOLUTION/rdf_malloc.cu) versions to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (Fortran)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (Fortran)\n",
    "!cd ../source_code && echo \"compiling Fortran version .. \" && nvfortran -cuda -o rdf_f rdf.f90 -lnvhpcwrapnvtx && echo \"Running the executable and validating the output\" && ./rdf_f && cat Pair_entropy.dat"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output should be the following:\n",
    "\n",
    "```\n",
    "s2      :    -2.452690945278331     \n",
    "s2bond  :    -24.37502820694527  \n",
    "```\n",
    "\n",
    "Now, let's profile the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile with Nsight Systems, tracing nvtx and cuda (Fortran)\n",
    "!cd ../source_code && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_cuda_f ./rdf_f"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [Fortran version](../source_code/rdf_cuda_f.nsys-rep), then choosing <mark>Save Link As</mark>. Once done, open it via the GUI. Have a look at the example expected profiler report below:\n",
    "\n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/cuda_profile_timeline.jpg\">\n",
    "\n",
    "Nsight Systems is capable of capturing information about CUDA execution in the profiled process. The CUDA API row in the _Timeline View_ shows traces of the CUDA Runtime and Driver calls made by the application. If you hover your mouse over it, you will see more information about the calls.\n",
    "\n",
    "   \n",
    "<img src=\"../../_common/images/cuda_profile_api.png\">\n",
    "\n",
    "\n",
    "Near the bottom of the timeline row tree, the GPU node will appear and contain a CUDA node. Within the CUDA node, each CUDA context used within the process is shown along with its corresponding CUDA streams. Streams contain the memory operations and kernel launches performed on the GPU. In the example screenshot below, kernel launches are represented in blue, while memory transfers are displayed in red and green. In this example, unified memory was used rather than explicitly transferring data between the CPU and GPU.\n",
    "\n",
    "<img src=\"../../_common/images/cuda_profile.png\">\n",
    "\n",
    "\n",
    "Feel free to check out the [Fortran solution (with managed memory)](../source_code/SOLUTION/rdf_unified_memory.f90) to better understand this version.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# Analysis\n",
    "\n",
    "**Usage Scenarios**\n",
    "\n",
    "Using language extensions like CUDA C and CUDA Fortran helps developers get the best performance out of their code on an NVIDIA GPU. These language constructs expose the GPU architecture and programming model, giving developers more control over memory storage, memory access, and thread management. Depending on the type of application, this can provide an improvement over, say, compiler-generated code produced from directives.\n",
    "\n",
    "**How is CUDA different from other GPU programming models like OpenACC and OpenMP?**\n",
    "\n",
    "CUDA should not be considered an alternative to OpenMP or OpenACC. In fact, CUDA complements directive-based programming models, and there are defined interoperability strategies between them. You can start accelerating your code with OpenACC and then use CUDA to optimize the most performance-critical kernels. For example, use OpenACC to manage data transfers, then pass a device pointer to a performance-critical kernel written in CUDA."
   ]
  },
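  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged sketch of that interoperability pattern (the function names here are illustrative, not from the lab code): OpenACC owns the data region, and the `host_data use_device` directive hands the device pointer to a hand-written CUDA kernel.\n",
    "\n",
    "```c\n",
    "#include <cuda_runtime.h>\n",
    "\n",
    "// OpenACC moves the data; CUDA provides the performance-critical kernel.\n",
    "__global__ void critical_kernel(float *x, int n);  // hypothetical kernel, defined elsewhere\n",
    "\n",
    "void run(float *x, int n)\n",
    "{\n",
    "    #pragma acc data copy(x[0:n])\n",
    "    {\n",
    "        #pragma acc host_data use_device(x)\n",
    "        {\n",
    "            // Inside host_data, x refers to the device pointer managed by OpenACC\n",
    "            critical_kernel<<<(n + 255) / 256, 256>>>(x, n);\n",
    "            cudaDeviceSynchronize();\n",
    "        }\n",
    "    }\n",
    "}\n",
    "```\n",
    "\n",
    "Built with the NVIDIA HPC compilers (for example, `nvc++ -acc -cuda`), this lets the directive model and CUDA share the same device allocation instead of copying the data twice."
   ]
  },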
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Post-Lab Summary\n",
    "\n",
    "If you would like to download this lab for later viewing, it is recommended you go to your browser's file menu (not the Jupyter notebook file menu) and save the complete web page.  This will ensure the images are copied down as well. You can also execute the following cell block to create a zip file of the files you have been working on, and download it with the link below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "cd ..\n",
    "rm -f _files.zip\n",
    "zip -r _files.zip *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**After** executing the above zip command, you should be able to download and save the zip file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> [Here](../_files.zip), then choosing <mark>Save Link As</mark>.\n",
    "\n",
    "-----\n",
    "\n",
    "\n",
    "# Links and Resources\n",
    "[Introduction to CUDA](https://devblogs.nvidia.com/even-easier-introduction-cuda/)\n",
    "\n",
    "[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)\n",
    "\n",
    "[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)\n",
    "\n",
    "**NOTE**: To be able to see the Nsight Systems profiler output, please download the latest version of Nsight Systems from [here](https://developer.nvidia.com/nsight-systems).\n",
    "\n",
    "Don't forget to check out additional [Open Hackathons Resources](https://www.openhackathons.org/s/technical-resources) and join our [OpenACC and Hackathons Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.\n",
    "\n",
    "--- "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Licensing \n",
    "\n",
    "Copyright © 2022 OpenACC-Standard.org.  This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0). These materials may include references to hardware and software developed by other entities; all applicable licensing and copyrights apply."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
