{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we begin, let us execute the below cell to display information about the NVIDIA® CUDA® driver and the GPUs running on the server by running the `nvidia-smi` command. To do this, execute the cell block below by clicking on it with your mouse, and pressing Ctrl+Enter, or pressing the play button in the toolbar above. You should see some output returned below the grey cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Learning objectives\n",
    "The **goal** of this lab is to:\n",
    "\n",
    "- Learn how to run the same code on both a multicore CPU and a GPU using the OpenACC programming model\n",
    "- Understand the key directives and steps involved in making a sequential code parallel\n",
    "- Learn how to interpret the compiler feedback\n",
    "- Learn and understand the Nsight Systems profiler report\n",
    "\n",
    "We do not intend to cover:\n",
    "- Optimization techniques in details\n",
    "\n",
    "\n",
    "# OpenACC Directives\n",
    "Using OpenACC directives will allow us to parallelize our code without explicitly alter our code. What this means is that, by using OpenACC directives, we can have a single code that will function as both a sequential code and a parallel code.\n",
    "\n",
    "### OpenACC Syntax\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>C/C++ syntax</b></summary>\n",
    "    \n",
    "```#pragma acc <directive> <clauses> ```\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Fortran syntax</b></summary>\n",
    "    \n",
    "```!$acc <directive> <clauses> ```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "**#pragma** in C/C++ and **!$acc** in Fortran are known as a \"compiler hint.\" These are very similar to programmer comments. However, the compiler will read our pragmas. Pragmas are a way for the programmer to \"guide\" the compiler without running the chance of damaging the code. If the compiler does not understand the pragma, it can ignore it rather than throw a syntax error.\n",
    "\n",
    "**acc** specifies that this is an OpenACC related directive that will follow. Any non-OpenACC compiler will ignore this pragma. \n",
    "\n",
    "**directives** are commands in OpenACC that will tell the compiler to do some action. For now, we will only use directives that allow the compiler to parallelize our code.\n",
    "\n",
    "**clauses** are additions/alterations to our directives. These include (but are not limited to) optimizations. One way to think about it: directives describe a general action for our compiler to do (such as, paralellize our code), and clauses allow the programmer to be more specific (such as how we specifically want the code to be parallelized).\n",
    "\n",
    "## 3 Key Directives\n",
    "\n",
    "OpenACC consists of 3 key types of directives responsible for **parallel execution**, **managing data movement** and **optimization** as shown in the diagram below (example uses C/C++ syntax):\n",
    "\n",
    "<img src=\"../../_common/images/openacc_3_directives.png\" width=\"70%\" height=\"70%\">\n",
    "\n",
    "We will be covering the parallel execution directive in this lab. The data directive is part of the additional section and can be tried out in the end.\n",
    "\n",
    "### Parallel and Loop Directives\n",
    "\n",
    "\n",
    "There are three directives we will cover in this lab: `parallel`, `loop`, and `parallel loop`. Once we understand all three of them, you will be tasked with parallelizing **Pair Calculation** with your preferred directive \n",
    "\n",
    "The parallel directive may be the most straightforward of the directives. It will mark a region of the code for parallelization (this usually only includes parallelizing a single **for** loop.) Let's take a look:\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>C/C++ syntax</b></summary>\n",
    "    \n",
    "```cpp\n",
    "#pragma acc parallel loop\n",
    "for (int i = 0; i < N; i++ )\n",
    "{\n",
    "    < loop code >\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Fortran syntax</b></summary>\n",
    "    \n",
    "```fortran\n",
    "!$acc parallel loop\n",
    "    do i=1,N\n",
    "        < loop code >\n",
    "    enddo\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "    \n",
    "\n",
    "\n",
    "We may also define a \"parallel region\". The parallel region may have multiple loops (though this is often not recommended!) The parallel region is everything contained within the outer-most curly braces.\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>C/C++ syntax</b></summary>\n",
    "    \n",
    "    \n",
    "```cpp\n",
    "#pragma acc parallel\n",
    "{\n",
    "    #pragma acc loop\n",
    "    for (int i = 0; i < N; i++ )\n",
    "    {\n",
    "        < loop code >\n",
    "    }\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Fortran syntax</b></summary>\n",
    "    \n",
    "```fortran\n",
    "!$acc parallel\n",
    "    !$acc loop\n",
    "    do i=1,N\n",
    "        < loop code >\n",
    "    enddo\n",
    "!$acc end parallel\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "`#pragma acc parallel loop` in C/C++ or `!$acc parallel loop` in Fortran will mark the next loop for parallelization. It is extremely important to include the `loop`, otherwise you will not be parallelizing the loop properly. The parallel directive tells the compiler to \"redundantly parallelize\" the code. The `loop` directive specifically tells the compiler that we want the loop parallelized. Let's look at an example of why the loop directive is so important. The `parallel` directive tells the compiler to create somewhere to run parallel code. OpenACC calls that somewhere a `gang`, which might be a thread on the CPU or maying a CUDA threadblock or OpenCL workgroup. It will choose how many gangs to create based on where you're running, only a few on a CPU (like 1 per CPU core) or lots on a GPU (1000's possibly). Gangs allow OpenACC code to scale from small CPUs to large GPUs because each one works completely independently of the other gang. That's why there's a space between gangs in the images below.\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example diagram (C/C++ syntax)</b></summary>\n",
    "<img src=\"../../_common/images/openacc_parallel.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "---\n",
    "\n",
    "<img src=\"../../_common/images/openacc_parallel2.png\" width=\"80%\" height=\"80%\">\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example diagram (Fortran syntax)</b></summary>\n",
    "<img src=\"../../_common/images/parallel1f.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "---\n",
    "\n",
    "<img src=\"../../_common/images/parallel2f.png\" width=\"80%\" height=\"80%\">\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "\n",
    "\n",
    "There's a good chance that we don't want my loop to be run redundantly in every gang though, that seems wasteful and potentially dangerous. Instead we want to instruct the compiler to break up the iterations of my loop and to run them in parallel on the gangs. To do that, we simply can add a `loop` directive to the interesting loops. This instructs the compiler that we want my loop to be parallelized and promises to the compiler that it's safe to do so. Now that we have both `parallel` and `loop`, things loop a lot better (and run a lot faster). Now the compiler is spreading my loop iterations to all of my gangs, but also running multiple iterations of the loop at the same time within each gang as a *vector*. Think of a vector like this, we have 10 numbers that I want to add to 10 other numbers (in pairs). Rather than looking up each pair of numbers, adding them together, storing the result, and then moving on to the next pair in-order, modern computer hardware allows me to add all 10 pairs together all at once, which is a lot more efficient. \n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>C/C++ syntax</b></summary>\n",
    "    \n",
    "<img src=\"../../_common/images/openacc_parallel_loop.png\" width=\"80%\" height=\"80%\">\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Fortran syntax</b></summary>\n",
    "    \n",
    "<img src=\"../../_common/images/parallel3f.png\" width=\"80%\" height=\"80%\">\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "The `acc parallel loop` directive is both a promise and a request to the compiler. The programmer is promising that the loop can safely be parallelized and asks the compiler to do so in a way that makes sense for the machine we target. The compiler may make completely different decisions if we are compiling for a multicore CPU than it would for a GPU and that's the idea. OpenACC enables programmers to parallelize their codes without having to worry about the details of how best to do so for every possible machine. \n",
    "\n",
    "\n",
    "\n",
    "### Atomic Construct\n",
    "\n",
    "In the code, you will also require one more construct, which will help you get the right results. OpenACC atomic construct ensures that a particular variable is accessed and/or updated atomically to prevent indeterminate results and race conditions. In other words, it prevents one thread from stepping on the toes of other threads due to accessing a variable simultaneously, resulting in different results run-to-run. For example, if we want to count the number of elements that have a value greater than zero, we could write the following:\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>C/C++ syntax</b></summary>\n",
    "    \n",
    "```cpp\n",
    "if ( val > 0 )\n",
    "{\n",
    "  #pragma acc atomic\n",
    "  {\n",
    "    cnt++;\n",
    "  }\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Fortran syntax</b></summary>\n",
    "    \n",
    "```fortran\n",
    "if(r<cut)then\n",
    "  !$acc atomic\n",
    "    cnt = cnt + 1\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, let's start modifying the original code and adding the OpenACC directives. Click on the <b>[C/C++ version](../source_code/rdf.cpp)</b> or the <b>[Fortran version](../source_code/rdf.f90)</b> links, and <mark>start modifying the C or Fortran version of the RDF code. Without changing the orginal code, you will not get the expected outcome after running the below cells.</mark> Remember to **SAVE** your code after changes, before running the below cells."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Compile and Run for Multicore\n",
    "\n",
    "After adding OpenACC directives now let us try to compile the code. For compiling, we will be making use of these additional flags:\n",
    "\n",
    "**-Minfo** : This flag will give us feedback from the compiler about code optimizations and restrictions.\n",
    "\n",
    "**-Minfo=accel** will only give us feedback regarding our OpenACC parallelizations/optimizations.\n",
    "\n",
    "**-Minfo=all** will give us all possible feedback, including our parallelization/optimizations, sequential code optimizations, and sequential code restrictions.\n",
    "\n",
    "**-ta** : This flag allows us to compile our code for a specific target parallel hardware. Without this flag, the code will be compiled for sequential execution.\n",
    "\n",
    "          -ta=multicore will allow us to compile our code for a multicore CPU.\n",
    "          \n",
    "          -ta=tesla will allow us to compile our code for an NVIDIA GPU\n",
    "\n",
    "After running the cells, you can inspect part of the compiler feedback for C or Fortran version and see what it's telling us (your compiler feedback will be similar to the below)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for multicore (C/C++)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Compile the code for multicore (C/C++)\n",
    "!cd ../source_code && echo \"compiling C/C++ version .. \" && nvc++ -acc -ta=multicore -Minfo=all -o rdf_c rdf.cpp -I/opt/nvidia/hpc_sdk/Linux_x86_64/23.5/cuda/11.8/include -L/opt/nvidia/hpc_sdk/Linux_x86_64/23.5/cuda/11.8/lib64 -lnvToolsExt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (C/C++ version):\n",
    "\n",
    "<img src=\"../../_common/images/openacc_multicore_feedback.png\">\n",
    "\n",
    "You can see from *Line 177*, it is generating a multicore code `177, Generating Multicore code`. It is very important to inspect the feedback to make sure the compiler is doing what you have asked of it. \n",
    "\n",
    "Let's run the executable and validate the output first. \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# check the output (C/C++ version)\n",
    "!cd ../source_code && ./rdf_c && cat Pair_entropy.dat"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output should be the following:\n",
    "\n",
    "```\n",
    "s2 value is -2.43191\n",
    "s2bond value is -3.87014\n",
    "```\n",
    "\n",
    "Now, let's profile the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (C/C++ version)\n",
    "!cd ../source_code && nsys profile -t nvtx --stats=true --force-overwrite true -o rdf_multicore_c ./rdf_c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [C/C++ version](../source_code/rdf_multicore_c.nsys-rep) then choosing <mark>save Link As</mark>. Once done, open it via the GUI. From the _Timeline View_, checkout the NVTX markers displays as part of threads. **Why are we using NVTX?** Please see the section on [Using NVIDIA Tools Extension (NVTX)](../../_common/jupyter_notebook/nsight_systems.ipynb#Using-NVIDIA-Tools-Extension-(NVTX)).\n",
    "\n",
    "From the _Timeline View_, right click on the nvtx row and click the \"show in events view\". Now you can see the nvtx statistic at the bottom of the window which shows the duration of each range. \n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/nvtx_multicore.png\" width=\"100%\" height=\"100%\">\n",
    "\n",
    "\n",
    "You can also checkout NVTX statistic from the terminal console once the profiling session ended. From the NVTX statistics, you can see most of the execution time is spend in `Pair_Calculation`. This is a function worth checking out.\n",
    "\n",
    "You can also compare the NVTX ranges with the serial version (see [screenshot](../../_common/jupyter_notebook/rdf_overview.ipynb))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for multicore (Fortran)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Compile the code for multicore (Fortran)\n",
    "!cd ../source_code && echo \"compiling Fortran version .. \" && nvfortran -acc -ta=multicore -Minfo=all -o rdf_f rdf.f90 -lnvhpcwrapnvtx"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (Fortran version):\n",
    "   \n",
    "```\n",
    "\trdf:\n",
    "     97, Generating Multicore code\n",
    "         98, !$acc loop gang\n",
    "     99, Loop carried dependence of g prevents parallelization\n",
    "         Loop carried backward dependence of g prevents vectorization\n",
    "```\n",
    "\n",
    "You can see from *Line 97*, it is generating a multicore code `97, Generating Multicore code`. It is very important to inspect the feedback to make sure the compiler is doing what you have asked of it. \n",
    "\n",
    "Let's run the executable and validate the output first. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# check the output (Fortran version)\n",
    "!cd ../source_code && ./rdf_f && cat Pair_entropy.dat"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output should be the following:\n",
    "\n",
    "```\n",
    "s2      :    -2.452690945278331     \n",
    "s2bond  :    -24.37502820694527    \n",
    "```\n",
    "\n",
    "Now, let's profile the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (Fortran version)\n",
    "!cd ../source_code && nsys profile -t nvtx --stats=true --force-overwrite true -o rdf_multicore_f ./rdf_f"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [Fortran version](../source_code/rdf_multicore_f.nsys-rep) then choosing <mark>save Link As</mark>. Once done, open it via the GUI. From the _Timeline View_, checkout the NVTX markers displays as part of threads. **Why are we using NVTX?** Please see the section on [Using NVIDIA Tools Extension (NVTX)](../../_common/jupyter_notebook/nsight_systems.ipynb#Using-NVIDIA-Tools-Extension-(NVTX)).\n",
    "\n",
    "From the _Timeline View_, right click on the nvtx row and click the \"show in events view\". Now you can see the nvtx statistic at the bottom of the window which shows the duration of each range. \n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/nvtx_multicore.jpg\" width=\"100%\" height=\"100%\">\n",
    "\n",
    "\n",
    "You can also checkout NVTX statistic from the terminal console once the profiling session ended. From the NVTX statistics, you can see most of the execution time is spend in `Pair_Calculation`. This is a function worth checking out.\n",
    "\n",
    "You can also compare the NVTX ranges with the serial version (see [screenshot](../../_common/jupyter_notebook/rdf_overview.ipynb))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Compile and Run on a GPU\n",
    "\n",
    "Without changing the code, now let us try to recompile the code for NVIDIA GPU and rerun. The only difference is now we set **-ta=tesla:managed** instead of **-ta=multicore** . **Understand and analyze** the code present at <b>[C/C++ version](../source_code/rdf.cpp)</b> and/or the <b>[Fortran version](../source_code/rdf.f90)</b> .\n",
    "\n",
    "Open the downloaded files for inspection. Once done, compile the code by running the below cell. View the compiler feedback (enabled by adding `-Minfo=accel` flag) and investigate the compiler feedback for the OpenACC code. The compiler feedback provides useful information about applied optimizations.\n",
    "\n",
    "After running the cells, make sure to check the output first. You can inspect part of the compiler feedback for C or Fortran version and see what it's telling us (your compiler feedback will be similar to the below)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (C/C++)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (C/C++)\n",
    "!cd ../source_code && echo \"compiling C/C++ version .. \" &&  nvc++ -acc -ta=tesla:managed,lineinfo  -Minfo=accel -o rdf_c rdf.cpp "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (C/C++ version):\n",
    "\n",
    "<img src=\"../../_common/images/gpu_feedback.png\">\n",
    "\n",
    "- Using `-ta=tesla:managed`, instruct the compiler to build for an NVIDIA Tesla GPU using \"CUDA Managed Memory\"\n",
    "- Using `-Minfo` command-line option, we will see all output from the compiler. In this example, we use `-Minfo=accel` to only see the output corresponding to the accelerator (in this case an NVIDIA GPU).\n",
    "- The first line of the output, `round(float)`, tells us which function the following information is in reference to.\n",
    "- The line starting with 157, shows that the function is built for the GPU and it will be called by each thread sequentially. When the `#pragma acc routine` is used, the compiler generate a device copy of the function.\n",
    "- The line starting with 177, shows we created a parallel OpenACC loop. This loop is made up of gangs (a grid of blocks in CUDA language) and vector parallelism (threads in CUDA language) with the vector size being 128 per gang. `179, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */`\n",
    "- The rest of the information concerns data movement. Compiler detected possible need to move data and handled it for us. We will get into this later in this lab.\n",
    "\n",
    "It is very important to inspect the feedback to make sure the compiler is doing what you have asked of it. Now, let's profile the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (C/C++ version)\n",
    "!cd ../source_code && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_gpu_c ./rdf_c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's checkout the profiler's report.  Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [C/C++ version](../source_code/rdf_gpu_c.nsys-rep)  then choosing <mark>save Link As</mark> Once done, open it via the GUI. \n",
    "\n",
    "From the \"_Timeline View_\" on the top pane, double click on the \"CUDA\" from the function table on the left and expand it. Zoom in on the timeline and you can see a pattern similar to the screenshot below. The blue boxes are the compute kernels and each of these groupings of kernels is surrounded by green and red/purple boxes (annotated with purple color) representing data movements.\n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/parallel_timeline.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "Let's hover your mouse over the CUDA row (underlined with blue color in the below screenshot) and expand it till you see both kernels and memory row. In the below screenshot you can see the NVTX ranges in the \"Events View\" at the bottom of the _Timeline View_ window. You can right click on each row from the function table on the left (top window) and click on \"Show in Events View\" and check out the detail related to that row (similar to the NVTX example in the below screenshot).\n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/parallel_expand.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "NVIDIA Nsight systems captures information about OpenACC execution in the profiled process. From the timeline tree, each thread that uses OpenACC shows the OpenACC trace information. To view this, you would need to click on the OpenACC API call to see the correlation with the underlying CUDA API calls. If the OpenACC API results in GPU works, that will also be highlighted.\n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/openacc correlation.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "Moreover, if you hover over a particular OpenACC construct, you can see details about that construct.\n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/openacc_construct.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "Feel free to review the solutions for [C/C++](../source_code/SOLUTION/rdf_parallel_directive.cpp) version to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (Fortran)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (Fortran)\n",
    "!cd ../source_code && echo \"compiling Fortran version .. \" && nvfortran -acc -ta=tesla:managed,lineinfo  -Minfo=accel -o rdf_f  rdf.f90 -lnvhpcwrapnvtx"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (OpenACC Fortran code):\n",
    "    \n",
    "```\n",
    "\trdf:\n",
    "     97, Generating Tesla code\n",
    "         98, !$acc loop gang, vector(128) ! blockidx%x threadidx%x\n",
    "         99, !$acc loop seq\n",
    "     97, Generating implicit copyin(y(iconf,1:natoms),z(iconf,1:natoms),x(iconf,1:natoms)) [if not already present]\n",
    "         Generating implicit copy(g(:)) [if not already present]\n",
    "     99, Complex loop carried dependence of g prevents parallelization\n",
    "         Loop carried dependence of g prevents parallelization\n",
    "         Loop carried backward dependence of g prevents vectorization \n",
    "```\n",
    "\n",
    "- Using `-ta=tesla:managed`, instruct the compiler to build for an NVIDIA Tesla GPU using \"CUDA Managed Memory\"\n",
    "- Using `-Minfo` command-line option, we will see all output from the compiler. In this example, we use `-Minfo=accel` to only see the output corresponding to the accelerator (in this case an NVIDIA GPU).\n",
    "- The line starting with 97, shows we created a parallel OpenACC loop. This loop is made up of gangs (a grid of blocks in CUDA language) and vector parallelism (threads in CUDA language) with the vector size being 128 per gang. `98, $acc loop gang, vector(128) ! blockidx%x threadidx%x`\n",
    "- The rest of the information concerns data movement. Compiler detected possible need to move data and handled it for us. We will get into this later in this lab.\n",
    "\n",
    "It is very important to inspect the feedback to make sure the compiler is doing what you have asked of it. Now, let's profile the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (Fortran version)\n",
    "!cd ../source_code && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_gpu_f ./rdf_f"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's checkout the profiler's report.  Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the  [Fortran version](../source_code/rdf_gpu_f.nsys-rep) then choosing <mark>save Link As</mark> Once done, open it via the GUI. \n",
    "\n",
    "From the \"_Timeline View_\" on the top pane, double click on the \"CUDA\" from the function table on the left and expand it. Zoom in on the timeline and you can see a pattern similar to the screenshot below. The blue boxes are the compute kernels and each of these groupings of kernels is surrounded by green and red/purple boxes (annotated with purple color) representing data movements.\n",
    "\n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/parallel_timeline.jpg\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "Let's hover your mouse over the CUDA row (underlined with blue color in the below screenshot) and expand it till you see both kernels and memory row. In the below screenshot you can see the NVTX ranges in the \"Events View\" at the bottom of the _Timeline View_ window. You can right click on each row from the function table on the left (top window) and click on \"Show in Events View\" and check out the detail related to that row (similar to the NVTX example in the below screenshot).\n",
    "\n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/parallel_expand.jpg\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "\n",
    "NVIDIA Nsight systems captures information about OpenACC execution in the profiled process. From the timeline tree, each thread that uses OpenACC shows the OpenACC trace information. To view this, you would need to click on the OpenACC API call to see the correlation with the underlying CUDA API calls. If the OpenACC API results in GPU works, that will also be highlighted.\n",
    "\n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/openacc correlation.jpg\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "Moreover, if you hover over a particular OpenACC construct, you can see details about that construct.\n",
    "\n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/openacc_construct.jpg\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "\n",
    "Feel free to review the solutions for [Fortran](../source_code/SOLUTION/rdf_parallel_directive.f90) version to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## OpenACC Analysis\n",
    "\n",
    "**Usage Scenarios**\n",
    "\n",
    "There are multiple reasons to use Directive-based programming, but from an application developer's point of view, the key usage motivation is that it keeps the code readable/maintainable.  Below are some usage scenarios under which OpenACC can be preferred:\n",
    "- Legacy codes with a sizeable codebase needs to be ported to GPUs with minimal code changes to sequential code. If a compiler doesn’t understand your directives, the code will still compile sequentially. So you have the benefit of maintaining a single code base.\n",
    "- Developers want to see if the code structure favors GPU SIMD/SIMT style or as we say test the waters before moving a large piece of code to a GPU.\n",
    "- Portable performance is an important feature for directive programming approach and OpenACC specification has rich features to achieve the same for target accelerators like GPU.\n",
    "\n",
    "Applications like Ansys Fluent, Gaussian, and VASP make use of OpenACC for adding parallelism. These applications are listed among the top 5 applications which consume most of the compute clock cycles on supercomputers worldwide, according to a report by [Intersect 360](http://www.intersect360.com/features-1/new-reports-on-gpu-and-accelerated-computing-from-intersect360).\n",
    "\n",
    "**Limitations/Constraints**\n",
    "\n",
    "Directive based programming model like OpenACC depends on a compiler to understand and convert your sequential code to CUDA constructs. OpenACC does not provide the same low-level control that CUDA provides, but the NVIDIA HPC SDK  compiler does a good job with optimization on NVIDIA GPUs, and it a lot of cases applications achieve comparable performance to using CUDA. \n",
    "\n",
    "It is key to understand that OpenACC is not an alternative to CUDA. OpenACC can be seen as a first step in GPU porting, with the opportunity to later port only the most critical kernels to CUDA. Developers can use interoperability techniques to combine OpenACC and CUDA in the same code. For more details, refer to [Interoperability](https://devblogs.nvidia.com/3-versatile-openacc-interoperability-techniques/).\n",
    "\n",
    "**Compilers Support for OpenACC**\n",
    "\n",
    "Here are some of the compilers that support OpenACC (versions listed as of this writing):\n",
    "\n",
    "| Compiler | Latest Version | Maintained by | Full or Partial Support |\n",
    "| --- | --- | --- | --- |\n",
    "| HPC SDK | 22.11 | NVIDIA | Full 2.6 spec, Partial 2.7 spec |\n",
    "| GCC | 12 | Mentor Graphics, SUSE | 2.6 spec, Limited kernels directive support, No Unified Memory |\n",
    "| CCE | latest | Cray | 2.7 spec |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Optional Exercise\n",
    "\n",
    "### Kernel Directive \n",
    "\n",
    "The parallel directive leaves a lot of decisions up to the programmer. The programmer decides what is and is not parallelizable, and must provide all of the optimizations; the compiler assumes nothing. If any mistakes happen while parallelizing the code (for example, ignoring data races), it is up to the programmer to identify and correct them.\n",
    "\n",
    "Another directive, `kernels`, is the opposite in all of these regards. The key differences between the two are as follows:\n",
    "\n",
    "The **parallel directive** gives a lot of control to the programmer. The programmer decides what to parallelize, and how it will be parallelized. Any mistakes made by the parallelization are  the fault of the programmer. It is recommended to use a parallel directive for each loop you want to parallelize.\n",
    "\n",
    "The **kernels directive** leaves the majority of the control to the compiler. The compiler analyzes the loops and decides which ones to parallelize. It may refuse to parallelize certain loops, but the programmer can override this decision. You may use the kernels directive to parallelize large portions of code, and these portions may include multiple loops.\n",
    "We do not plan to cover this directive in detail in the current lab.\n",
    "\n",
    "Use the kernels directive and observe any performance difference between the **parallel** and **kernels** directives.\n",
    "Sample usage of the kernels directive is given below:\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC C/C++ code</b></summary>\n",
    "    \n",
    "```cpp\n",
    "#pragma acc kernels\n",
    "for (int i = 0; i < N; i++ )\n",
    "{\n",
    "    for (int j = 0; j < N; j++ )\n",
    "    {\n",
    "        < loop code >\n",
    "    }\n",
    "} \n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC Fortran code</b></summary>\n",
    "    \n",
    "```fortran\n",
    "!$acc kernels\n",
    "    do i=1,N\n",
    "        < loop code >\n",
    "    enddo\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "Now, let's start modifying the original code and add the OpenACC directives. Click on the <b>[C/C++ version](../source_code/rdf.cpp)</b> or the <b>[Fortran version](../source_code/rdf.f90)</b> links, and <mark>start modifying the C or Fortran version of the RDF code. Without changing the original code, you will not get the expected outcome after running the cells below.</mark> Remember to **SAVE** your code after making changes, before running the cells below.\n",
    "\n",
    "\n",
    "After running the cells, make sure to check the output first. You can inspect part of the compiler feedback for C or Fortran version and see what it's telling us (your compiler feedback will be similar to the below)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (C/C++)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (C/C++)\n",
    "!cd ../source_code && echo \"compiling C/C++ version .. \" && nvc++ -acc -ta=tesla:managed,lineinfo  -Minfo=accel -o rdf_c rdf.cpp && echo \"Running the executable and validating the output\" && ./rdf_c && cat Pair_entropy.dat "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output should be the following:\n",
    "\n",
    "```\n",
    "    \n",
    "s2 value is -2.43191\n",
    "s2bond value is -3.87014\n",
    "    \n",
    "```\n",
    "Compiler Feedback (OpenACC C/C++ code):\n",
    "    \n",
    "If you only replaced the parallel directive with kernels (meaning only wrapping the loop with `#pragma acc kernels`), then the compiler feedback will look similar to below:\n",
    "\n",
    "    \n",
    "<img src=\"../../_common/images/kernel_feedback.png\">\n",
    "\n",
    "The line starting with 179 shows that a serial kernel was generated and the loops that follow will run serially. When we use the kernels directive, we let the compiler make decisions for us. In this case, the compiler thinks the loops are not safe to parallelize due to a dependency."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (Fortran)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (Fortran)\n",
    "!cd ../source_code && echo \"compiling Fortran version .. \" && nvfortran -acc -ta=tesla:managed,lineinfo  -Minfo=accel -o rdf_f rdf.f90 -lnvhpcwrapnvtx && echo \"Running the executable and validating the output\" && ./rdf_f && cat Pair_entropy.dat "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output should be the following:\n",
    "\n",
    "```\n",
    "    \n",
    "s2      :    -2.452690945278331     \n",
    "s2bond  :    -24.37502820694527 \n",
    "    \n",
    "```\n",
    "Compiler Feedback (OpenACC Fortran code):\n",
    "\n",
    "If you only replaced the parallel directive with kernels (meaning only wrapping the loop with `!$acc kernels`), then the compiler feedback will look similar to below:\n",
    "\n",
    "```\n",
    "rdf:\n",
    "     97, Generating implicit copyin(y(iconf,:),z(iconf,:),x(iconf,:)) [if not already present]\n",
    "         Generating implicit copy(g(:)) [if not already present]\n",
    "     99, Loop carried dependence due to exposed use of g(:) prevents parallelization\n",
    "         Accelerator serial kernel generated\n",
    "         Generating Tesla code\n",
    "         99, !$acc loop seq\n",
    "        101, !$acc loop seq\n",
    "    101, Loop carried dependence due to exposed use of g(:) prevents parallelization\n",
    "```\n",
    "\n",
    "The line starting with 99 shows that a serial kernel was generated and the loops that follow will run serially. When we use the kernels directive, we let the compiler make decisions for us. In this case, the compiler thinks the loops are not safe to parallelize due to a dependency."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### OpenACC Independent Clause\n",
    "\n",
    "In cases such as this, we need to inform the compiler that the loop is safe to parallelize so it can generate a parallel kernel. To specify that the loop iterations are data independent, we override the compiler's dependency analysis with the `independent` clause (note: this is implied for *parallel loop*).\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>C/C++ syntax</b></summary>\n",
    "    \n",
    "```cpp\n",
    "#pragma acc kernels\n",
    "for (int i = 0; i < N; i++ )\n",
    "{\n",
    "    #pragma acc loop independent\n",
    "    for (int j = 0; j < N; j++ )\n",
    "    {\n",
    "        < loop code >\n",
    "    }\n",
    "} \n",
    "```\n",
    "\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Fortran syntax</b></summary>\n",
    "    \n",
    "```fortran\n",
    "\n",
    "!$acc kernels\n",
    "    do i=1,N\n",
    "       !$acc loop independent\n",
    "       do j=1,N\n",
    "          < loop code >\n",
    "       end do\n",
    "    enddo\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "Now, let's start modifying the original code and add the OpenACC directives. Click on the <b>[C/C++ version](../source_code/rdf.cpp)</b> or the <b>[Fortran version](../source_code/rdf.f90)</b> links, and <mark>start modifying the C or Fortran version of the RDF code. Without changing the original code, you will not get the expected outcome after running the cells below.</mark> Remember to **SAVE** your code after making changes, before running the cells below.\n",
    "    \n",
    "After running the cells, you can inspect part of the compiler feedback for C or Fortran version and see what it's telling us (your compiler feedback will be similar to the below)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (C/C++)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (C/C++)\n",
    "!cd ../source_code && echo \"compiling C/C++ version .. \" && nvc++ -acc -ta=tesla:managed,lineinfo -Minfo=accel -o rdf_c rdf.cpp && echo \"Running the executable and validating the output\" && ./rdf_c && cat Pair_entropy.dat "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (OpenACC C/C++ code):\n",
    "\n",
    "<img src=\"../../_common/images/kernel_indep_feedback.png\">\n",
    "\n",
    "We can see that the compiler knows the loop is parallelizable (`182, Loop is parallelizable`). Note that the loop is parallelized using vector(128), which means the compiler generated instructions for chunks of data of length 128 (a vector size of 128 per gang): `182, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */`\n",
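    "\n",
    "To make that mapping concrete, here is a rough sketch (an approximation, not the compiler's exact algorithm) of how `gang, vector(128)` relates to a CUDA launch configuration:\n",
    "\n",
    "```cpp\n",
    "// Hypothetical back-of-the-envelope view of `gang, vector(128)`:\n",
    "// each gang maps to a thread block of 128 threads (threadIdx.x),\n",
    "// and enough gangs are launched to cover the iteration space (blockIdx.x).\n",
    "int vector_length = 128;\n",
    "int num_gangs = (N + vector_length - 1) / vector_length;  // ceiling division\n",
    "```\n",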
    "\n",
    "Let's profile the code now."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (C/C++ version)\n",
    "!cd ../source_code && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_kernel_c ./rdf_c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [C/C++ version](../source_code/rdf_kernel_c.nsys-rep), then choosing <mark>Save Link As</mark>. Once done, open it via the GUI.\n",
    "\n",
    "Check out the OpenACC row and hover over the OpenACC constructs to see whether the detail looks different from when you used the parallel directive. Compare the profiler report with the previous section.\n",
    "\n",
    "Feel free to check out the solution for the [C/C++](../source_code/SOLUTION/rdf_kernel_directive.cpp) version to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (Fortran)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU (Fortran)\n",
    "!cd ../source_code && echo \"compiling Fortran version .. \" && nvfortran -acc -ta=tesla:managed,lineinfo  -Minfo=accel -o rdf_f rdf.f90 -lnvhpcwrapnvtx && echo \"Running the executable and validating the output\" && ./rdf_f && cat Pair_entropy.dat"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (OpenACC Fortran code):\n",
    "\n",
    "```\n",
    "rdf:\n",
    "     97, Generating implicit copyin(y(iconf,:),z(iconf,:),x(iconf,:)) [if not already present]\n",
    "         Generating implicit copy(g(:)) [if not already present]\n",
    "     99, Loop is parallelizable\n",
    "    101, Loop is parallelizable\n",
    "         Generating Tesla code\n",
    "         99, !$acc loop gang, vector(128) collapse(2) ! blockidx%x threadidx%x\n",
    "        101,   ! blockidx%x threadidx%x auto-collapsed\n",
    "```\n",
    "\n",
    "We can see that the compiler knows the loop is parallelizable (`99, Loop is parallelizable`). Note that the loop is parallelized using vector(128), which means the compiler generated instructions for chunks of data of length 128 (a vector size of 128 per gang): `99, !$acc loop gang, vector(128) ! blockidx%x threadidx%x`\n",
    "\n",
    "Let's profile the code now."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output of nvptx (Fortran version)\n",
    "!cd ../source_code && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_kernel_f ./rdf_f"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [Fortran version](../source_code/rdf_kernel_f.nsys-rep), then choosing <mark>Save Link As</mark>. Once done, open it via the GUI.\n",
    "\n",
    "Check out the OpenACC row and hover over the OpenACC constructs to see whether the detail looks different from when you used the parallel directive. Compare the profiler report with the previous section.\n",
    "\n",
    "Feel free to check out the solution for the [Fortran](../source_code/SOLUTION/rdf_kernel_directive.f90) version to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Directive \n",
    "\n",
    "In this lab, we added OpenACC parallel and loop directives and relied on a feature called [CUDA Managed Memory](../../_common/jupyter_notebook/GPU_Architecture_Terminologies.ipynb) to deal with the separate CPU and GPU memories for us. Just by adding OpenACC to our loop, we achieved a considerable performance boost. However, managed memory is not supported by all GPUs or compilers, and it sometimes performs worse than programmer-defined memory management. Also, depending on the application, handling data movement between the CPU and GPU explicitly may result in better performance.\n",
    "\n",
    "Let's inspect the profiler report from the previous section, where we used managed memory with the parallel directive. From the \"_Timeline View_\" on the top pane, double-click on \"CUDA\" in the function table on the left and expand it. Zoom in on the timeline and you can see a pattern similar to the screenshot below. The blue boxes are the compute kernels, and each of these groupings of kernels is surrounded by purple and teal boxes (annotated in green) representing data movements.\n",
    "\n",
    "<img src=\"../../_common/images/parallel_unified.png\">\n",
    "\n",
    "\n",
    "\n",
    "What this timeline shows us is that we are doing a lot of data movement between the GPU and the CPU. The compiler feedback we collected earlier tells us quite a bit about data movement too. If we look again at the compiler feedback from earlier, we see the following.\n",
    "\n",
    "<img src=\"../../_common/images/parallel_data_feedback.png\">\n",
    "\n",
    "The compiler feedback is telling us that the compiler has inserted data movement around our parallel region at line 177 which copies the `d_g2` array in and out of the GPU memory and also copies `d_x`, `d_y` and `d_z` to the GPU memory. \n",
    "\n",
    "The compiler can only work with the information we provide. It knows we need all those arrays on the GPU for the accelerated section within the `pair_gpu` function, but we didn't tell the compiler anything about what happens to the data outside of those sections. Without this knowledge, the compiler has to copy the full arrays to the GPU and back to the CPU for each accelerated section. This results in a lot of unnecessary data transfers.\n",
    "\n",
    "Ideally, we would want to move the data to the GPU at the beginning, and only transfer it back to the CPU at the end (if needed). If we do not need to copy any data back to the CPU, then we only need to create space on the device (GPU) for an array. \n",
    "\n",
    "We need to give the compiler information about how to reduce this extra, unnecessary data movement. By adding the OpenACC `data` directive to a structured code block, we tell the compiler how to manage data according to the clauses we supply. The following sections explain how to use data clauses in your program. For more information on the data directive clauses, please visit the [OpenACC 3.0 Specification](https://www.openacc.org/sites/default/files/inline-images/Specification/OpenACC.3.0.pdf).\n",
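    "\n",
    "As a sketch of that idea (the array names are illustrative, not the RDF variables), a structured `data` region moves the arrays once, and the parallel loops inside reuse the device copies instead of triggering a transfer each time:\n",
    "\n",
    "```cpp\n",
    "// x and y are copied to the GPU once at the start of the region;\n",
    "// g is copied in once and copied back once at the end of the region.\n",
    "#pragma acc data copyin(x[0:N], y[0:N]) copy(g[0:N])\n",
    "{\n",
    "    #pragma acc parallel loop\n",
    "    for (int i = 0; i < N; i++) { g[i] += x[i]; }\n",
    "\n",
    "    #pragma acc parallel loop\n",
    "    for (int i = 0; i < N; i++) { g[i] += y[i]; }\n",
    "}\n",
    "```\n",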
    "\n",
    "\n",
    "**Using OpenACC Data Clauses**\n",
    "\n",
    "Data clauses allow the programmer to specify data transfers between the host and device (or in our case, the CPU and the GPU). Let's look at an example where we do not use a data clause.\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC C/C++ code</b></summary>\n",
    "    \n",
    "```cpp\n",
    "int *A = (int*) malloc(N * sizeof(int));\n",
    "\n",
    "#pragma acc parallel loop\n",
    "for( int i = 0; i < N; i++ )\n",
    "{\n",
    "    A[i] = 0;\n",
    "} \n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC Fortran code</b></summary>\n",
    "    \n",
    "```fortran\n",
    "allocate(A(N))\n",
    "\n",
    "  !$acc parallel loop\n",
    "  do i=1,N\n",
    "    A(i) = 0\n",
    "  enddo\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "We have allocated an array A outside of our parallel region. This means that A is allocated in the CPU memory. However, we access A inside of our loop, and that loop is contained within a parallel region. Within that parallel region, A[i] is attempting to access a memory location within the GPU memory. We didn't explicitly allocate A on the GPU, so one of two things will happen.\n",
    "\n",
    "1. The compiler will understand what we are trying to do, and automatically copy A from the CPU to the GPU.\n",
    "2. The program will check for an array A in GPU memory, it won't find it, and it will throw an error.\n",
    "\n",
    "Instead of hoping that we have a compiler that can figure this out, we could instead use a data clause.\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC C/C++ code</b></summary>\n",
    "    \n",
    "```cpp\n",
    "int *A = (int*) malloc(N * sizeof(int));\n",
    "\n",
    "#pragma acc parallel loop copy(A[0:N])\n",
    "for( int i = 0; i < N; i++ )\n",
    "{\n",
    "    A[i] = 0;\n",
    "}\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC Fortran code</b></summary>\n",
    "    \n",
    "```fortran\n",
    "allocate(A(N))\n",
    "\n",
    "  !$acc parallel loop copy(A(1:N))\n",
    "  do i=1,N\n",
    "    A(i) = 0\n",
    "  enddo\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "    \n",
    "The image below offers a step-by-step example of using the copy clause.\n",
    "\n",
    "<img src=\"../../_common/images/openacc_copyclause.png\" width=\"80%\" height=\"80%\">\n",
    "\n",
    "Of course, we might not want to copy our data both to and from the GPU memory. Maybe we only need the array's values as inputs to the GPU region, maybe it's only the final results we care about, or perhaps the array is only used temporarily on the GPU and we don't want to copy it in either direction. The following OpenACC data clauses provide a bit more control than the `copy` clause alone.\n",
    "\n",
    "* `copyin` - Create space for the array and copy the input values of the array to the device. At the end of the region, the array is deleted without copying anything back to the host.\n",
    "* `copyout` - Create space for the array on the device, but don't initialize it to anything. At the end of the region, copy the results back and then delete the device array.\n",
    "* `create` - Create space for the array on the device, but do not copy anything to the device at the beginning of the region, nor back to the host at the end. The array will be deleted from the device at the end of the region.\n",
    "* `present` - Don't do anything with these variables. I've put them on the device somewhere else, so just assume they're available.\n",
    "\n",
    "You may also use these clauses to operate on multiple arrays at once by including the arrays as a comma-separated list.\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC C/C++ code</b></summary>\n",
    "    \n",
    "```cpp\n",
    "#pragma acc parallel loop copy( A[0:N], B[0:M], C[0:Q] )\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC Fortran code</b></summary>\n",
    "    \n",
    "```fortran\n",
    "!$acc parallel loop copy( A(1:N), B(1:M), C(1:Q) )\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "You may also use more than one data clause at a time.\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC C/C++ code</b></summary>\n",
    "    \n",
    "```cpp\n",
    "#pragma acc parallel loop create( A[0:N] ) copyin( B[0:M] ) copyout( C[0:Q] )\n",
    "```\n",
    "</details>\n",
    "<br/>\n",
    "\n",
    "\n",
    "<details>\n",
    "    <summary markdown=\"span\"><b>Example OpenACC Fortran code</b></summary>\n",
    "    \n",
    "```fortran\n",
    "!$acc parallel loop create( A(1:N) ) copyin( B(1:M) ) copyout( C(1:Q) )\n",
    "```   \n",
    "</details>\n",
    "<br/>\n",
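    "\n",
    "Putting several clauses together, a hypothetical kernel (the names are illustrative) that reads `B`, uses `A` as device-only scratch space, and returns `C` could be annotated like this:\n",
    "\n",
    "```cpp\n",
    "// B is input only (copyin), A is device-only scratch (create),\n",
    "// and C is output only (copyout): each array moves at most once.\n",
    "#pragma acc parallel loop create(A[0:N]) copyin(B[0:N]) copyout(C[0:N])\n",
    "for (int i = 0; i < N; i++)\n",
    "{\n",
    "    A[i] = B[i] * B[i];   // staged on the device only\n",
    "    C[i] = A[i] + B[i];   // copied back when the region ends\n",
    "}\n",
    "```\n",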
    "\n",
    "Let us try adding data clauses to our code and observe any performance difference. **Note: We have removed the `managed` option from the compilation flags in order to handle data management explicitly.**\n",
    "\n",
    "    \n",
    "Now, let's start modifying the original code and add the OpenACC directives. Click on the <b>[C/C++ version](../source_code/rdf.cpp)</b> or the <b>[Fortran version](../source_code/rdf.f90)</b> links, and <mark>start modifying the C or Fortran version of the RDF code. Without changing the original code, you will not get the expected outcome after running the cells below.</mark> Remember to **SAVE** your code after making changes, before running the cells below.\n",
    "    \n",
    "\n",
    "After running the cells, make sure to check the output first. You can inspect part of the compiler feedback for C or Fortran version and see what it's telling us (your compiler feedback will be similar to the below)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (C/C++)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU without managed memory (C/C++)\n",
    "!cd ../source_code && echo \"compiling C/C++ version .. \" && nvc++ -acc -ta=tesla,lineinfo -Minfo=accel -o rdf_c rdf.cpp && echo \"Running the executable and validating the output\" && ./rdf_c && cat Pair_entropy.dat "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (OpenACC C/C++ code):\n",
    "\n",
    "<img src=\"../../_common/images/data_feedback.png\">\n",
    "\n",
    "You can see that on line 182, the compiler is generating a default present clause for the `d_g2`, `d_x`, `d_z`, and `d_y` arrays. In other words, it assumes the data is already present on the GPU and copies it to the GPU only if it is not.\n",
    "\n",
    "Let's profile the code now."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output without managed memory (C/C++)\n",
    "!cd ../source_code && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_no_managed_c ./rdf_c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [C/C++ version](../source_code/rdf_no_managed_c.nsys-rep), then choosing <mark>Save Link As</mark>. Once done, open it via the GUI. Have a look at the example expected profiler report below:\n",
    "\n",
    "\n",
    "**Example screenshot (C/C++ code)**\n",
    "\n",
    "<img src=\"../../_common/images/parallel_data.png\">\n",
    "\n",
    "\n",
    "Have a look at the data movements annotated in green and compare them with the previous versions. We have accelerated the application and reduced the execution time by eliminating unnecessary data transfers between the CPU and the GPU.\n",
    "\n",
    "Feel free to check out the solution for the [C/C++](../source_code/SOLUTION/rdf_data_directive.cpp) version to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "### <mark>Compile the code for GPU (Fortran)</mark>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#compile for Tesla GPU without managed memory (Fortran)\n",
    "!cd ../source_code && echo \"compiling Fortran version .. \" && nvfortran -acc -ta=tesla,lineinfo -Minfo=accel -o rdf_f rdf.f90 -lnvhpcwrapnvtx && echo \"Running the executable and validating the output\" && ./rdf_f && cat Pair_entropy.dat "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compiler Feedback (OpenACC Fortran code):\n",
    "\n",
    "```\n",
    "rdf:\n",
    "     95, Generating copy(g(:)) [if not already present]\n",
    "         Generating copyin(y(y$sd8:(y$sd8-1)+y$sd8,y$sd8:(y$sd8-1)+y$sd8),z(z$sd7:(z$sd7-1)+z$sd7,z$sd7:(z$sd7-1)+z$sd7),x(x$sd9:(x$sd9-1)+x$sd9,x$sd9:(x$sd9-1)+x$sd9)) [if not already present]\n",
    "     98, Generating Tesla code\n",
    "         99, !$acc loop gang, vector(128) ! blockidx%x threadidx%x\n",
    "        100, !$acc loop seq\n",
    "    100, Loop carried dependence of g prevents parallelization\n",
    "         Loop carried backward dependence of g prevents vectorization\n",
    "```\n",
    "\n",
    "You can see that on line 95, the compiler generates the data movement for the `g`, `x`, `z`, and `y` arrays with `[if not already present]` semantics. In other words, it assumes the data may already be present on the GPU and copies it only if it is not. Another key observation is the removal of the word *implicit* from the copy messages, since we have now added the data clauses explicitly. Also, the data sizes are calculated automatically by the compiler here; we can additionally specify them ourselves if needed.\n",
    "\n",
    "Let's profile the code now."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#profile and see output without managed memory (Fortran)\n",
    "!cd ../source_code && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_no_managed_f ./rdf_f"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Download and save the report file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> the [Fortran version](../source_code/rdf_no_managed_f.nsys-rep), then choosing <mark>Save Link As</mark>. Once done, open it via the GUI. Have a look at the example expected profiler report below:\n",
    "\n",
    "**Example screenshot (Fortran code)**\n",
    "    \n",
    "<img src=\"../../_common/images/f_openacc_data_directive.png\">\n",
    "\n",
    "\n",
    "Have a look at the data movements annotated in green and compare them with the previous versions. We have accelerated the application and reduced the execution time by eliminating unnecessary data transfers between the CPU and the GPU.\n",
    "\n",
    "Feel free to check out the solution for the [Fortran](../source_code/SOLUTION/rdf_data_directive.f90) version to help you understand better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Post-Lab Summary\n",
    "\n",
    "If you would like to download this lab for later viewing, it is recommended you go to your browser's file menu (not the Jupyter notebook file menu) and save the complete web page.  This will ensure the images are copied down as well. You can also execute the following cell block to create a zip file of the files you have been working on, and download it with the link below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "cd ..\n",
    "rm -f _files.zip\n",
    "zip -r _files.zip *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**After** executing the above zip command, you should be able to download and save the zip file by holding down <mark>Shift</mark> and <mark>right-clicking</mark> [Here](../_files.zip) then choosing <mark>Save Link As</mark>.\n",
    "<!--.Let us now go back to parallelizing our code using other approaches\n",
    "\n",
    "**IMPORTANT**: If you would like to continue and optimize this application further with OpenACC, please click on the **NEXT** button, otherwise click on **HOME** to go back to the main notebook for *N ways of GPU programming for MD* code.\n",
    "\n",
    "\n",
    "**IMPORTANT**: Please click on the **HOME** button to go back to the main notebook for *N ways of GPU programming for MD* code.\n",
    "\n",
    "-----\n",
    "\n",
    "# <p style=\"text-align:center;border:3px; border-style:solid; border-color:#FF0000  ; padding: 1em\"> <a href=../../../nways_MD_start.ipynb>HOME</a></p>\n",
    "\n",
    "-----\n",
    "-->\n",
    "\n",
    "\n",
    "# Links and Resources\n",
    "[OpenACC API guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)\n",
    "\n",
    "[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)\n",
    "\n",
    "[NVIDIA Nsight Compute](https://developer.nvidia.com/nsight-compute)\n",
    "\n",
    "[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)\n",
    "\n",
    "**NOTE**: To be able to see the Nsight Systems profiler output, please download the latest version of Nsight Systems from [here](https://developer.nvidia.com/nsight-systems).\n",
    "\n",
    "Don't forget to check out additional [Open Hackathons Resources](https://www.openhackathons.org/s/technical-resources) and join our [OpenACC and Hackathons Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.\n",
    "\n",
    "--- \n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Licensing \n",
    "\n",
    "Copyright © 2022 OpenACC-Standard.org.  This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0). These materials may include references to hardware and software developed by other entities; all applicable licensing and copyrights apply."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
