{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unified memory management\n",
    "\n",
    "1. Profile accelerated application performance with the Nsight Systems command-line tool (`nsys`).\n",
    "2. Use knowledge of streaming multiprocessors (SMs) to optimize the execution configuration.\n",
    "3. Understand how unified memory behaves with respect to page faults and data migration.\n",
    "4. Use asynchronous memory prefetching to reduce page faults and data migration, and thereby improve performance.\n",
    "5. Follow an iterative development cycle to quickly accelerate and deploy applications."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`nsys profile` generates a report file that can be used in a variety of ways. The report includes:\n",
    "\n",
    "1. Profiling configuration details\n",
    "2. Report file generation details\n",
    "3. CUDA API statistics\n",
    "4. CUDA kernel statistics\n",
    "5. CUDA memory operation statistics (time and size)\n",
    "6. OS runtime API statistics"
   ]
  },
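  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of the workflow (the source and executable names `vector-add.cu` / `vector-add` are assumptions for illustration, not from the exercise), an accelerated application is compiled with `nvcc` and then profiled with `nsys`; passing `--stats=true` prints the statistics sections listed above to the console:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!nvcc -o vector-add vector-add.cu\n",
    "!nsys profile --stats=true -o vector-add-report ./vector-add"
   ]
  },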
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optimizing the vector addition function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#include <stdio.h>\n",
    "\n",
    "/*\n",
    " * Host function to initialize vector elements. This function\n",
    " * simply initializes each element to equal its index in the\n",
    " * vector.\n",
    " */\n",
    "\n",
    "void initWith(float num, float *a, int N)\n",
    "{\n",
    "  for(int i = 0; i < N; ++i)\n",
    "  {\n",
    "    a[i] = num;\n",
    "  }\n",
    "}\n",
    "\n",
    "/*\n",
    " * Device kernel stores into `result` the sum of each\n",
    " * same-indexed value of `a` and `b`.\n",
    " */\n",
    "\n",
    "__global__\n",
    "void addVectorsInto(float *result, float *a, float *b, int N)\n",
    "{\n",
    "  int index = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "  int stride = blockDim.x * gridDim.x;\n",
    "\n",
    "  for(int i = index; i < N; i += stride)\n",
    "  {\n",
    "    result[i] = a[i] + b[i];\n",
    "  }\n",
    "}\n",
    "\n",
    "/*\n",
    " * Host function to confirm values in `vector`. This function\n",
    " * assumes all values are the same `target` value.\n",
    " */\n",
    "\n",
    "void checkElementsAre(float target, float *vector, int N)\n",
    "{\n",
    "  for(int i = 0; i < N; i++)\n",
    "  {\n",
    "    if(vector[i] != target)\n",
    "    {\n",
    "      printf(\"FAIL: vector[%d] - %0.0f does not equal %0.0f\\n\", i, vector[i], target);\n",
    "      exit(1);\n",
    "    }\n",
    "  }\n",
    "  printf(\"Success! All values calculated correctly.\\n\");\n",
    "}\n",
    "\n",
    "int main()\n",
    "{\n",
    "  const int N = 2<<24;\n",
    "  size_t size = N * sizeof(float);\n",
    "\n",
    "  float *a;\n",
    "  float *b;\n",
    "  float *c;\n",
    "\n",
    "  cudaMallocManaged(&a, size);\n",
    "  cudaMallocManaged(&b, size);\n",
    "  cudaMallocManaged(&c, size);\n",
    "\n",
    "  initWith(3, a, N);\n",
    "  initWith(4, b, N);\n",
    "  initWith(0, c, N);\n",
    "\n",
    "  size_t threadsPerBlock;\n",
    "  size_t numberOfBlocks;\n",
    "\n",
    "  /*\n",
    "   * nsys should register performance changes when execution configuration\n",
    "   * is updated.\n",
    "   */\n",
    "\n",
    "  threadsPerBlock = 1;\n",
    "  numberOfBlocks = 1;\n",
    "\n",
    "  cudaError_t addVectorsErr;\n",
    "  cudaError_t asyncErr;\n",
    "\n",
    "  addVectorsInto<<<numberOfBlocks, threadsPerBlock>>>(c, a, b, N);\n",
    "\n",
    "  addVectorsErr = cudaGetLastError();\n",
    "  if(addVectorsErr != cudaSuccess) printf(\"Error: %s\\n\", cudaGetErrorString(addVectorsErr));\n",
    "\n",
    "  asyncErr = cudaDeviceSynchronize();\n",
    "  if(asyncErr != cudaSuccess) printf(\"Error: %s\\n\", cudaGetErrorString(asyncErr));\n",
    "\n",
    "  checkElementsAre(7, c, N);\n",
    "\n",
    "  cudaFree(a);\n",
    "  cudaFree(b);\n",
    "  cudaFree(c);\n",
    "}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "CUDA streaming multiprocessors (SMs) create, manage, schedule, and execute threads in groups of 32 called warps, so choosing a thread count that is a multiple of 32 tends to improve performance."
   ]
  },
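  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (`someKernel` and its argument are hypothetical, not part of the exercise), an execution configuration that respects the warp size picks a thread count that is a multiple of 32 and uses ceiling division so the grid covers all `N` elements even when `N` is not an exact multiple of the block size:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "size_t threadsPerBlock = 256; // A multiple of the 32-thread warp size.\n",
    "\n",
    "// Ceiling division guarantees at least N threads are launched in total;\n",
    "// kernels then guard with `if (i < N)` or use a grid-stride loop.\n",
    "size_t numberOfBlocks = (N + threadsPerBlock - 1) / threadsPerBlock;\n",
    "\n",
    "someKernel<<<numberOfBlocks, threadsPerBlock>>>(someKernelArg, N);"
   ]
  },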
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unified memory migration\n",
    "\n",
    "When unified memory (UM) is allocated, it is not yet resident on either the host or the device. The first time the host or a device tries to access it, a page fault occurs and the needed data is migrated in batches. More generally, whenever the CPU or any GPU in the accelerated system tries to access memory that is not yet resident on it, a page fault occurs and triggers migration of that data.\n",
    "\n",
    "The ability to page-fault and migrate memory on demand makes development of accelerated applications much easier. It is also genuinely beneficial for data with sparse access patterns, for example when it is impossible to know before runtime which data will actually need to be processed, and for data that may be accessed by multiple GPU devices in a multi-GPU system.\n",
    "\n",
    "At other times, for example when the data needs are known before runtime and large contiguous blocks of memory are required, the overhead of page faulting and migrating data on demand is better avoided: since the required data can be determined statically, it can be prefetched ahead of time, avoiding the page faults and reducing the cost of migration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#include <stdio.h>\n",
    "\n",
    "__global__\n",
    "void initWith(float num, float *a, int N)\n",
    "{\n",
    "\n",
    "  int index = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "  int stride = blockDim.x * gridDim.x;\n",
    "\n",
    "  for(int i = index; i < N; i += stride)\n",
    "  {\n",
    "    a[i] = num;\n",
    "  }\n",
    "}\n",
    "\n",
    "__global__\n",
    "void addVectorsInto(float *result, float *a, float *b, int N)\n",
    "{\n",
    "  int index = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "  int stride = blockDim.x * gridDim.x;\n",
    "\n",
    "  for(int i = index; i < N; i += stride)\n",
    "  {\n",
    "    result[i] = a[i] + b[i];\n",
    "  }\n",
    "}\n",
    "\n",
    "void checkElementsAre(float target, float *vector, int N)\n",
    "{\n",
    "  for(int i = 0; i < N; i++)\n",
    "  {\n",
    "    if(vector[i] != target)\n",
    "    {\n",
    "      printf(\"FAIL: vector[%d] - %0.0f does not equal %0.0f\\n\", i, vector[i], target);\n",
    "      exit(1);\n",
    "    }\n",
    "  }\n",
    "  printf(\"Success! All values calculated correctly.\\n\");\n",
    "}\n",
    "\n",
    "int main()\n",
    "{\n",
    "  int deviceId;\n",
    "  int numberOfSMs;\n",
    "\n",
    "  cudaGetDevice(&deviceId);\n",
    "  cudaDeviceGetAttribute(&numberOfSMs, cudaDevAttrMultiProcessorCount, deviceId);\n",
    "\n",
    "  const int N = 2<<24;\n",
    "  size_t size = N * sizeof(float);\n",
    "\n",
    "  float *a;\n",
    "  float *b;\n",
    "  float *c;\n",
    "\n",
    "  cudaMallocManaged(&a, size);\n",
    "  cudaMallocManaged(&b, size);\n",
    "  cudaMallocManaged(&c, size);\n",
    "\n",
    "  cudaMemPrefetchAsync(a, size, deviceId);\n",
    "  cudaMemPrefetchAsync(b, size, deviceId);\n",
    "  cudaMemPrefetchAsync(c, size, deviceId);\n",
    "\n",
    "  size_t threadsPerBlock;\n",
    "  size_t numberOfBlocks;\n",
    "\n",
    "  threadsPerBlock = 256;\n",
    "  numberOfBlocks = 32 * numberOfSMs;\n",
    "\n",
    "  cudaError_t addVectorsErr;\n",
    "  cudaError_t asyncErr;\n",
    "\n",
    "  initWith<<<numberOfBlocks, threadsPerBlock>>>(3, a, N);\n",
    "  initWith<<<numberOfBlocks, threadsPerBlock>>>(4, b, N);\n",
    "  initWith<<<numberOfBlocks, threadsPerBlock>>>(0, c, N);\n",
    "\n",
    "  addVectorsInto<<<numberOfBlocks, threadsPerBlock>>>(c, a, b, N);\n",
    "\n",
    "  addVectorsErr = cudaGetLastError();\n",
    "  if(addVectorsErr != cudaSuccess) printf(\"Error: %s\\n\", cudaGetErrorString(addVectorsErr));\n",
    "\n",
    "  asyncErr = cudaDeviceSynchronize();\n",
    "  if(asyncErr != cudaSuccess) printf(\"Error: %s\\n\", cudaGetErrorString(asyncErr));\n",
    "\n",
    "  cudaMemPrefetchAsync(c, size, cudaCpuDeviceId);\n",
    "\n",
    "  checkElementsAre(7, c, N);\n",
    "\n",
    "  cudaFree(a);\n",
    "  cudaFree(b);\n",
    "  cudaFree(c);\n",
    "}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Final exercise: fix the bugs and get `saxpy` running in under 20 µs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#include <stdio.h>\n",
    "\n",
    "#define N 2048 * 2048 // Number of elements in each vector\n",
    "\n",
    "__global__\n",
    "void initWith(int* array, int num)\n",
    "{\n",
    "    int grid = blockDim.x * gridDim.x;\n",
    "    int tid = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "\n",
    "    while ( tid < N )\n",
    "    {\n",
    "        array[tid] = num;\n",
    "        tid += grid;\n",
    "    }\n",
    "}\n",
    "\n",
    "/*\n",
    " * Optimize this already-accelerated codebase. Work iteratively,\n",
    " * and use nsys to support your work.\n",
    " *\n",
    " * Aim to profile `saxpy` (without modifying `N`) running under\n",
    " * 20us.\n",
    " *\n",
    " * Some bugs have been placed in this codebase for your edification.\n",
    " */\n",
    "\n",
    "__global__ \n",
    "void saxpy(int * a, int * b, int * c)\n",
    "{\n",
    "    int grid = blockDim.x * gridDim.x;\n",
    "    int tid = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "\n",
    "    while ( tid < N )\n",
    "    {\n",
    "        c[tid] = 2 * a[tid] + b[tid];\n",
    "        tid += grid;\n",
    "    }\n",
    "}\n",
    "\n",
    "int main()\n",
    "{\n",
    "    int deviceId;\n",
    "    int numberOfSMs;\n",
    "\n",
    "    cudaGetDevice(&deviceId);\n",
    "    cudaDeviceGetAttribute(&numberOfSMs, cudaDevAttrMultiProcessorCount, deviceId);\n",
    "    \n",
    "    int *a, *b, *c; // These vectors hold `int` values; declaring them `float*` was one of the planted bugs.\n",
    "\n",
    "    int size = N * sizeof (int); // The total number of bytes per vector\n",
    "\n",
    "    cudaMallocManaged(&a, size);\n",
    "    cudaMallocManaged(&b, size);\n",
    "    cudaMallocManaged(&c, size);\n",
    "\n",
    "    cudaMemPrefetchAsync(a, size, deviceId);\n",
    "    cudaMemPrefetchAsync(b, size, deviceId);\n",
    "    cudaMemPrefetchAsync(c, size, deviceId);\n",
    "\n",
    "    int threads_per_block = 1024;\n",
    "    int number_of_blocks = 32 * numberOfSMs;\n",
    "    \n",
    "    initWith<<<number_of_blocks, threads_per_block>>>(a, 2);\n",
    "    initWith<<<number_of_blocks, threads_per_block>>>(b, 1);\n",
    "    initWith<<<number_of_blocks, threads_per_block>>>(c, 0);\n",
    "\n",
    "    saxpy <<< number_of_blocks, threads_per_block >>> ( a, b, c );\n",
    "\n",
    "    cudaDeviceSynchronize(); // Wait for `saxpy` to finish before accessing `c` on the host.\n",
    "    cudaMemPrefetchAsync(c, size, cudaCpuDeviceId); // Migrate the results back to the CPU.\n",
    "\n",
    "    // Print out the first and last 5 values of c for a quality check\n",
    "    for( int i = 0; i < 5; ++i )\n",
    "        printf(\"c[%d] = %d, \", i, c[i]);\n",
    "    printf (\"\\n\");\n",
    "    for( int i = N-5; i < N; ++i )\n",
    "        printf(\"c[%d] = %d, \", i, c[i]);\n",
    "    printf (\"\\n\");\n",
    "\n",
    "    cudaFree( a ); cudaFree( b ); cudaFree( c );\n",
    "}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "N体问题"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#include <math.h>\n",
    "#include <stdio.h>\n",
    "#include <stdlib.h>\n",
    "#include \"timer.h\"\n",
    "#include \"files.h\"\n",
    "\n",
    "#define SOFTENING 1e-9f\n",
    "\n",
    "typedef struct { float x, y, z, vx, vy, vz; } Body;\n",
    "\n",
    "__global__ void bodyForceGPU(Body *p, float dt, int n)\n",
    "{\n",
    "    int i = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "    int grid = blockDim.x * gridDim.x;\n",
    "    while(i < n)\n",
    "    {\n",
    "        float Fx = 0.0f; float Fy = 0.0f; float Fz = 0.0f;\n",
    "        for (int j = 0; j < n; j++) \n",
    "        {\n",
    "            float dx = p[j].x - p[i].x;\n",
    "            float dy = p[j].y - p[i].y;\n",
    "            float dz = p[j].z - p[i].z;\n",
    "            float distSqr = dx*dx + dy*dy + dz*dz + SOFTENING;\n",
    "            float invDist = rsqrtf(distSqr);\n",
    "            float invDist3 = invDist * invDist * invDist;\n",
    "\n",
    "            Fx += dx * invDist3; Fy += dy * invDist3; Fz += dz * invDist3;\n",
    "        }\n",
    "        p[i].vx += dt*Fx; p[i].vy += dt*Fy; p[i].vz += dt*Fz;\n",
    "        i += grid;\n",
    "    }\n",
    "}\n",
    "\n",
    "int main(const int argc, const char** argv) {\n",
    "\n",
    "  int deviceId;\n",
    "  int numberOfSMs;\n",
    "\n",
    "  cudaGetDevice(&deviceId);\n",
    "  cudaDeviceGetAttribute(&numberOfSMs, cudaDevAttrMultiProcessorCount, deviceId);\n",
    "\n",
    "  // The assessment will test against both 2<<11 and 2<<15.\n",
    "  // Feel free to pass the command line argument 15 when you generate ./nbody report files.\n",
    "  int nBodies = 2<<11;\n",
    "  if (argc > 1) nBodies = 2<<atoi(argv[1]);\n",
    "\n",
    "  // The assessment will pass hidden initialized values to check for correctness.\n",
    "  // You should not make changes to these files, or else the assessment will not work.\n",
    "  const char * initialized_values;\n",
    "  const char * solution_values;\n",
    "\n",
    "  if (nBodies == 2<<11) {\n",
    "    initialized_values = \"09-nbody/files/initialized_4096\";\n",
    "    solution_values = \"09-nbody/files/solution_4096\";\n",
    "  } else { // nBodies == 2<<15\n",
    "    initialized_values = \"09-nbody/files/initialized_65536\";\n",
    "    solution_values = \"09-nbody/files/solution_65536\";\n",
    "  }\n",
    "\n",
    "  if (argc > 2) initialized_values = argv[2];\n",
    "  if (argc > 3) solution_values = argv[3];\n",
    "\n",
    "  const float dt = 0.01f; // Time step\n",
    "  const int nIters = 10;  // Simulation iterations\n",
    "\n",
    "  int bytes = nBodies * sizeof(Body);\n",
    "  float *buf;\n",
    "\n",
    "  cudaMallocManaged(&buf, bytes);\n",
    "\n",
    "  Body *p = (Body*)buf;\n",
    "\n",
    "  read_values_from_file(initialized_values, buf, bytes);\n",
    "\n",
    "  double totalTime = 0.0;\n",
    "\n",
    "  /*\n",
    "   * This simulation will run for 10 cycles of time, calculating gravitational\n",
    "   * interaction amongst bodies, and adjusting their positions to reflect.\n",
    "   */\n",
    "\n",
    "  for (int iter = 0; iter < nIters; iter++) {\n",
    "\n",
    "    StartTimer();\n",
    "    cudaMemPrefetchAsync(p, bytes, deviceId);\n",
    "\n",
    "  /*\n",
    "   * You will likely wish to refactor the work being done in `bodyForce`,\n",
    "   * and potentially the work to integrate the positions.\n",
    "   */\n",
    "    int threadsPerBlock = 1024;\n",
    "    int blocksPerGrid = 32 * numberOfSMs;\n",
    "    bodyForceGPU<<<blocksPerGrid, threadsPerBlock>>>(p, dt, nBodies);\n",
    "    cudaDeviceSynchronize();\n",
    "    // bodyForce(p, dt, nBodies); // compute interbody forces\n",
    "\n",
    "  /*\n",
    "   * This position integration cannot occur until this round of `bodyForce` has completed.\n",
    "   * Also, the next round of `bodyForce` cannot begin until the integration is complete.\n",
    "   */\n",
    "\n",
    "    cudaMemPrefetchAsync(p, bytes, cudaCpuDeviceId);\n",
    "    for (int i = 0 ; i < nBodies; i++) { // integrate position\n",
    "      p[i].x += p[i].vx*dt;\n",
    "      p[i].y += p[i].vy*dt;\n",
    "      p[i].z += p[i].vz*dt;\n",
    "    }\n",
    "\n",
    "    const double tElapsed = GetTimer() / 1000.0;\n",
    "    totalTime += tElapsed;\n",
    "  }\n",
    "\n",
    "  double avgTime = totalTime / (double)(nIters);\n",
    "  float billionsOfOpsPerSecond = 1e-9 * nBodies * nBodies / avgTime;\n",
    "  write_values_to_file(solution_values, buf, bytes);\n",
    "\n",
    "  // You will likely enjoy watching this value grow as you accelerate the application,\n",
    "  // but beware that a failure to correctly synchronize the device might result in\n",
    "  // unrealistically high values.\n",
    "  printf(\"%0.3f Billion Interactions / second\\n\", billionsOfOpsPerSecond);\n",
    "\n",
    "  cudaFree(buf);\n",
    "}\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#include <math.h>\n",
    "#include <stdio.h>\n",
    "#include <stdlib.h>\n",
    "#include \"timer.h\"\n",
    "#include \"files.h\"\n",
    "\n",
    "#define SOFTENING 1e-9f\n",
    "\n",
    "typedef struct { float x, y, z, vx, vy, vz; } Body;\n",
    "\n",
    "__global__ void bodyForceGPU(Body *p, float dt, int n, int streamId, int streamNumber)\n",
    "{\n",
    "    int taskGrid = n / streamNumber;\n",
    "    int taskStart = taskGrid * streamId;\n",
    "    int taskEnd = taskStart + taskGrid;\n",
    "    int i = threadIdx.x + blockIdx.x * blockDim.x + taskStart;\n",
    "    int grid = blockDim.x * gridDim.x;\n",
    "    while(i < taskEnd)\n",
    "    {\n",
    "        float Fx = 0.0f; float Fy = 0.0f; float Fz = 0.0f;\n",
    "        for (int j = 0; j < n; j++) \n",
    "        {\n",
    "            float dx = p[j].x - p[i].x;\n",
    "            float dy = p[j].y - p[i].y;\n",
    "            float dz = p[j].z - p[i].z;\n",
    "            float distSqr = dx*dx + dy*dy + dz*dz + SOFTENING;\n",
    "            float invDist = rsqrtf(distSqr);\n",
    "            float invDist3 = invDist * invDist * invDist;\n",
    "\n",
    "            Fx += dx * invDist3; Fy += dy * invDist3; Fz += dz * invDist3;\n",
    "        }\n",
    "        p[i].vx += dt*Fx; p[i].vy += dt*Fy; p[i].vz += dt*Fz;\n",
    "        i += grid;\n",
    "    }\n",
    "}\n",
    "\n",
    "int main(const int argc, const char** argv) {\n",
    "\n",
    "  int deviceId;\n",
    "  int numberOfSMs;\n",
    "\n",
    "  cudaGetDevice(&deviceId);\n",
    "  cudaDeviceGetAttribute(&numberOfSMs, cudaDevAttrMultiProcessorCount, deviceId);\n",
    "\n",
    "  int nBodies = 2<<11;\n",
    "  if (argc > 1) nBodies = 2<<atoi(argv[1]);\n",
    "\n",
    "  const char * initialized_values;\n",
    "  const char * solution_values;\n",
    "\n",
    "  if (nBodies == 2<<11) {\n",
    "    initialized_values = \"09-nbody/files/initialized_4096\";\n",
    "    solution_values = \"09-nbody/files/solution_4096\";\n",
    "  } else { // nBodies == 2<<15\n",
    "    initialized_values = \"09-nbody/files/initialized_65536\";\n",
    "    solution_values = \"09-nbody/files/solution_65536\";\n",
    "  }\n",
    "\n",
    "  if (argc > 2) initialized_values = argv[2];\n",
    "  if (argc > 3) solution_values = argv[3];\n",
    "\n",
    "  const float dt = 0.01f; // Time step\n",
    "  const int nIters = 10;  // Simulation iterations\n",
    "\n",
    "  int bytes = nBodies * sizeof(Body);\n",
    "  float *buf;\n",
    "\n",
    "  cudaMallocManaged(&buf, bytes);\n",
    "\n",
    "  Body *p = (Body*)buf;\n",
    "\n",
    "  read_values_from_file(initialized_values, buf, bytes);\n",
    "\n",
    "  double totalTime = 0.0;\n",
    "\n",
    "  for (int iter = 0; iter < nIters; iter++) {\n",
    "\n",
    "    StartTimer();\n",
    "    cudaMemPrefetchAsync(p, bytes, deviceId);\n",
    "\n",
    "    int threadsPerBlock = 1024;\n",
    "    int blocksPerGrid = 32 * numberOfSMs;\n",
    "    int streamNumber = 4;\n",
    "    for (int i = 0; i < streamNumber; ++i)\n",
    "    {\n",
    "        cudaStream_t stream;\n",
    "        cudaStreamCreate(&stream);\n",
    "        bodyForceGPU<<<blocksPerGrid, threadsPerBlock, 0, stream>>>(p, dt, nBodies, i, streamNumber);\n",
    "        cudaStreamDestroy(stream);\n",
    "    }\n",
    "    //bodyForceGPU<<<blocksPerGrid, threadsPerBlock>>>(p, dt, nBodies);\n",
    "    cudaDeviceSynchronize();\n",
    "\n",
    "    cudaMemPrefetchAsync(p, bytes, cudaCpuDeviceId);\n",
    "    for (int i = 0 ; i < nBodies; i++) { // integrate position\n",
    "      p[i].x += p[i].vx*dt;\n",
    "      p[i].y += p[i].vy*dt;\n",
    "      p[i].z += p[i].vz*dt;\n",
    "    }\n",
    "\n",
    "    const double tElapsed = GetTimer() / 1000.0;\n",
    "    totalTime += tElapsed;\n",
    "  }\n",
    "\n",
    "  double avgTime = totalTime / (double)(nIters);\n",
    "  float billionsOfOpsPerSecond = 1e-9 * nBodies * nBodies / avgTime;\n",
    "  write_values_to_file(solution_values, buf, bytes);\n",
    "\n",
    "  printf(\"%0.3f Billion Interactions / second\\n\", billionsOfOpsPerSecond);\n",
    "\n",
    "  cudaFree(buf);\n",
    "}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Manual memory management\n",
    "\n",
    "1. `cudaMalloc` allocates memory directly on the active GPU, which prevents all GPU page faults. In exchange, the pointer it returns is not available for access by host code.\n",
    "2. `cudaMallocHost` allocates memory directly on the CPU. It also \"pins\" the memory, or page-locks it, which allows asynchronous copying of memory to and from the GPU. Excessive pinned memory can interfere with CPU performance, so use it only deliberately. Pinned memory should be freed with `cudaFreeHost`.\n",
    "3. `cudaMemcpy` can copy (rather than transfer) memory, either from host to device or from device to host."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "int *host_a, *device_a;        // Define host-specific and device-specific arrays.\n",
    "cudaMalloc(&device_a, size);   // `device_a` is immediately available on the GPU.\n",
    "cudaMallocHost(&host_a, size); // `host_a` is immediately available on CPU, and is page-locked, or pinned.\n",
    "\n",
    "initializeOnHost(host_a, N);   // No CPU page faulting since memory is already allocated on the host.\n",
    "\n",
    "// `cudaMemcpy` takes the destination, source, size, and a CUDA-provided variable for the direction of the copy.\n",
    "cudaMemcpy(device_a, host_a, size, cudaMemcpyHostToDevice);\n",
    "\n",
    "kernel<<<blocks, threads, 0, someStream>>>(device_a, N);\n",
    "\n",
    "// `cudaMemcpy` can also copy data from device to host.\n",
    "cudaMemcpy(host_a, device_a, size, cudaMemcpyDeviceToHost);\n",
    "\n",
    "verifyOnHost(host_a, N);\n",
    "\n",
    "cudaFree(device_a);\n",
    "cudaFreeHost(host_a);          // Free pinned memory like this."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Exercise: Manually Allocate Host and Device Memory\n",
    "\n",
    "Note that once memory is managed manually, prefetching is no longer available, and usable memory must be allocated separately on each specific device."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#include <stdio.h>\n",
    "\n",
    "__global__\n",
    "void initWith(float num, float *a, int N)\n",
    "{\n",
    "\n",
    "  int index = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "  int stride = blockDim.x * gridDim.x;\n",
    "\n",
    "  for(int i = index; i < N; i += stride)\n",
    "  {\n",
    "    a[i] = num;\n",
    "  }\n",
    "}\n",
    "\n",
    "__global__\n",
    "void addVectorsInto(float *result, float *a, float *b, int N)\n",
    "{\n",
    "  int index = threadIdx.x + blockIdx.x * blockDim.x;\n",
    "  int stride = blockDim.x * gridDim.x;\n",
    "\n",
    "  for(int i = index; i < N; i += stride)\n",
    "  {\n",
    "    result[i] = a[i] + b[i];\n",
    "  }\n",
    "}\n",
    "\n",
    "void checkElementsAre(float target, float *vector, int N)\n",
    "{\n",
    "  for(int i = 0; i < N; i++)\n",
    "  {\n",
    "    if(vector[i] != target)\n",
    "    {\n",
    "      printf(\"FAIL: vector[%d] - %0.0f does not equal %0.0f\\n\", i, vector[i], target);\n",
    "      exit(1);\n",
    "    }\n",
    "  }\n",
    "  printf(\"Success! All values calculated correctly.\\n\");\n",
    "}\n",
    "\n",
    "int main()\n",
    "{\n",
    "  int deviceId;\n",
    "  int numberOfSMs;\n",
    "\n",
    "  cudaGetDevice(&deviceId);\n",
    "  cudaDeviceGetAttribute(&numberOfSMs, cudaDevAttrMultiProcessorCount, deviceId);\n",
    "\n",
    "  const int N = 2<<24;\n",
    "  size_t size = N * sizeof(float);\n",
    "\n",
    "  float *a;\n",
    "  float *b;\n",
    "  float *c;\n",
    "  float *result;\n",
    "\n",
    "  cudaMalloc(&a, size);\n",
    "  cudaMalloc(&b, size);\n",
    "  cudaMalloc(&c, size);\n",
    "  cudaMallocHost(&result, size);\n",
    "\n",
    "  size_t threadsPerBlock;\n",
    "  size_t numberOfBlocks;\n",
    "\n",
    "  threadsPerBlock = 256;\n",
    "  numberOfBlocks = 32 * numberOfSMs;\n",
    "\n",
    "  cudaError_t addVectorsErr;\n",
    "  cudaError_t asyncErr;\n",
    "\n",
    "  /*\n",
    "   * Create 3 streams to run initialize the 3 data vectors in parallel.\n",
    "   */\n",
    "\n",
    "  cudaStream_t stream1, stream2, stream3;\n",
    "  cudaStreamCreate(&stream1);\n",
    "  cudaStreamCreate(&stream2);\n",
    "  cudaStreamCreate(&stream3);\n",
    "\n",
    "  /*\n",
    "   * Give each `initWith` launch its own non-standard stream.\n",
    "   */\n",
    "\n",
    "  initWith<<<numberOfBlocks, threadsPerBlock, 0, stream1>>>(3, a, N);\n",
    "  initWith<<<numberOfBlocks, threadsPerBlock, 0, stream2>>>(4, b, N);\n",
    "  initWith<<<numberOfBlocks, threadsPerBlock, 0, stream3>>>(0, c, N);\n",
    "\n",
    "  addVectorsInto<<<numberOfBlocks, threadsPerBlock>>>(c, a, b, N);\n",
    "  // cudaDeviceSynchronize();\n",
    "\n",
    "  addVectorsErr = cudaGetLastError();\n",
    "  if(addVectorsErr != cudaSuccess) printf(\"Error: %s\\n\", cudaGetErrorString(addVectorsErr));\n",
    "\n",
    "  asyncErr = cudaDeviceSynchronize();\n",
    "  if(asyncErr != cudaSuccess) printf(\"Error: %s\\n\", cudaGetErrorString(asyncErr));\n",
    "\n",
    "  cudaMemcpy(result, c, size, cudaMemcpyDeviceToHost);\n",
    "\n",
    "  checkElementsAre(7, result, N);\n",
    "\n",
    "  /*\n",
    "   * Destroy streams when they are no longer needed.\n",
    "   */\n",
    "\n",
    "  cudaStreamDestroy(stream1);\n",
    "  cudaStreamDestroy(stream2);\n",
    "  cudaStreamDestroy(stream3);\n",
    "\n",
    "  cudaFree(a);\n",
    "  cudaFree(b);\n",
    "  cudaFree(c);\n",
    "  cudaFreeHost(result);\n",
    "\n",
    "  return 0;\n",
    "}\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Overlapping data transfers with code execution using streams\n",
    "\n",
    "As long as the host memory is pinned, asynchronous memory copies can be performed with `cudaMemcpyAsync`.\n",
    "\n",
    "Like kernel execution, `cudaMemcpyAsync` is by default asynchronous only with respect to the host. By default it executes in the default stream, so it is a blocking operation with respect to other CUDA operations occurring on the GPU. However, the function takes a non-default stream as an optional fifth argument; by passing it a non-default stream, the memory transfer can run concurrently with other CUDA operations occurring in other non-default streams.\n",
    "\n",
    "A common and useful pattern is to combine pinned host memory, asynchronous memory copies in non-default streams, and kernel execution in non-default streams, in order to overlap memory transfers with kernel execution.\n",
    "\n",
    "In the example below, rather than waiting for the entire memory copy to complete before beginning the kernel work, the required data is copied and processed in segments, with each copy/work pair running in its own non-default stream. With this technique, work on part of the data can begin while memory transfers for later segments proceed concurrently. Extra care must be taken when using this technique to compute segment-specific values for the number of operations and the offsets into arrays, as shown here:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "int N = 2<<24;\n",
    "int size = N * sizeof(int);\n",
    "\n",
    "int *host_array;\n",
    "int *device_array;\n",
    "\n",
    "cudaMallocHost(&host_array, size);               // Pinned host memory allocation.\n",
    "cudaMalloc(&device_array, size);                 // Allocation directly on the active GPU device.\n",
    "\n",
    "initializeData(host_array, N);                   // Assume this application needs to initialize on the host.\n",
    "\n",
    "const int numberOfSegments = 4;                  // This example demonstrates slicing the work into 4 segments.\n",
    "int segmentN = N / numberOfSegments;             // A value for a segment's worth of `N` is needed.\n",
    "size_t segmentSize = size / numberOfSegments;    // A value for a segment's worth of `size` is needed.\n",
    "\n",
    "// For each of the 4 segments...\n",
    "for (int i = 0; i < numberOfSegments; ++i)\n",
    "{\n",
    "  // Calculate the index where this particular segment should operate within the larger arrays.\n",
    "  int segmentOffset = i * segmentN;\n",
    "\n",
    "  // Create a stream for this segment's worth of copy and work.\n",
    "  cudaStream_t stream;\n",
    "  cudaStreamCreate(&stream);\n",
    "\n",
    "  // Asynchronously copy segment's worth of pinned host memory to device over non-default stream.\n",
    "  cudaMemcpyAsync(&device_array[segmentOffset],  // Take care to access correct location in array.\n",
    "                  &host_array[segmentOffset],    // Take care to access correct location in array.\n",
    "                  segmentSize,                   // Only copy a segment's worth of memory.\n",
    "                  cudaMemcpyHostToDevice,\n",
    "                  stream);                       // Provide optional argument for non-default stream.\n",
    "\n",
    "  // Execute segment's worth of work over same non-default stream as memory copy.\n",
    "  kernel<<<number_of_blocks, threads_per_block, 0, stream>>>(&device_array[segmentOffset], segmentN);\n",
    "\n",
    "  // `cudaStreamDestroy` will return immediately (is non-blocking), but will not actually destroy stream until\n",
    "  // all stream operations are complete.\n",
    "  cudaStreamDestroy(stream);\n",
    "}"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.10.2 64-bit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.10.2"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "a42ccb73e7d9bfdf27e036f1d2b8b681e55fc0743cc5586bc2474d4a60f4b886"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
