/*!
  \file
  This file contains documentation for the 'cudapinnedmemory' example.
  It contains some C++ code to help Doxygen find its way around.
*/
namespace SciGPU {
  namespace Legion {


    /*!
      \page cudapinnedmem Allocating Pinned Memory with CUDA Legions

      \dontinclude cudapinnedmem.cpp

      When using GPUs via CUDA, one generally wants to allocate
      pinned memory on the host, in order to accelerate
      transfers between the host and device.
      However, pinned memory is attached to a CUDA context,
      and so the allocation (and release) of such memory
      has to be done by the thread controlling that context.
      In this example, we will discuss the
      CUDAhostAllocTask and CUDAhostFreeTask classes, which
      allow for such control.
      A CUDAhostAllocTask can allocate pinned memory for a
      particular context, or for all contexts.
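      Before looking at the Legion tasks, it may help to recall the
      underlying CUDA runtime calls.
      The following standalone sketch is not part of this example
      (it requires only the CUDA toolkit), and simply shows a pinned
      allocation and its release:

      ```cpp
      #include <cuda_runtime.h>
      #include <cstdio>

      int main() {
        char* hostBuf = nullptr;
        // cudaHostAlloc pins the pages, so host<->device copies can use DMA.
        // Passing cudaHostAllocPortable instead would pin the buffer for
        // all CUDA contexts, not just the current one.
        if (cudaHostAlloc((void**)&hostBuf, 1 << 20,
                          cudaHostAllocDefault) != cudaSuccess) {
          std::fprintf(stderr, "Pinned allocation failed\n");
          return 1;
        }
        // ... cudaMemcpy / cudaMemcpyAsync transfers would go here ...
        cudaFreeHost(hostBuf);  // pinned memory must be freed with cudaFreeHost
        return 0;
      }
      ```

      The Legion tasks discussed below exist precisely because these
      calls must be issued from the thread holding the relevant context.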


      \section manipids Saving Maniple IDs

      If we only want to pin memory for a particular context
      (that is, Maniple), we need to be able to keep track
      of the Maniple IDs returned by Legion::AddManiple.
      For our simple program, we can use \c std::map
      \skip // Map for maniples and threads
      \until }
      As each maniple is created, we store its ID in the map,
      keyed by its creation order.
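      The exact signature of \c Legion::AddManiple is not shown on
      this page, so the sketch below mocks it; the ID values and the
      \c buildManipleMap helper are illustrative only.
      The pattern of remembering each ID by creation order looks
      like this:

      ```cpp
      #include <cassert>
      #include <map>

      // Hypothetical stand-in for Legion::AddManiple; the real call creates
      // a Maniple (a controlling thread plus CUDA context) and returns its ID
      int AddManiple() {
        static int nextID = 100;  // arbitrary IDs, to show they need tracking
        return nextID++;
      }

      // Map for maniples: creation order -> Maniple ID
      std::map<unsigned int, int> buildManipleMap(unsigned int nGPUs) {
        std::map<unsigned int, int> manipleIDs;
        for (unsigned int i = 0; i < nGPUs; i++) {
          manipleIDs[i] = AddManiple();  // remember each ID by creation order
        }
        return manipleIDs;
      }

      int main() {
        std::map<unsigned int, int> manipleIDs = buildManipleMap(3);
        assert(manipleIDs.size() == 3);
        assert(manipleIDs.at(0) != manipleIDs.at(1));
        return 0;
      }
      ```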


      \section cpmalloc Allocating Memory

      To allocate memory, we first declare where the
      resulting pointers will be stored.
      In this simple example, a vector of \c char*
      pointers suffices
      \skip // Declare a list of host pointers
      \until vector<char*>
      We then declare a list of
      \ref CUDAhostAllocTask "CUDAhostAllocTasks", and
      fill it appropriately
      \skip // Create the list of allocation tasks
      \until } // Task list complete
      Note that we have introduced an \c int variable \c selectGPU.
      If its value is negative, then the allocations are to
      be pinned for all CUDA contexts.
      Otherwise, we use the \c manipleIDs map declared above
      to set the \c tid field of each CUDAhostAllocTask,
      selecting a particular Maniple.
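      The \c selectGPU logic can be sketched as follows.
      Note that only the \c tid field of CUDAhostAllocTask is named on
      this page, so the other members of the struct below (and the
      \c buildAllocTasks helper) are assumptions for illustration, not
      the class's real layout:

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <map>
      #include <vector>

      // Hypothetical stand-in for SciGPU::Legion::CUDAhostAllocTask;
      // only the tid field is named in the text, the rest is assumed
      struct CUDAhostAllocTask {
        int tid = -1;            // target Maniple; -1 taken here to mean "any"
        char** ptr = nullptr;    // where the pinned pointer should be written
        std::size_t nBytes = 0;  // size of the requested allocation
      };

      // Build one allocation task per pointer, honouring selectGPU:
      // negative -> pin for all contexts, otherwise pin for one Maniple
      std::vector<CUDAhostAllocTask>
      buildAllocTasks(std::vector<char*>& hostPtrs,
                      const std::map<unsigned int, int>& manipleIDs,
                      int selectGPU) {
        std::vector<CUDAhostAllocTask> tasks;
        for (std::size_t i = 0; i < hostPtrs.size(); i++) {
          CUDAhostAllocTask task;
          task.ptr = &hostPtrs[i];
          task.nBytes = 1024;
          if (selectGPU >= 0) {
            task.tid = manipleIDs.at(selectGPU);  // pin for this Maniple only
          }
          tasks.push_back(task);
        }
        return tasks;
      }

      int main() {
        std::map<unsigned int, int> manipleIDs = { {0, 11}, {1, 12} };
        std::vector<char*> hostPtrs(4, nullptr);
        std::vector<CUDAhostAllocTask> tasks =
            buildAllocTasks(hostPtrs, manipleIDs, 1);
        assert(tasks.size() == hostPtrs.size());
        assert(tasks.front().tid == 12);
        return 0;
      }
      ```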
      
      We then enqueue the tasks as usual
      \skip // Enqueue the tasks
      \until }


      \section cpfree Releasing Memory

      Memory allocated via \c cudaHostAlloc can only be
      released by passing it to \c cudaFreeHost from the
      context which allocated it.
      So, we have to go through a similar procedure
      to free the memory when we're done
      \skip // Create the tasks to free memory
      \until } // Task list complete
      Real programs will need a more robust means of
      tracking which context allocated each
      piece of memory.
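      One minimal bookkeeping scheme (an assumption, not code from this
      example) is to record the allocating Maniple's ID at allocation
      time, keyed by pointer, so each free task can later be routed
      back to the right context:

      ```cpp
      #include <cassert>
      #include <map>

      // Record, at allocation time, which Maniple pinned each pointer
      std::map<void*, int> allocatingManiple;

      void recordAllocation(void* ptr, int manipleID) {
        allocatingManiple[ptr] = manipleID;
      }

      // When freeing, look up the Maniple so the free task can be sent
      // back to the context which performed the allocation
      int freeTarget(void* ptr) {
        return allocatingManiple.at(ptr);
      }

      int main() {
        char a, b;  // stand-ins for pinned allocations
        recordAllocation(&a, 11);
        recordAllocation(&b, 12);
        assert(freeTarget(&b) == 12);
        return 0;
      }
      ```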
      
    */
  }
}
