<!-- HTML header for doxygen 1.8.14-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.14"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<title>Taskflow Handbook</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<link rel="icon" type="image/x-icon" href="favicon.ico" />
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
  $(document).ready(initResizable);
/* @license-end */</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectalign" style="padding-left: 0.5em;">
   <div id="projectname"><a href="https://taskflow.github.io/">Taskflow</a>
   &#160;<span id="projectnumber">3.0.0-Master-Branch</span>
   </div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.14 -->
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
var searchBox = new SearchBox("searchBox", "search",false,'Search');
/* @license-end */
</script>
<script type="text/javascript" src="menudata.js"></script>
<script type="text/javascript" src="menu.js"></script>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
$(function() {
  initMenu('',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
/* @license-end */</script>
<div id="main-nav"></div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
  <div id="nav-tree">
    <div id="nav-tree-contents">
      <div id="nav-sync" class="sync"></div>
    </div>
  </div>
  <div id="splitbar" style="-moz-user-select:none;" 
       class="ui-resizable-handle">
  </div>
</div>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
$(document).ready(function(){initNavTree('matrix_multiplication.html','');});
/* @license-end */
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div class="header">
  <div class="headertitle">
<div class="title">Matrix Multiplication </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p>We study the classic problem of <em>2D matrix multiplication</em>. We start with a short introduction to the problem and then discuss how to solve it using CPU and GPU parallel computing.</p>
<h1><a class="anchor" id="MatrixMultiplicationProblem"></a>
Problem Formulation</h1>
<p>We are multiplying two matrices, A (MxK) and B (KxN). The number of columns of A must match the number of rows of B. The output matrix C has the shape (MxN), where M is the number of rows of A and N is the number of columns of B. The following example multiplies a 3x3 matrix by a 3x2 matrix to derive a 3x2 matrix.</p>
<div class="image">
<img src="matrix_multiplication_1.png" alt="matrix_multiplication_1.png" width="50%"/>
</div>
<p>In general, each element of C is the dot product of a complete row of A and a complete column of B: we multiply the corresponding elements and sum the products.</p>
<div class="image">
<img src="matrix_multiplication_2.png" alt="matrix_multiplication_2.png" width="50%"/>
</div>
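<p>As a concrete check of the example above, the 3x3-by-3x2 product can be computed with a short C++ sketch (the matrix values below are illustrative assumptions, not taken from the figure):</p>

```cpp
#include <array>

using Mat3x3 = std::array<std::array<int, 3>, 3>;
using Mat3x2 = std::array<std::array<int, 2>, 3>;

// multiply a 3x3 matrix by a 3x2 matrix: each C[m][n] is the
// dot product of row m of A and column n of B
Mat3x2 multiply(const Mat3x3& A, const Mat3x2& B) {
  Mat3x2 C{};  // value-initialized to all zeros
  for(int m = 0; m < 3; m++) {
    for(int n = 0; n < 2; n++) {
      for(int k = 0; k < 3; k++) {
        C[m][n] += A[m][k] * B[k][n];
      }
    }
  }
  return C;
}
```

<p>For instance, multiplying A = {{1,2,3},{4,5,6},{7,8,9}} by B = {{1,2},{3,4},{5,6}} yields C = {{22,28},{49,64},{76,100}}.</p>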
<p>We can implement matrix multiplication using three nested loops.</p>
<div class="fragment"><div class="line"><span class="keywordflow">for</span>(<span class="keywordtype">int</span> m=0; m&lt;M; m++) {</div><div class="line">  <span class="keywordflow">for</span>(<span class="keywordtype">int</span> n=0; n&lt;N; n++) {</div><div class="line">    C[m][n] = 0;</div><div class="line">    <span class="keywordflow">for</span>(<span class="keywordtype">int</span> k=0; k&lt;K; k++) {</div><div class="line">      C[m][n] += A[m][k] * B[k][n];</div><div class="line">    }</div><div class="line">  }</div><div class="line">}</div></div><!-- fragment --><h1><a class="anchor" id="MatrixMultiplicationParallelPattern"></a>
Parallel Patterns</h1>
<p>At a fine-grained level, computing each element of C is independent of the others. Similarly, computing each row of C or each column of C is also independent of one another. With task parallelism, we prefer a <em>coarse-grained</em> model in which each task performs a rather large computation, to amortize the overhead of creating and scheduling tasks. In this case, we avoid creating expensive tasks that each work on only a single element; instead, we create one task per row of C to multiply that row of A by every column of B.</p>
<div class="fragment"><div class="line"><span class="comment">// C = A * B</span></div><div class="line"><span class="comment">// A is an MxK matrix, B is a KxN matrix, and C is an MxN matrix</span></div><div class="line"><span class="keywordtype">void</span> matrix_multiplication(<span class="keywordtype">int</span>** A, <span class="keywordtype">int</span>** B, <span class="keywordtype">int</span>** C, <span class="keywordtype">int</span> M, <span class="keywordtype">int</span> K, <span class="keywordtype">int</span> N) {</div><div class="line"></div><div class="line">  <a class="code" href="classtf_1_1Taskflow.html">tf::Taskflow</a> taskflow;</div><div class="line">  <a class="code" href="classtf_1_1Executor.html">tf::Executor</a> executor;</div><div class="line">  </div><div class="line">  <span class="keywordflow">for</span>(<span class="keywordtype">int</span> m=0; m&lt;M; ++m) {</div><div class="line">    taskflow.<a class="code" href="classtf_1_1FlowBuilder.html#a60d7a666cab71ecfa3010b2efb0d6b57">emplace</a>([&amp;, m] () {</div><div class="line">      <span class="keywordflow">for</span>(<span class="keywordtype">int</span> n=0; n&lt;N; n++) {</div><div class="line">        C[m][n] = 0;</div><div class="line">        <span class="keywordflow">for</span>(<span class="keywordtype">int</span> k=0; k&lt;K; k++) {</div><div class="line">          C[m][n] += A[m][k] * B[k][n];</div><div class="line">        }</div><div class="line">      }</div><div class="line">    });</div><div class="line">  }</div><div class="line"></div><div class="line">  executor.<a class="code" href="classtf_1_1Executor.html#a81f35d5b0a20ac0646447eb80d97c0aa">run</a>(taskflow).wait();</div><div class="line">}</div></div><!-- fragment --><p>Instead of creating tasks one by one in a loop, you can leverage Taskflow::parallel_for to create a <em>parallel-for</em> task. A parallel-for task spawns a subflow to perform parallel iterations over the given range.</p>
<div class="fragment"><div class="line"><span class="comment">// perform parallel iterations on the range [0, M) with the step size of 1</span></div><div class="line"><a class="code" href="classtf_1_1Task.html">tf::Task</a> task = taskflow.parallel_for(0, M, 1, [&amp;] (<span class="keywordtype">int</span> m) {</div><div class="line">  <span class="keywordflow">for</span>(<span class="keywordtype">int</span> n=0; n&lt;N; n++) {</div><div class="line">    C[m][n] = 0;</div><div class="line">    <span class="keywordflow">for</span>(<span class="keywordtype">int</span> k=0; k&lt;K; k++) {</div><div class="line">      C[m][n] += A[m][k] * B[k][n];</div><div class="line">    }</div><div class="line">  }</div><div class="line">});</div></div><!-- fragment --><p>Please visit <a class="el" href="A1ForEach.html">Parallel Iterations</a> for more details.</p>
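<p>The same one-task-per-row decomposition can be sketched with the standard library alone; here <code>std::async</code> stands in for the Taskflow executor (a minimal illustration of the pattern, not the library's actual scheduling):</p>

```cpp
#include <future>
#include <vector>

// C = A * B with one asynchronous task per row of C
void row_parallel_matmul(const std::vector<std::vector<int>>& A,
                         const std::vector<std::vector<int>>& B,
                         std::vector<std::vector<int>>& C,
                         int M, int K, int N) {
  std::vector<std::future<void>> tasks;
  for(int m = 0; m < M; ++m) {
    // capture the row index m by value so each task owns its row
    tasks.emplace_back(std::async(std::launch::async, [&, m] () {
      for(int n = 0; n < N; n++) {
        C[m][n] = 0;
        for(int k = 0; k < K; k++) {
          C[m][n] += A[m][k] * B[k][n];
        }
      }
    }));
  }
  for(auto& t : tasks) {
    t.wait();  // analogous to executor.run(taskflow).wait()
  }
}
```

<p>Because every task writes to a distinct row of C, no synchronization is needed beyond the final wait.</p>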
<h1><a class="anchor" id="GPUAcceleratedMatrixMultiplication"></a>
GPU-based Acceleration</h1>
<p>A GPU can perform many more parallel computations than a CPU, which makes it especially useful for data-intensive workloads such as matrix multiplication. With a GPU, we express the parallel pattern at a fine-grained level. The kernel, written in CUDA, is described as follows:</p>
<div class="fragment"><div class="line"><span class="comment">// CUDA kernel to perform matrix multiplication</span></div><div class="line">__global__ <span class="keywordtype">void</span> matmul(<span class="keywordtype">int</span> *A, <span class="keywordtype">int</span> *B, <span class="keywordtype">int</span> *C, <span class="keywordtype">int</span> M, <span class="keywordtype">int</span> K, <span class="keywordtype">int</span> N) {</div><div class="line">  <span class="keywordtype">int</span> row = blockIdx.y * blockDim.y + threadIdx.y;</div><div class="line">  <span class="keywordtype">int</span> col = blockIdx.x * blockDim.x + threadIdx.x;</div><div class="line">  <span class="keywordtype">int</span> sum = 0;</div><div class="line">  <span class="keywordflow">if</span>(col &lt; N &amp;&amp; row &lt; M) {</div><div class="line">    <span class="keywordflow">for</span>(<span class="keywordtype">int</span> i = 0; i &lt; K; i++) {</div><div class="line">      sum += A[row * K + i] * B[i * N + col];</div><div class="line">    }</div><div class="line">    C[row * N + col] = sum;</div><div class="line">  }</div><div class="line">}</div></div><!-- fragment --><p>Each CUDA thread corresponds to one element of C and computes its result. Instead of storing each matrix in a 2D array, we use a 1D layout to ease the data transfer between CPU and GPU. In a row-major layout, element <code>(x, y)</code> of the 2D matrix is addressed at <code>x * width + y</code> in the flattened 1D layout.</p>
<div class="image">
<img src="matrix_multiplication_4.png" alt="matrix_multiplication_4.png" width="50%"/>
</div>
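<p>The row-major mapping can be verified with a CPU reference of the flattened multiplication; this sketch mirrors, per element, what each CUDA thread computes (the sizes used are arbitrary):</p>

```cpp
#include <vector>

// CPU reference of the flattened matmul: A is MxK, B is KxN, C is MxN,
// all stored in row-major 1D layout
void matmul_1d(const std::vector<int>& A, const std::vector<int>& B,
               std::vector<int>& C, int M, int K, int N) {
  for(int row = 0; row < M; row++) {
    for(int col = 0; col < N; col++) {
      int sum = 0;
      for(int i = 0; i < K; i++) {
        // element (row, i) of A lives at row*K + i;
        // element (i, col) of B lives at i*N + col
        sum += A[row * K + i] * B[i * N + col];
      }
      C[row * N + col] = sum;  // element (row, col) of C
    }
  }
}
```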
<p>The next step is to allocate GPU memory for A, B, and C. We create three tasks, each calling <code>cudaMalloc</code> to allocate space for one matrix. Then, we create a cudaFlow to offload the matrix multiplication to a GPU. The entire code is described as follows:</p>
<div class="fragment"><div class="line"><span class="keywordtype">void</span> matrix_multiplication(<span class="keywordtype">int</span>* A, <span class="keywordtype">int</span>* B, <span class="keywordtype">int</span>* C, <span class="keywordtype">int</span> M, <span class="keywordtype">int</span> K, <span class="keywordtype">int</span> N) {</div><div class="line">  </div><div class="line">  <a class="code" href="classtf_1_1Taskflow.html">tf::Taskflow</a> taskflow;</div><div class="line">  <a class="code" href="classtf_1_1Executor.html">tf::Executor</a> executor;</div><div class="line"></div><div class="line">  <span class="comment">// gpu memory pointers for A, B, and C</span></div><div class="line">  <span class="keywordtype">int</span> *da, *db, *dc;</div><div class="line"></div><div class="line">  <span class="comment">// allocate the gpu storage for A</span></div><div class="line">  <a class="code" href="classtf_1_1Task.html">tf::Task</a> allocate_a = taskflow.<a class="code" href="classtf_1_1FlowBuilder.html#a60d7a666cab71ecfa3010b2efb0d6b57">emplace</a>([&amp;](){</div><div class="line">    cudaMalloc(&amp;da, M*K*<span class="keyword">sizeof</span>(<span class="keywordtype">int</span>));</div><div class="line">  }).name(<span class="stringliteral">&quot;allocate_a&quot;</span>);</div><div class="line">  </div><div class="line">  <span class="comment">// allocate the gpu storage for B</span></div><div class="line">  <a class="code" href="classtf_1_1Task.html">tf::Task</a> allocate_b = taskflow.<a class="code" href="classtf_1_1FlowBuilder.html#a60d7a666cab71ecfa3010b2efb0d6b57">emplace</a>([&amp;](){</div><div class="line">    cudaMalloc(&amp;db, K*N*<span class="keyword">sizeof</span>(<span class="keywordtype">int</span>));</div><div class="line">  }).name(<span class="stringliteral">&quot;allocate_b&quot;</span>);</div><div class="line">  </div><div class="line">  <span class="comment">// allocate the gpu storage for C</span></div><div class="line">  <a class="code" href="classtf_1_1Task.html">tf::Task</a> allocate_c = taskflow.<a class="code" 
href="classtf_1_1FlowBuilder.html#a60d7a666cab71ecfa3010b2efb0d6b57">emplace</a>([&amp;](){</div><div class="line">    cudaMalloc(&amp;dc, M*N*<span class="keyword">sizeof</span>(<span class="keywordtype">int</span>));</div><div class="line">  }).name(<span class="stringliteral">&quot;allocate_c&quot;</span>);</div><div class="line">  </div><div class="line">  <span class="comment">// create a cudaFlow to run the matrix multiplication</span></div><div class="line">  <a class="code" href="classtf_1_1Task.html">tf::Task</a> cudaFlow = taskflow.<a class="code" href="classtf_1_1FlowBuilder.html#a60d7a666cab71ecfa3010b2efb0d6b57">emplace</a>([&amp;](<a class="code" href="classtf_1_1cudaFlow.html">tf::cudaFlow</a>&amp; cf){</div><div class="line">  </div><div class="line">    <span class="comment">// copy A and B to da and db, and copy the result dc back to C</span></div><div class="line">    <a class="code" href="classtf_1_1cudaTask.html">tf::cudaTask</a> copy_da = cf.<a class="code" href="classtf_1_1cudaFlow.html#af03e04771b655f9e629eb4c22e19b19f">copy</a>(da, A, M*K).<a class="code" href="classtf_1_1cudaTask.html#ab81b4f71a44af8d61758524f0c274962">name</a>(<span class="stringliteral">&quot;H2D_A&quot;</span>);</div><div class="line">    <a class="code" href="classtf_1_1cudaTask.html">tf::cudaTask</a> copy_db = cf.<a class="code" href="classtf_1_1cudaFlow.html#af03e04771b655f9e629eb4c22e19b19f">copy</a>(db, B, K*N).<a class="code" href="classtf_1_1cudaTask.html#ab81b4f71a44af8d61758524f0c274962">name</a>(<span class="stringliteral">&quot;H2D_B&quot;</span>);</div><div class="line">    <a class="code" href="classtf_1_1cudaTask.html">tf::cudaTask</a> copy_hc = cf.<a class="code" href="classtf_1_1cudaFlow.html#af03e04771b655f9e629eb4c22e19b19f">copy</a>(C, dc, M*N).<a class="code" href="classtf_1_1cudaTask.html#ab81b4f71a44af8d61758524f0c274962">name</a>(<span class="stringliteral">&quot;D2H_C&quot;</span>);</div><div class="line">  </div><div class="line">    dim3 grid  ((N+16-1)/16, 
(M+16-1)/16);</div><div class="line">    dim3 block (16, 16);</div><div class="line">  </div><div class="line">    <a class="code" href="classtf_1_1cudaTask.html">tf::cudaTask</a> kmatmul = cf.<a class="code" href="classtf_1_1cudaFlow.html#adb731be71bdd436dfb5e36e6213a9a17">kernel</a>(grid, block, 0, matmul, da, db, dc, M, K, N)</div><div class="line">                             .<a class="code" href="classtf_1_1cudaTask.html#ab81b4f71a44af8d61758524f0c274962">name</a>(<span class="stringliteral">&quot;matmul&quot;</span>);</div><div class="line">  </div><div class="line">    kmatmul.<a class="code" href="classtf_1_1cudaTask.html#a4a9ca1a34bac47e4c9b04eb4fb2f7775">succeed</a>(copy_da, copy_db)</div><div class="line">           .<a class="code" href="classtf_1_1cudaTask.html#abdd68287ec4dff4216af34d1db44d1b4">precede</a>(copy_hc);</div><div class="line">  </div><div class="line">  }).name(<span class="stringliteral">&quot;cudaFlow&quot;</span>);</div><div class="line">  </div><div class="line">  <span class="comment">// free the gpu storage</span></div><div class="line">  <span class="keyword">auto</span> <a class="codeRef" doxygen="/Users/twhuang/PhD/Code/taskflow/doxygen/cppreference-doxygen-web.tag.xml:http://en.cppreference.com/w/" href="http://en.cppreference.com/w/cpp/memory/c/free.html">free</a> = taskflow.<a class="code" href="classtf_1_1FlowBuilder.html#a60d7a666cab71ecfa3010b2efb0d6b57">emplace</a>([&amp;](){</div><div class="line">    cudaFree(da);</div><div class="line">    cudaFree(db);</div><div class="line">    cudaFree(dc);</div><div class="line">  }).name(<span class="stringliteral">&quot;free&quot;</span>);</div><div class="line">  </div><div class="line">  <span class="comment">// create dependency</span></div><div class="line">  cudaFlow.<a class="code" href="classtf_1_1Task.html#a331b1b726555072e7c7d10941257f664">succeed</a>(allocate_a, allocate_b, allocate_c)</div><div class="line">          .<a class="code" 
href="classtf_1_1Task.html#a8c78c453295a553c1c016e4062da8588">precede</a>(free);</div><div class="line">  </div><div class="line">  <span class="comment">// dump the graph without unfolding the cudaFlow</span></div><div class="line">  taskflow.<a class="code" href="classtf_1_1Taskflow.html#a4725d8ea5ff7595d9d71593360538e00">dump</a>(<a class="codeRef" doxygen="/Users/twhuang/PhD/Code/taskflow/doxygen/cppreference-doxygen-web.tag.xml:http://en.cppreference.com/w/" href="http://en.cppreference.com/w/cpp/io/basic_ostream.html">std::cout</a>);</div><div class="line"></div><div class="line">  <span class="comment">// run the taskflow</span></div><div class="line">  executor.<a class="code" href="classtf_1_1Executor.html#a81f35d5b0a20ac0646447eb80d97c0aa">run</a>(taskflow).wait();</div><div class="line"></div><div class="line">  <span class="comment">// dump the entire execution graph including unfolded cudaFlow</span></div><div class="line">  taskflow.<a class="code" href="classtf_1_1Taskflow.html#a4725d8ea5ff7595d9d71593360538e00">dump</a>(<a class="codeRef" doxygen="/Users/twhuang/PhD/Code/taskflow/doxygen/cppreference-doxygen-web.tag.xml:http://en.cppreference.com/w/" href="http://en.cppreference.com/w/cpp/io/basic_ostream.html">std::cout</a>);</div><div class="line">}</div></div><!-- fragment --><p>Within the cudaFlow, we create two host-to-device (H2D) tasks that copy data from <code>A</code> and <code>B</code> to <code>da</code> and <code>db</code>, one device-to-host (D2H) task that copies the result from <code>dc</code> to <code>C</code>, and one kernel task that launches <code>matmul</code> on the GPU (by default, GPU 0). H2D tasks precede the kernel and the kernel precedes the D2H task. These GPU operations form a GPU task graph managed by a cudaFlow. The first dump of the taskflow gives the following graph:</p>
<div class="image">
<object type="image/svg+xml" data="matrix_multiplication_5.svg" width="40%">matrix_multiplication_5.svg</object>
</div>
<p>A cudaFlow encapsulates a GPU task dependency graph similar to a subflow (see <a class="el" href="chapter3.html">C3: Dynamic Tasking</a>). In order to visualize it, we need to execute the graph first and then dump the taskflow.</p>
<div class="image">
<object type="image/svg+xml" data="matrix_multiplication_6.svg" width="50%">matrix_multiplication_6.svg</object>
</div>
<h1><a class="anchor" id="MatrixMultiplicationBenchmarking"></a>
Benchmarking</h1>
<p>We run three versions of matrix multiplication, sequential CPU, parallel CPUs, and one GPU, on a machine with a 6-core Intel i7-8700 CPU at 3.20 GHz and an Nvidia RTX 2080 GPU, using various sizes of A, B, and C.</p>
<div align="center"> <table class="markdownTable">
<tr class="markdownTableHead">
<th class="markdownTableHeadCenter">A  </th><th class="markdownTableHeadCenter">B  </th><th class="markdownTableHeadCenter">C  </th><th class="markdownTableHeadCenter">CPU Sequential  </th><th class="markdownTableHeadCenter">CPU Parallel  </th><th class="markdownTableHeadCenter">GPU Parallel   </th></tr>
<tr class="markdownTableBody" class="markdownTableRowOdd">
<td class="markdownTableBodyCenter">10x10  </td><td class="markdownTableBodyCenter">10x10  </td><td class="markdownTableBodyCenter">10x10  </td><td class="markdownTableBodyCenter">0.142 ms  </td><td class="markdownTableBodyCenter">0.414 ms  </td><td class="markdownTableBodyCenter">82 ms   </td></tr>
<tr class="markdownTableBody" class="markdownTableRowEven">
<td class="markdownTableBodyCenter">100x100  </td><td class="markdownTableBodyCenter">100x100  </td><td class="markdownTableBodyCenter">100x100  </td><td class="markdownTableBodyCenter">1.641 ms  </td><td class="markdownTableBodyCenter">0.733 ms  </td><td class="markdownTableBodyCenter">83 ms   </td></tr>
<tr class="markdownTableBody" class="markdownTableRowOdd">
<td class="markdownTableBodyCenter">1000x1000  </td><td class="markdownTableBodyCenter">1000x1000  </td><td class="markdownTableBodyCenter">1000x1000  </td><td class="markdownTableBodyCenter">1532 ms  </td><td class="markdownTableBodyCenter">504 ms  </td><td class="markdownTableBodyCenter">85 ms   </td></tr>
<tr class="markdownTableBody" class="markdownTableRowEven">
<td class="markdownTableBodyCenter">2000x2000  </td><td class="markdownTableBodyCenter">2000x2000  </td><td class="markdownTableBodyCenter">2000x2000  </td><td class="markdownTableBodyCenter">25688 ms  </td><td class="markdownTableBodyCenter">4387 ms  </td><td class="markdownTableBodyCenter">133 ms   </td></tr>
<tr class="markdownTableBody" class="markdownTableRowOdd">
<td class="markdownTableBodyCenter">3000x3000  </td><td class="markdownTableBodyCenter">3000x3000  </td><td class="markdownTableBodyCenter">3000x3000  </td><td class="markdownTableBodyCenter">104838 ms  </td><td class="markdownTableBodyCenter">16170 ms  </td><td class="markdownTableBodyCenter">214 ms   </td></tr>
<tr class="markdownTableBody" class="markdownTableRowEven">
<td class="markdownTableBodyCenter">4000x4000  </td><td class="markdownTableBodyCenter">4000x4000  </td><td class="markdownTableBodyCenter">4000x4000  </td><td class="markdownTableBodyCenter">250133 ms  </td><td class="markdownTableBodyCenter">39646 ms  </td><td class="markdownTableBodyCenter">427 ms   </td></tr>
</table>
</div><p>As the matrix sizes grow beyond 1000x1000, the speed-up of the GPU over the CPUs becomes prominent. </p>
</div></div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
  <ul>
    <li class="navelem"><a class="el" href="Examples.html">Learning from Examples</a></li>
    <li class="footer">Generated by
    <a href="http://www.doxygen.org/index.html">
    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.14 </li>
  </ul>
</div>
</body>
</html>
